What got lost in translation when Alan Turing translated Ada Lovelace's 'Babbage's Engine cannot originate anything' into a question about computers' ability to surprise him?
Computing machinery, surprises and originality
Professor in Law and Ethics, University of Birmingham
Fellow, Alan Turing Institute & Mozilla
Turing considers whether computers can ‘surprise’ him in the context of his enquiry into whether machines can think. To disambiguate this enquiry, Turing proposes to replace the question ‘can machines think?’ with a different one: are there imaginable ‘digital’ computers that would do well in an ‘imitation game’? The purpose of that game is to fool a human observer, whose aim is to tell the computer from the human in a question-and-answer game.
Among the arguments that may be raised to dispute the idea that computers could ever do well at that game, one may point to various ‘disabilities’ of computers. Having surveyed these, Turing considers whether Lady Lovelace raises a different sort of argument when, describing Babbage’s ‘Analytical Engine’ (in 1843), she noted that the Engine ‘has no pretension to originate anything. It can do whatever we know how to order it to perform’ (Turing, 1950, p. 12). Turing’s translation of this insight, about the (in)ability to originate something, into a question about the (in)ability to surprise him is peculiar. Interestingly, Turing switches to a rather different take on Lovelace’s originality insight when he suggests that ‘learning machines’ could be another way of responding to her objection that computers lack the capacity to originate.
This learning-focused response had the potential to bring to light two important distinctions:
1. Operational v. creative autonomy, and the capacity to ‘originate’ something
While a pupil who has mastered long division may be said to have ‘operational’ autonomy in that respect, creative autonomy presupposes an ability to interpret and re-appropriate a set of expectations. The following example helps explain why the ability to ‘originate’ something presupposes creative, and not just operational, autonomy:
When AlphaGo came up with a ‘new’ move (Silver, Huang, Maddison, Guez, & others, 2016), one that had never been considered before, did it ‘originate’ anything? The move itself was merely one of the many moves compliant with the rules of Go. Its significance stemmed from the way it challenged strategic assumptions widely shared among Go experts. The extent of AlphaGo’s operational autonomy (which stemmed from a sophisticated learning process), combined with the vast search space (on the order of 10^170 possible board configurations), increased its ability to challenge the expectations of even the most learned Go experts. None of them had anticipated the value of ‘move 37’. This anticipation failure forced Go experts to reconsider their understanding of the game. In that sense it was a ‘generative’ move, not a move that should count as ‘original’: it only required operational autonomy. Were other members of the public surprised by this new move itself? No. If they were surprised, it was by the system’s ability to surprise human experts: this forced them to reconsider their understanding of what ‘digital machinery’ could do.
2. Trivial surprise v. experience of ‘wonder’
Turing’s pointing out that his hurried calculations are frequently proven wrong, in a way that surprises him, also highlights the importance of distinguishing between trivial and non-trivial surprises. The term ‘surprise’ can be used quite prosaically, to refer to any event or input that we fail to anticipate. In that mundane sense, we cannot but be surprised all the time, given our limited ability to store data and process it. Yet some surprises are less trivial: they come closer to what Fisher describes as an experience of ‘wonder’, which may lead us to question our understanding of ourselves or what surrounds us (Fisher, 1998). These non-trivial surprises can stem from a variety of things, from rainbows to artworks, via human encounters. Only some of these are the product of originality endeavours (and hence creative autonomy).
Taken together, these two distinctions help delineate what is missed in Turing’s ‘surprise’ translation of Lovelace’s originality insight. Not only is it uncommon for surprises to track originality endeavours, but the type of autonomy that would be required on the part of ‘digital computers’ for originality and surprise to intersect in that way also goes far beyond the operational autonomy that can be achieved by ‘learning machines’ (whether Turing’s or today’s). Aside from its doubtful desirability, such a leap in our digital machinery’s interpretive capabilities is unlikely to be adequately captured in any game-based test, whether Turing’s or otherwise.