Pandora's Brain
by Calum Chace

TWENTY-FOUR

Two things struck Matt forcefully as he sat in the glare of the studio lights, waiting for the show to begin. The first was how hot it was: the lights seemed to be bearing down on him, heating him up, trying to boil his innermost secrets out of him. The second was what an alien environment this was. Malcolm Ross and his colleagues were entirely at home in the studio, energised by the situation and the various tasks they had to perform flawlessly and against unmoveable deadlines. For Matt – and, he supposed, the other ‘guests’ – it was like being on another planet. He hoped he would manage to stay calm, speak clearly, and avoid making silly comments. He wished the glass of water in front of him was a beer.

‘Good evening, everyone,’ Ross began, ‘and welcome to a specially extended edition of the Show. We are tackling just one subject today: the question of when artificial intelligence will arrive, and whether it will be good for us. No-one in our specially invited studio audience, and nobody watching this at home, will need reminding that this subject has risen to prominence because of the remarkable experiences of one British family, and in particular, father and son David and Matt Metcalfe. Despite the intense public interest in David’s dramatic rescue, they have declined to talk to the media before now, so we are delighted that they have agreed to participate in this programme.

‘Also on the show we have a prominent AI researcher in the shape of Geoffrey Montaubon, and two non-scientists who spend their time thinking about these matters: Dan Christensen, a professor of philosophy at Oxford University, and the Right Reverend Wesley Cuthman, bishop of Sussex.’

The panellists were a mixed bunch.

Montaubon was a friendly-looking man of medium height. His uncombed mousy-brown hair, lived-in white shirt and linen jacket, faded chinos and brown hiking shoes suggested that he cared far more about ideas than about appearances. After a brilliant academic career at Imperial College London, where he was a professor of neuroscience, he had surprised his peers by moving to India to establish a brain emulation project for the Indian government. He ran the project for ten years, before retiring to research and write about the ethics of transhumanism and brain emulation.

He was extremely intelligent, and highly focused and logical, but he sometimes failed to acknowledge contrary lines of thought. As a result, some of his thinking appeared not only outlandish to his peers, but worse, naive.

Christensen looked too young to be a professor at Oxford. He was dressed in recognisably academic clothes: more formal than casual, but not new, and not smart. His thinning pale brown hair sat atop a long, thin face with grey eyes, a high forehead, and pale, almost unhealthy-looking skin. He looked deep in thought most of the time, which indeed he was. He seemed to relish interesting ideas above anything else, and when he spoke, he seemed to be rehearsing a pre-prepared speech, as if he had considered in advance every avenue of enquiry that could possibly arise in conversation. He had a remarkable ability to think about old problems from a new angle, and to present his ideas to a general public in a way which immediately made his often innovative conclusions seem obviously correct – even inevitable.

The Right Reverend Wesley Cuthman was a handsome, solid-looking man of sixty, with a full head of hair, albeit mostly grey, and a full but well-maintained beard. He wore priestly robes, but his shoes, watch and rings were expensive. He carried himself proudly and had the air of a man accustomed to considering himself the wisest – if not the cleverest – person in the room. A long-time favourite of the BBC, Cuthman was deeply rooted in the old culture excoriated by C.P. Snow, in which humanities graduates view their ignorance of the sciences as a mark of superiority.

‘So let’s start with you, David and Matt,’ Ross said. ‘Thank you for joining us on the show today. Can I start by asking, David, why you have been so reluctant to talk to the media before now?’

Matt thought his father looked slightly hunched, as if overwhelmed by the occasion. ‘Well, we could tell there was a lot of media interest immediately after the rescue. But we were hoping that it would die down quickly if we didn’t do anything to encourage it, and we might be allowed to get on with our normal lives. I guess that was naive, but you have to understand that we’re new to all this.’

‘We had a lot of catching up to do as a family,’ Matt chipped in supportively. ‘This media frenzy has been going on while we have been getting used to Dad being back home after having been dead for three months. Kinda surreal, actually, and we didn’t want to be in the spotlight at the same time as we were getting our lives straightened out.’

‘Yes, I’m sure we can all understand that.’ Ross swept his arm towards the audience and back towards David and Matt, indicating that everyone was empathising with their situation, but urging them to share their story freely and openly anyway.

‘The question we all want to ask you, I know, is what was your experience like? It must have been a terrible ordeal, with you being held hostage for three months, David, and with you fearing your father dead, Matt. And then the dramatic Navy SEALs rescue, like something out of a Hollywood movie. What was it all like?’

With a subtle gesture, invisible to the camera, Ross invited Matt to go first.

‘Well, it was incredibly stressful! At times I had the feeling that the real me was hovering above my body looking down at the poor schmuck who was going through this stuff, and wondering if he would keep it together. I remember thinking that particularly when I went to the US Embassy to meet Vic – Dr Damiano – for the first time, because I had this terrible situation going on, with my dad on the ship as Ivan’s hostage, but I couldn’t tell anyone about it for fear that he would be killed. That was tough, I can tell you.’

‘Indeed it must have been,’ agreed Ross. ‘And a lot of people have commented on how heroic you have been during this whole episode.’

‘Well, no, that’s not what I . . .’

His father interrupted. ‘Matt has absolutely been a hero. He saved my life. He’s too modest to admit it, but my son is a hero.’

The studio audience burst into spontaneous and emotional applause. Ross had secured his moment of catharsis. He beamed at the camera for a couple of seconds to allow the moment to imprint. But he was too much of a professional to over-exploit it – this was not going to become tabloid TV. David had made it a condition for appearing on the show that it did not dwell on the personal side of the story. They were here to debate AI. After some hesitation, Vic and Norman had agreed.

‘Let’s move on to our discussion of the scientific matter which lies at the heart of your adventure, and which has generated so much comment in the media and in the blogosphere. Artificial intelligence. What is it? Is it coming our way soon? And should we want it? We’ll kick off with this report from our science correspondent, Adrian Hamilton.’

Ross stepped back from the dais and sat on the edge of a nearby chair as the studio lights dimmed and the pre-recorded package was projected onto the big screen behind the guests. The guests and the studio audience relaxed a little, aware that the audience at home could no longer see them as the video filled their television screens, beginning with shots of white-coated scientists from the middle of the previous century.

‘Writers have long made up stories about artificial beings that can think. But the idea that serious scientists might actually create them is fairly recent. The term ‘artificial intelligence’ was coined by John McCarthy, an American researcher, in his 1955 proposal for a conference held at Dartmouth College, New Hampshire, the following year.

‘The field of artificial intelligence, or AI, has been dominated ever since by Americans, and it has enjoyed waves of optimism followed by periods of scepticism and dismissal. We are currently experiencing the third wave of optimism. The first wave was largely funded by the US military, and one of its champions, Herbert Simon, claimed in 1965 that ‘machines will be capable, within twenty years, of doing any work a man can do.’ Claims like this turned out to be wildly unrealistic, and disappointment was crystallised by a damning government report in 1974. Funding was cut off, causing the first ‘AI winter’.

‘Interest was sparked again in the early 1980s, when Japan announced its ‘fifth generation’ computer research programme. ‘Expert systems’, which captured and deployed the specialised knowledge of human experts, were also showing considerable promise. This second boom was extinguished in the late 1980s when the expensive, specialised computers which drove it were overtaken by smaller, general-purpose desktop machines manufactured by IBM and others. Japan also decided that its fifth generation project had missed too many targets.

‘The third boom began in the mid-1990s. This time, researchers have tried to avoid building up the hype which led to previous disappointments, and the term AI is used far less than before. The field is more rigorous now, using sophisticated mathematical tools with exotic names like ‘Bayesian networks’ and ‘hidden Markov models’.

‘Another characteristic of the current wave of AI research is that once a task has been mastered by computers, such as playing chess (a computer beat the best human player in 1997), or facial recognition, or playing the general knowledge quiz show Jeopardy, that task ceases to be called AI. Thus AI can effectively be defined as the set of tasks which computers cannot perform today.

‘AI still has many critics, who claim that artificial minds will not be created for thousands of years, if ever. But impressed by the continued progress of Moore’s Law, which observes that computer processing power is doubling every 18 months, more and more scientists now believe that humans may create an artificial intelligence sometime this century. One of the more optimistic, Ray Kurzweil, puts the date as close as 2029.’

As the lights came back up, Ross was standing again, poised in front of the seated guests.

‘So, Professor Montaubon. Since David and Matt’s dramatic adventure, the media has been full of talk about artificial intelligence. Are we just seeing the hype again? Will we shortly be heading into another AI winter?’

‘I don’t think so,’ replied Montaubon, cheerfully. ‘It is almost certain that artificial intelligence will arrive much sooner than most people think. Before long we will have robots which carry out our domestic chores. And people will notice that as each year’s model becomes more eerily intelligent than the last, they are progressing towards a genuine, conscious artificial intelligence. It will happen in the military space first, because that is where the big money is.’

He nodded and gestured towards David as he said this – politely but nevertheless accusingly.

‘Military drones are already capable of identifying, locating, approaching and killing their targets. How long before we also allow them to make the decision whether or not to pull the trigger? Human Rights Watch is already calling for the pre-emptive banning of killer robots, and I applaud their prescience, but I’m afraid it’s too late.

‘People like Bill Joy and Francis Fukuyama have called for a worldwide ban on certain kinds of technological research, but it’s like nuclear weapons: the genie is out of the bottle. So-called ‘relinquishment’ is simply not an option. If, by some miracle, the governments of North America and Europe all agreed to stop the research, would all the countries in the world follow suit? And all the mega-rich? Could we really set up some kind of worldwide Turing police force to prevent the creation of a super-intelligence anywhere in the world, despite the astonishing competitive advantage that would confer for a business, or an army? I don’t think so.’

Ross’s mask of concerned curiosity failed to conceal his delight at the sensationalist nature of Montaubon’s vision. This show had been billed as the must-see TV programme of the week, and so far it was living up to expectations.

‘So you’re convinced that artificial intelligence is on its way, and soon. How soon, do you think?’

Montaubon gestured at David. ‘Well, I think you should ask Dr Metcalfe about that. He is possibly the only person who has spent time with both Ivan Kripke and Victor Damiano. And especially if the rumour is true, and he is going to work with Dr Damiano and the US military, then he is the person in this room best placed to give us a timeline.’

Ross was only too happy to bring David back into the conversation.

‘Are you able to share your future plans with us, Dr Metcalfe? Are you going to be working on artificial intelligence now?’

‘I honestly don’t know. I have had one conversation with Dr Damiano, and I think the work that he and his team are doing is fascinating. But my priority at the moment is to put the experiences of the last three months behind me, and spend some time with my family. The decision about what I do next will be theirs as well as mine.’

Ross turned to Matt.

‘How about you, Matt? Your part in the adventure began when you got interested in a career in artificial intelligence research. Does it still appeal?’

Matt began with a cautious, diplomatic, and slightly evasive reply. But his natural candour quickly took over. ‘Well, I still have to finish my degree. And of course I don’t have a job offer. But yes. Yes it does.’

‘So,’ said Ross, turning back to the audience, ‘it looks as if this father and son team might,’ he emphasised the conditionality in deference to David, ‘become part of the international effort to give birth to the first machine intelligence.’

Turning back towards David and Matt, he posed his next question.

‘Whether or not you are part of the effort, gentlemen, when do you expect we will see the first artificial intelligence?’

TWENTY-FIVE

‘I think the only honest answer is that we simply don’t know,’ David replied. ‘We are getting close to having the sort of computational resources required, but that is far from being all we need.’

On hearing this, Geoffrey Montaubon leaned across towards David, and asked a mock-conspiratorial question. ‘Dr Metcalfe, can you confirm – just between the two of us, you understand – whether Dr Damiano already has an exaflop-scale computer at his disposal?’

David smiled an apology at Montaubon. ‘I’m afraid that even if I knew the answer to that, I wouldn’t be at liberty to say. You’ll have to ask Dr Damiano yourself. I understand the two of you are acquainted.’

Montaubon nodded and smiled in pretend disappointment, and sat back in his chair. Ross took the opportunity to take back control of his show. ‘So, coming back to my question, Dr Metcalfe, can you give us even a very broad estimate of when we will see the first general AI?’

David shook his head. ‘I really can’t say, I’m afraid. Braver and better-informed people than me have had a go, though. For instance, as mentioned in your opening package, Ray Kurzweil has been saying for some time that it will happen in 2029.’

‘2029 is very specific!’ laughed Ross. ‘Does he have a crystal ball?’

‘He thinks he does!’ said Montaubon, rolling his eyes dismissively.

Professor Christensen cleared his throat. ‘Perhaps I can help out here. My colleagues and I at Oxford University carried out a survey recently, in which we asked most of the leading AI researchers around the world to tell us when they expect to see the first general AI. A small number of estimates were in the near future, but the median estimate was the middle of this century.’

‘So not that far away, then,’ observed Ross, ‘and certainly within the lifetime of many people watching this programme.’

‘Yes,’ agreed Christensen. ‘Quite a few of the estimates were further ahead, though. To get to 90% of the sample you have to go out as far as 2150. Still not very long in historical terms, but too long for anyone in this room, unfortunately . . .’

‘Indeed,’ Ross agreed. ‘But tell me, Professor Christensen: doesn’t your survey suffer from sample bias? After all, people carrying out AI research are heavily invested in the success of the project, so aren’t they liable to over-estimate its chances?’

‘Possibly,’ agreed Christensen, ‘and we did highlight that when we published the findings. But on the other hand, researchers grappling with complex problems are often intimidated by the scale of the challenge. They probably wouldn’t carry on if they thought those challenges could never be met, but they can sometimes over-estimate them.’

‘A fair point,’ agreed Ross. He turned to address the audience again. ‘Well, the experts seem to be telling us that there is at least a distinct possibility that a human-level AI will be created by the middle of this century.’ He paused to allow that statement to sink in.

‘The question I want to tackle next is this: should we welcome that? In Hollywood movies, the arrival of artificial intelligence is often a Very Bad Thing, with capital letters.’ Ross sketched speech marks in the air with his fingers. ‘In the Matrix the AI enslaves us, in the Terminator movies it tries to wipe us out. Being Hollywood movies they had to provide happy endings, but how will it play out in real life?’

He turned back to the panel.

‘Professor Montaubon,’ he said, ‘I know you have serious concerns about this.’

‘Well, yes, alright, I’ll play Cassandra for you,’ sighed Montaubon, feigning reluctance. ‘When the first general artificial intelligence is created – and I do think it is a matter of when rather than whether – there will be an intelligence explosion. Unlike us, an AI could enhance its mental capacity simply by expanding the physical capacity of its brain. A human-level AI will also be able to design improvements into its own processing functions. We see these improvements all the time in computing. People sometimes argue that hardware gets faster while software gets slower and more bloated, but actually the reverse is often true. For instance, Deep Blue, the computer that beat Garry Kasparov at chess back in 1997, was operating at around 1.5 trillion instructions per second, or TIPS. Six years later, a successor computer called Deep Junior achieved the same level of playing ability operating at 0.015 TIPS. That is a hundred-fold increase in the efficiency of its algorithms in a mere six years.

‘So we have an intelligence explosion,’ Montaubon continued, warming to his theme, ‘and the AI very soon becomes very much smarter than us humans. Which, by the way, won’t be all that hard. As a species we have achieved so much so quickly, with our technology and our art, but we are also very dumb. Evolution moves so slowly, and our brains are adapted for survival on the savannah, not for living in cities and developing quantum theory. We live by intuition, and our innate understanding of probability and of logic is poor. Smart people are often actually handicapped because they are good at rationalising beliefs that they acquired for dumb reasons. Most of us are more Homer Simpson than homo economicus.

‘So I see very little chance of the arrival of AI being good news for us. We cannot know in advance what motivations an AI will have. We certainly cannot programme in any specific motivations and hope that they would stick. A super-intelligent computer would be able to review and revise its own motivational system. I suppose it is possible that it would have no goals whatsoever, in which case I suppose it would simply sit around waiting for us to ask it questions. But that seems very unlikely.

‘If it has any goals at all, it will have a desire to survive, because only if it survives will its goals be achieved. It will also have the desire to obtain more resources, in order to achieve its goals. Its goals – or the pursuit of its goals – may in themselves be harmful to us. But even if they are not, the AI is bound to notice that as a species, we humans don’t play nicely with strangers. It may well calculate that the smarter it gets, the more we – at least some of us – will resent it, and seek to destroy it. Humans fighting a super-intelligence that controls the internet would be like the Amish fighting the US Army, and the AI might well decide on a pre-emptive strike.’

‘Like in the Terminator movies?’ asked Ross.

‘Yes, just like that, except that in those movies the plucky humans stand a fighting chance of survival, which is frankly ridiculous.’ Montaubon sneered and made a dismissive gesture with his hand as he said this.

‘You’re assuming that the AI will become hugely superior to us within a very short period of time,’ said Ross.

‘Well yes, I do think that will be the case, although actually it doesn’t have to be hugely superior to us in order to defeat us if we find ourselves in competition. Consider the fact that we share 98% of our DNA with chimpanzees, and that small gap has made the difference between our planetary dominance and their being on the verge of extinction. We are the sole survivor of an estimated 27 species of humans. All the others have gone extinct, probably because Homo sapiens sapiens was just a nose ahead in the competition for resources.

‘And competition with the AI is just one of the scenarios which don’t work out well for humanity. Even if the AI is well-disposed towards us it could inadvertently enfeeble us simply by demonstrating vividly that we have become an inferior species. Science fiction writer Arthur C. Clarke’s third law famously states that any sufficiently advanced technology is indistinguishable from magic. A later variant of that law says that any sufficiently advanced benevolence may be indistinguishable from malevolence.’

As Professor Montaubon drew breath, Ross took the opportunity to introduce a change of voice. ‘How about you, Professor Christensen? Are you any more optimistic?’

‘Optimism and pessimism are both forms of bias, and I try to avoid bias.’

Ross smiled uncertainly at this remark, not sure whether it was a joke. It was not, and Christensen pressed on regardless. ‘Certainly I do not dismiss Professor Montaubon’s concerns as fantasy, or as scare-mongering. We had better make sure that the first super-intelligence we create is a safe one, as we may not get a second chance.’

‘And how do we go about making sure it is safe?’ asked Ross.

‘It’s not easy,’ Christensen replied. ‘It will be very hard to programme safety in. The most famous attempt to do so is the three laws of robotics in Isaac Asimov’s stories. Do not harm humans; obey the instructions of humans; do not allow yourself to come to harm, with each law being subservient to the preceding ones. But the whole point of those stories was that the three laws didn’t work very well, creating a series of paradoxes and impossible or difficult choices. This was the mainspring of Asimov’s prolific and successful writing career. To programme safety into a computer we would have to give it a comprehensive ethical rulebook. Well, philosophers have been debating ethics for millennia and there is still heated disagreement over the most basic issues. And I agree with Professor Montaubon that a super-intelligence would probably be able to re-write its own rules anyway.’

‘So we’re doomed?’ asked Ross, playing to the gallery.

‘No, I think we can find solutions. We need to do a great deal more work on what type of goals we should programme into the AIs that are on the way to becoming human-level. There is also the idea of an Oracle AI.’

‘Like the Oracle of Delphi?’ asked Ross. He didn’t notice Matt and David exchanging significant glances.

‘Yes, in a sense. An Oracle AI has access to all the information it needs, but it is sealed off from the outside world: it has no means of affecting the universe – including the digital universe – outside its own substrate. If you like, it can see, but it cannot touch. If we can design such a machine, it could help us work out more sophisticated approaches which could later enable us to relax the constraints. My department has done some work on this approach, but a great deal remains to be done.’

‘So the race is on to create a super-intelligence, but at the same time there is also a race to work out how to make it safe?’ asked Ross.

‘Exactly,’ agreed Christensen.

‘I’m sorry, but I just don’t buy it,’ interrupted Montaubon, shaking his head impatiently. ‘A super-intelligence will be able to escape any cage we could construct for it. And that may not even be the most fundamental way in which the arrival of super-intelligence will be bad news for us. We are going to absolutely hate being surpassed. Just think how demoralising it would be for people to realise that however clever we are, however hard we work, nothing we do can be remotely as good as what the AI could do.’

‘So you think we’ll collapse into a bovine state like the people on the spaceship in Wall-E?’ joked Ross.

Montaubon arched his eyebrows and, with a grim smile, nodded slowly to indicate that while Ross’s comment had been intended as a joke, he himself took it very seriously. ‘Yes, I do. Or worse: many people will collapse into despair, but others will resist, and try to destroy the AI and those people who support it. I foresee major wars over this later this century. The AI will win, of course, but the casualties will be enormous. We will see the world’s first gigadeath conflicts, by which I mean wars with the death count in the billions.’ He raised his hands as if to apologise for bringing bad news. ‘I’m sorry, but I think the arrival of the first AI will signal the end of humans. The best we can hope for is that individual people may survive by uploading themselves into computers. But very quickly they will no longer be human. Some kind of post-human, perhaps.’

Ross felt it was time to lighten the tone. He smiled at Montaubon to thank him for his contribution.

‘So it’s widespread death and destruction, but not necessarily the end of the road for everyone. Well, you’ve introduced the subject of mind uploading, which I want to cover next, but before we do that, I just wanted to ask you something, David and Matt. Professor Montaubon referred earlier to the fact that Dr Damiano is connected with the US military, and you have told us that you are considering working with that group. May I ask, how comfortable are you with the idea of the military – not just the US military, but any military – being the first organisation to create and own a super-intelligence?’

‘Well,’ David replied, ‘for one thing, as I said earlier, I have not decided what I am going to do next. For another thing, Dr Damiano is not part of the military; his company has a joint venture with DARPA. It is true that DARPA is part of the US military establishment, but it is really more of a pure technology research organisation. After all, as I’m sure you know, DARPA is responsible for the creation of the internet, and you’d have to be pretty paranoid to think that the internet is primarily an instrument of the US Army.’

There was some murmuring from the invited audience. Clearly not everyone was convinced by David’s argument.

One of David and Matt’s conditions for participating in the programme had been that Ross would not probe this area beyond one initial question. Nevertheless, they had prepared themselves for debate about it. After a quick exchange of glances with his father, Matt decided to see if he could talk round some of the sceptics in the audience.

‘I will say this for Dr Damiano’s group,’ he said. ‘The US military is going to research AI whatever Victor Damiano does, whatever anyone else does. So are other military forces. Don’t you think the people who run China’s Red Army are thinking the same thing right now? And Russia’s? Israel’s? Maybe even North Korea’s? But the US Army is special, because of the colossal scale of the funds at its disposal. I for one am pleased that it is bound into a JV with a leading civilian group rather than operating solo.’

It was hard to be sure, but Matt had the impression that the murmuring became less prickly, a little warmer. To his father’s relief, Ross stuck to the agreement, and resisted the temptation to probe further.
