When the man had left, Robert opened the envelope.
“Bad news?” Helen asked.
He shook his head. “Not a death in the family, if that’s what you meant. It’s from John Hamilton. He’s challenging me to a debate. On the topic ‘Can A Machine Think?’”
“What, at some university function?”
“No. On the BBC. Four weeks from tomorrow.” He looked up. “Do you think I should do it?”
“Radio or television?”
Robert reread the message. “Television.”
Helen smiled. “Definitely. I’ll give you some tips.”
“On the subject?”
“No! That would be cheating.” She eyed him appraisingly. “You can start by throwing out your electric razor. Get rid of the permanent five o’clock shadow.”
Robert was hurt. “Some people find that quite attractive.”
Helen replied firmly, “Trust me on this.”
#
The BBC sent a car to take Robert down to London. Helen sat beside him in the back seat.
“Are you nervous?” she asked.
“Nothing that an hour of throwing up won’t cure.”
Hamilton had suggested a live broadcast, “to keep things interesting,” and the producer had agreed. Robert had never been on television; he’d taken part in a couple of radio discussions on the future of computing, back when the Mark I had first come into use, but even those had been taped.
Hamilton’s choice of topic had surprised him at first, but in retrospect it seemed quite shrewd. A debate on the proposition that “Modern Science is the Devil’s Work” would have brought howls of laughter from all but the most pious viewers, whereas the purely metaphorical claim that “Modern Science is a Faustian Pact” would have had the entire audience nodding sagely in agreement, while carrying no implications whatsoever. If you weren’t going to take the whole dire fairy tale literally, everything was “a Faustian Pact” in some sufficiently watered-down sense: everything had a potential downside, and this was as pointless to assert as it was easy to demonstrate.
Robert had met considerable incredulity, though, when he’d explained to journalists where his own research was leading. To date, the press had treated him as a kind of eccentric British Edison, churning out inventions of indisputable utility, and no one seemed to find it at all surprising or alarming that he was also, frankly, a bit of a loon. But Hamilton would have a chance to exploit, and reshape, that perception. If Robert insisted on defending his goal of creating machine intelligence, not as an amusing hobby that might have been chosen by a public relations firm to make him appear endearingly daft, but as both the ultimate vindication of materialist science and the logical endpoint of most of his life’s work, Hamilton could use a victory tonight to cast doubt on everything Robert had done, and everything he symbolized. By asking, not at all rhetorically, “Where will this all end?”, he was inviting Robert to step forward and hang himself with the answer.
The traffic was heavy for a Sunday evening, and they arrived at the Shepherd’s Bush studios with only fifteen minutes until the broadcast. Hamilton had been collected by a separate car, from his family home near Oxford. As they crossed the studio Robert spotted him, conversing intensely with a dark-haired young man.
He whispered to Helen, “Do you know who that is, with Hamilton?”
She followed his gaze, then smiled cryptically. Robert said, “What? Do you recognize him from somewhere?”
“Yes, but I’ll tell you later.”
As the make-up woman applied powder, Helen ran through her long list of rules again. “Don’t stare into the camera, or you’ll look like you’re peddling soap powder. But don’t avert your eyes. You don’t want to look shifty.”
The make-up woman whispered to Robert, “Everyone’s an expert.”
“Annoying, isn’t it?” he confided.
Michael Polanyi, an academic philosopher who was well-known to the public after presenting a series of radio talks, had agreed to moderate the debate. Polanyi popped into the make-up room, accompanied by the producer; they chatted with Robert for a couple of minutes, setting him at ease and reminding him of the procedure they’d be following.
They’d only just left him when the floor manager appeared. “We need you in the studio now, please, Professor.” Robert followed her, and Helen pursued him part of the way. “Breathe slowly and deeply,” she urged him.
“As if you’d know,” he snapped.
Robert shook hands with Hamilton, then took his seat on one side of the podium. Hamilton’s young adviser had retreated into the shadows; Robert glanced back to see Helen watching from a similar position. It was like a duel: they both had seconds. The floor manager pointed out the studio monitor, and as Robert watched it was switched between the feeds from two cameras: a wide shot of the whole set, and a closer view of the podium, including the small blackboard on a stand beside it. He’d once asked Helen whether television had progressed to far greater levels of sophistication in her branch of the future, once the pioneering days were left behind, but the question had left her uncharacteristically tongue-tied.
The floor manager retreated behind the cameras, called for silence, then counted down from ten, mouthing the final numbers.
The broadcast began with an introduction from Polanyi: concise, witty, and non-partisan. Then Hamilton stepped up to the podium. Robert watched him directly while the wide-angle view was being transmitted, so as not to appear rude or distracted. He only turned to the monitor when he was no longer visible himself.
“Can a machine think?” Hamilton began. “My intuition tells me: *no*. My heart tells me: *no*. I’m sure that most of you feel the same way. But that’s not enough, is it? In this day and age, we aren’t allowed to rely on our hearts for anything. We need something scientific. We need some kind of proof.
“Some years ago, I took part in a debate at Oxford University. The issue then was not whether machines might behave like people, but whether people themselves might *be* mere machines. Materialists, you see, claim that we are all just a collection of purposeless atoms, colliding at random. Everything we do, everything we feel, everything we say, comes down to some sequence of events that might as well be the spinning of cogs, or the opening and closing of electrical relays.
“To me, this was self-evidently false. What point could there be, I argued, in even conversing with a materialist? By his own admission, the words that came out of his mouth would be the result of nothing but a mindless, mechanical process! By his own theory, he could have no reason to think that those words would be the truth! Only believers in a transcendent human soul could claim any interest in the truth.”
Hamilton nodded slowly, a penitent’s gesture. “I was wrong, and I was put in my place. This might be self-evident to *me*, and it might be self-evident to *you*, but it’s certainly not what philosophers call an ‘analytical truth’: it’s not actually a nonsense, a contradiction in terms, to believe that we are mere machines. There might, there just *might*, be some reason why the words that emerge from a materialist’s mouth are truthful, despite their origins lying entirely in unthinking matter.
“There might.” Hamilton smiled wistfully. “I had to concede that possibility, because I only had my instinct, my gut feeling, to tell me otherwise.
“But the reason I only had my instinct to guide me was because I’d failed to learn of an event that had taken place many years before. A discovery made in 1930, by an Austrian mathematician named Kurt Gödel.”
Robert felt a shiver of excitement run down his spine. He’d been afraid that the whole contest would degenerate into theology, with Hamilton invoking Aquinas all night – or Aristotle, at best. But it looked as if his mysterious adviser had dragged him into the twentieth century, and they were going to have a chance to debate the real issues after all.
“What is it that we *know* Professor Stoney’s computers can do, and do well?” Hamilton continued. “Arithmetic! In a fraction of a second, they can add up a million numbers. Once we’ve told them, very precisely, what calculations to perform, they’ll complete them in the blink of an eye – even if those calculations would take you or me a lifetime.
“But do these machines *understand* what it is they’re doing? Professor Stoney says, ‘Not yet. Not right now. Give them time. Rome wasn’t built in a day.’” Hamilton nodded thoughtfully. “Perhaps that’s fair. His computers are only a few years old. They’re just babies. Why should they understand anything, so soon?
“But let’s stop and think about this a bit more carefully. A computer, as it stands today, is simply a machine that does arithmetic, and Professor Stoney isn’t proposing that they’re going to sprout new kinds of brains all on their own. Nor is he proposing *giving* them anything really new. He can already let them look at the world with television cameras, turning the pictures into a stream of numbers describing the brightness of different points on the screen … on which the computer can then perform *arithmetic*. He can already let them speak to us with a special kind of loudspeaker, to which the computer feeds a stream of numbers to describe how loud the sound should be … a stream of numbers produced by more *arithmetic*.
“So the world can come into the computer, as numbers, and words can emerge, as numbers too. All Professor Stoney hopes to add to his computers is a ‘cleverer’ way to do the arithmetic that takes the first set of numbers and churns out the second. It’s that ‘clever arithmetic’, he tells us, that will make these machines think.”
Hamilton folded his arms and paused for a moment. “What are we to make of this? Can *doing arithmetic*, and nothing more, be enough to let a machine *understand* anything? My instinct certainly tells me no, but who am I that you should trust my instinct?
“So, let’s narrow down the question of understanding, and to be scrupulously fair, let’s put it in the most favorable light possible for Professor Stoney. If there’s one thing a computer *ought* to be able to understand – as well as us, if not better – it’s arithmetic itself. If a computer could think at all, it would surely be able to grasp the nature of its own best talent.
“The question, then, comes down to this: can you *describe* all of arithmetic, *using* nothing but arithmetic? Thirty years ago – long before Professor Stoney and his computers came along – Professor Gödel asked himself exactly that question.
“Now, you might be wondering how anyone could even *begin* to describe the rules of arithmetic, using nothing but arithmetic itself.” Hamilton turned to the blackboard, picked up the chalk, and wrote two lines:
If x+z = y+z
then x = y
“This is an important rule, but it’s written in symbols, not numbers, because it has to be true for *every* number, every x, y and z. But Professor Gödel had a clever idea: why not use a code, like spies use, where every symbol is assigned a number?” Hamilton wrote:
The code for “a” is 1.
The code for “b” is 2.
“And so on. You can have a code for every letter of the alphabet, and for all the other symbols needed for arithmetic: plus signs, equals signs, that kind of thing. Telegrams are sent this way every day, with a code called the Baudot code, so there’s really nothing strange or sinister about it.
“All the rules of arithmetic that we learned at school can be written with a carefully chosen set of symbols, which can then be translated into numbers. Every question as to what does or does not *follow from* those rules can then be seen anew, as a question about numbers. If *this* line follows from *this* one,” Hamilton indicated the two lines of the cancellation rule, “we can see it in the relationship between their code numbers. We can judge each inference, and declare it valid or not, purely by doing arithmetic.
“So, given *any* proposition at all about arithmetic – such as the claim that ‘there are infinitely many prime numbers’ – we can restate the notion that we have a proof for that claim in terms of code numbers. If the code number for our claim is x, we can say ‘There is a number p, ending with the code number x, that passes our test for being the code number of a valid proof.’”
Hamilton took a visible breath.
“In 1930, Professor Gödel used this scheme to do something rather ingenious.” He wrote on the blackboard:
There DOES NOT EXIST a number p meeting the following condition: p is the code number of a valid proof of this claim.
“Here is a claim about arithmetic, about numbers. It has to be either true or false. So let’s start by supposing that it happens to be true. Then there *is no* number p that is the code number for a proof of this claim. So this is a true statement about arithmetic, but it can’t be proved merely by *doing* arithmetic!”
Hamilton smiled. “If you don’t catch on immediately, don’t worry; when I first heard this argument from a young friend of mine, it took a while for the meaning to sink in. But remember: the only hope a computer has for understanding *anything* is by doing arithmetic, and we’ve just found a statement that *cannot* be proved with mere arithmetic.
“Is this statement really true, though? We mustn’t jump to conclusions, we mustn’t damn the machines too hastily. Suppose this claim is false! Since it claims there is no number p that is the code number of its own proof, to be false there would have to be such a number, after all. And that number would encode the ‘proof’ of an acknowledged falsehood!”
Hamilton spread his arms triumphantly. “You and I, like every schoolboy, know that you can’t prove a falsehood from sound premises – and if the premises of arithmetic aren’t sound, what is? So *we* know, as a matter of certainty, that this statement is true.