
“I’ve been talking to it for a while now, and am beginning to wonder if Cleverbot is really even a real bot,” one user writes. “Cleverbot always says that itself is the human, and I am the bot. Could it be that I’m actually talking to a real person, but we are both tricked to believe that the other is Cleverbot?” Some of the responses seem just a hair too uncanny. One user writes, in French, “Je habite dans les Etats Unis” (“I live in the United States,” slightly garbled), and finds his French not only comprehended but critiqued:

“On dit J’habite AUX Etats-Unis,” the bot corrects him (“You say J’habite AUX Etats-Unis”).

Another trades off lines of Queen’s “Bohemian Rhapsody”:

USER: Scaramouche, Scaramouche.

CLEVERBOT: Will you do the fandango?

USER: Thunderbolt and lightning.

CLEVERBOT: Very very frightening me!

USER: Galileo.

CLEVERBOT: Galileo Galileo.

One user reports telling it a popular joke of the time—

USER: You know why they call it an Xbox 360?

CLEVERBOT: Why?

USER: Because when you see it, you turn 360 degrees and walk away.

—only to have his joke deconstructed by the bot:

CLEVERBOT: No because then you would be facing it and that would be unfunny.

Other users, attempting to fix the logic of the joke, meanwhile making it slightly more drily absurd—

USER: You know why they call it an Xbox 360?

CLEVERBOT: No. Why do they call it Xbox 360?

USER: Because when you see it, you turn 360 degrees and moonwalk away.

—were met with a kind of one-upmanship of an even drier, more absurd punch line:

CLEVERBOT: Nope, because when you see it you turn 3 degrees and walk slightly askew.

Surely there must be a human at the other side! The replies are too culturally attuned, too savvy. Yet at other moments, conversations change gears without a clutch, or a non sequitur comes seemingly from left field.

Cleverbot’s programmer, Rollo Carpenter, is happy to explain his creation’s programmatic workings, and insists on Cleverbot’s home page that “visitors never talk to a human, however convincing it is.” Curiously, this insistence seems to have little effect on many users, who have their own, rather different theory about what’s going on.

The Internet of the early 1990s was a much more anonymous place than it is now. On local BBSs (bulletin board systems), in the chat rooms of “walled garden” Internet providers/communities like Prodigy and AOL, and over universal chat protocols like IRC (Internet Relay Chat), strangers bumped into each other all the time. The massive social networks (e.g., Facebook) of the late ’00s and early ’10s have begun to make the Internet a different place. It’s around this time that websites like Chatroulette and Omegle, designed to bring some of that anonymity, randomness, and serendipity back, took off. You choose to use either video or text and are then paired up with another user completely at random and begin a conversation. At any time, either of you can terminate it, in which case you’re both re-paired with new strangers and begin instantly again at “Hello.” There’s an anxiety all users of such sites feel about the prospect of the other person cutting off the dialogue and bumping both of you into new conversations, which has been dubbed “getting nexted.”

Now, imagine if, instead, the computer system was *automatically* cutting off conversations and re-pairing users with each other, and that it was *not telling them* it was doing this. Users A and B are arguing about baseball, and users C and D are talking about art. All of a sudden A is re-paired with C, and B re-paired with D. After talking about the Louvre, C receives the off-topic “So are you for the Mets or the Yankees?” and B, after analyzing the most recent World Series, is asked if he’s ever seen the Sistine Chapel. Well, this is the conspiracy theory on Cleverbot (and some of its cousin bots, like Robert Medeksza’s Ultra Hal): Omegle minus control over when to switch conversations. Imagine that the computer is simply switching you over, at random and without notice, to new people, and doing the same to them. What you’d end up with might look a lot like the Cleverbot transcripts.
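The switching mechanism in this thought experiment is simple enough to simulate. A minimal, purely illustrative sketch, assuming two scripted two-turn conversations and a random swap point (all speakers and lines here are invented for the example):

```python
import random

# Two conversations in progress, as (speaker, line) turns.
baseball = [
    ("A", "That last World Series was decided in the bullpen."),
    ("B", "So are you for the Mets or the Yankees?"),
]
art = [
    ("C", "The Louvre was overwhelming; I spent a whole day there."),
    ("D", "Have you ever seen the Sistine Chapel?"),
]

def silent_swap(conv1, conv2):
    """Cut both conversations at the same random point and cross-wire
    them, telling no one: each transcript now mixes two strangers."""
    cut = random.randint(1, min(len(conv1), len(conv2)) - 1)
    return conv1[:cut] + conv2[cut:], conv2[:cut] + conv1[cut:]

spliced_ab, spliced_cd = silent_swap(baseball, art)
for speaker, line in spliced_ab:
    print(f"{speaker}: {line}")
# Baseball talk is suddenly answered with a question about the
# Sistine Chapel: a non sequitur from left field.
```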

The conspiracy theory isn’t right, but it’s not far off either.

“Cleverbot borrows the intelligence of its users,” Carpenter explains to me in Brighton. “A conversational Wikipedia,” he calls it in a television interview with the Science Channel. It works like this: Cleverbot begins a conversation by saying, for instance, “Hello.” A user might respond in any number of ways, from “Hello” to “Howdy!” to “Are you a computer?” and so on. Whatever the user says goes into an enormous database of utterances, tagged as a genuine human response to “Hello.” When, in a subsequent conversation, a user says to Cleverbot, “Hello,” Cleverbot will have “Howdy!” (or whatever the first person said) ready on hand. As the same types of things tend to come up over and over—in what statisticians call a “Zipf distribution,” to be precise—and as thousands of users are logged in to Cleverbot at any given time, chatting with it around the clock, over the span of many years now, Cleverbot’s database contains appropriate replies to even seemingly obscure remarks. (E.g., “Scaramouche, Scaramouche.”)
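In code, this borrowing mechanism might look something like the following minimal sketch. It assumes an exact-match lookup keyed on the bot’s previous line (the real system’s matching, against many millions of logged exchanges, is fuzzier and proprietary; the class name and fallback reply are invented for illustration):

```python
import random
from collections import defaultdict

class BorrowedIntelligenceBot:
    """Toy retrieval chatbot: every reply is a recycled human response."""

    def __init__(self):
        # Maps an utterance to every human reply ever recorded for it.
        self.replies = defaultdict(list)
        self.last_bot_line = "Hello."

    def respond(self, user_line: str) -> str:
        # Whatever the user says is logged as a genuine human
        # response to whatever the bot just said.
        self.replies[self.last_bot_line].append(user_line)
        # Answer with a previously recorded human reply to the user's
        # line, or a stock fallback if it has never been seen before.
        candidates = self.replies[user_line]
        self.last_bot_line = random.choice(candidates) if candidates else "Why?"
        return self.last_bot_line
```

Once enough users have answered “Scaramouche, Scaramouche” with the next lyric, the bot has “Will you do the fandango?” on hand, without needing any notion of Queen in between.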

What you get, the cobbling together of hundreds of thousands of prior conversations, is a kind of conversational purée. Made of human parts, but less than a human sum. Users *are*, in effect, chatting with a kind of purée of real people—the *ghosts* of real people, at any rate: the echoes of conversations past.

This is part of why Cleverbot seems so impressive on basic factual questions (“What’s the capital of France?” “Paris is the capital of France”) and pop culture (trivia, jokes, and song lyric sing-alongs)—the things to which there is a *right* answer independent of the speaker. No number of cooks can spoil the broth. But ask it about the city it lives in, and you get a pastiche of thousands of people talking about thousands of places. You find it out not so much by realizing that you aren’t talking with a *human* as by realizing that you aren’t talking with *a* human.

One Self, Any Self

My mind goes back to the organizers’ advice to “just be myself,” to how much philosophers have agonized over this idea. While other existentialists—for instance, Jean-Paul Sartre—emphasized authenticity and originality and freedom from outside influence, nineteenth-century German philosopher Friedrich Nietzsche held the startling opinion that the most important part of “being oneself” was—in Brown University philosopher Bernard Reginster’s words—“being *one* self, *any* self.”

Nietzsche spoke of this as “giving style to one’s character,” comparing people to works of art, which we often judge according to their “concinnity,” the way their parts fit together to make a whole: “In the end, when the work is finished, it becomes evident how the constraint of a single taste governed and formed everything large and small.”

Computer culture critics like Jaron Lanier are skeptical, for instance, of decentralized projects like Wikipedia, arguing:

*The Sims*, … the iPhone, the Pixar movies, and all the other beloved successes of digital culture … are personal expressions. True, they often involve large groups of collaborators, but there is always a central personal vision—a Will Wright, a Steve Jobs, or a Brad Bird conceiving the vision and directing a team of people earning salaries.

It is this same “central personal vision” that is crucial for Nietzsche, who goes so far as to say, “Whether this taste was good or bad is less important than one might suppose, if only it was a single taste!”

It is precisely the “central personal vision” of Lanier and “single taste” of Nietzsche that is lacking in most chatbots. For instance, I had the following conversation with “Joan,” the Cleverbot-offshoot program that won the Loebner Prize in 2006. Though each of her answers, taken separately, is perfectly sensible and human, their sum produces nothing but a hilarious cacophony in the way of identity:

USER: Do you have a boyfriend?

JOAN: Not at the moment, but I hope to find one soon.

I ventured:

USER: I would like to be your boyfriend.

JOAN: That would be difficult, since I’m happily married.

Curious. I asked again, for clarification:

USER: Do you have a boyfriend?

JOAN: No, I’m still looking for love.

USER: Do you have a husband?

JOAN: I don’t have a husband, because it would be illegal for me to have one, since I am male.

Male?

USER: What gender are you?

JOAN: Female.

And once again, just for good measure:

USER: Do you have a boyfriend?

JOAN: Not at the moment, but I live in hope.

USER: Can I be your boyfriend?

JOAN: No, I’m straight.

This kind of unity or coherence of identity is something that most humans, of course—being the products of a single and continuous life history—have. But given the extreme brevity of a five-minute conversation, displaying that kind of congruence was something I tried to be aware of. For instance, when a judge said hello to my fellow confederate Dave, Dave replied with the nicely colorful and cheerful “G’day mate.”

The drawback of this choice becomes immediately clear, however, as the judge’s next question was “Have you come far to be here?” The judge, I imagine, was expecting some reference to Australia, the land that “G’day mate” evokes; instead, Dave answered, “From the southwest US.” To the judge’s mild surprise, I imagine, he discovers that Dave is not Australian at all, as his salutation would suggest, but rather an American from Westchester, New York, living in Albuquerque. It’s not game over—it doesn’t take Dave too long to win over the judge’s confidence (and his vote)—but those signs of disjointed identity are early warning flags and, in that sense, stumbles.

In similar fashion, when a judge I was talking to spelled “color” in the British style (“colour”), and then several messages later referenced “Ny,” which I took to mean “New York” (actually it turned out to be a typo for “My”), I asked where he was from. “Canadian spelling, not Biritish [sic],” he explained; my hope was that showing attunement, over multiple utterances, to these questions of cohesiveness of identity would help my case. Presumably, a bot that can’t keep track of the coherence of its *own* identity wouldn’t be able to keep track of the judge’s either.

“When making a bot, you don’t write a program, you write a novel,” explain programmers Eugene Demchenko and Vladimir Veselov, whose program “Eugene Goostman” was the runner-up at the 2008 competition, as well as in 2005 and 2001. They stress the importance of having a single programmer write the machine’s responses: “Elect who will be responsible for the bot personality. The knowledge-base writing process can be compared to writing a book. Suppose every developer describes an episode without having any information on the others. Can you imagine what will be produced!”

In fact, it’s quite easy to imagine what will be produced: “Eugene Goostman”’s competitors. This is a central trade-off in the world of bot programming, between coherence of the program’s personality or style and the range of its responses. By “crowdsourcing” the task of writing a program’s responses to the users themselves, the program acquires an explosive growth in its behaviors, but these behaviors stop being internally consistent.

Death of the Author; End of the Best Friend

Do you need *someone*?
Or do you need *me*?

–SAY ANYTHING …

Speaking of “writing a book”: this notion of style versus content, and of singularity and uniqueness of vision, is at the heart of recent debates about machine translation, especially of literature.

Wolfram Alpha researcher and chatbot author Robert Lockhart describes the chatbot community as being split between two competing approaches, what he calls “pure semantics” and “pure empiricism.” Roughly speaking, the semantic camp tries to program linguistic *understanding*, with the hope that the desired behavior will follow, and the empirical camp tries to directly program linguistic *behavior*, with the hope that “understanding” will either happen along the way or prove to be an unnecessary middleman. This divide also plays out in the history of computer translation. For many decades, machine translation projects attempted to understand language in a rule-based way, breaking down a sentence’s structure and getting down to the underlying, universal meaning, before re-encoding that meaning according to another language’s rules. In the 1990s, a statistical approach to machine translation—the approach that Google uses—came into its own, which left the question of meaning entirely out of it.
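The empirical approach can be caricatured in a few lines of code. A toy sketch, assuming a tiny hand-built table of phrase pairs standing in for the enormous aligned corpora real systems are trained on (the phrases, counts, and fallback message below are invented for illustration):

```python
from collections import Counter, defaultdict

# Stand-in "aligned corpus": (source phrase, certified human translation).
# A real statistical system induces such pairs from millions of
# professionally translated sentences; these are hand-picked toys.
aligned_pairs = [
    ("j'habite aux etats-unis", "i live in the united states"),
    ("j'habite aux etats-unis", "i live in the united states"),
    ("j'habite aux etats-unis", "i live in the us"),
    ("quelle est la capitale de la france", "what is the capital of france"),
]

# Phrase table: for each source phrase, count the human translations seen.
phrase_table = defaultdict(Counter)
for source, target in aligned_pairs:
    phrase_table[source][target] += 1

def translate(source: str) -> str:
    """Regurgitate the most frequent human translation on record.
    No grammar rules, no representation of meaning: pure lookup."""
    seen = phrase_table.get(source.lower())
    if not seen:
        return "[no prior human translation on record]"
    return seen.most_common(1)[0][0]

print(translate("J'habite AUX Etats-Unis"))  # -> i live in the united states
```

Everything here hinges on whether the exact phrase has been seen before; nothing in the table represents what the words mean.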

Cleverbot, for instance, can know that “Scaramouche, Scaramouche” is best answered by “Will you do the fandango?” without needing any links to Queen or “Bohemian Rhapsody” in between, let alone needing to know that Scaramouche is a stock character in seventeenth-century Italian farce theater and that the fandango is an Andalusian folk dance. It’s simply observed people saying one, then the other. Using huge bodies of text (“corpora”) from certified United Nations translators, Google Translate and its statistical cousins regurgitate previous human translations the way Cleverbot and its cousins regurgitate previous human speech. Both Google Translate and Cleverbot show weaknesses for (1) unusual and/or nonliteral phrasing, and (2) long-term consistency in point of view and style. On both of those counts, even as machine translation increasingly penetrates the world of business, literary novels remain mostly untranslatable by machine.

