
Author: John Markoff


BOOK: Machines of Loving Grace

This may be science fiction, but in the real world, this territory had become familiar to Liesl Capper almost a decade earlier.
Capper, then the CEO of the Australian chatbot company My Cybertwin, was reviewing logs from a service she had created called My Perfect Girlfriend with growing horror.
My Perfect Girlfriend was intended to be a familiar chatbot conversationalist that would show off the natural language technologies offered by Capper’s company.
However, the experiment ran amok.
As she read the transcripts from the website, Capper discovered that she had, in effect, become an operator of a digital brothel.

Chatbot technology, of course, dates back to Weizenbaum’s early experiments with his Eliza program.
The rapid growth of computing technology threw into relief the question of the relationship between humans and machines.
In Alone Together: Why We Expect More from Technology and Less from Each Other, MIT social scientist Sherry Turkle expresses discomfort with technologies that increase human interactions with machines at the expense of human-to-human contact.
“I believe that sociable technology will always disappoint because it promises what it can’t deliver,” Turkle writes.
“It promises friendship but can only deliver ‘performances.’
Do we really want to be in the business of manufacturing friends that will never be friends?” [20]
Social scientists have long described this phenomenon as the false sense of community—“pseudo-gemeinschaft”—and it is not limited to human-machine interactions.
For example, a banking customer might value a relationship with a bank teller, even though it exists only in the context of a commercial transaction and such a relationship might be only a courteous, shallow acquaintanceship.
Turkle also felt that the relationships she saw emerging between humans and robots in MIT research laboratories were not genuine.
The machines were designed to express synthetic emotions only to provoke or elucidate specific human emotional responses.

Capper would eventually see these kinds of emotional—if not overtly sexual—exchanges in the interactions customers were having with her Perfect Girlfriend chatbots.
A young businesswoman who had grown up in Zimbabwe, she had previously obtained a psychology degree and launched a business franchising early childhood development centers.
Capper moved to Australia just in time to face the collapse of the dot-com bubble.
In Australia, she first tried her hand at search engines and developed Mooter, which personalized search results.
Mooter, however, couldn’t hold its own against Google’s global dominance.
Although her company would later go public in Australia, she left in 2005, along with her business partner, John Zakos, a bright Australian AI researcher enamored since his teenage years with the idea of building chatbots.
Together they built My Cybertwin into a business selling FAQbot technology to companies like banks and insurance companies.
These bots would give website users relevant answers to their frequently asked questions about products and services.
It proved to be a great way for companies to inexpensively offer personalized information to their customers, saving money by avoiding customer call center staffing and telephony costs.
At the time, however, the technology was not yet mature.
Though the company had some initial business success, My Cybertwin also had competitors, so Capper looked for ways to expand into new markets.
They tried to turn My Cybertwin into a program that created a software avatar that would interact with other people over the Internet, even while its owner was offline.
It was a powerful science-fiction-laced idea that yielded only moderately positive results.

Capper has remained equivocal about whether virtual assistants will take away human jobs.
In interviews, she would note that virtual assistants don’t directly displace workers and would focus instead on mundane work her Cybertwins do for many companies, which she argued freed up humans to do more complex and ultimately more satisfying work.
At the same time, Zakos attended conferences, asserting that when companies ran A/B tests comparing the Cybertwins’ responses to text-based questions with the responses of humans in call centers, the Cybertwins outperformed the humans in customer satisfaction.
They boasted that when they deployed a commercial system on the website of National Australia Bank, the country’s largest bank, more than 90 percent of visitors to the site believed that they were interacting with a human rather than a software program.
In order to be convincing, conversational software on a bank website might need to answer about 150,000 different questions—a capability that is now easily within the range of computing and storage systems.

Despite Capper and Zakos’s unwillingness to confront the question of human job displacement, the consequences of their work are likely to be dramatic.
Much of the growth of the U.S. white-collar workforce after World War II was driven by the rapid spread of communications networks: telemarketing, telephone-operator, and technical and sales support jobs all depended on infrastructure that gave companies a way to connect customers with employees.
Computerization transformed these occupations:
call centers moved overseas and the first generation of automated switchboards replaced a good number of switchboard and telephone operators.
Software companies like Nuance, the SRI spin-off that offers speaker-independent voice recognition, have begun to radically transform customer call centers and airline reservation systems.
Despite consumers’ rejection of “voicemail hell,” systems like My Cybertwin and Nuance’s will soon put at risk jobs that involve interacting with customers over the telephone.
The My Cybertwin conversational technology might not be good enough to pass a full-on Turing test, but it was a step ahead of most of the chatbots that were available via the Internet at the time.

Capper believes deeply that we will soon live in a world in which virtual robots are routine human companions.
She holds none of the philosophical reservations that plagued researchers like Weizenbaum and Turkle.
She also had no problem conceptualizing the relationship between a human and a Cybertwin as a master-slave relationship. [21]
In 2007 she began to experiment with programs called My Perfect Boyfriend and My Perfect Girlfriend.
Not surprisingly, there was substantially more traffic on the Girlfriend site, so she set up a paywall for premium parts of the service.
Sure enough, 4 percent of the people—presumably mostly men—who had previously visited the site were willing to pay for the privilege of creating an online relationship.
These people were told that there was nothing remotely human on the other end of the connection and that they were interacting with an algorithm that could only mimic a human partner.
Indeed, they were willing to pay for this service even though, at the time, there was already no shortage of “sex chat” websites with actual humans on the other end of the conversation.

Maybe that was the explanation.
Early in the personal computer era, there was a successful text-adventure game publisher called Infocom whose marketing slogan was: “The best graphics are in your head.”
Perhaps the freedom of interacting with a robot relaxed the mind precisely because there was no messy human at the other end of the line.
Maybe it wasn’t about a human relationship at all, but more about having control and being the master.
Or, perhaps, the slave.

Whatever the psychology underpinning the interactions, it freaked Capper out.
She was seeing more of the human psyche than she had bargained for.
And so, despite the fact that she had stumbled onto a nascent business, she backed away and shut down My Perfect Girlfriend in 2014.
There must be a better way of building a business, she decided.
It would turn out that Capper’s business sense was well timed.
Apple’s embrace of Siri had transformed the market for virtual agents.
The computing world no longer understood conversational systems as quirky novelties, but rather as a legitimate mainstream form of computer interaction.
Before My Perfect Girlfriend, Capper had realized that her business must expand to the United States if it was to succeed.
She raised enough money, changed the company’s name from My Cybertwin to Cognea, and set up shop in both Silicon Valley and New York.
In the spring of 2014, she sold her company to IBM.
The giant computer firm followed its 1997 victory in chess over Garry Kasparov with a comparable publicity stunt in which one of its machines competed against two of the best human players of the TV quiz show Jeopardy! In 2011, the IBM Watson system triumphed over Brad Rutter and Ken Jennings.
Many thought the win was evidence that AI technologies had exceeded human capabilities.
The reality, however, was more nuanced.
The human contestants could occasionally anticipate the brief window of time in which they could press the button and buzz in before Watson.
In practice, Watson had an overwhelming mechanical advantage that had little to do with artificial intelligence.
When it had a certain statistical confidence that it had the correct answer, Watson was able to press the button with unerring precision, timing its button press with much greater accuracy than its human competitors, literally giving the machine a winning hand.

The irony of Watson’s ascendance is that IBM has historically portrayed itself as an augmentation company rather than one that sought to replace humans. Going all the way back to the 1950s, when it terminated its first formal foray into AI research, IBM has been unwilling to advertise that the computers it sells often displace human workers. [22]
In the wake of its Watson victory, the company portrayed its achievement as a step toward augmenting human workers and stated that it planned to integrate Watson’s technology into the health-care field as an intellectual aid to doctors and nurses.

However, Watson was slow to take off as a physicians’ advisor, and the company has broadened its goal for the system.
Today the Watson business group is developing applications that will inevitably displace human workers.
Watson had originally been designed as a “question-answering” system, making progress toward the fundamental goals in artificial intelligence.
With Cognea, Watson gained the ability to carry on a conversation.
How will Watson be used?
The choice faced by IBM and its engineers is remarkable.
Watson can serve as an intelligent assistant to any number of professionals, or it can replace them.
At the dawn of the field of artificial intelligence IBM backed away from the field.
What will the company do in the future?

Ken Jennings, the human Jeopardy! champion, saw the writing on the wall: “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.” [23]

7   |   TO THE RESCUE

The robot laboratory was ghostly quiet on a weekend afternoon in the fall of 2013.
The design studio itself could pass for any small New England machine shop, crammed with metalworking and industrial machines.
Marc Raibert, a bearded roboticist and one of the world’s leading designers of walking robots, stood in front of a smallish interior room, affectionately called the “meat locker,” and paused for effect.
The room was a jumble of equipment, but at the far end seven imposing humanoid robots were suspended from the ceiling, as if on meat hooks.
Headless and motionless, the robots were undeniably spooky.
Without skin, they were cybernetic skeleton-men assembled from an admixture of steel, titanium, and aluminum.
Each was illuminated by an eerie blue LED glow that revealed a computer embedded in the chest that monitored its motor control.
Each of the presently removed “heads” housed another computer that monitored the body’s sensor control and data acquisition.
When they were fully equipped, the robots stood six feet high and weighed 330 pounds.
When moving, they were not as lithe in real life as they were in videos, but they had an undeniable presence.

It was the week before DARPA would announce that it had contracted Boston Dynamics, the company that Raibert had founded two decades earlier, to build “Atlas” robots as the common platform for a new category of Grand Challenge competitions.
This Challenge aimed to create a generation of mobile robots capable of operating in environments that were too risky or unsafe for humans.
The company, which would be acquired by Google later that year, had already developed a global reputation for walking and running robots that were built mostly for the Pentagon.

Despite taking research dollars from the military, Raibert did not believe that his firm was doing anything like “weapons work.”
For much of his career, he had maintained an intense focus on one of the hardest problems in the world of artificial intelligence and robotics: building machines that moved with the ease of animals through an unstructured landscape.
While artificial intelligence researchers have tried for decades to simulate human intelligence, Raibert is a master at replicating the agility and grace of human movement.
He had long believed that creating dexterous machines was more difficult than many other artificial intelligence challenges.
“It is as difficult to reproduce the agility of a squirrel jumping from branch to branch or a bird taking off and landing,” Raibert argued, “as it is to program intelligence.”

The Boston Dynamics research robots, with names like LittleDog, BigDog, and Cheetah, had sparked lively and occasionally hysterical Internet discussion about the Terminator-like quality of modern robots.
In 2003 the company had received its first DARPA research contract for a biologically inspired quadruped robot.
Five years later, a remarkable video on YouTube showed BigDog walking over uneven terrain, skittering on ice, and withstanding a determined kick from a human without falling.
With the engine giving off a banshee-like wail, it did not take much to imagine being chased through the woods by such a contraption.
More than sixteen million people viewed the video, and the reactions were visceral.
For many, BigDog exemplified generations of sinister sci-fi and Hollywood robots.

Raibert, who usually wears jeans and Hawaiian shirts, was unfazed by, and even enjoyed, his Dr. Evil image.
As a rule, he would shy away from engaging directly with the media, and communicated instead through a frequent stream of ever more impressive “killer” videos.
Yet he monitored the comments and felt that many of them ignored the bigger picture: mobile robots were on the cusp of becoming a routine part of the way humans interact with the world.
When speaking on the record, he simply said that he believed his critics were missing the point.
“Obviously, people do find it creepy,” he told a British technical journal.
“About a third of the 10,000 or so responses we have to the BigDog videos on YouTube are from people who are scared, who think that the robots are coming for them.
But the ingredient that affects us most strongly is a sense of pride that we’ve been able to come so close to what makes people and animals animate, to make something so lifelike.” [1]
Another category of comments, he pointed out, was from viewers who feigned shock while enjoying a sci-fi-style thrill.

The DARPA Robotics Challenge (DRC) underscored the desired spectrum of possibilities for the relationship between humans and robots even more clearly than the previous Grand Challenge for driverless cars.
It foreshadowed a world in which robots would partner with humans, dance with them, be their slaves, or potentially replace them entirely.
In the initial DRC competition in 2013, the robots were almost completely teleoperated by a human reliant on the robot’s sensor data, which was sent over a wired network connection.
Boston Dynamics built Atlas robots with rudimentary motor control capabilities like walking and arm movements and made them available to competing teams, but the higher-level functions that the robots would need to complete specific tasks were to be programmed independently by the original sixteen teams.
Later that fall, when Boston Dynamics delivered the robots to the DRC, and also when they actually competed in a preliminary competition held in Florida at the end of the year, the robots proved to be relatively slow and clumsy.

Hanging in the meat locker waiting to be deployed to the respective teams, however, they looked poised to spring into action with human nimbleness.
On a quiet afternoon it evoked a scene from the 2004 movie I, Robot, in which a police detective played by actor Will Smith walks, gun drawn, through a vast robot warehouse containing endless columns of frozen humanoid robots awaiting deployment.
In a close-up shot, the eyes of one sinister automaton focus on the moving detective before it springs into action.

Decades earlier, when Raibert began his graduate studies at MIT, he had set out to study neurophysiology.
One day he followed a professor back to the MIT AI Lab.
He walked into a room where one of the researchers had a robot arm lying in pieces on the table.
Raibert was captivated.
From then on he wanted to be a roboticist.
Several years later, as a newly minted engineer, Raibert got a job at NASA’s Jet Propulsion Laboratory in Pasadena.
When he arrived, he felt like a stranger in a strange land.
Robots, and by extension their keepers, were definitely second-class citizens compared to the agency’s stars, the astronauts.
JPL had hired the brand-new MIT Ph.D. as a junior engineer on a project that proved to be stultifyingly boring.

Out of self-preservation, Raibert started following the work of Ivan Sutherland, who by 1977 was already a legend in computing.
Sutherland’s 1962 MIT Ph.D. thesis project, “Sketchpad,” had been a major step forward in graphical and interactive computing, and he and Bob Sproull codeveloped the first virtual reality head-mounted display in 1968.
Sutherland went to Caltech in 1974 as founding chair of the university’s new computer science department, where he was instrumental, working with physicist Carver Mead and electrical engineer Lynn Conway, in developing a new model for designing and fabricating integrated circuits with hundreds of thousands of logic elements and memory—a 1980s advance that made possible the modern semiconductor industry.

Alongside his older brother Bert, Sutherland had actually come to robotics in high school, during the 1950s.
The two boys had the good fortune to be tutored by Edmund C. Berkeley, an actuary and computing pioneer who had written Giant Brains, or Machines That Think in 1949. In 1950, Berkeley had designed Simon, which, although it was constructed with relays and had a total memory of four two-bit numbers, could arguably be considered the first personal computer. [2]
The boys modified it to do division.
Under Berkeley’s guidance, the Sutherland brothers worked on building a maze-solving, mouselike robot, and for a high school science project Ivan built a magnetic drum memory capable of storing 128 two-bit numbers, work that earned him a scholarship to Carnegie Institute of Technology.

Once in college, the brothers continued to work on a “mechanical animal.”
They went through a number of iterations of a machine called a “beastie,” which was based on dry cell batteries and transistors and was loosely patterned after Berkeley’s mechanical squirrel, Squee. [3]
They spent endless hours trying to program the beastie to play tag.

Decades later, as the chair of Caltech’s computer science department in the 1970s, Sutherland, long diverted into computer graphics, had seemingly left robot design interests behind him.
When Raibert heard Sutherland lecture, he was riveted by the professor’s musings on what might soon be possible in the field.
Raibert left the auditorium feeling entirely fired up.
He set about breaking down the bureaucratic wall that protected the department chair by sending Sutherland several polite emails, and also leaving a message with his secretary.

His initial inquiries ignored, Raibert became irritated.
He devised a plan.
For the next two and a half weeks, he called Sutherland’s office every day at two P.M. Each day the secretary answered and took a message.
Finally a gruff Sutherland returned his call. “What do you want?” he shouted.
Raibert explained that he was anxious to collaborate with Sutherland and wanted to propose some possible projects.
When they finally met in 1977, Raibert had prepared three ideas and Sutherland, after listening to the concept of a one-legged walking—hopping, actually—robot, brusquely declared: “Do that one!”

Sutherland would become Raibert’s first rainmaker.
Sutherland took Raibert along on a visit to DARPA (where Sutherland had worked for two years just after Licklider) and to the National Science Foundation, and they came away with a quarter million dollars in research funding to get the project started.
The two worked together on early walking robots at Caltech, and several years later Sutherland persuaded Raibert to move with him to Carnegie Mellon, where they continued with research on walking machines.

Ultimately Raibert pioneered a remarkable menagerie of robots that hopped, walked, twirled, and even somersaulted.
The two had adjoining offices at CMU and coauthored an article on walking machines for Scientific American in January 1983.
Raibert would go on to set up the Leg Laboratory at CMU in 1981 and then move the laboratory to MIT while he held a faculty position there from 1986 to 1992.
He left MIT to found Boston Dynamics.
Another young MIT professor, Gill Pratt, would continue to work in the Leg Lab, designing walking machines and related technologies enabling robots to work safely in partnership with humans.

Raibert pioneered walking machines, but it was his CMU colleague Red Whittaker who almost single-handedly created “field robotics,” machines that moved freely in the physical world.
DARPA’s autonomous vehicle contest had its roots in Red Whittaker’s quixotic scheme to build a machine that could make its way across an entire state.
The new generation of mobile walking rescue robots had their roots in the work that he did in building some of the first rescue robots three and a half decades ago.

Whittaker’s career took off with the catastrophe at Three Mile Island Nuclear Generating Station on March 28, 1979.
He had just received his Ph.D. when there was a partial meltdown in one of the two nuclear reactors at the site.
The crisis exposed how unprepared the industry was to cope with the loss of control of a reactor’s radioactive fuel.
It would be a half decade before robots built by Whittaker and his students would enter the most severely damaged areas of the reactor and help with the cleanup.

Whittaker’s opportunity came when two giant construction firms, having spent $1 billion, failed to get into the basement of the crippled reactor to inspect it and begin the cleanup.
Whittaker sent the first CMU robot, which his team assembled in six months and dubbed “Rover,” into Three Mile Island in April of 1984.
It was a six-wheeled contraption outfitted with lights and a camera, tethered to its controller. It was lowered into the basement, where it traversed water, mud, and debris, successfully gathering the first images of the consequences of the disaster. The robot was later modified to perform inspections and conduct sampling. [4]
