BOOK: Machines of Loving Grace
Author: John Markoff

Given that DeepMind had been acquired by Google, Legg’s public philosophizing is particularly significant.
Today, Google is the clearest example of the potential consequences of AI and IA.
Founded on an algorithm that efficiently collected human knowledge and then returned it to humans as a powerful tool for finding information, Google is now engaged in building a robot empire.
The company will potentially create machines that replace human workers, such as drivers, delivery personnel, and electronics assembly workers.
Whether it will remain an “augmentation” company or become a predominantly AI-oriented organization is unclear.

The new concerns about the potential threat from AI and robotics evoke the issues that confronted the fictional Tyrell Corporation in the science-fiction movie Blade Runner, which dramatized the ethical questions posed by the design of intelligent machines.
Early in the movie Deckard, a police detective, confronts Rachael, an employee of a firm that makes robots, or replicants, and asks her if an artificial owl is expensive.
She suggests that he doesn’t believe the company’s work is of value.
“Replicants are like any other machine,” he responds.
“They’re either a benefit or a hazard.
If they’re a benefit, it’s not my problem.”12

How long will it be before Google’s intelligent machines, based on technologies from DeepMind and Google’s robotics
division, raise the same questions?
Few movies have had the cultural impact of Blade Runner.
It has been released seven different times, once with a director’s cut, and a sequel is on the docket.
It tells the story of a retired Los Angeles police detective in 2019 who is recalled to hunt down and kill a group of genetically engineered artificial beings known as replicants.
These replicants were originally created to work off-planet and have returned to Earth illegally in an effort to force their designer to extend their artificially limited life spans.
A modern-day Wizard of Oz, it captured a technologically literate generation’s hopes and fears.
From the Tin Man, who gains a heart and thus a measure of humanity, to the replicants who are so superior to humanity that Deckard is ordered to terminate them, humanity’s relations to robots have become the defining question of the era.

These “intelligent” machines may never be intelligent in a human sense or self-aware.
That’s beside the point.
Machine intelligence is improving quickly and approaching a level where it will increasingly offer the compelling appearance of intelligence.
When it opened in December 2013, the movie Her struck a chord, most likely because millions of people already interact with personal assistants such as Apple’s Siri. Her-like interactions have become commonplace.
Increasingly, as computing moves beyond desktops and laptops and becomes embedded in everyday objects, we will expect those objects to communicate intelligently.
In the years while he was designing Siri and the project was still hidden from the public eye, Tom Gruber referred to this trend as “intelligence at the interface.”
He felt he had found a way to blend the competing worlds of AI and IA.

And indeed, the emergence of software-based intelligent assistants hints at a convergence between the work in disparate communities of AI and human-computer interaction designers.
Alan Kay, who conceived of the first modern personal computer, has said that in his early explorations of computer interfaces, he was working roughly ten to fifteen years in
the future, while Nicholas Negroponte, one of the first people to explore the ideas of immersive media, virtual reality, and conversational interfaces, was working twenty-five to thirty years in the future.
Like Negroponte, Kay asserts that the best computerized interfaces are the ones that are closer to theater, and the best theater draws the audience into its world so completely that they feel as if they are part of it.
That design focus on interactive performance points directly toward interactive systems that will function more as AI-based “colleagues” than computerized tools.

How will these computer avatars transform society?
Humans already spend a significant fraction of their waking hours either interacting with other humans through computers or interacting directly with humanlike machines, whether in fantasy and video games or in a plethora of computerized assistance systems that range from so-called FAQbots to Siri.
We even use search engines in our everyday conversations with others.

Will these AI avatars be our slaves, our assistants, our colleagues, or some mixture of all three?
Or more ominously, will they become our masters?
Considering robots and artificial intelligences in terms of social relationships may initially seem implausible.
However, given that we tend to anthropomorphize our machines, we will undoubtedly develop social relationships with them as they become increasingly autonomous.
Indeed, reflecting on human-robot relations is not much different from considering traditional human relations with slaves, who have been dehumanized by their masters throughout history.
Hegel explored the relationship between master and slave in The Phenomenology of Spirit, and his ideas about the “master-slave dialectic” have influenced thinkers ranging from Karl Marx to Martin Buber.
At the heart of Hegel’s dialectic is the insight that both the master and the slave are dehumanized by their relationship.

Kay has effectively translated Hegel for the modern age.
Today, a wide variety of companies are developing conversational computers like Siri.
Kay argues that as a consequence, designers should aim to create programs that function as colleagues rather than servants.
If we fail, history hints at a disturbing consequence.
Kay worried that building intelligent “assistants” might only recapitulate the problem the Romans faced by letting their Greek slaves do their thinking for them.
Before long, those in power were unable to think independently.

Perhaps we have already begun to slip down a similar path.
For example, there is growing evidence that reliance on GPS for directions and for correction of navigational errors hinders our ability to remember and reason spatially, skills that are more generally useful for survival.13
“When people ask me, ‘Are computers going to take over the world?’” Kay said, “for most people they already have, because they have ceded authority to them in so many different ways.”

That hints at a second great problem: the risk of ceding individual control over everyday decisions to a cluster of ever more sophisticated algorithms.
Not long ago, Randy Komisar, a veteran Silicon Valley venture capitalist, sat in a meeting listening to someone describe a Google service called Google Now, the company’s Siri competitor.
“What I realized was that people are dying to have an intelligence tell them what they should be doing,” he said.
“What food they should be eating, what people they should be meeting, what parties they should be going to.”
For today’s younger generation, the world has been turned upside down, he concluded.
Rather than using computers to free them up to think big thoughts, develop close relationships, and exercise their individuality and creativity and freedom, young people were suddenly so starved for direction that they were willing to give up that responsibility to an artificial intelligence in the cloud.
What started out as Internet technologies that made it possible for individuals to share preferences efficiently has rapidly transformed into a growing array of algorithms that increasingly dictate those preferences for them.
Now the Internet seamlessly serves up life directions.
They might be little things like finding the best place nearby for Korean barbecue based on the Internet’s increasingly complete understanding of your individual wants and needs, or big things like an Internet service arranging your marriage—not just the food, gifts, and flowers, but your partner, too.

The tension inherent in AI and IA perspectives was a puzzle to me when I first realized that Engelbart and McCarthy had set out to invent computer technologies with radically different goals in mind.
Obviously they represent both a dichotomy and a paradox.
For if you augment a human with computing technology, you inevitably displace humans as well.
At the same time, choosing one side or another in the debate is an ethical choice, even if the choice isn’t black or white.
Terry Winograd and Jonathan Grudin have separately described the rival communities of scientists and engineers that emerged from that early work.
Both men have explored the challenge of fusing the two contradictory approaches.
In particular, in 2009 Winograd set out to build a Program on Liberation Technology at Stanford to find ways that computing technologies could improve governance, enfranchise the poor, support human rights, and implement economic development, along with a host of other aims.

Of course, there are limits to this technology.
Winograd makes the case that whether computing technologies are deployed to extend human capabilities or to replace them is more a consequence of the particular economic system in which they are created and used than anything inherent in the technologies themselves.
In a capitalist economy, if artificial intelligence technologies improve to the point that they can replace new kinds of white-collar and professional workers, they will inevitably be used in that way.
That lesson carries
forward in the differing approaches of the software engineers, AI researchers, roboticists, and hackers who are the designers of these future systems.
It should be obvious that Bill Joy’s warning that “the future doesn’t need us” is just one possible outcome.
It is equally apparent that the world transformed by these technologies doesn’t have to play out catastrophically.

A little over a century ago, Thorstein Veblen wrote an influential critique of the turn-of-the-century industrial world, The Engineers and the Price System.
He argued that, because of the power and influence of industrial technology, political power would flow to engineers, who could parlay their deep knowledge of technology into control of the emerging industrial economy.
It certainly didn’t work out that way.
Veblen was speaking to the Progressive Era, looking for a middle ground between Marxism and capitalism.
Perhaps his timing was off, but his basic point, as echoed a half century later at the dawn of the computer era by Norbert Wiener, may yet prove correct.
Today, the engineers who are designing the artificial intelligence–based programs and robots will have tremendous influence over how we will use them.
As computer systems are woven more deeply into the fabric of everyday life, the tension between augmentation and artificial intelligence has become increasingly salient.

What began as a paradox for me has a simple answer.
The solution to the contradiction inherent in AI versus IA lies in the very human decisions of engineers and scientists like Bill Duvall, Tom Gruber, Adam Cheyer, Terry Winograd, and Gary Bradski, who all have intentionally chosen human-centered design.

At the dawn of the computing age, Wiener had a clear sense of the significance of the relationship between humans and their creations—smart machines.
He recognized the benefits of automation in eliminating human drudgery, but he also worried that the same technology might subjugate humanity.
The
intervening decades have only sharpened the dichotomy he first identified.

This is about us, about humans and the kind of world we will create.

It’s not about the machines.

ACKNOWLEDGMENTS

After reporting on Silicon Valley since 1976, in 2010 I left that beat at the New York Times and moved to the paper’s science section.
The events and ideas in this book have their roots in two series that I participated in at the paper while I reported on robotics and artificial intelligence.
“Smarter Than You Think” appeared during 2010 and “The iEconomy” in 2012.

Glenn Kramon, who has worked with me as an editor since we were both at the San Francisco Examiner in the mid-eighties, coined the “Smarter Than You Think” rubric.
I am a reporter who values good editors, and Glenn is one of the best.

The case I made to the paper’s editors in 2010 and the one I describe here is that just as personal computing and the Internet have transformed the world during the past four decades, artificial intelligence and robotics will have an even larger impact during the next several.
Despite the fact that our machines are increasingly mimicking our physical and intellectual capabilities, they are still entirely man-made.
How they are made will determine the shape of our world.

Gregg Zachary and I have been both competitors and collaborators for decades, and he remains a close friend with an encyclopedic knowledge of the impact of technology on society.
John Kelley, Michael Schrage, and Paul Saffo are also friends who have each had innumerable conversations with me about
the shape and consequences of future computing technologies.
I have for years had similar conversations with Randy Komisar, Tony Fadell, and Steve Woodward on long bike rides.
Jerry Kaplan, who has returned to the world of artificial intelligence after a long hiatus, has real insight into the way it will change the modern world.

John Brockman, Max Brockman, and Katinka Matson are more than wonderful agents; they are good friends.
At HarperCollins my editor, Hilary Redmon, understood that if my last book borrowed its title from a song, this one should come from a poem.
Her colleague Emma Janaskie was tremendously helpful in navigating all the details that go into producing a book.

Special thanks to Iris Litt and Margaret Levi, who as directors of the Center for Advanced Study in the Behavioral Sciences at Stanford University allowed me to join the community of social scientists in the hills overlooking Silicon Valley.
Thanks also to Phil Taubman for introducing me to the Center.

When I was unable to obtain a visa to report in China in 2012, John Dulchinos pointed me to Drachten and my first factory of the future.
In my reporting travels Frank Levy and David Mindell at the Massachusetts Institute of Technology took time to discuss the effects of robotics on the workplace and the economy.
Larry Smarr, director of the California Institute for Telecommunications and Information Technology, has frequently hosted me and is always a decade or two ahead in seeing where computing is heading.
Mark Stahlman was generous in offering insights on Norbert Wiener and his impact.

Mark Seiden, whose real-world computing experience stretches back to the first interactive computers, took time away from his work to help with editing, offering technical insight.
Anders Fernstedt delved into the archives for gems from Norbert Wiener that had been lost for far too long.
He painstakingly went through several of my drafts, offering context and grammar tips.

Finally, to Leslie Terzian Markoff for sharing it all with me.

