Machines of Loving Grace

John Markoff

After selling Mako Surgical for $1.65 billion in late 2013, Abovitz set out to pursue his broader and more powerful augmentation idea—Magic Leap, a start-up with the modest goal of replacing both televisions and personal computers with a technology known as augmented reality.
In 2013, the Magic Leap system worked only in a bulky helmet.
However, the company’s goal was to shrink the system into a pair of glasses less obtrusive and many times more powerful than Google Glass.
Instead of joining Google, Bradski went to work for Abovitz’s Magic Leap.

In 2014, there was already early evidence that Abovitz had made significant headway in uniting AI and IA.
It could be seen in Gerald, a half-foot-high animated creature floating in an anonymous office complex in a Miami suburb.
His four arms waved gently while he hung in space and walked in circles in front of a viewer.
Gerald wasn’t really there.
He was actually an animated projection that resembled a three-dimensional hologram.
Users could watch him through transparent lenses that project what computer scientists and optical engineers describe as a “digital light field” into the eyes of a human observer.
Although Gerald doesn’t exist in the real world, Abovitz is trying to create an unobtrusive pair of computer-augmented glasses with which to view animations like Gerald.
And it doesn’t stop with imaginary creatures.
In principle, the technology can project any visual object at a resolution that matches the visual acuity of the human eye.
For example, as Abovitz describes the Magic Leap system, it will make it possible for someone wearing the glasses to simply gesture with their hands to create a high-resolution
screen as crisp as a flat-panel television.
If they are perfected, the glasses will replace not only our TVs and computers, but many of the other consumer electronics gadgets that surround us.

The glasses are based on a transparent array of tiny electronic light emitters that are installed in each lens to project the light field—and so the image—onto each retina.
In practice, computer-generated light fields attempt to mimic what the human eye sees in the physical world.
It is a computer-generated version of the analog light field that comprises the sum of all of the light rays that form a visual scene for the human eye.
Digital light fields simulate the way light behaves in the physical world.
When photons bounce off objects in the world, they act like rivers of light.
The human neuro-optic system has evolved so that the lenses in our eyes adjust to the natural light field, focusing on objects at different depths.
Watching Gerald wander in space through a prototype of the Magic Leap glasses gives a hint that in the future it will be possible to visually merge computer-generated objects with the real world.
Significantly, Abovitz claims that digital light field technology holds out the promise of circumventing the limitations that have plagued stereoscopic displays for decades.
Today, these displays cause motion sickness in users and they do not offer “true” depth-of-field perception.

By January of 2015 it had become clear that augmented reality was no longer a fringe idea.
With great fanfare Microsoft demonstrated a similar system called HoloLens based on a competing technology.
Is it possible to imagine a world where the ubiquitous LCDs of today’s modern world—televisions, computer monitors, smartphone screens—simply disappear?
In Hollywood, Florida, Magic Leap’s demonstration suggests that workable augmented reality is much closer than we might assume.
If they are correct, such an advance would also change the way we think about and experience augmentation and automation.
In October 2014, Magic Leap’s technology received
a significant boost when Google led a $524 million investment round in the tiny start-up.

The Magic Leap prototype glasses look like ordinary glasses, save for the thin cable that runs down a user’s back and connects to a small, smartphone-sized computer.
These glasses don’t simply represent a break with existing display technologies.
The technology behind them makes extensive use of artificial intelligence and machine vision to remake reality.
The glasses are compelling for two reasons.
First, their resolution will approach the resolving power of the human eye.
The best computer displays are just reaching this level of resolution.
As a result, the animations and imagery will surpass those of today’s best consumer video game systems.
Second, they are the first indication that it is possible to seamlessly blend computer-generated imagery with physical reality.
Until now, the limits of consumer computing technology have been defined by what is known as the “WIMP” graphical interface—the windows, icons, menus, and pointer of the Macintosh and Windows.
The Magic Leap glasses, however, will introduce augmented reality as a way of revitalizing personal computing and, by extension, presenting new ways to augment the human mind.

In an augmented reality world, the “Web” will become the space that surrounds you.
Cameras embedded in the glasses will recognize the objects in people’s environments, making it possible to annotate and possibly transform them.
For example, reading a book might become a three-dimensional experience: images could float over the text, hyperlinks might be animated, readers could turn pages with the movement of their eyes, and there would be no need for limits to the size of a page.

Augmented reality is also a profoundly human-centered version of computing, in line with Xerox PARC computer scientist Mark Weiser’s original vision of “calm” ubiquitous computing.
It will be a world in which computers “disappear” and
everyday objects acquire “magical” powers.
This presents a host of new and interesting ways for humans to interact with robots.
The iPod and the iPhone, reimaginings of the phonograph and the telephone, were the first examples of this transition.
Augmented reality would also make the idea of telepresence far more compelling.
Two people separated by great distance could gain the illusion of sharing the same space.
This would be a radical improvement on today’s videoconferencing and awkward telepresence robots like Scott Hassan’s Beam, which place a human face on a mobile robot.

Gary Bradski left the world of robots to join Abovitz’s effort to build what will potentially become the most intimate and powerful augmentation technology.
Now he spends his days refining computer vision technologies to fundamentally remake computing in a human-centered way.
Like Bill Duvall and Terry Winograd, he has made the leap from AI to IA.

8 | “ONE LAST THING”

Set on the Pacific Ocean a little more than an hour’s drive south of San Francisco, Santa Cruz exudes a Northern California sensibility.
The city blends the Bohemian flavor of a college town with the tech-savvy spillover from Silicon Valley just over the hill.
Its proximity to the heart of the computing universe and its deep countercultural roots are distinct counterpoints to the tilt-up office and manufacturing buildings that are sprinkled north from San Jose on the other side of the mountains.
Geographically and culturally, Santa Cruz is about as far away from the Homestead-Miami Speedway as you can get.

It was a foggy Saturday morning in this eclectic beach town, just months after the Boston Dynamics galloping robots stole the show at the steamy Florida racetrack.
Bundled against the morning chill, Tom Gruber and his friend Rhia Gowen wandered into The 418 Project, a storefront dance studio that backs up against the river.
They were among the first to arrive.
Gruber is a wiry salt-and-pepper-goateed software designer and Gowen is a dance instructor.
Before returning to the United States several years ago, she spent two decades in Japan, where she directed a Butoh dance theater company.

Tom Gruber began his career as an artificial intelligence researcher who swung from AI to work on augmenting human intelligence.
He was a cofounder of the team of programmers who designed Siri, Apple’s iPhone personal assistant.
(Photo © 2015 by Tom Gruber)

In Santa Cruz, Gowen teaches a style of dance known as Contact Improvisation, in which partners stay physically in touch with each other while moving in concert with a wide range of musical styles.
To the untrained eye, “Contact Improv” appears to be part dance, part gymnastics, a bit of tumbling, and even part wrestling.
Dancers use their bodies in a way that provides a sturdy platform for their partners, who may roll over and even bounce off them in sync with the music.
The Saturday-morning session that Gruber and Gowen attended was even more eclectic: it was a morning weekend ritual for the Santa Cruz Ecstatic Dance Community.
Some basic rules are spelled out at ecstaticdance.org:

1. Move however you wish;
2. No talking on the dance floor;
3. Respect yourself and one another.

There is also an etiquette that requires that partners be “sensitive” if they want to dance with someone and that offers a way out if they don’t: “If you’d rather not dance with someone,
or are ending a dance with someone, simply thank them by placing your hands in prayer at your heart.”

The music mix that morning moved from meditative jazz to country, rock, and then to a cascade of electronic music styles.
The room gradually filled with people, and the dancers each entered a personal zone.
Some danced together, some traded partners, some swayed to an inner rhythm.
It was free-form dance evocative of a New Age gym class.

Gruber and Gowen wove through the throng.
Sometimes they were in contact, and sometimes they broke off to dance with other partners, then returned.
He picked her up and bent down and let her roll across his back.
It wasn’t exactly “do-si-do your partner,” but if the move was done well, one body formed a platform that shouldered the other partner’s weight without strain.
Gruber was a confident dancer and comfortable with moves that evoked a modern dance sensibility.
It offered a marked contrast to the style of many of the more hippie, middle-aged Californians, who were skipping and waving in all directions against a quickening beat.
The pace of the dancers ascended to a frenzy and then backed down to a mellower groove.
Gradually, the dancers melted away from the dance floor.
Gruber and Gowen donned their jackets and stepped out into the still-foggy morning air.

Gruber casually pulled an iPhone from his pocket and asked Siri, the software personal assistant he designed, a simple question about his next stop.
On Monday he would be back in the fluorescent-lit hallways of Apple, amid endless offices overloaded with flat-panel displays.
On that morning, however, he wandered in a more human-centric world, where computers had disappeared and everyday devices like phones were magical.

Apple’s corporate campus is circumscribed by Infinite Loop, a balloon-shaped street set just off the Interstate 280 freeway in Cupertino.
The road wraps in a protective circle around
a modern cluster of six office buildings facing inward onto a grassy courtyard.
It circles a corporate headquarters that reflects Apple’s secretive style.
The campus was built during the era in which John Sculley ran the company.
When originally completed, it served as a research and development center, but as Apple scaled down after Sculley left in 1993, it became a fortress for an increasingly besieged company.
When Steve Jobs returned, first as “iCEO” in 1997, there were many noticeable changes, including a dramatic improvement in the cafeteria food.
The fine silver that had marked the executive suite during the brief era when semiconductor chief Gilbert Amelio ran the company also disappeared.

As his health declined during a battle with pancreatic cancer in 2011, Steve Jobs came back for one last chapter at Apple.
He had taken his third medical leave, but he was still the guiding force at the company.
He had stopped driving and so he would come to Apple’s corporate headquarters with the aid of a chauffeur.
He was bone-thin and in meetings he would mention his health problems, although never directly acknowledging the battle was with cancer.
He sipped 7UP, which hinted to others that he might have been struggling through chemotherapy.

The previous spring Jobs had acquired Siri, a tiny developer of a natural language application designed to act as a virtual assistant on the iPhone.
The acquisition had drawn a great deal of attention in Silicon Valley.
Apple acquisitions, particularly large ones, are extremely rare.
When word circulated that the firm had been acquired, possibly for more than $200 million, it sent shock waves up and down Sand Hill Road and within the burgeoning “app economy” that the iPhone had spawned.
After Apple acquired Siri, the program was immediately pulled from the App Store, the iPhone service through which programs were screened and sold, and the small team of programmers who had designed Siri vanished back into
“stealth mode” inside the Cupertino campus.
The larger implications of the acquisition weren’t immediately obvious to many in the Valley, but as one of his last acts as the leader of Apple, Steve Jobs had paved the way for yet another dramatic shift in the way humans would interact with computers.
He had come down squarely on the side of those who placed humans in control of their computing systems.

Jobs had made a vital earlier contribution to the computing world by championing the graphical desktop computing approach as a more powerful way to operate a PC.
The shift from the command line interface of the IBM DOS era to the desktop metaphor of the Macintosh had opened the way for the personal computer to be broadly adopted by students, designers, and office workers—a computer for “the rest of us,” in Apple parlance.
Steve Jobs’s visits to PARC are the stuff of legend.
With the giant copier company’s blessing and a small but lucrative Xerox investment in Apple pre-IPO, he visited several times in 1979 and then over the next half decade created first the Lisa and then the Macintosh.

But the PC era was already giving way to a second Xerox PARC concept—ubiquitous computing.
Mark Weiser, the PARC computer scientist, had conceived the idea during the late 1980s.
Although he received less credit for the insight, Jobs was the first to successfully translate Weiser’s ideas for a general consumer audience.
The iPod and then the iPhone were truly ubiquitous computing devices.
Jobs first transformed the phonograph and then the telephone by adding computing.
“A thousand songs in your pocket” and “something wonderful for your hand.”
He was the consummate showman, and “one more thing” had become a trademark slogan that Jobs used at product introductions, just before announcing something “insanely great.”
For Jobs, however, Siri was genuinely his “one last thing.”
By acquiring Siri he took his final bow for reshaping the computing world.
He bridged the gap between Alan Kay’s Dynabook and the Knowledge Navigator,
the elaborate Apple promotional video imagining a virtual personal assistant.
The philosophical distance between AI and IA had resulted in two separate fields that rarely spoke.
Even today, in most universities artificial intelligence and human-computer interaction remain entirely separate disciplines.
In a design approach that resonated with Lee Felsenstein’s original golemics vision, Siri would become a software robot—equipped with a sense of humor—intended to serve as a partner, not a slave.

It was an extraordinary demand that only Steve Jobs would have considered.
He directed his phone designers to take a group of unknown software developers, who had never seen any of Apple’s basic operating system software, and allow them to place their code right at the heart of the iPhone.
He then forced his designers to create connections to all of the iPhone’s application programs from the ground up.
And he ordered that it all happen in less than a year.
To supplement the initial core of twenty-four people who had arrived with the Siri acquisition, the programmers borrowed and begged from various corners of Apple’s software development organization.
But it wasn’t enough.
In most technical companies a demand of this scale would be flatly rejected as impossible.
Jobs simply said, “Make it happen.”

Tom Gruber was a college student studying psychology in the late 1970s when he stumbled upon artificial intelligence.
Wandering through his school library, he found a paper describing the work of Raj Reddy and a group of Carnegie Mellon University computer scientists who had built a speech recognition system called Hearsay-II.
The program was capable of recognizing just a thousand words spoken in sentences with a 90 percent accuracy rate.
One error every ten words, of course, was not usable.
What struck Gruber, though, was that the Hearsay system married acoustic signal processing with
more general artificial intelligence techniques.
He immediately realized that the system implied a model of how the brain represents human knowledge.
He also realized that psychologists were modeling the same process, but poorly.
At the time, there were no PET scans or fMRI brain-imaging systems.
Psychologists were studying human behavior, but not the brain itself.

Not long after reading about the Hearsay research, Gruber found the early work of Edward Feigenbaum, a Stanford University computer science professor who focused on the idea of building “expert systems” to capture human knowledge and replicate the capabilities of specialists in highly technical fields.
While he was a graduate student at Carnegie Mellon working with Herbert Simon, Feigenbaum had done research in designing computer models of human memory.
The Elementary Perceiver and Memorizer, or EPAM, was a psychological theory of human learning and memory that researchers could integrate into a computer program.

Feigenbaum’s work inspired Gruber to think more generally about building models of the mind.
At this point, however, he hadn’t considered applying to graduate school.
No one in his family had studied for an advanced degree and the idea wasn’t on his radar.
By the time he finally sent out applications, there were only a few places that would still offer him funding.
Both Stanford and MIT notified Gruber that his application was about three months late for the upcoming school year, and they invited him to apply again in the future.
Luckily, he was accepted by the University of Massachusetts, which at the time was home to a vibrant AI group doing research in robotics, including how to program robotic hands.
The program’s academic approach to robotics explicitly melded artificial intelligence and cognitive science, which spoke perfectly to his interest in modeling the human mind.
