The Glass Cage: Automation and Us


HUMAN-FACTORS EXPERTS have long urged designers to move away from the technology-first approach and instead embrace human-centered automation. Rather than beginning with an assessment of the capabilities of the machine, human-centered design begins with a careful evaluation of the strengths and limitations of the people who will be operating or otherwise interacting with the machine. It brings technological development back to the humanistic principles that inspired the original ergonomists. The goal is to divide roles and responsibilities in a way that not only capitalizes on the computer’s speed and precision but also keeps workers engaged, active, and alert—in the loop rather than out of it.[21]

Striking that kind of balance isn’t hard. Decades of ergonomic research show it can be achieved in a number of straightforward ways. A system’s software can be programmed to shift control over critical functions from the computer back to the operator at frequent but irregular intervals. Knowing that they may need to take command at any moment keeps people attentive and engaged, promoting situational awareness and learning. A design engineer can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than being relegated to passive, observational roles. Giving people more to do helps sustain the generation effect. A designer can also give the operator direct sensory feedback on the system’s performance, using audio and tactile alerts as well as visual displays, even for those activities that the computer is handling. Regular feedback heightens engagement and helps operators remain vigilant.

One of the most intriguing applications of the human-centered approach is adaptive automation. In adaptive systems, the computer is programmed to pay close attention to the person operating it. The division of labor between the software and the human operator is adjusted continually, depending on what’s happening at any given moment.[22] When the computer senses that the operator has to perform a tricky maneuver, for example, it might take over all the other tasks. Freed from distractions, the operator can concentrate her full attention on the critical challenge. Under routine conditions, the computer might shift more tasks over to the operator, increasing her workload to ensure that she maintains her situational awareness and practices her skills. Putting the analytical capabilities of the computer to humanistic use, adaptive automation aims to keep the operator at the peak of the Yerkes-Dodson performance curve, preventing both cognitive overload and cognitive underload. DARPA, the Department of Defense laboratory that spearheaded the creation of the internet, is even working on developing “neuroergonomic” systems that, using various brain and body sensors, can “detect an individual’s cognitive state and then manipulate task parameters to overcome perceptual, attentional, and working memory bottlenecks.”[23]
Adaptive automation also holds promise for injecting a dose of humanity into the working relationships between people and computers. Some early users of the systems report that they feel as though they’re collaborating with a colleague rather than operating a machine.
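The allocation logic behind adaptive automation can be sketched in a few lines of code. The following is a toy illustration only, not any real system: the task names, effort scores, and the overload threshold are invented for the example, and a real workload estimate would come from performance measures or physiological sensors rather than a number passed in.

```python
# Toy sketch of adaptive task allocation (all names and numbers invented).
# Goal: keep the operator's estimated workload in the productive middle of
# the Yerkes-Dodson curve by shifting tasks between human and computer.

def allocate_tasks(tasks, operator_load, overload=0.7):
    """Return (human_tasks, computer_tasks) for a 0-1 workload estimate."""
    # Most critical tasks are offered to the human first.
    tasks = sorted(tasks, key=lambda t: t["criticality"], reverse=True)
    human, computer = [], []
    load = operator_load
    for task in tasks:
        # Assign a task to the human only while the estimated workload
        # stays below the overload threshold; otherwise automate it.
        if load + task["effort"] <= overload:
            human.append(task["name"])
            load += task["effort"]
        else:
            computer.append(task["name"])
    return human, computer

tasks = [
    {"name": "navigate", "criticality": 3, "effort": 0.3},
    {"name": "monitor fuel", "criticality": 1, "effort": 0.2},
    {"name": "radio", "criticality": 2, "effort": 0.2},
]
# Routine conditions (low baseline load): the human keeps more tasks.
print(allocate_tasks(tasks, operator_load=0.1))
# A demanding maneuver (high baseline load): the computer takes over.
print(allocate_tasks(tasks, operator_load=0.6))
```

The point of the sketch is only the direction of the adjustment: as the operator's load rises, work flows to the computer; as it falls, work flows back, keeping the person engaged.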

Studies of automation have tended to focus on large, complex, and risk-laden systems, the kind used on flight decks, in control rooms, and on battlefields. When these systems fail, many lives and a great deal of money can be lost. But the research is also relevant to the design of decision-support applications used by doctors, lawyers, managers, and others in analytical trades. Such programs go through a lot of usability testing to make them easy to learn and operate, but once you dig beneath the user-friendly interface, you find that the technology-centered ethic still holds sway. “Typically,” writes John Lee, “expert systems act as a prosthesis, supposedly replacing flawed and inconsistent human reasoning with more precise computer algorithms.”[24] They’re intended to supplant, rather than supplement, human judgment. With each upgrade in an application’s data-crunching speed and predictive acumen, the programmer shifts more decision-making responsibility from the professional to the software.

Raja Parasuraman, who has studied the personal consequences of automation as deeply as anyone, believes this is the wrong approach. He argues that decision-support applications work best when they deliver pertinent information to professionals at the moment they need it, without recommending specific courses of action.[25] The smartest, most creative ideas come when people are afforded room to think. Lee agrees. “A less automated approach, which places the automation in the role of critiquing the operator, has met with much more success,” he writes. The best expert systems present people with “alternative interpretations, hypotheses, or choices.” The added and often unexpected information helps counteract the natural cognitive biases that sometimes skew human judgment. It pushes analysts and decision makers to look at problems from different perspectives and consider broader sets of options. But Lee stresses that the systems should leave the final verdict to the person. In the absence of perfect automation, he counsels, the evidence shows that “a lower level of automation, such as that used in the critiquing approach, is less likely to induce errors.”[26] Computers do a superior job of sorting through lots of data quickly, but human experts remain subtler and wiser thinkers than their digital partners.
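The critiquing role Lee describes can likewise be sketched abstractly. In this toy example (the hypotheses, evidence, and scoring rule are all invented for illustration), the aid never issues a verdict; it only surfaces alternative interpretations that fit the evidence at least as well as the expert's own choice.

```python
# Toy sketch of a "critiquing" decision aid (illustrative names only).
# It reviews the expert's choice and lists rival hypotheses worth a look,
# leaving the final judgment to the person.

def critique(choice, evidence, hypotheses):
    """Return alternatives that explain the evidence at least as well
    as the expert's chosen hypothesis."""
    def fit(h):
        # Crude score: fraction of the observed evidence explained.
        return len(h["explains"] & evidence) / len(evidence)
    baseline = fit(next(h for h in hypotheses if h["name"] == choice))
    return [h["name"] for h in hypotheses
            if h["name"] != choice and fit(h) >= baseline]

hypotheses = [
    {"name": "pneumonia",  "explains": {"fever", "cough"}},
    {"name": "bronchitis", "explains": {"cough"}},
    {"name": "influenza",  "explains": {"fever", "cough", "fatigue"}},
]
evidence = {"fever", "cough", "fatigue"}
# The aid does not overrule the clinician; it only broadens the view.
print(critique("pneumonia", evidence, hypotheses))
```

The contrast with the prosthesis model is in what is returned: not a single recommended answer but a set of alternatives, which the person is free to accept or dismiss.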

Carving out a protected space for the thoughts and judgments of expert practitioners is also a goal of those seeking a more humanistic approach to automation in the creative trades. Many designers criticize popular CAD programs for their pushiness. Ben Tranel, an architect with the Gensler firm in San Francisco, praises computers for expanding the possibilities of design. He points to the new, Gensler-designed Shanghai Tower in China, a spiraling, energy-efficient skyscraper, as an example of a building that “couldn’t have been built” without computers. But he worries that the literalism of design software—the way it forces architects to define the meaning and use of every geometric element they input—is foreclosing the open-ended, unstructured explorations that freehand sketching encouraged. “A drawn line can be many things,” he says, whereas a digitized line has to be just one thing.[27]

Back in 1996, the architecture professors Mark Gross and Ellen Yi-Luen Do proposed an alternative to literal-minded CAD software. They created a conceptual blueprint of an application with a “paper-like” interface that would be able to “capture users’ intended ambiguity, vagueness, and imprecision and convey these qualities visually.” It would lend design software “the suggestive power of the sketch.”[28] Since then, many other scholars have made similar proposals. Recently, a team led by Yale computer scientist Julie Dorsey created a prototype of a design application that provides a “mental canvas.” Rather than having the computer automatically translate two-dimensional drawings into three-dimensional virtual models, the system, which uses a touchscreen tablet as an input device, allows an architect to do rough sketches in three dimensions. “Designers can draw and redraw lines without being bound by the constraints of a polygonal mesh or the inflexibility of a parametric pipeline,” the team explained. “Our system allows easy iterative refinement throughout the development of an idea, without imposing geometric precision before the idea is ready for it.”[29] With less pushy software, a designer’s imagination has more chance to flourish.

THE TENSION between technology-centered and human-centered automation is not just a theoretical concern of academics. It affects decisions made every day by business executives, engineers and programmers, and government regulators. In the aviation business, the two dominant airliner manufacturers have been on different sides of the design question since the introduction of fly-by-wire systems thirty years ago. Airbus pursues a technology-centered approach. Its goal is to make its planes essentially “pilot-proof.”[30] The company’s decision to replace the bulky, front-mounted control yokes that have traditionally steered planes with diminutive, side-mounted joysticks was one expression of that goal. The game-like controllers send inputs to the flight computers efficiently, with minimal manual effort, but they don’t provide pilots with tactile feedback. Consistent with the ideal of the glass cockpit, they emphasize the pilot’s role as a computer operator rather than as an aviator. Airbus has also programmed its computers to override pilots’ instructions in certain situations in order to keep the jet within the software-specified parameters of its flight envelope. The software, not the pilot, wields ultimate control.

Boeing has taken a more human-centered tack in designing its fly-by-wire craft. In a move that would have made the Wright brothers happy, the company decided that it wouldn’t allow its flight software to override the pilot. The aviator retains final authority over maneuvers, even in extreme circumstances. And not only has Boeing kept the big yokes of yore; it has designed them to provide artificial feedback that mimics what pilots felt back when they had direct control over a plane’s steering mechanisms. Although the yokes are just sending electronic signals to computers, they’ve been programmed to provide resistance and other tactile cues that simulate the feel of the movements of the plane’s ailerons, elevators, and other control surfaces. Research has found that tactile, or haptic, feedback is significantly more effective than visual cues alone in alerting pilots to important changes in a plane’s orientation and operation, according to John Lee. And because the brain processes tactile signals in a different way than visual signals, “haptic warnings” don’t tend to “interfere with the performance of concurrent visual tasks.”[31] In a sense, the synthetic, tactile feedback takes Boeing pilots out of the glass cockpit. They may not wear their jumbo jets the way Wiley Post wore his little Lockheed Vega, but they are more involved in the bodily experience of flight than are their counterparts on Airbus flight decks.

Airbus makes magnificent planes. Some commercial pilots prefer them to Boeing’s jets, and the safety records of the two manufacturers are pretty much identical. But recent incidents reveal the shortcomings of Airbus’s technology-centered approach. Some aviation experts believe that the design of the Airbus cockpit played a part in the Air France disaster. The voice-recorder transcript revealed that the whole time the pilot controlling the plane, Pierre-Cédric Bonin, was pulling back on his sidestick, his copilot, David Robert, was oblivious to Bonin’s fateful mistake. In a Boeing cockpit, each pilot has a clear view of the other pilot’s yoke and how it’s being handled. If that weren’t enough, the two yokes operate as a single unit. If one pilot pulls back on his yoke, the other pilot’s goes back too. Through both visual and haptic cues, the pilots stay in sync. The Airbus sidesticks, in contrast, are not in clear view, they work with much subtler motions, and they operate independently. It’s easy for a pilot to miss what his colleague is doing, particularly in emergencies when stress rises and focus narrows.

Had Robert seen and corrected Bonin’s error early on, the pilots might well have regained control of the A330. The Air France crash, Chesley Sullenberger has said, would have been “much less likely to happen” if the pilots had been flying in a Boeing cockpit with its human-centered controls.[32] Even Bernard Ziegler, the brilliant and proud French engineer who served as Airbus’s top designer until his retirement in 1997, recently expressed misgivings about his company’s design philosophy. “Sometimes I wonder if we made an airplane that is too easy to fly,” he said to William Langewiesche, the writer, during an interview in Toulouse, where Airbus has its headquarters. “Because in a difficult airplane the crews may stay more alert.” He went on to suggest that Airbus “should have built a kicker into the pilots’ seats.”[33] He may have been joking, but his comment jibes with what human-factors researchers have learned about the maintenance of human skills and attentiveness. Sometimes a good kick, or its technological equivalent, is exactly what an automated system needs to give its operators.
