The Future of Success
Robert B. Reich
WHEN LOYALTY PAYS

Some will object that I’ve failed to account for the positive effects of institutional loyalty on the bottom line. Surely, being nice to employees and suppliers can pay off. There’s ample evidence that employees who feel well treated are willing to work harder and better. Employee turnover can be expensive. Sometimes a union can give such efficient voice to employee concerns and ideas that it enhances productivity. Suppliers that are treated as partners rather than as vendors from whom every last ounce of cost-saving is squeezed are often more willing to share customer data and invest in new ways to improve efficiency along the entire supply chain. Overt displays of good “corporate citizenship” can burnish a public image and thus help sales. And not a few “socially responsible” investment funds enjoy high returns to shareholders, precisely because certain kinds of social responsibility pay off.

Whatever financial rewards accrue to being nice do not, however, imply *separate* obligations toward employees, suppliers, or communities over and above the necessity of maximizing returns to investors (or, if a privately held company, of generating enough revenue to be able to reinvest and stay competitive; if a nonprofit, of maximizing revenues in order to better accomplish whatever the nonprofit is set up to do). To the extent that being nice to others furthers these more basic goals, being nice makes good business sense, but *only* to this extent. When more money can be made by severing these other relationships, the bonds will be cut. Even Levi Strauss, a clothing manufacturer with a sterling reputation for social responsibility (it kept most of its idle workers on the payroll during the Depression), was, by century’s end, severing bonds with its communities and employees, closing most of its North American plants and firing nearly half of its workforce while subcontracting production to foreign factories with lower labor costs. To be sure, the company did the severing nicely, providing its former employees generous severance payments and helping them train for new jobs. But in the end Levi Strauss had no choice but to cut the bonds. Its competitors had already done so, and their lower costs gave them an advantage that threatened Levi’s future.

In the new economy, there will be no random acts of kindness to employees, suppliers, or communities separate from their positive impact on the bottom line. If being “socially responsible” helps the bottom line by eliciting good will from employees, suppliers, or the public at large, then such actions make sound business sense, and the new competitive logic dictates that executives pursue them. If, however, being “socially responsible” detracts from the bottom line—handicapping the enterprise by drawing resources away from, or otherwise preventing, production that’s better, faster, and cheaper than rivals’—then it creates the risk that consumers and shareholders will switch to a better deal. By the new logic, executives then pursue such actions at their peril.

THE ABNORMALITY OF LOYALTY

The unexpected, or even repellent, repeated often enough, eventually becomes acceptable; the acceptable, replicated widely, becomes the norm. Commercial behavior once thought to be a betrayal of trust is now common practice. When, at the start of 1996, AT&T announced it would fire tens of thousands of workers and award its chief executive a fat bonus, the press roundly excoriated the corporation. After a few other big companies followed suit, a Republican presidential candidate condemned the perfidy of big business, and a prominent national newsweekly carried on its cover the photographs of several chief executives under the headline “Corporate Killers.” By the end of the decade, firings were continuing on about the same scale, even though companies were more profitable than they had been during the mid-nineties and executive compensation was considerably higher. Yet by then the blame, and the shame, had disappeared. Such practices had become a routine aspect of American business.

A generation before, being fired from one’s job suggested a moral failure—a personal defect, a profound flaw. An employee might be temporarily laid off during an economic downturn but not permanently fired. It wouldn’t be rational for an employer to fire someone who was doing an adequate job. Firing signaled that the person had failed to do the job expected of him, or was no longer capable of doing it. A firing thus entailed a profound loss of self-respect. Great tragedies centered on such events. When young Howard fires Willy Loman in Arthur Miller’s 1949 play *Death of a Salesman*, it’s because Willy can no longer make the grade. Willy was once a great salesman but is no longer useful, and this knowledge breaks him. “You can’t eat an orange and throw the peel away,” Willy roars. “A man is not a piece of fruit.”

Willy’s plight is still poignant, but the play seems strangely dated. Someone who is fired from a job may still feel angry or humiliated, but he is no longer presumed to be flawed. People are fired all the time for reasons having nothing to do with their failure to achieve. John Sculley, the former CEO of Apple Computer, saw this as a California phenomenon, but by century’s end he was describing almost all of America: “When someone is fired or leaves on the East Coast, it’s a real trauma in their lives. When they are fired or leave here, it doesn’t mean much. They just go off and do something else.”[20]

The old economy rewarded stable, predictable relationships—among customers, investors, companies, suppliers, employees, and communities—because large-scale production depended on stability and predictability. Any deviation undermined efficiency. Thus all participants came to rely on permanence. But the emerging economy is altering expectations. Commercial relationships are no longer assumed to last. People figure that everyone with whom they deal will switch to a better alternative should one become available, as will they.

As disloyalty is “normalized,” loyalty itself comes under suspicion. Remain too long with one company or in one job, and your behavior must be explained. Perhaps your immobility is due to your spouse or family, but it also may be due to some failing on your part—a lack of options (no other opportunity has beckoned) or a notable lack of ambition. A company or organization that keeps its same executives and employees for too long invites similar scrutiny. Maybe it is just quaintly old-fashioned. But it may harbor deeper problems—it’s too stodgy to keep up with the times, too hidebound, stale, lacking in new blood and vision. A community that retains the same residents decade after decade is sometimes presumed to be insular and ingrown, obviously lacking in vitality.

LOYALTY TO WHAT?

In the years ahead, it will be unclear, in any event, what the entity *is* that might summon loyalty, or be loyal in return. The very meaning of a company or university or any other institution is growing less coherent. All institutions are flattening into networks of entrepreneurial groups, temporary projects, electronic communities and coalitions, linked to various brands and portals. In this emerging cyber-landscape, it will be odd to speak of institutional loyalty because there will be fewer clear boundaries around any institution.

Organizations used to be recognizable: They were shaped like pyramids, with top executives, layers of middle-level managers and staff, and a larger number of people doing relatively simple and repetitive tasks at the bottom. You were either in—a member, resident, partner, or employee—or on the outside. Now bureaucratic controls are no longer necessary for coordinating large numbers of people. People can coordinate themselves through the Internet. Broad constellations of designers, suppliers, marketers, financial specialists, contractors, and shippers can function *as if* they were a single enterprise, then form a different constellation tomorrow. So who’s in? Who’s out? In a few years, a “company” will be best defined by who has access to what data, and gets what portion of a particular stream of revenues, over what period of time.

A glimpse of the future: The Monorail Corporation owns no factories, warehouses, or any other tangible asset. It operates from a single floor it leases in an office building in Atlanta. A few designers on contract to Monorail devised a personal computer that could fit into a standard box shipped by Federal Express. To place orders for it, customers call a 1-800 number connected to FedEx’s Logistics Service, which passes the orders on to a contract manufacturer that assembles the computer from various parts arriving from around the world. FedEx then ships the computer to the customer and sends the invoice to the SunTrust Bank in Atlanta, whose factoring department handles billing and credit approvals, remits a prearranged portion to everyone who played a part along the way (including a small commission to Monorail), and assumes the cost and risk of collecting from the customer. A customer who needs help at any point can call “Monorail’s” 1-800 service center, which is actually staffed and run by Sykes Enterprises, a call-center outsourcing company based in Tampa, Florida. As a result of this network, Monorail can offer among the lowest-priced PCs available anywhere. It can also increase its sales almost effortlessly, simply by expanding its network of suppliers.[21]

But Monorail, by this account, is not what it seems. It’s not really much of anything except a good idea, a handful of people in Atlanta, and a bunch of contracts. By the time you read this, Monorail may not even exist any longer. Can Monorail be “loyal” to anyone? Can anyone be loyal to Monorail?

RESPONSIBILITY TO WHOM?

Through the Internet, responsibilities for who does what and who gets what in return can be distributed through a wide cluster of temporary contracts. But such contracts can’t take into account all possible problems. Example: For about ten days starting on August 7, 1999, a number of small businesses that depended on the Internet to link them to their customers lost Internet service—a near-death experience. Who was responsible? Follow the trail: Their Internet service providers had relied on DataXchange, a Washington, D.C.–based company that had bought large chunks of Internet access wholesale and then sold it to them in pieces. DataXchange had bought its largest chunk from MCI WorldCom. On August 7, MCI WorldCom’s high-speed network went down. Why? MCI WorldCom’s network used software from Lucent Technologies (once the research arm of AT&T); on August 7 that software developed a glitch that MCI engineers couldn’t fix. And why not? Because the software had been developed several years before by a different group of engineers working for an outfit called Cascade Communications. Cascade was subsequently acquired by Ascend Communications, which Lucent acquired along with the software in early 1999 for $20 billion. That’s how the software got into Lucent’s system, and hence into MCI WorldCom’s data network.

Strip away all the corporate names, and you get a truer picture of what happened—like clicking on “reveal codes” in your computer program and discovering the underlying instructions. All we now see is a bunch of people who contracted with one another for specific services. Those who contributed services several years ago by writing the original software are now working on other projects. The problem is, they’re the only ones who know enough about the software to be able to fix the glitch quickly, and they are no longer available. They didn’t come with the software that went from Cascade to Ascend and then on to Lucent in early 1999. Their intelligence is crucial, but it wasn’t part of the intellectual property that changed hands.

When the “glue” that holds an enterprise together is little more than a bunch of temporary contracts, who’s responsible for making sure that the system as a whole works as planned? It’s one thing if a lot of small businesses lose money because nobody can fix a software glitch, but the subcontracting of responsibility can sometimes cause graver problems. When a small company in Indonesia employs young children to weave its fabrics ten hours a day for six days a week in unsanitary conditions, and then sells the fabric to a Taiwanese company, which cuts and sews it into garments that, in turn, are sent to a jobber in California who supplies Wal-Mart—is Wal-Mart responsible for how the children are treated? How can it be, if it has no practical way of knowing? But how can it *not* be, if a significant percentage of the American public finds child labor morally offensive? When a few employees of a now-defunct aircraft maintenance company improperly packed oxygen generators that were then delivered to its client, an airline, sparking a fire in a plane’s cargo hold and causing it to crash in the Everglades, who is morally responsible?[*]

* * *

Commercial loyalty has not disappeared entirely. You may still feel loyalty toward your employer and your employer may feel it toward you. But the trend is undeniably against such sentiments, and the reason should be clear. Every consumer and every investor can switch to something better with increasing ease and speed—which means that everyone along the supply chain must be changeable as well, in order to be better, faster, and cheaper. Consumers and investors like you and me are taking advantage of technologies—most recently, the Internet, e-commerce, and fancy software—that allow greater flexibility at all junctures. Under the combined pressure, enterprises are becoming collections of people bound to one another by little more than temporary convenience.

The result is boundless innovation and unprecedented dynamism. But it’s also a set of economic relationships so transient as to render ambiguous who owes what obligations to whom—and who will be there for whom tomorrow. My students view the world they are entering in far more temporary terms than my generation saw it. They don’t plan to spend more than a few years in any job. They don’t anticipate any loyalty from any organization or institution, and rarely from another person—and they don’t expect to be loyal in return. To them, a commercial relationship is fleeting. They assume they’ll have to take full responsibility for navigating their careers; they cannot entrust that responsibility to anyone else.


CHAPTER FIVE

The End of Employment As We Knew It

Work is of two kinds: First, altering the position of matter at or near the earth’s surface relative to other such matter; second, telling other people to do so.

—Bertrand Russell, *In Praise of Idleness and Other Essays*

Keep following the logic: Technology is speeding and broadening access to terrific deals. Buyers and investors can switch to something better with ever-increasing ease. In order to survive in this new era of fiercer competition, sellers have to innovate continuously and do so faster than their rivals. The best way is through small entrepreneurial groups linked to trusted brands. At their core are talented geeks and shrinks, in ever-greater demand. The enterprise must also continuously cut costs, leasing almost everything it needs, finding the lowest-cost suppliers, pushing down wages of routine workers, and flattening all hierarchies into fast-changing contractual networks.

It’s not like this everywhere, at least not yet. Most people still work for, and within, organizations. But the logic of the new economy is changing the employment relationship. Fewer working people are “employees,” as that term was used through most of the twentieth century—and in the future there will be fewer still. The working citizens of other nations are treading the same path away from steady employment, although several steps behind.

What’s in store for you and your children? You won’t be an utterly free agent selling your individual services in the open market to the highest bidder, nor will you be an “organization” man or woman. Instead, you’re likely to become a member of an entrepreneurial group whose profits vary yearly or even monthly, your share depending on your contribution. Or you’ll be part of a professional-services firm, for whose clients you do projects and for which you receive a share of overall earnings. Or you’ll work for a talent agency or temp firm that sends you to work on specific projects for a limited time and takes a percentage of your earnings in return. Even Silicon Valley is sprouting agencies that rent out top programmers for $200 or more an hour.[1]

Regardless of your precise relationship to the people who buy your services, the organization that stands between you and them is thinning out. Even if you’re *called* a full-time employee, you’re becoming less an employee of an organization than a seller of your services to particular customers and clients, under the organization’s brand name. Accordingly, your income will depend on how much these buyers are willing to pay for your services, and on the reputation of the brand that attracts them to you.

In some respects, we’re coming full circle to an earlier stage in economic history in which people contracted to do specific tasks. The whole idea of a *steady* job is rather new, historically speaking—and, as it turns out, short-lived. It flourished in the United States and other industrialized nations for a century and a half, during the industrial era of large-scale production. And it’s now coming to an end.

THE ORIGIN OF EMPLOYMENT

A brief pause for some history. Before the dawn of large-scale production in the latter half of the nineteenth century, few people were permanently employed at a fixed wage. Most work occurred on family or tenant farms or in small family-run shops, or it was done by craftsmen, artisans, and tradesmen. In the old South, most of the tasks of planting and harvesting large tracts of tobacco, rice, and indigo fell to people who did work permanently for someone else but were not free: indentured whites and black slaves. In none of these cases were earnings “steady.” Income depended on the vagaries of weather, pestilence, disease, and warfare. The work itself required unflinching effort, often hard on muscles and joints. Nor was there a sharp divide between work life and home life, between paid work and unpaid. Women and children worked alongside men, and home production was a central feature of a family’s economic well-being. This is all still the case today for the majority of humankind around the world.

When industrial production first made its appearance in America, the very idea of working permanently for someone else was thought degrading, if not a threat to individual liberty. Typical was the view of the political pamphleteer Orestes Brownson, a fierce Jacksonian Democrat, who wrote in an 1840 tract that “wages are a cunning device of the devil for the benefit of tender consciences who would retain all the advantages of the slave system without the expense, trouble, and odium of being slave holders.”[2] Wage work was morally acceptable only as a step toward economic independence—a transient condition that, to the minds of many Northerners, distinguished it from slavery. Abraham Lincoln offered himself up as an example—beginning as a hired laborer splitting rails, then learning the law and earning his own living. “They insist that their slaves are far better off than Northern freemen,” Lincoln scoffed at Southerners who defended the slave system. “What a mistaken view do these men have of Northern laborers! They think that men are always to remain laborers here—but there is no such class. The man who labored for another last year, this year labors for himself, and next year he will hire others to labor for him.”[3]

Owners of the small mills and factories that had sprouted around New England and the mid-Atlantic states contracted directly with skilled craftsmen and paid them according to what they produced. The craftsmen’s knowledge of, and control over, most manufacturing tasks gave them significant bargaining power in this arrangement. But as large-scale production spread after the Civil War, factory owners began replacing skilled craftsmen with machines, and hired unskilled laborers—many of them new immigrants—to run them at fixed wages. The craftsmen responded by joining together in America’s first large union, the Knights of Labor, whose goal was to “abolish the wage system.”[4]

The first major clash occurred in 1892 at Andrew Carnegie’s Homestead Works, near Pittsburgh. When the craftsmen refused to accept lower pay, they were locked out of the mill, but they refused to allow unskilled laborers in. The standoff lasted several months, until nonunion workers were ushered into the mill under the protection of the Pennsylvania state militia, and the union surrendered. For several years thereafter, state and federal governments, backed by business interests, continued to weigh in on the side of owners. In 1894, Chicago and much of the Midwest were immobilized by striking rail workers protesting the treatment of workers by the Pullman Car Company. In quick succession, a federal court enjoined the strikers, President Grover Cleveland deployed federal troops at key railway junctions, martial law was declared in Chicago, and the leaders of the strike were jailed.

The Knights went down to defeat, and wage work became the norm. Between 1870 and 1910, while the American population more than doubled, the number of wage workers in industrial labor more than quadrupled, from 3.5 million to 14.2 million.[5] And the number employed within any given factory soared. The New England mills of the mid-nineteenth century had employed no more than a few hundred people; the first Ford Motor plant in 1915 employed 15,000.

Drawing from the ranks of coal miners, cigar makers, printers, iron and steel workers, and garment sewers, a new union appeared—the American Federation of Labor—that accepted the inevitability of wage work. Samuel Gompers, the AFL’s first president, conceded that “we are operating under the wage system, and so long as that lasts it is our purpose to secure a continually larger share for labor.”[6] To Gompers, industrial concentration was “a logical and inevitable feature of our modern system of industry.”[7]

Progressives like Woodrow Wilson still pined for a simpler time when “men were everywhere captains of industry, not employees, not looking to a distant city to find out what they might do, but looking about among their neighbors,”[8] but they reluctantly accepted that the new economy required wage work. The question that preoccupied Progressives was how to reconcile wage work with the American values of individualism and freedom, while at the same time protecting workers from its more unsavory aspects. The answer they devised was to set broad limits on business through laws establishing maximum hours, minimum pay, compensation for injuries, and minimal requirements for safety and sanitation.

The limits were not established without a struggle. There were those who wanted to argue that wage work represented just another kind of freedom. In the 1905 case of *Lochner v. New York*, the Supreme Court decided that New York’s maximum ten-hour day for bakery workers was nothing less than an “illegal interference with the rights of individuals, both employers and employees, to make contracts regarding labor upon which terms they may think best.” New York had no business “limiting the hours in which grown and intelligent men may labor to earn their living.”[9] Only three years later, in *Muller v. Oregon*, the Court reached a very different decision, upholding Oregon’s ten-hour day for women because, the Court reasoned, “healthy mothers are essential to vigorous offspring, [and] the physical well-being of women becomes an object of public interest and care in order to preserve the strength and vigor of the race.” Women differed from men not only “in structure of body,” but also in “the self-reliance which enables [men] to assert full rights.”[10] In point of fact, of course, neither male nor female wage workers were free to negotiate terms of employment in the new system of large-scale production. Neither had any bargaining leverage.

After a prolonged legal and political struggle, labor protections were finally extended to the nation’s entire workforce, along with the right to bargain collectively. Social Security and unemployment insurance were added as well, protecting workers against the risks of job loss during downturns in the business cycle, the death of a working husband and father, permanent disability, and inadequate retirement savings in old age. The most distinctive aspect of the American system of social protection (in contrast with those being adopted in other industrializing nations) was that it was available only to people in permanent wage work—precisely the circumstance that had been rejected less than a century before. All benefits depended on being (or having been, or having been married to) a full-time employee. Notably excluded were casual workers, part-timers, independent contractors, the self-employed, and the chronically unemployed. Even welfare as originally conceived was intended only for the widows of working men. Franklin D. Roosevelt’s Committee on Economic Security, headed by my formidable predecessor Labor Secretary Frances Perkins, reported that the purpose of Aid to Dependent Children (as it was then called) was to free widows with young children from “the wage earning role” so they could keep their children from “falling into social misfortune,” and “more affirmatively to rear them into citizens capable of contributing to society.”[11] In other words, in the new industrial order, it was assumed that all men should be wage earners; women with young children should not be working.

One final but often overlooked aspect of America’s twentieth-century system of social insurance bears mention, because it also depended on full-time employment. This was the tax-favored fringe benefit, such as company-provided health insurance and a company pension. Most people still think of these as private rather than public benefits, but they ballooned in the 1940s, and unions pressed for them, because employees didn’t have to pay taxes on employer-provided health insurance and could defer taxes on employer-provided pensions until retirement. These benefits were thus the economic equivalent of direct government payouts, since the forgone taxes left equivalent-sized holes in the government’s budget. And the holes steadily widened: By the mid-1980s, at the peak of their scope and generosity, the revenue losses to the Treasury from such tax-favored employee benefits were larger than expenditures on all federal programs for the poor. The tax subsidy for employee health plans roughly equaled what was spent directly on health care for the poor through Medicaid, and revenue losses from tax-advantaged pension contributions totaled more than twice the total cost of cash aid for the poor.[12]

THE RULES OF EMPLOYMENT

By midcentury, the transformation was complete. More than a third of all working Americans belonged to a union, and agreements between management and labor set wage and benefit levels throughout industry. Labor, management, and government together ushered into the middle class a large phalanx of blue-collar workers and stabilized the middle-class status of an expanding group of white-collar employees. The careers of these latter “organization men” (to use the felicitous phrase from sociologist William H. Whyte, Jr.’s best-seller of the era)[13] came to be as ordered and predictable as those of their blue-collar counterparts. So common as to be taken for granted, the implicit rules of employment at midcentury still frame much of our understanding, although they have almost nothing to do with the emerging reality of work in the twenty-first century. For example:

*Steady work with predictably rising pay.* The typical employee spent almost his entire working life within the same enterprise. This was true not only of blue-collar workers; middle-level managers often joined their companies fresh out of college and remained with them until retirement. Two-thirds of senior executives surveyed in 1952 had been with the same company for more than twenty years.[14] The young white-collar men interviewed by Whyte gave voice to the accepted view: “*Be loyal to the company and the company will be loyal to you,*” they told him (emphasis in the original). “[T]he average young man cherishes the idea that his relationship with The Organization is to be for keeps,” wrote Whyte. Mutual loyalty could be counted on because, it was thought, “the goals of the individual and the goals of the organization will work out to be one and the same.”[15]
