I’ve been wanting to comment on the #Occupy movement for quite some time, but events keep outrunning my prose. That’s still true today. So what follows is just some disjointed musings to emphasize how delighted I am with these young people—the ones who’ll have to live in the mess we’ve created—and how crucial I think their movement is. Just consider: a few weeks ago, the wacky right seemed firmly in command of the entire political spectrum. Obama was reeling from hits to every one of his proposals, no matter how lame. All we heard was the Tea Party and the rantings and ravings of the Republican pretenders to the White House: Tweedledum and Tweedledumber-by-the-minute (I mean really, has there ever been such a gathering of cruel, incompetent morons in a presidential primary?).
Now, though, the #Occupy movement in city after city has changed all that. Just this morning, for example, I read a piece about the latest initiative in Congress: Ted Deutch (D-FL) has offered a constitutional amendment (he calls it OCCUPIED: Outlawing Corporate Cash Undermining the Public Interest in our Elections and Democracy) to affirm that “rights protected by the Constitution belong to human beings, not to for-profit corporations or other business entities.” It would “prohibit business corporations and their associations from using money or other resources to influence voting on candidates or ballot measures anywhere in America.” Amazing. The Democrats in Congress are clearly feeling the heat from the occupiers, and some, at least, are starting to find some damn backbone.
Of course, it won’t be enough. But this is what such movements are supposed to do: change the debate, and force legislators to act rather than hide behind mealy-mouthed rhetoric. And just before this, I watched a video of a few dozen occupiers marching—on foot, along the highway where people can stop and congratulate them—from New York to Washington. They plan, according to some of their interviews, to barge in on the deadlocked “Super Committee” that’s supposed to be coming up with compromise measures to reduce the deficit. Of course, this “stupor committee” will do nothing of the kind, but the occupiers are pushing ahead, getting some press, and dramatizing the determined inaction of the U.S. Congress.
Even before that, I read the beautiful op-ed (NY Times) written by former poet laureate Robert Hass about his encounter with the police at UC Berkeley’s occupy gathering last week. In brief, Hass and his wife, poet Brenda Hillman, decided to monitor police behavior the night the police were to remove the occupiers from UC’s Sproul Plaza. Instead, the Hasses found themselves stuck in a crowd being forced together, and when Hillman sought to engage a policeman in dialogue, he struck her to the ground, also striking Hass when he tried to come to her aid. Hass, nursing bruised ribs, decries the militaristic tactics of the Darth Vader forces that have attacked, without provocation, the occupiers from New York to Denver to Oakland to San Francisco in what many see as a coordinated attempt to intimidate the occupiers, break their movement, and discourage any others who might be thinking of joining them. It hasn’t worked so far. Each broken-up demonstration has simply come back stronger—a fact we learned in the ’60s: inducing the authorities to overreact is part of revolutionary strategy. And these days, post-9/11, one hardly has to induce at all. The militarized police forces—the equipping of whom has become a booming industry for America’s military-industrial complex—seem to be either on hair-trigger alert, or specifically instructed to beat the hell out of a few hundred demonstrators, regardless of provocation or law-breaking, to send a message. Fortunately, the message is having the opposite effect. Police brutality is encouraging, rather than discouraging, people to join the movement. And if polls are correct, millions of Americans, like myself, are cheering them on from the sidelines.
The police will scale back their brutalities, and already have—especially after the horrific video of a helmeted officer walking calmly back and forth dousing a sitting group of UC Davis students blocking a sidewalk with pepper spray, an act that called forth condemnation and an investigation by the UC Davis Chancellor. But things have gone very far already, and the police, like all authorities, are fixed in their attitudes. Crowds threaten them. Protest types disgust and alarm them. Used to intimidating, used to immediate compliance with their orders no matter how unreasonable, they respond almost automatically (their force has been rationalized by one spokesperson who said “linking arms is a form of violence”). Indeed, the conflict between police/soldiers and unarmed demonstrators has become the emblem of our time—in Tunisia, in Egypt, in Yemen, in Burma, in Libya, in Syria. The only question in any such situation is how far these “upholders of law and order” will go to snuff out the legitimate cries of the suffering.
And this is why, in the end, the #Occupy movement is so important. Ordinary people, mostly young people, are demonstrating that the situation—of inequality, of organized theft, of corporate malfeasance, of ecological disaster—has become so dire that they are willing to put their bodies on the line to change not just rhetoric, but everything. Even former lawmen—I know of two who have recently joined the occupiers: Ray Lewis, a retired Philadelphia police captain (arrested), and Norm Stamper, former police chief of Seattle—are adding their voices to the rising chorus. Where all this will end is anybody’s guess: it could fizzle in the cold and wet. But one thing is sure. Those in power are taking note, and planning furiously to deflect the movement, infiltrate the movement, discourage and discredit the movement (this just in: Reader Supported News is reporting that a well-known DC lobbying firm has proposed an $850,000 plan to conduct ‘opposition research’ on the Occupy movement and construct ‘negative narratives’ about it. See it at readersupportednews.org). There is fear in their hearts, because they know that the movement has focused on the one truth that cannot be denied: We really are the 99%, and without our cooperation, they cannot maintain their exploitation of the masses. For that alone, I salute the occupiers. And hope, when the time is ripe, to join them.
Lawrence DiStasi
Sunday, November 20, 2011
Wednesday, November 16, 2011
One Continuous Mistake
I’ve recently read an inspiring book called Fire Monks, which tells the story of how five Zen monks from the San Francisco Zen Center, against all odds, saved the center’s monastery at Tassajara, near Big Sur during the raging forest fire there in 2008. In one segment, the writer, Colleen Busch, quoted a phrase that SF Zen master Suzuki Roshi liked to use: it referred to life, even a Zen master’s life, as shoshaku jushaku, Japanese for “one continuous mistake.”
That resonated with me. Being a perfectionist, I’m always trying to get everything just right. I think it’s a common ailment among humans: We think we can outwit life and, by doing things just right, insulate ourselves—from rain water coming into our houses, from waste water leaking out, from fire, from flood, from storm, from sickness and death, from all the shocks that flesh is heir to. We cannot. Everything we do, every decision we make is dogged by mistakes. Indeed, when looked at carefully, most life decisions are impossible decisions, impossible to get right, that is. We are constantly making mistakes, always falling short to one degree or another, except where we’re very lucky.
Even in baseball (I’ve also been reading A. Bartlett Giamatti’s profound study of American sport, especially baseball, Take Time for Paradise), the situation is the same. As Giamatti points out, America’s sport is a game dominated by failure. Most attempts to reach base safely, not to mention getting home, are failures. Even the .300 hitter, the game’s crème de la crème, fails to hit safely two out of three times. Which is to repeat that life is one continuous mistake; we work hard to realize our “dreams,” but most of the time, most of us fail.
I have been reminded of this lately while doing work on my house. To engage in this kind of work—carpentry, painting, roofing—is to engage in continuous mistakes. Measurements don’t work out. One forgets to consider some inner cut. A carpenter I used to work with years ago had a wonderful way of dealing with this: “The universe,” he said only half joking, “is off by a quarter of an inch. Measure all you want, you’ll always be off.” I think he was right. We humans insist on making right angles of the world, when in fact, nature is all circles and curves. So whole pieces of lumber or fittings get wasted. The paint is either too thick or too thin and the color never looks the way it did on the sample. And when it comes to painting it on, there are always “holidays,” or slips of the brush splattering glass instead of wood, and if you’re really paying attention, boards that turn out to be dry-rotted or termite-infested. One continuous mistake.
Nor is it any different in the writing business. Years ago at Harcourt Brace, a manuscript I was preparing for publication had book titles typed in capital letters—a convention for typed manuscripts before computers (typewriters had no italics). The editor was always supposed to convert such caps for the printer by marking them ‘ital’ and lower-casing all but the first letter. In the first book I edited on my own, I overlooked this little detail, so the book was printed with book titles in all caps. Humiliating; but I never mentioned it and no one seemed to notice. Writing one’s own books carries the same, or even greater hazards. The writer lives in fear of that mistake—the one that appears in the title, in chapter titles, in boneheaded misquotes, in whole paragraphs that get cut off. In fact, I have never written an essay for publication that didn’t get mangled by the publication in one way or another. Most people never notice, but the writer does, living always with the knowledge that no book, no piece of art is ever perfectly rendered, and it haunts him. One continuous mistake.
Even when a star rises to the top, how many are there who avoid conspicuous, often fatal mistakes? Consider the last few presidents we’ve had—successes in the most exalted sense, having made it to the highest position available in America, or, for that matter, the whole world. And yet, think of the recent ones: Lyndon Johnson declining to seek re-election, worn down by the Vietnam War, with war protesters shouting outside the White House, “Hey Hey LBJ, how many kids have you killed today?” Richard Nixon following him, narrowly beating Humphrey and then winning a re-election landslide over McGovern, a triumph that led to encomiums about his political genius; within months he was embroiled in the greatest scandal in American history, Watergate, and within two years, with impeachment looming, he resigned in disgrace, consoling himself only with that pathetic mantra, “I am not a crook.” Then Ford pardoning Nixon, another scandal, mouthing his presidential mantra that no one is above the law, but of course sometimes the law must accede to “reality” (i.e. power). Jimmy Carter taking office next, with high hopes, but shortly after his major achievement at Camp David, confronted with the Iran hostage crisis that drove him from office in ridicule. Then Reagan: Mr. “Morning in America.” But before too long, mired in his own disgrace, Iran-Contra, confirming him as lawbreaker-in-chief, and in hindsight seen by many as the architect of the long-term collapse not so much of the Soviet Union, but of American capitalism itself. Then G. H. W. Bush faltering on every level, shortly after claiming a transformative victory over Iraq. And Clinton with a few victories in the economy, but so unable to control his wee wee that he ends up barely fighting off impeachment, his legacy “I did not have sexual relations with that woman.” And W: need we say anything about W? Non-existent weapons of mass destruction as justification for war? Deaths in the millions for what? To create the worst economic disaster since the Great Depression? And now Obama, failing utterly to fulfill the promise, siding with the worst elements that brought the nation to the brink, abandoning those who swept him into office, and unable now to get even a small bill through Congress.
In short, the U.S. presidency, not to mention the current Congress, is one continuous mistake.
The trick, for the zenist, and for all of us, is how to come to terms with this, with this knowing that no matter how carefully one proceeds, mistakes are continuous. (The obverse is that in zen, it is said, one can never make a “wrong” decision; or a “right” decision, for that matter. One just does what is needed at the time.) Or, as Suzuki Roshi used to say, just commit and do your best: ‘It’s the effort that counts, the sincere commitment to wake up, wherever you are. That’s all anyone can ask.’ Which is also to say, awake to what life is. Despite the assurance of our myths, life is not success; life is not progress; life is not keeping the rain out completely (other animals simply get rained on; and don’t melt). Life is one continuous mistake. Which is pretty much how it proceeds. Mistake after mistake, leading over time, perhaps, to a little tinkering here, a bit of tinkering there with just enough of us getting by to keep it going, the only success being its continuation; our continuation. It goes on; we go on: the entire creation. Which is about all we can say; and isn’t that, mistakes and all, miraculous enough?
Lawrence DiStasi
Wednesday, November 9, 2011
Wake-up Time?
I have to confess that I didn’t go to the polls yesterday, having glanced at the sample ballot to find mostly school bond issues of little interest to me now. But across the country, what an election it was. Though it may be too much to hope, it seems that our great unwashed are finally waking up to the fact that capitalist democracy, in its present form, is not going to save them. Rather, the oligarchs and banksters and Wall Street billionaires now in control of both the economy and the political process will never be satisfied until they have ground the faces of the working classes into the dirt, stripped them of all dignity, and forced them to shut up, watch the circus, and become slaves. But wait: enter the Wall Street occupiers who, contrary to all expectations, seem to have changed the conversation, and now, voters across the country have shown that they, too, are fed up.
In Ohio, where Governor John Kasich had emulated his Republican counterpart in Wisconsin by pushing a law, SB 5, that stripped public sector unions of their right to collectively bargain, the voters repealed the law in a huge victory for union rights. Over 60% of voters stood with nurses, teachers, policemen and firefighters in a victory that had the Ohio governor sheepishly acknowledging that he had “heard the voters.” I just bet he did. I bet the smart-ass governor of Wisconsin heard too. Perhaps even the billionaire Koch brothers, who financed much of this concerted Republican attack on workers, heard it as well. Because this wasn’t the only reversal for the conservatives who just months ago appeared poised to take over the whole nation.
No. Progressive victories took place in several more states, including Maine, Mississippi, Iowa, Arizona and North Carolina. Something is happening here, Mr. Jones. In Maine, the people voted to maintain their same-day voter registration policy after the right-wing legislature had passed a law to repeal it—employing their usual argument about “voter fraud.” The people didn’t believe it, saw it as disenfranchisement, and yesterday took their right back. In Mississippi, voters struck back on a different front, rejecting another attempt by fundamentalists to pass a constitutional amendment granting “personhood” to a “fertilized egg.” That’s right. On the one hand, these right-wing bozos grant personhood to corporations; on the other, to “fertilized eggs,” thus putting at risk not just abortions, but even birth control. Even benighted voters in Mississippi said “no thanks,” thank god.
But my two favorites, at least in the U.S., were Arizona and Missoula, Montana. In Arizona, the Republican state senator who had pushed the state’s nasty immigration bill, SB 1070, one Russell Pearce by name, was recalled. Tossed out of office. The gofer for the notorious American Legislative Exchange Council (ALEC)—funded by corporate special interests including the aforementioned Koch brothers—Pearce this morning was talking about having to re-examine his options after his big defeat. Which probably means figuring out how to maintain his racism by putting a more palatable face on it. No matter. He’s gone and SB 1070 should be toast. The Koch brothers suffered another defeat in Wake County, North Carolina, where voters defeated four conservative school board candidates backed by the Kochs’ “Americans for Prosperity” who wanted to get rid of the district’s diversity policies. In other words, to re-segregate the schools. The voters said no, and replaced them with Democrats. Why, it might even be called morning in America!
Finally, in Missoula, MT (site, incidentally, of the camp where Italian Americans were interned during WWII), citizens passed a resolution proposing to amend the U.S. Constitution to END CORPORATE PERSONHOOD. To me, this is potentially the most important victory of all. This is because the absurd notion that corporations are actually persons, with all the First Amendment rights granted to human beings by the U.S. Constitution—including and specifically free speech (the basis for the Supreme Court decision in Citizens United granting corporations complete freedom to throw money at any and all candidates for public office without restrictions)—makes a mockery of democracy itself. Corporations are fictitious entities. Persons organize themselves into corporations specifically to limit their liability as individual humans in business dealings. That limited liability is granted because it allows corporations to do what individuals cannot—so to then turn around and grant that immunized fiction the same protections as vulnerable humans is an absurdity. Further, the Supreme Court itself never actually decided on this issue; it was the court reporter, J.C. Bancroft Davis, who added a headnote to the 1886 Santa Clara case that assumed the personhood of corporations—a headnote that slipped by and became precedent ever after. In other words, corporate personhood should never have had the force of law. Since it does, however, the remedy is to pass a constitutional amendment to bring the situation back to where the Founders—Jefferson, Madison, and others who insisted that it was the people who needed protection from corporations—initially put it. Humans have human rights. Corporations do not, except in the fictitious world established in the United States in recent years. As one sign in the Occupy movement put it, “I’ll believe corporations are persons when Texas executes one.” It is time to abolish this so-called right, and the voters of Missoula, Montana, took a first step. My hope is that before too long, the entire nation will wake up as well, and take the necessary actions to put corporations and their power back in the bottle where they belong. If, that is, it isn’t already too late—which it will be if Italy now comes undone (Berlusconi’s downfall being another victory), joining Greece, and the whole Eurozone follows suit. Then, it might be too late not only to save Europe, but to save capitalism as well.
First things first, though, and today we can raise a glass to some small, but significant victories. May they continue.
Lawrence DiStasi
Friday, September 30, 2011
Greed Be Good
In 1965, Alan Greenspan wrote:
“It is precisely the greed of the businessman, or more precisely, his profit-seeking, which is the unexcelled protector of the consumer” (Madrick, 228).
This should really be the epitaph inscribed on the tombstone of the American economy. Far from ‘protecting’ consumers, the greed that has defined American business and especially Wall Street these last 40 years has decimated the economy, loaded businesses with debt, put millions of Americans out of work, and transferred huge chunks of American industry to foreign countries such as China. Therein lies the theme of Jeff Madrick’s crucial book, The Age of Greed (Knopf: 2011). To read it, with its portraits of banksters and junk bond traders and acquisition specialists and CEOs of America’s largest corporations, is to learn of chicanery, conniving and contempt for average Americans on such a scale as to sometimes deceive the reader into thinking he is reading Dante’s Inferno. Such characters—some of the mightiest names in corporate and political America in the latter years of the 20th Century, names like Rubin and Weill and Reagan and Greenspan and Friedman and Milken and Boesky and Welch—do deserve a poet like Dante to fix them in an appropriate level of pain and torment. While Madrick is not that poet, he does a creditable enough job of this to sicken even the most cynical reader, for his is the tale of the outright looting and crippling of the American industrial might (along with its workers) that was once the envy of the world.
The book begins with the general proposition that while industry and transportation and communications and retailing were once the foundations of American wealth and prosperity, “by the 1990s and 2000s, financial companies provided the fastest path to fabulous wealth for individuals” (24). And where government was once seen as a needed supporter and regulator of such enterprises, Milton Friedman’s economic doctrines, put into saleable form by Ronald Reagan and Alan Greenspan, turned government into the enemy. As Friedman wrote, “The fact is that the Great Depression, like most other periods of severe unemployment, was produced by government (mis)management rather than by the inherent instability of the private economy.” The answer to all problems, in this tortured view, lay not in government actions to help those who need it, but in reducing government and lowering taxes so as to (allegedly) make the poor better off, eliminate inequality and discrimination, and lead us all to the promised free-market land. As noted above, Alan Greenspan believed wholeheartedly in these and other theories (especially those espoused by his friend Ayn Rand), and Ronald Reagan became the shill for selling such pie-in-the-sky nonsense to the American public. As with his sales work for General Electric, Reagan marketed the kool-aid more successfully than anyone could have anticipated. In office in California as governor, he blamed welfare recipients for the state government’s financial problems: “Welfare is the greatest domestic problem facing the nation today and the reason for the high cost of government.” When he got to the national stage with inflation rampant, he hit government profligacy even harder. “We don’t have inflation because the people are living too well,” he said. “We have inflation because government is living too well” (169). All this was coupled with his mantra that getting back to the kind of “rugged individualism” that had made America (and himself) great required reducing taxes. And reduce he did. From a tax rate that was at 70% on the highest earners when he took office, he first signed the 1981 Kemp-Roth bill to reduce it to 50%, and then, in 1986, with the Tax Reform Act, reduced it even further to 28%. Meantime, the bottom rate for the poorest Americans was raised from 11% to 15%, while earlier, Reagan had also raised the payroll tax (for Social Security) from 12.3% to 15.3%. This latter raise, it should be noted, coupled with the provision that only wages up to $107,000 would be taxed for SS, meant that “earners in the middle one-fifth of Americans would now pay nearly 10% of their income in payroll taxes, while those in the top 1% now paid about 1-1/2%” (170). And what Reagan never mentioned about his “rugged individualism” is that he was made wealthy by those rich men who cajoled him to run for office: his agent arranged for 20th Century Fox to buy Reagan’s ranch for $2 million (he had paid only $65,000 for it), giving him a tidy profit with which to buy another ranch that also doubled in price when he sold it.
But such tales treat only the enablers. It is when we get to the actual hucksters of this story that things get interesting (or nauseating, depending on your point of view.) The basic scheme went like this: find a company that is undervalued—often because it had managed its assets so well it had cash on hand—and acquire it, using debt to finance the takeover. Then make money—and I mean millions and billions—on all the steps involved in the takeover, including the debt service, the legal fees, and the rise (or fall) in the stock price. For in the age of greed that Madrick documents, the stock price was all. Anything that pushed the stock price of a company up was good. Anything that pushed it down was bad (unless you were one of those smart guys like hedge-fund ace George Soros who worked the “shorts”). And of course, the best way to get a company’s stock price to go up was to increase profits. And the best way to do that was not to innovate or develop better products, but to slash costs, i.e. fire workers. Here is how Madrick puts it:
American business was adopting a business strategy based on maximizing profits, large size, bargaining power, high levels of debt, and corporate acquisitions…Cutting costs boldly, especially labor costs, was a central part of the strategy. (187)
What began to happen in the 1980s and into the 1990s was that all companies, no matter how successful, became targets of the ruthless merger mania that replaced normal business improvements. Lawyers like Joe Flom and takeover artists like Carl Icahn and T. Boone Pickens could spot an undervalued, or low-stock-price, company (the process reminds one of wolves spotting a young or lame deer in a herd) to take over, using borrowed money to finance it (90% of the purchase price). The borrowing then demanded that the new merged company cut costs in order to service the huge debt required for the merger—which in turn required firing workers. If a company did not want to be taken over, the only way to avoid it was to get its stock price to rise, and this, too, required the firing of workers. In either case, the workers took the hit. But the CEOs running the merged ventures, often sweethearted into selling by generous gifts of stock, “usually made a fortune.” As Madrick notes, in 1986, Macy’s CEO Ed Finkelstein arranged for a private buyout of his firm, for $4.5 billion, and became the “envy of fellow CEOs” (174). Like many other mergers, however, this one drained what was one of America’s most successful retail operations, and Macy’s went bankrupt in 1992. Madrick concludes:
The allegiance of business management thus shifted from the long-term health of the corporations, their workers, and the communities they served, to Wall St. bankers who could make them personally rich... (173)
In the process, of course, the Wall Street bankers and leveraged buyout firms (LBOs) like Kohlberg Kravis Roberts who arranged the buys and the financing took in obscene amounts of money. So did risk arbitrageurs (who invest in prospective mergers and acquisitions, angling to buy before the stock price rises on the rumor of a merger) like Ivan Boesky. Earning $100 million in one year alone (1986, when he was Wall Street’s highest earner), Boesky needed inside information to buy early, and got into the little habit of paying investment bankers for that information, i.e. on upcoming deals. Unfortunately for him, he got caught in his banner year because one of his informants (Dennis Levine of Drexel Burnham) was arrested and agreed to name those he had tipped off. Boesky was one (the deal was to pay Levine 5% of his profits for early information on a takeover), and he too was subpoenaed in the fall of 1986. Boesky immediately agreed to finger others (agreeing to wear a wire at meetings), and nailed Martin Siegel, also with Drexel, who, in turn, kept the daisy chain of ratting out associates going by naming Robert Freeman, an arbitrageur at Goldman Sachs. Nice fellows. Boesky ended up serving three years in prison, but he fingered an even bigger fish, Michael Milken. Then the wealthiest and most ruthless Wall Streeter of all, Milken, who made his money in junk bonds (risky high-interest bonds to ‘rescue’ companies in trouble), was sentenced to 10 years in jail (reduced to 2 years for good behavior) for securities violations, plus $1.3 billion in fines and restitution. He’d made so much money, though, that he and his family still had billions, including enough to start a nice foundation for economic research, to commemorate his good name in perpetuity.
There are, of course, lots of other admirable characters in this tale, but one in particular deserves mention: Jack Welch, the revered CEO of General Electric. This is because Welch’s reign at GE typifies what greed did to a once-great American institution, the very one that Ronald Reagan shilled for in a more innocent age, the one that brought the Gipper to the attention of the big money boys. Welch made enormous profits for GE (in 2000, his last full year at the helm, GE earnings had grown by 80 times to more than $5 billion), and himself, but he didn’t do it the “old fashioned way,” i.e. by developing new and better products. He did it by shifting the emphasis at GE from production to finance. Welch saw the value of this early:
“My gut told me that compared to the industrial operations I did know, the business (i.e. GE Capital) seemed an easy way to make money. You didn’t have to invest heavily in R&D, build factories, or bend metal…” (191)
To give an idea of how this works, Madrick points out that “in 1977, GE Capital…generated $67 million in revenue with only 7,000 employees, while appliances that year generated $100 million and required 47,000 workers” (191). Welch did the math. It didn’t take him long to sell GE’s traditional housewares (small appliance) business to Black & Decker, outraging most employees, though not many of them were left to protest: in his first two years, Welch laid off more than 70,000 workers, nearly 20% of his work force, and within five years, about 130,000 of GE’s 400,000 workers were gone. Fortune Magazine admiringly labeled him the “toughest boss in America.” And by the time he left the company in 2001, GE Capital Services had spread from North America to forty-eight countries, with assets of $370 billion, making GE the most highly valued company in America. The only problem was, with the lure of money and profits so great, GE Capital acquired a mortgage brokerage (Welch was no mean takeover artist himself) and got into subprime lending. In 2008, GE’s profits, mostly based on its financial dealings, sank like a stone, with its stock price dropping by 60%. Welch, the great prophet of American competition, now had to witness his company being bailed out by the Federal Deposit Insurance Corporation: since GE owned a small federal bank, the FDIC guaranteed nearly $149 billion of GE’s debt. So after turning a U.S. industrial giant into a giant bank, the man Fortune Magazine named “manager of the century” also succeeded in turning it into a giant welfare case. Perhaps there’s a lesson here somewhere.
There’s more in this disturbing book—such as the fact that Wall Streeters not only attacked corporations in takeovers, they also attacked governments (George Soros’s hedge fund attacked the British pound in 1992, as well as Asian currencies in 1997, causing crises in both places, and ultimately, cutbacks in government programs for the poor)—but the story is the same. During several decades of Wall Street financial predation, insider trading, and more financial chicanery than most of us can even dream of, the high-rolling banksters made off with trillions of dollars, and most others (including union pension funds) lost their shirts. Madrick quotes John Bogle, founder of the Vanguard Group, concerning the bust of the high-tech IPO bubble: “If the winners raked in some $2.275 Trillion, who lost all the money?...The losers, of course, were those who bought the stocks and who paid the intermediation fees…the great American public” (332). The same scenario was played out again and again, in derivatives trading, in the housing boom, in the mortgage-backed securities boom, in the false evaluations of stock analysts like Jack Grubman, in the predatory mergers and subprime shenanigans of Citigroup CEO Sandy Weill, and on and on, all with an ethic perfectly expressed in an email, made public by the SEC, commenting on how ‘the biz’ was now run:
“Lure the people into the calm and then totally fuck ‘em” (334).
That’s essentially the story here. And the sad ending, which most of us haven’t really digested yet, is that the very vipers who cleverly and maliciously calculated each new heist, and made off with all the money while destroying the economy, then got federal guarantees and loans that came to more than $12 trillion (that’s trillion) to “save the country.” And who now lobby for “austerity” and “leaner government” and fewer “wasteful social programs” like Social Security and Medicare, and for fewer regulations, so that their delicate business minds can feel safe enough to invest again. And who will, no doubt, save us all again with their unfettered greed.
In which case, I’ll sure feel protected. Won’t you?
Lawrence DiStasi
Tuesday, September 27, 2011
Avatars and Immortality
Anyone who has read or heard even a little history knows that the dream of immortality has existed among humans for a very long time. Most of these dreams (though not all, as the Christian fundamentalist notions of the “rapture,” and Islamic fundamentalist notions of a heaven full of virgins awaiting the martyrs who blow themselves and others up, prove) have been debunked in recent years, when even the Roman Catholic Church has pretty much abandoned its notion of an afterlife in fire for those who’ve been ‘bad’ (whether Catholics still believe in a blissful Heaven for those who’ve been ‘good’ remains unclear to me).
What’s astonishing is that this dream of living forever now exists in the most unlikely of places—among computer geeks and nerds who mostly profess atheism. It exists, that is, in two places: virtual reality, and the transformation of humans into cyborgs (though cyborgs don’t specifically promise immortality, they do promise to transform humans into machines, which is a kind of immortality—see Pagan Kennedy, “The Cyborg in Us All,” NY Times, 9.14.11). If you can create an avatar—a virtual computerized model—of yourself (as has been done for Orville Redenbacher, so that, though dead, he still appears in his popcorn commercials), you can in some sense exist forever. The title of the avatar game on the internet, “Second Life,” reveals this implicitly. So does the reaction of volunteers whom Jeremy Bailenson studied for a Stanford experiment purporting to create avatars that could be preserved forever. When the subjects found out that the science to create immortal avatars of themselves didn’t yet exist, many screamed their outrage. They had invested infinite hope in being among the first avatar-based immortals.
Before dismissing this as foolish dreamery, consider how far this movement has already gone. Right now, the video games that most kids engage in (my grandson has a Wii version of Star Wars in which he ‘becomes’ Lego-warrior avatars who destroy everything in sight) “consume more hours per day than movies and print media combined” (Jim Blascovich and Jeremy Bailenson, Infinite Reality: Avatars, Eternal Life, New Worlds, and the Dawn of the Virtual Revolution, Morrow, 2011, p. 2). The key point about this, moreover, is that countless neuroscience experiments have proved that “the brain doesn’t much care if an experience is real or virtual.” Read that again. The brain doesn’t care whether an experience is “only virtual.” It reacts in much the same way as it does to “reality.”
Frankly, until I read Infinite Reality, all of this had pretty much passed me by. I had read about virtual-reality helmets such as the kind used to train pilots, but I had no idea that things had gone so far. I had no idea that millions of people sign up for the online site called “Second Life” (I tried; it seemed impossibly complex and stupid to me), and invest incredible amounts of time and emotional energy setting up an alternate personality (avatar) that can enter the website’s virtual world and interact in any way imaginable with other people’s avatars. Needless to say, most people equip their avatars with qualities they would like to have, or have wondered about having. Then they go looking for people (avatars) with whom to experiment in a wished-for interaction. The most common interaction, not surprisingly, seems to be sex with another avatar, or several others; but there’s also a lot of wheeling and dealing to gain wealth and prestige. Talk about “be all that you can be!”
Still, the really interesting stuff happens when you get into a virtual laboratory. Whereas “Second Life” takes place on a flat computer screen, virtual reality really comes into its own when you don a headset that can simulate scenes in 3D with such fidelity that when people approach a simulated pit in front of them, they invariably recoil (even though they’re “really” walking on a level floor). While virtual reality of this kind is expensive today, there can be little question that it will soon become commonplace. Rather than spending tons of money traveling to China, say, one will be able to go there “virtually,” without having to endure the travails of travel, including bothersome other people. What makes this eerie is that video games are already working with this kind of VR, and creating avatars. In games and systems like Pong, the Wii, PlayStation Move, and Kinect, the game computer can already “track” a user’s physical movements and then “render” a world incorporating those movements into a virtual tennis scene that is authentic in all necessary details. So,
In a repetitive cycle, the user moves, the tracker detects that movement, and the rendering engine produces a digital representation of the world to reflect that movement…when a Wii tennis player swings her hand, the track wand detects the movement and the rendering engine draws a tennis swing. (p. 44)
As Bailenson notes, “in a state of the art system, this process (of tracking and rendering the appropriate scene from the point of view of the subject) repeats itself approximately 100 times a second.” Everything in the virtual scene appears smooth and natural, including, in the game “Grand Theft Auto,” an episode where players can “employ a prostitute and then kill her to get their money back.” And remember, the brain reacts to all this in the same way it does when it is “really” happening.
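To make that cycle concrete, here is a minimal sketch in Python of a track-and-render loop of the kind described above; the function names (track_pose, render_frame) and every detail inside them are my own placeholders, not anything taken from Infinite Reality or from a real VR engine:

import time

TICK_HZ = 100                  # Bailenson's "approximately 100 times a second"
TICK = 1.0 / TICK_HZ

def track_pose():
    # Placeholder for the tracker: a real rig would poll a camera,
    # accelerometer, or hand-held wand for the user's current position.
    return {"head": (0.0, 1.7, 0.0), "hand": (0.3, 1.2, 0.2)}

def render_frame(pose):
    # Placeholder for the rendering engine: redraw the virtual scene
    # from the point of view given by the tracked pose.
    pass

def run_vr_loop(seconds=1.0):
    # The repetitive cycle: the user moves, the tracker detects the movement,
    # and the renderer redraws the world to reflect it.
    deadline = time.time() + seconds
    while time.time() < deadline:
        pose = track_pose()    # 1. detect the user's movement
        render_frame(pose)     # 2. redraw the scene for that movement
        time.sleep(TICK)       # 3. repeat roughly 100 times per second

if __name__ == "__main__":
    run_vr_loop(1.0)

A real system would sync each frame to the headset’s display rather than sleeping on a timer, but the shape of the loop is the same: track, render, repeat, fast enough that the brain never notices the seams.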
The implications for a psychologist like Bailenson are profound. Short people, for example, who adopt a tall avatar for themselves show definite improvements in their self-image, even after they’ve left the avatar behind. They also show improvements in competition: in real games held afterwards, the person whose avatar was taller became a more successful negotiator. Those who fashion a trim, beautiful avatar show the same rise in self-esteem. Bailenson also notes the importance of people’s attributions of “mind” or reality to inanimate objects like computers, and this includes avatars. In one experiment, subjects were shown a real person named Sally, and then her avatar disfigured with a birthmark (neurophysiological studies show that interacting with a “stigmatized other,” even someone with a birthmark, causes a threat response). After four or five minutes interacting with Sally’s disfigured avatar, subjects displayed the heart-rate response indicating threat—even though they knew the real Sally had no birthmark. And the games sold to consumers keep getting more sophisticated in this regard. In the Sony PlayStation game THUG 2 (over 1 million sold in the U.S.), players can upload their photos onto the face of a character, and then have their “clones” perform amazing feats of skateboarding, etc. They can also watch them performing actions not under their control. This brings up the question of the effect of watching one’s “doppelganger” (a character with one’s appearance) do something in virtual reality. It appears to be profound: the more similar a virtual character is to the person observing, the more likely the observer is to mimic that character. This can be positive: watching a healthy person who seems similar can lead a person to adopt healthy behavior. But other possibilities are legion. Bailenson mentions the commercial ones:
…if a participant sees his avatar wearing a certain brand of clothing, he is more likely to recall and prefer that brand. In other words, if one observes his avatar as a product endorser (the ultimate form of targeted advertising), he is more likely to embrace the product. (119)
In short, we prefer what looks like us. Experiments showed that even subjects who knew their faces had been placed in a commercial still expressed a preference for the brand after the study ended. Can anyone imagine that most corporations aren’t already planning for what could be a bonanza in narcissistic advertising?
More bizarre possibilities for avatars, according to Bailenson and Blascovich, seem endless. In the brave new world to come, “wearing an avatar will be like wearing contact lenses.” And these avatars will be capable not only of ‘seeing’ virtual objects and ‘feeling’ them (using ‘haptic’ devices), but of appearing to walk among us. More ominously, imposters can “perfectly re-create and control other people’s avatars,” as has already happened with poor old Orville Redenbacher. Tracking devices—which can see and record every physical movement you make—make this not only possible but inevitable. Everyone, in all physical essentials, will be archived.
All of this makes the idea of “the real world” rather problematic. Of course, neuroscience has already told us that the ‘world’ we see and believe in is really a model constructed by our brains, but still, this takes things several steps beyond that. For if, in virtual reality, “anybody can interact with anybody else in the world, positively or negatively,” then what does it mean to talk about “real” experience? If “everything everybody does will be archived,” what does privacy mean?
At the least, one can say this: a brave new world is already upon us (think of all those kids with video games; think of how much time you already spend staring at your computer screen), and you can bet that those with an eye to profiting from it are already busy, busy, busy. One can also say, take a walk in the real outdoors with real dirt, grass, trees, worms, bugs, and the sweet smell of horseshit; it may soon be only a distant memory.
Lawrence DiStasi
Friday, September 9, 2011
The Spirit of Capitalism
I have been reading Max Weber’s seminal work, The Protestant Ethic and the Spirit of Capitalism, lately, and it illuminates a great deal about the spirit of our times—a spirit that has been termed The Age of Greed by Jeff Madrick in his recent book of that name. And while what Madrick describes is really the transformation of America, over the last 40 years, from an industrialized society into a financialized one, his book doesn’t address the origins that interest me here. Weber was interested in those origins too. His question was really not only ‘why do people work to begin with?’ (primary cultures had no concept called “work” at all and only exerted themselves periodically in war or in short-term hunting and gathering), but, more relevant to his time, ‘why do people in modern society identify themselves as laborers?’ How was it possible for western culture to transform itself from a traditional culture, where labor hardly existed except as part of a manorial household, to the post-1600s capitalist society where free laborers are yoked to paying jobs in capitalistic enterprises? More specifically, how could a state of mind that Weber finds best illustrated in Ben Franklin (a penny saved is a penny earned; time is money; credit is money—i.e. it is a duty to increase one’s capital) come to be adopted by whole societies when, in the Middle Ages and before, that state of mind would “have been proscribed as the lowest sort of avarice?” As sinful greed? To illustrate how remarkable this shift is, Weber compares traditional laborers with modern laborers. A farm owner, for example, who pays his workers at a piece-rate (like modern farm workers paid at so much per bushel), thinks to increase production by increasing rates. This works with modern workers, but when applied to traditional laborers, the increased rate backfires. The traditional worker, that is, not only does not increase his work rate, he decreases it—he works more slowly so as to still earn the same daily amount (a small worked illustration follows the quotation below). As Weber summarizes it, “the opportunity of earning more was less attractive than that of working less.” Thus the attitude of traditionalism:
A man does not “by nature” wish to earn more and more money, but simply to live as he is accustomed to live and to earn as much as is necessary for that purpose. (60)
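To put rough numbers on that piece-rate backfire, here is a minimal sketch in Python; the figures (a rate of $1.00 per acre raised to $1.25, and an accustomed daily income of $2.50) are illustrative assumptions of mine, not Weber’s:

# Illustrative numbers only (mine, not Weber's): a harvester paid by the acre.
old_rate = 1.00            # dollars per acre, the original piece-rate
new_rate = 1.25            # the raised rate, meant to coax out more work
accustomed_income = 2.50   # the daily amount the traditional worker is used to

# The "modern" response: keep mowing the same acreage and pocket the raise.
modern_acres = accustomed_income / old_rate
print("modern worker mows", modern_acres, "acres and earns", modern_acres * new_rate)

# The "traditionalist" response Weber describes: stop once the accustomed
# income is reached, i.e. work less at the higher rate.
traditional_acres = accustomed_income / new_rate
print("traditional worker mows", traditional_acres, "acres")   # 2.0 instead of 2.5
print("and still earns", traditional_acres * new_rate)         # 2.5, as before

The raise, in other words, buys the traditionalist leisure rather than income, which is exactly the attitude Weber says capitalism had to educate out of its work force.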
Weber then devotes his entire book to explaining how Protestantism, especially the Calvinist branch of the Reformation, changed this traditionalist attitude towards work. While a “surplus population which it can hire cheaply” is necessary for capitalism to develop and thrive, so, he says, is a “developed sense of responsibility.” That is, for capitalism to work, “labour must…be performed as if it were an absolute end in itself, a calling.” Far from being natural, or even the product of high or low wages, this attitude “can only be the product of a long and arduous process of education” (62). And the educating body was, originally at least, Protestantism. It is important to note that this education in work did not, at least at first, involve an education in greed, much less enjoyment. To the contrary, Weber makes clear that the essential ingredient, in the beginning, involved a kind of asceticism—not the asceticism of the monastery, but an asceticism in the world. To make labor a calling, that is, meant making labor an obligation in the service of God, of salvation. One was schooled in the idea that hard and constant work was an end in itself, the way of salvation for the average person, and that saving the money one earned was part of that obligation. In order to save, of course, one had to be frugal, buying only what was absolutely necessary. The asceticism that had been the mark of the otherworldly Catholic monastery, that is, was brought into the world. So one worked, one saved (“a penny saved is a penny earned”) and one eventually prospered. It is a commonplace that in the American colonies during the Puritan period (Boston, etc.), these essential elements were merged in such a way that prospering in business became synonymous with salvation—or rather, prospering became a sign of salvation. This is because though election (salvation) or damnation was pre-determined by God, the actual judgment was uncertain, and this uncertainty was almost intolerable. One’s prosperity thus became a sign, a way for the uncertainty to be resolved. The opposite was also true: poverty became a sign of damnation, making the poor doubly damned—both in this world and the next. The sad truth is that many Americans still maintain these essential attitudes.
Work as a calling then, work as a duty, and success in work as a sign of salvation are the essential elements of the Protestant ethic. They are also the essential elements of the spirit of capitalism. As Weber puts it,
the expansion of modern capitalism is not in the first instance a question of the origin of the capital sums which were available…but, above all, of the development of the spirit of capitalism (68).
This is not to say that Protestantism ignored the dangers of wealth. Weber cites the writings of Richard Baxter as illustrative. And there, the key to this danger involved idleness and the temptations of the flesh it exposed one to. As Weber interprets Baxter, “Waste of time is thus the first and in principle the deadliest of sins….Loss of time through sociability, idle talk, luxury, more sleep than is necessary for health…is worthy of absolute moral condemnation” (157). A person was thus led to work constantly, to save what he earned, never to enjoy the fruits of his labor, but rather to invest those savings as an entrepreneur in new opportunities for more work (and wealth). So while the ethic frowned on wealth and the luxuries it fostered, it at the same time had the “psychological effect of freeing the acquisition of goods from the inhibitions of the traditionalist ethic. It broke the bonds of the impulse of acquisition in that it not only legalized it, but looked upon it as directly willed by God” (171).
Weber ends his work with the ironic contradiction involved in this religiously inspired ethic. He quotes John Wesley, the co-founder of Methodism, as follows:
“I fear, wherever riches have increased, the essence of religion has decreased in the same proportion. Therefore I do not see how it is possible, in the nature of things, for any revival of true religion to continue long. For religion must necessarily produce both industry and frugality, and these cannot but produce riches. But as riches increase, so will pride, anger, and love of the world in all its branches….So, although the form of religion remains, the spirit is swiftly vanishing away.” (175)
The Protestant entrepreneur, in this way, not only won the “feeling of God’s grace” for doing his duty in getting rich, but also a supply of “sober, conscientious, and unusually industrious workmen, who clung to their work as to a life purpose willed by God.” This ethic also comforted the capitalist entrepreneur with the assurance that the “unequal distribution of the goods of this world was a special dispensation of Divine Providence.” For had not Calvin himself said that ‘only when people, i.e. the mass of laborers and craftsmen, were poor did they remain obedient to God?’ (177). He had. So low wages themselves had been rationalized and justified by the divine.
The Protestant ethic, in sum, according to Weber, not only sanctified labor as a calling enabling a worker to be certain of his election, it also legalized, for the capitalist, the “exploitation of this specific willingness to work.” A Daily Double if there ever was one.
It takes little to see how these attitudes and rationalizations are still in use today. America sanctifies capitalism as literally the manifestation of both God’s will and the natural order of things. American media also lionizes those entrepreneurs who, at least according to their own myth, raise themselves by their own bootstraps to become rich—to become “elect” in modern society’s terms. Finally, American capitalism rationalizes the unequal distribution of wealth and goods in this world as simply the workings of natural or divine laws with which mere humans cannot quarrel.
To Max Weber’s credit, he ends his study with a scathing reminder that though this ethic began in the cloak of saintliness, its apotheosis in industrial capitalism became “an iron cage.” Had he known about capitalism’s most recent metamorphosis into an ongoing financial heist creating ever more inequality, his critique would have been far more savage.
Lawrence DiStasi
Tuesday, August 23, 2011
Decision Fatigue, Anyone?
Among the several enlightening articles around last weekend, one stood out for me: John Tierney’s 8/17 NY Times piece on Decision Fatigue. It’s something everyone feels, but few of us understand that it’s a real syndrome, with roots in brain chemistry. That means that it’s not just some anecdotal phenomenon of people who complain, after shopping till dropping, that they’re exhausted—although that’s probably the most common experience for most of us. It’s far more general than that, and, apparently, far more universal (I always thought it was just me who hated shopping at whatever time of the year.) What this means is that the brain actually gets depleted of energy when it has to make lots of decisions—whether or not to eat another donut; whether or not to go online for a few more minutes; whether or not, as a judge, to grant parole to an inmate before you.
According to Tierney, the latter situation was a key one examined recently. In a report this year, a team of researchers looked into the decisions judges make, in an effort to account for why they rendered different judgments for defendants with identical records. After looking at the usual suspects (racism, other biases), they started to zero in on the time of day the judges made their decisions, and found that judges who made their rulings early in the day were far more likely to grant parole than those who saw a defendant late in the day. Looking even more closely, they found that if you were unlucky enough to appear before a judge just before the noon break, or just before closing time, you would likely have your parole plea rejected; if you saw the judge at the beginning of the day, or right after lunch, you were more likely to have your parole granted. The cause: decision fatigue. As the researchers noted, “the mental work of ruling on case after case, whatever their individual merits, wore them down.”
What this and other experiments have demonstrated is that each of us possesses “a finite store of mental energy for exerting self-control.” And self-control requires that old bugaboo “will power”—a form of mental energy that can be, and often is, exhausted. If you’ve spent your day resisting desire—whether it’s a yen for a cigarette, a candy bar, or a trip onto the internet—you’re less capable of resisting other temptations. Nor is this just a curious finding. What researchers argue is that this kind of decision fatigue is “a major—and hitherto ignored—factor in trapping people in poverty.” People who are poor, that is, constantly have to make that hardest of decisions, the trade-off (can I afford this? can I afford that? should I pay the gas bill or buy good food?), and such decisions sap their energies for other efforts like school, work, or improving their job prospects. This is confirmed by the very images that have long been used to condemn the poor for their failure of effort: welfare mothers buying junk food, or indulging in snacks while shopping. Far from demonstrating “weak character,” however, such behavior often indicates decision fatigue, which the poor experience more than the rich because of the increased number of trade-offs their lives require, and hence the decreased willpower left to them to resist impulse buying.
The big surprise in this research, though, comes with the brain studies. Everyone knows that the brain is a great consumer of sugar, or glucose, for energy. But what no one had expected was the specific connection between glucose supply and willpower. In a series of experiments, researchers tested this by refueling the brains of some subjects performing tasks with sugary lemonade (glucose), and others with lemonade sweetened with a diet sweetener (no glucose). The results were clear: those who got the glucose found their willpower restored, and thus their ability to exercise self-control augmented. They made better choices, and even when asked to make financial decisions, they focused on long-term strategy rather than opting for a quick payoff. In short, more mental energy allowed them to persist in whatever task was at hand. Even more to the point, the researchers found that the effect of glucose was specific to certain areas of the brain. As Tierney puts it:
Your brain does not stop working when glucose is low. It stops doing some things and starts doing others. It responds more strongly to immediate rewards, and pays less attention to long-term prospects.
This is critical information, especially in our choice-and-distraction-filled culture. Yet another study, in Germany, in which subjects frequently reported their activity via their Blackberries, concluded that people at work spend as much as four hours a day “resisting desire.” The most common of these desires were “urges to eat and sleep, followed by the urge for leisure” (i.e. taking a break by playing a computer game, etc.). Sexual urges were next on the list, slightly higher than checking Facebook or email. The most popular general type of desire was to find a distraction—and of course, the workplace of large numbers of people these days centers on the computer, that click-of-the-mouse distraction machine.
And the trouble with all this is that willpower depletion doesn’t manifest with a specific symptom, like a runny nose, or a pain in the gut. As Tierney says:
Ego depletion manifests itself not as one feeling but rather as a propensity to experience everything more intensely. When the brain’s regulatory powers weaken, frustrations seem more irritating than usual. Impulses to eat, drink, spend and say stupid things feel more powerful.
Perhaps this is why tired politicians so often say, and do, stupid things. Not to mention the howlers of our physicians, our generals, our corporate execs, and our media pundits. Perhaps, too, it explains why Ronald Reagan always kept a jar of jellybeans on his desk—though it is true that his decision-making stemmed from a malady of a different sort.
In any case, the lesson from all this might be: take breaks. Eat candy (or better still, protein). And don’t make important decisions when you’re exhausted (like responding to that nasty email). Most decisions can wait, and will profit from a glucose-rich, rather than a glucose-depleted brain.
Lawrence DiStasi