Wednesday, June 1, 2016

Killer App


I have just finished reading Sherry Turkle’s recent book, Reclaiming Conversation: The Power of Talk in a Digital Age (Penguin: 2015). Being without a smartphone, and never having texted in my life, I found Turkle’s research into what smartphones are doing to young people (and to many of their parents) shocking. Consider some stats first: a) the average American checks a smartphone every 6-1/2 minutes (actually, college students in one of Turkle’s classes say that they can sometimes go 3 minutes without a phone check, but the more likely limit is 2 minutes!); b) fully one-fourth of American teens connect to a device within 5 minutes of waking (80% sleep with their phones); c) most teenagers send about 100 texts every day; d) 44 percent of teens never “unplug” at all. Now let me quote what Turkle says about the power these smartphones have to enslave us: “It (the smartphone) is not an accessory. It’s a psychologically potent device that changes not just what you do but who you are” (319). Keep that in mind: using a smartphone changes who you are; it changes your brain, it changes how you behave, it changes how you talk to, relate to, and treat other people—and mostly not for the better. This is the sum and substance of the book by Turkle, a professor of sociology and psychology at MIT who specializes in the effect of technology on modern life. Though it may not be too late (her title implies that we can, if we are determined to, ‘reclaim conversation’), Turkle’s research shows that things have gone very far indeed.
            Let me cite just a few of the examples Turkle provides. First are the rules that young people now live by—and here I should say that I found myself at first contemptuous of, and then feeling deep sympathy for, these kids (to me, even the thirty-somethings are kids who have grown up with technology) whose interactions, like dinner or dating, are now governed by their devices and the “apps” that run on them. The rules are ubiquitous and bizarre, but clearly necessary. There’s the “rule of two or three” at meals: students Turkle interviewed at one college make sure that at least two people in a group of seven or so at dinner are NOT on their phones before they allow themselves to check theirs; if fewer than two are paying attention to the conversation, it doesn’t work. As Eleanor says of observing the rule, “It’s my way of being polite.” The corollary is that conversations, even at dinner, even among friends, are fragmented (and hence “lighter” of necessity): everyone is more interested in checking what might be on their phones, or who might be texting and require an immediate response, than in the people they’re actually with—much less what they’re saying.
            And this gets to one of the major points in the book. The ability to converse is atrophying among smartphone users. Family members at dinner constantly check their phones. Kids in class and on dates and at parties check their phones. And much of the checking involves texting—the major form of communication among phone users. That is, people don’t “talk” to each other on their phones; they “text” each other. And there are strict rules among texting friends—many kids count as many as 100 of them in their circle. If someone texts you with an ‘emergency’ (for teens, every slight is an emergency), you have at most 5 minutes to respond. If you don’t comply within that time limit, you risk losing that friend, because your delay in responding is taken as an insult. So kids with phones tend to be hypervigilant—they don’t want to miss an important text, which is why they can’t stand not checking their phones every 2 minutes. The other reason they can’t stand not checking their phones is what they have dubbed FOMO: Fear of Missing Out. Something better than what’s happening here and now might be going on. FOMO haunts even those who are at parties, or in bed with a partner! One of Turkle’s informants described being at a party, but being compelled to check her phone (everyone was doing this for the same reason) to see if a friend was at another party that might be hotter. A college student described being in bed with a guy, who got up to go to the bathroom—which impelled her to take out her phone and check her Tinder app to see what men in her area might be interested in meeting, and more. Her comment: “I have no idea why I did this—I really like the guy…I want to date him, but I couldn't help myself. Nothing was happening on Facebook; I didn’t have any new emails” (38). A recent grad named Trevor told Turkle about his college graduation party where “people barely spoke” but “looked at their phones.” And this was okay because
Everyone knew that when they got home they would see the pictures of the party. They could save the comments until then. We weren’t really saying good-bye. It was just good-bye until we got to our rooms and logged onto Facebook (138).

In other words, life is not what’s happening in reality, face to face; it’s what gets reported on Facebook. Likewise, conversation doesn’t happen by talking face to face; that’s too risky; one might say something rash or erroneous. Conversation is what happens on Gchat or when texting—where one can edit one’s responses (or breakup messages) and make them perfect. Real conversation is just too fraught with uncertainty, with emotion, with risk, with the mess that is human life.
            This is serious, America. These machines are changing the way human beings interact. They are changing the way humans feel about each other, literally changing whether they can feel for each other at all. And that is another of Turkle’s major points here. Based on her research and consultations with middle schools in her area, she points out that without the give and take of face-to-face conversation, many young people are losing no less than the defining human capacity of empathy. A researcher at Stanford, Clifford Nass, specifically looked into the emotional capacity of freshmen at Stanford. He compared the emotional development of women who characterized themselves as “highly connected” to that of those spending less time online, and found that the former had a weaker ability to identify the feelings of other people (which is what empathy involves), and actually felt less accepted by their peers. As Turkle summarizes it, “Online life was associated with a loss of empathy and a diminished capacity for self-reflection” (41). And no wonder. Texting has become the substitute for having to look someone in the eye, for having to see emotions in their faces and bodies, especially when we have to discuss something that might be stirring or painful. Face-to-face conversation is “too risky”—that’s how most young people put it. Another person’s response to you might get too emotional. And it’s not just teenagers. Mothers and whole families now have fraught family discussions on Gchat so as to avoid possible eruptions of emotion or words that hurt. It takes the risk out of family dynamics, they say. One never has to face someone yelling. But what is being lost? That is Turkle’s question. And her answer is that, essentially, our human-ness is being lost. Children “are being deprived,” she says, “not only of words but of adults who will look them in the eye.” And as countless volumes of research have shown, eye contact is vital to “emotional stability and social fluency: deprived of eye contact, infants become agitated, then withdrawn, then depressed” (108). As to empathy, it seems to be more or less out the window. Turkle quotes teachers at a middle school she consults with:
When they hurt each other, they don’t realize it and show no remorse. When you try to help them, you have to go over it over and over with them, to try to role-play why they might have hurt another person. And even then, they don’t seem sorry. They exclude each other from social events, parties, school functions, and seem surprised when others are hurt…They are not developing that way of relating where they listen and learn how to look at each other and hear each other (164).

When one looks at how romance and other interactions are handled, one can see why. Turkle, for example, describes the NOTHING gambit. This refers to not responding to a flirtatious text. Just silence, nothing. One girl calls it “a way of driving someone crazy…you don’t exist.” And then the proper way to respond to nothing is to pretend, in turn, that it didn’t happen. Because trying to text again saying “Why don’t you get back to me” is simply “not cool.” It’s being a loser. So is responding too quickly to a text. Ryan says, for example, that if a woman responds to his text immediately, it might be good, but it might also mean “She’s psycho, man” (188).
            What a terrible burden this must be. The weight of always wanting to know if the other person is interested has always been a cause of anxiety in romantic encounters. But at least when the brushoff happens face to face, something is settled; a human interaction is, literally, faced. Here, nothing is. All simply dissolves in nothingness. You don’t exist. And having to wonder if even someone you apparently have a connection with is possibly checking out Tinder or some other app for a better party or a better partner (there are always dozens of ‘partners’ available on Tinder)--that must be agonizing.
            Of course, when technology becomes a problem—as Turkle suggests it is—there are always those who count on more technology to solve the problem. Robots seem to be the current solution of choice among MIT engineers. Here is where Turkle takes her story in the end, and it is not encouraging. Apparently, AI engineers are working hard to design robots that can actually provide the “eye contact” and “human” conversation that we are no longer getting from our apps. Speculation is rife that robots will soon be able to perform the daunting caretaking tasks that a fast-growing older population requires. Not enough humans to do the dirty work? Design robots to provide what’s lacking, even as babysitters. And there is already data from some primitive robots among us about how it’s working. In one encounter, Turkle describes what happened when a 12-year-old girl named Estelle interacted, as a research subject, with a robot named Kismet. Kismet ‘listens’ attentively to Estelle (elsewhere, Turkle notes that the “feeling that no one is listening to me” plays a large part in what we try to solve with technology), simulates human facial expressions, pretends deep interest in what Estelle says. All goes well until a glitch in Kismet’s program leads to a break in the contact, and Kismet turns away. Estelle is deeply disappointed and shows it by eating cookies voraciously. When pushed to explain, she laments tearfully that Kismet didn’t like her; the robot turned away. The researchers explain that it was simply a technical problem and had nothing to do with her, but Estelle is not consoled. She thinks it’s her failure that Kismet doesn’t “like” her (the “like” on Facebook has become the standard for judging ourselves and our appeal). Another instance involves a young girl named Tara who, in all her actions, is the “perfect child.” But her mother notices that she sometimes talks to Siri, the voice assistant developed by Apple. And when she does, she vents on Siri all the anger she has suppressed elsewhere. Tara compartmentalizes, in other words: she has to be perfect with people, but can be angry with Siri. Turkle comments: “if Tara can ‘be herself’ only with a robot, she may grow up believing that only an object can tolerate her truth” (347).
            Is this the place we're getting to, one where only machines can tolerate us? This would be a brave new world, indeed. And the grim truth seems to be that some of us are already there. Sherry Turkle wants us to change. She wants us to “reclaim conversation” with each other, before it’s too late. She draws hope from the fact that the human brain is capable of changing. She has seen this in summer camps where children are prevented from having devices of any kind, and where after a short time, they do reclaim their human interest in each other and what’s around them. But she has a warning as well, especially as regards our attempts to fill the well of human loneliness with robots (or machines of any kind). This is because engineers have already shown that they can build toys for children (Furbies, etc.) that feign feelings, and that, by getting the child to care for them, instill feelings of attachment in the child for what is a lifeless object, a machine. This takes advantage of a well-known phenomenon: humans tend to impute human feelings to that which seems human—like talking apps that simulate conversation, like our computers. And so, one of her conclusions goes like this:
Nurturance turns out to be a “killer app.” Once we take care of a digital creature or teach or amuse it, we become attached to it, and then behave “as if” the creature cares for us in return (352).

            Killer app indeed. There are many of them already, busily doing their grim work as human surrogates. For, as Turkle writes, “Now we have to ask if we become more human when we give our most human jobs away” (362). It ought to be clear what Turkle thinks about this, and what I think. The question is, do enough others care enough to think about it? How about you (having been reached, of course, electronically)?

Lawrence DiStasi

Friday, May 27, 2016

Immensity

It is really a funny thing to be a human being. We embody what is called the “human condition” but we’re at a loss, usually, to explain what that is. What does it mean to have thoughts, to have emotions, to have senses, to have ‘intimations of immortality’ as Wordsworth titled one of his poems, and how do these things fit together? What is the logic or power that creates us, moves us, drives us, sustains us? We don’t really know. We have lots of sciences and social sciences that give us schemes for how the universe works, how life works, how our own life works, how our psyches work, how our brains work, but most of it seems to be temporary guesswork—pretty impressive guesswork at times, to be sure—that in the end doesn’t really tell us what we want to know. That’s because what we actually want to know comes down to the really simple questions that science doesn’t answer very well. Why are we here? How did we get here, really? What happens to here when we are no longer here? Does it matter? Does it even matter what we do while we are here? And is this the only ‘here’ in the vastness of space and time? Are we the only beings contemplating these questions? And are these questions worth a damn in the first place? Garrison Keillor refers humorously to his detective character Guy Noir as a man who is still “pondering life’s persistent questions” and we, the audience, are clearly meant to smile at a grown man bothering with questions suitable to a teenager. But though most of us put aside such “childish things” in favor of making a living and reproducing ourselves and making our mark on the world, these questions, if we are honest, never truly go away. Or perhaps they do, but only at the last moment, as when Gertrude Stein, on her deathbed, was reportedly asked, “Gertrude, what is the answer?”
            “What was the question?” was the great one’s response.
            Whether or not we like poetry, or literature, or Gertrude Stein, we recognize, I think, the wisdom here. It is the kind of wisdom most of us would call ‘spiritual.’ Many wise and/or spiritual leaders have come up with an answer that is similar: there is really no answer to the question of life. Life simply is. Katagiri Roshi, a Zen teacher in Minnesota, once had a similar answer: 
            “What is just is,” he said. And again, we recognize the wisdom.
            Still, we also recognize, if we are honest, that the questions persist. Especially at difficult or conflictual or depressing times, we find ourselves up against the same question: What is the use? What is the point? What is all this struggle for? All vanishes in the end in any case. And if we can’t answer the why question—why bother?— and we usually can’t, how about the what question? What is this “is” that is? And how do we fit into it? What are we in the first place? All our busyness, all our effort and worry and struggle and despair and joy—how do we consider it, make sense of it, comprehend it? It. What is it? Is all this living, struggling activity really just the product of chemical soups and photons and DNA and gravity and some process we have named evolution? Exploding galaxies? Black holes and worm holes? And to what end? That is, what we really want to know is simply this: What am I engaged in? involved in? Me. Aside from science fiction speculation about our being a product of some computer-generated projection, what about me, here and now, in this life. What am I really? Where do I begin and end? And is there some way to find out?
            This, I take it, is why religions have arisen ever since homo sapiens began to leave residues of his existence on earth. For some thousands or hundreds of thousands of years, humans have painted and carved and drawn and built monuments to this quest, to these questions. Or rather, monuments to their hoped-for answers. And evidently, none of these answers has satisfied in the long run, for the questing and the questioning continues to the present day. It is going on here, now. And what has prompted it, for me, has been something that has been nagging at me for several months, years, perhaps, of my pursuit to see. To understand. And that something is the uncanny feeling (the problem with such attempts to express this “it” is that language has developed out of our brain’s sensory equipment, and therefore must be cast in sensory language: “feeling”, the “sense that”, even when these don’t quite get to “it” or “isness”)—the uncanny feel or cognition or recognition that an immensity exists, an immensity of which I am part, which I am, and that that immensity is beyond comprehension (i.e. it is not sensory or cognitive or logical), but at the same time and in some way comprehensible. The way this occurs to me is, most often, in meditation, a practice I have been engaged in for about forty years. And contrary to the common misconception, the access doesn’t occur as a blinding strike of lightning. It doesn’t occur as a blockbuster of an insight, of a “knowledge” that is finished and permanent and which I can deposit in my memory bank or any other bank. For me, at least, it occurs in glimpses, back-of-the-head sensations (again, the language is sensory), inklings or strange empty tastings of space that is beyond physical space and dimension. Nor does it occur as “thought,” as we generally refer to thought (though often our attempt to grasp and possess it does occur as thought). In fact, thoughts are contrary to its occurrence and chase it away. So are feelings. So are sensory inputs. So is the interior viewpoint from which we normally inspect things. And this is why this kind of experience (if this overused word is really accurate) does not appear to compute with normal brain activity. Normal brain activity occurs in terms of sensory inputs, or cognitive inputs, or emotional inputs. This is none of those. Which is why in Zen, this ‘thing’ is very often expressed negatively. Not this, not that. Or simply, ‘not’ (the famous Mu koan turns on this word ‘mu,’ meaning, roughly, ‘not.’)
            But here, it is important to make a crucial point. This ‘thing’ does not occur as some ‘experience’ that blots out other forms of experience. It does not suddenly mean that the everyday sensorium is either invalidated or transcended. It is not of the nature of logic, of “either-or,” of “A” or “not-A”.  It is of the nature of simultaneity. Both/and. Yin/Yang. That is part of the nature of its heuristic or salvational (if that word means anything in a context where there is nothing to be saved from) value. What we know—and the type of knowing here, again, departs from our traditional ideas of knowing, of epistemology—is that this background, this space, this being-ness-that-we-are exists at the same time, is at the root of, does not invalidate or supersede in any way the normal sensory world. A Zen koan says, Sun-faced Buddha/Moon-faced Buddha. Or Samsara is Nirvana is Samsara. Or, the relative and the absolute are one, fitting like a box and its lid. That, I take it, is why this ‘experience,’ if we succumb to calling it that, has always been so difficult to convey. It violates all our norms of discourse. All our norms of human behavior, thought, feeling, being. And so it has most often been expressed in poetic or noetic or symbolic or metaphoric terms: “It is like…” But of course, the problem is that it is not like anything we know. It is not of the nature of “like.” Not of the nature of comparison. It is of itself. Of ‘what is just is’.
            And so here, all I can do is employ the term that came to me recently and to which I alluded above: immensity. We are involved in some sort of overwhelming immensity. All of us. Those who have had a glimpse of it, and those who haven’t. Those who would seem to deserve it, and those who don’t. Those who live long and those who live for only an instant. We are all involved (a more current term might be “entangled” in the sense that quantum particles once in relationship remain, even when separated, superluminally connected) in this immensity (the great Zen master Huang Po once compared it to a jewel that we have in our foreheads, always, but which we can’t see and so search for desperately until we realize that the entire search was unnecessary for ‘it’ has been here all along). Even as we are also all involved in the petty, stupid, day-to-day trials and tribulations and, yes, glories of everyday life. Even as, even during, even before and after everyday everything, even when and if or not when and if, always we are all entangled in it and cannot be otherwise. No matter what we do, we cannot escape or disqualify or abandon it. It is what we are. Immensity: some impossibly grand immensity. And to me, at least, and I’m not sure why, there is something reassuring, comforting, glorious in it, even when we wish we could blow up this groaning, pitiful, hate-mongering, mother of an earthworld out of disgust, and start again. Even then. It is there, this immensity, and so are we. And that’s something. Or nothing. As you wish.

Lawrence DiStasi

P.S.: as one way of metaphorically alluding to this ‘thing,’ I append here a link to a YouTube video of the Mammoth Rubbing Stones still at large amongst us—huge stones which woolly mammoths used to rub against to scratch themselves, presumably, and which still bear the shiny evidence of that prehistoric scratching. An inkling of immensity, perhaps.
Here’s the link:

           


Monday, May 16, 2016

Dark Money's Poison

What has occurred to me this morning as I contemplate what to write about Dark Money is that “ignorance truly is bliss.” In some ways, that is, I would be more comfortable and relaxed if I didn’t know about the swinish billionaire class whose devilish machinations are the subject of Jane Mayer’s recent book, Dark Money (Doubleday, 2016). But I do, and now I’m compelled to try to write about it and them. You probably already know who they are: the Koch Brothers, Richard Mellon Scaife, John Olin and his tribe, the DeVos family (Amway) and countless others most of us have never heard of. And what the book describes is the underhanded methods these oligarchs have used in the last forty or so years to change the political landscape to such an extent that they now control the debate over policy, over what can even be discussed, and consequently over the fate of billions of people on this planet. A quote from an unnamed environmental lawyer puts it well: 

“You take corporate money and give it to a neutral-sounding think tank,” which “hires people with pedigrees and academic degrees who put out credible-seeming studies. But they all coincide perfectly with the economic interests of their funders.” (Mayer, p. 153).

The question is, how can these rich bastards keep succeeding with their subterfuge? Aren’t there laws that control what a think tank can be, how profits in family trusts can be used? Well yes. But the rich have always had ways to shelter and/or hide their money, and “donating” it to foundations and think tanks is only one of the ploys they use to get tax exemptions while at the same time getting to control what passes for “research,” and hence the rules of the game.
            The story actually begins with John D. Rockefeller. The grand old man of American oligarchy started what was probably the first private or nonprofit foundation in 1909. And its purpose, like all the ones that followed, was simple: give the appearance of “donating” part of his wealth to a foundation ostensibly devoted to promoting the general welfare, while at the same time providing the donor with tax deductions and subsidies, i.e., lower income taxes. Still, the first foundations—like Rockefeller’s and Ford’s—actually did take their charitable role semi-seriously, supporting the arts, museums, media, schools, and other ostensibly “public-interest” causes. But in the 1970s, and even before, the more recent oligarchs discovered that their foundations could not only save them tax money but also promote their favorite anti-regulatory and anti-government policies.
            The poster boys for this type of “philanthropy” are the Koch brothers, starting with the father, engineer Fred Koch, who built the original fortune. As I’ve noted elsewhere, Fred was a right-winger with a vengeance, a founding member of the John Birch Society who saw Communists under every carpet, especially those in the White House. In his 1960 pamphlet distributed to over 2 million sympathizers, Fred Koch referred to “the colored man who looms large in the Communist plan to take over America” and actually predicted a “vicious race war” in America, all while characterizing income taxes as nothing less than “socialism.” Taxes were his obsession, and in order to escape having to pay estate taxes on his fortune—earned, not incidentally, by helping sweet guys like Stalin and Hitler set up refineries in their home countries—he established a “charitable lead trust” whereby he could pass on his estate without taxes as long as his heirs (he had four sons, Charles and David plus Bill and Fred) donated the interest on the principal to charity for twenty years. As Jane Mayer puts it, “tax avoidance was thus the original impetus for the Koch brothers’ extraordinary philanthropy.” As to that philanthropy, it might be useful to point out that not everyone was happy with this new ability of the rich to protect their fortunes via pretend charity. As Teddy Roosevelt said at the time in response to the Rockefeller ploy: “No amount of charity in spending such fortunes can compensate in any way for the misconduct in acquiring them” (Mayer, p. 70). Amen to that. It might also be added that whereas paying taxes cedes control over the spending of it to the government (which can distribute funds to those who need it), stashing money in a private foundation means that the owner can distribute funds where he or she wants to—and get thanked for the ‘generosity.’ Perhaps that is why private foundations have multiplied like rabbits since the early days: in 1930, according to Mayer, there were only 200 private foundations; by 1950, there were 2,000; by 1985, 30,000; and in 2013 their number had swelled to 100,000, with combined assets of over $800 billion.
            As to the Kochs—and second son Charles is really the leader of this dog pack—they have established foundations at every opportunity. Charles was early attracted to the faux-anarchy of ‘libertarianism,’ by which he really meant freedom from government interference, especially regulations of any kind, in his business dealings. One of his early forays was the establishment of something called the Freedom School, devoted to his brand of libertarianism (he later funded and pretty much controlled the Cato Institute). As one writer said of him, “He was driven by some deeper urge to smash the one thing left in the world that could discipline him: the government” (p. 54). In the process, he often smashed people too: as in the case of Donald Carlson, a longtime tank-cleaner at the Kochs’ Pine Bend Refinery in Minnesota, who was finally dismissed in 1994 with six months’ pay (his accumulated sick pay, but no workmen’s compensation) because he could no longer work due to benzene poisoning (his compulsory blood tests had shown the poisoning since at least 1990, but he was never notified). Koch Industries fought his claims to the bitter end—Carlson died of leukemia, a predictable outcome of his work with benzene, in 1997—and only under court threat agreed to give his widow some money conditioned on a confidentiality agreement.
After it expired, Doreen Carlson spoke out: “And they want less regulations? Can you imagine? What they want is things that benefit them. They never cut into their profits.”
            Other oligarchs Mayer focuses on were following similar patterns. In 1973, taking their cue from the famous Powell memo of 1971 urging the wealthy to go to war with the anti-capitalist forces then thought to be opposed to business, Richard Mellon Scaife (heir to the vast Andrew Mellon banking fortune) and Joseph Coors (the beer magnate) financed the launching of the Heritage Foundation. Unlike previous think tanks like Brookings that were careful to maintain at least a veneer of scholarly objectivity, the Heritage Foundation was devoted to waging a battle of ideas, of selling “a predetermined ideology to politicians and the public [rather] than undertaking scholarly research” (p. 78). The problem, of course, is that most people and the media do not make such distinctions; thus, the “scholars” from Heritage and other right-wing foundations like the American Enterprise Institute are regularly invited to appear on talk and news shows as if they are objective investigators of fact. The overall project would come to be known as “movement philanthropy,” where great fortunes could be spent promoting a kind of free-market fundamentalism, especially including anti-regulatory, anti-tax and anti-government warfare.
            A variant of this movement was started by another of the ‘philanthropists’ Mayer profiles, the industrialist John M. Olin. An industrial giant that made most of its money peddling explosives and other armaments in both WWI and WWII (Winchester rifles, hydrazine rocket fuel, etc.), the Olin corporation, with its newly acquired subsidiaries (Mathieson Chemical, Squibb), turned out to be one of the first targets, in 1973, of the new EPA (founded under Richard Nixon). Olin not only produced DDT in Alabama, it was also involved in multiple mercury-pollution capers, one fouling the Niagara River in upstate New York, and another decimating an impoverished company town called Saltville in Virginia. An Appalachian hamlet in southwestern Virginia, Saltville was owned lock, stock and barrel by its Olin overlords: 2,199 residents rented their houses, shopped at the company store and got their water from the company. Everyone worked at the chlorine plant that used mercury in its production process—a process that leaked something like 100 pounds of toxic mercury into the public waterway (the north fork of the Holston River) every day for about 20 years, resulting in mercury in the fish, not to mention in the poor humans who lived there. In addition, the company also dumped 53,000 pounds of mercury into an open sediment pond. One local said: “We all played with the mercury as children. Daddy brought it home from the chemical plant.” And though the company issued gas masks to workers, their use was never enforced. After the publicity about mercury poisoning in Japan’s Minamata Bay, Virginia passed strict pollution standards, but Olin said it couldn’t meet them and so the company announced it would cease operations in Saltville in 1972 (leaving Saltville as one of the first “Superfund” sites). Life Magazine’s article about the “end of a company town” implied that it was environmental activists who had destroyed a way of life. But lives had already been destroyed. As one native said:

“The Olin Company was dirty and treated the people bad, not like people. Most of the workers were poorly educated, and they led them around like sheep. A lot of people got sick, and there were more birth defects in Saltville than in other parts of the state” (p. 99).

            Of course Olin denied it was in any way at fault and also denied that its foundation money had any connection to its long history of pollution, but the record suggests otherwise. Here is what John Olin said about his foundation campaign:

“My greatest ambition now is to see free enterprise re-established in this country. Business and the public must be awakened to the creeping stranglehold that socialism has gained here since WWII” (100).

Clearly, re-establishing “free enterprise” meant giving business free rein to use any and all environmental poisons without some pesky government agency infringing on its 'freedom.' And this was before Olin got going with his enduring contribution: taking aim at the “liberal establishment” (liberalism and socialism were synonymous to Olin) dominating colleges and universities in order to establish a kind of “counter-intelligentsia” devoted to conservative thought. To effect this sea change, he hired William Simon (energy czar and Treasury Secretary under both Nixon and Ford) to head his Olin Foundation in 1977. Simon had always nursed a deep hatred for the liberal elite, claiming that a secret system of academics, media types, and bureaucrats ran the nation to such an extent that “Our freedom is in dire peril.” With Simon in the lead, the Olin Foundation began its campaign to establish “beachheads” (the military language is not accidental; this was seen as a war) not just in small colleges but at the most elite institutions like Harvard, Yale and Princeton. Amazingly, they were able to prevail, with institutional coups like the “James Madison Program in American Ideals and Institutions” at Princeton; the “Program on Constitutional Government” at Harvard (run by Harvey Mansfield); and the “John M. Olin Institute for Strategic Studies,” also at Harvard (and run by hawk Samuel Huntington). Ostensibly neutral (note the language), these programs were all deeply ideological, as was the real coup de grace: the movement’s impact in law schools, with something it called “Law and Economics Theory.” Nursed by Olin contributions of over $68 million to law schools at Harvard, Chicago, and elsewhere, Olin fellows from the likes of Harvard’s “John M. Olin Center for Law, Economics and Business” then branched out to teach at Cornell, Dartmouth, Georgetown, MIT and beyond. Among these legal ‘fellows’ were John Yoo (of the infamous “torture memo”) and another supposed intellectual, John R. Lott Jr., who went on to write a book called More Guns, Less Crime. In it, Lott argued that more guns actually reduce crime and promised that legalizing concealed weapons would make people safer. On inspection (by Adam Winkler of the book Gunfight), Lott’s study turned out to be based on no data whatever. When pressed to produce his data, Lott claimed it had been lost in a computer crash. In addition to such fake scholarship, one of the major contributions of the Law and Economics caper was its infamous “seminars” for judges, initiated by the ideologue Henry Manne, by then dean of the George Mason University School of Law (a haven for right-wing ideologues). These “seminars” were two-week all-expenses-paid junkets for indoctrination in law and economics in places like the Ocean Reef Club in Key Largo, Florida. Something like 660 judges were treated to these pleasure-cum-indoctrination vacations, including future Supreme Court Justice Clarence Thomas. As one of Olin’s acolytes himself put it, “Economic analysis tends to have conservatizing effects…it seems neutral, but it isn’t in fact” (108). A case in 1997, where the EPA had moved to reduce surface ozone as air pollution caused by refinery emissions, demonstrates the point.
An economist at the Koch-funded Mercatus Center, Susan Dudley, challenged the EPA ruling, arguing that the federal agency had not considered that, by blocking the sun, smog cut down on cases of skin cancer: if pollution were controlled, she said, it would cause up to 11,000 additional skin cancer cases each year. Incredibly, the Circuit Court for the District of Columbia (a majority of whose judges had tasted the seminar Kool-Aid) embraced Dudley’s argument, finding that the EPA had “explicitly disregarded the possible health benefits of ozone” (154)! Fortunately, the Supreme Court eventually overruled the circuit court, saying that the Clean Air Act’s standards cannot be subject to cost-benefit analysis.
            Enough said. What Jane Mayer’s book demonstrates—and I have only been able to provide a tiny taste of its voluminous contents—is that big money from a determined oligarchy can profoundly affect, shape, and ultimately destroy democracy and much else besides. It can defy reason to the point that President Obama’s attempts to pass even the minimal cap-and-trade legislation to begin to reduce global warming were defeated even before they had a chance for a public hearing. The same has happened with the non-stop attempts to defeat his health-care-for-all bill, which continues to be attacked and distorted to the present day (a Circuit Court has recently ruled against its provision to provide government aid to those who can’t afford their premiums). And all is done under the guise of philanthropy, all disguised with market-tested language and emotional appeals that convince the masses (see the Tea Party) that it is in their interest to side with the richest, most immoral, and, at times, criminal class in the nation. And what it demonstrates is that democracy—if there is any democracy left at this stage in the republic—must be defended just as vigorously as the attacks by the dogs who would pervert, undermine and destroy it. And even then. Even then, I say, especially when contemplating the power that great gobs of money have to corrupt, or reflecting on the apparently bottomless lust of those with it to want always more even if they have to sacrifice the entire planet to get it—even then, it may be necessary to bring back the guillotine. 

Lawrence DiStasi


Sunday, May 1, 2016

The Uriah Heep of Our Politics

In this nauseating season of presidential primaries, one finds oneself straining to find comparisons to do justice to the pit of vipers aspiring to the ultimate prize. Though I started out thinking that any one of the dozen or so Republican idiots on stage would be better than Donald Trump, I have since changed my mind. With only Trump and the unctuous Ted Cruz left in contention for the nomination, I have been forced to conclude that even Trump would be better than Cruz—though it should be said that for the life of me, I cannot figure out how any reasonable person could choose either one. Nonetheless, if it’s between the Drumpf and ‘Lyin’ Ted,’ I would prefer that Trump be the Republican nominee—even considering that the other “hold-your-nose” candidate on the Democratic side, Hillary Clinton, also appears to be a shoo-in. Which makes one wonder: what has happened to so degrade American democracy that we are left with a choice between Trump or Cruz and Hillary?
            But I digress. What I really wanted to do was register my increasing astonishment that anyone could possibly choose Ted Cruz as a potential ‘leader of the free world’ (and I have a good friend who seems to be opting for precisely that). That’s because Cruz really is one of the most despicable candidates—using only the assessments of his Republican colleagues in the Congress—ever to get this close to the top. He reminds me of a sewer rat that has somehow slithered out of his dark den and, by sheer persistence and pretension (slicking back his foul hair and uncrossing his beady eyes), managed to persuade many Republican primary voters that he would be the best alternative to Trump. Even if we discount what former speaker of the house John Boehner called him (“Lucifer in the flesh”) as a bit hyperbolic, Ted Cruz still remains the Uriah Heep of modern politics.
            For those who may have forgotten, Uriah Heep is one of Charles Dickens’ most memorable and loathsome characters. He appears in David Copperfield, and though he ends up getting his just deserts (sentenced to prison for committing fraud on the Bank of England), for a time he manages to convince many people in the novel of his “‘umbleness,” his sincerity, and even his honesty. Dickens describes his face as “cadaverous,” something that would fit Ted Cruz perfectly (isn’t there something about his eyes that chills the soul?) And though Heep protests constantly about his humility, though he advances with Mr. Wickfield because of his determination and willingness to work zealously—even teaching himself law at night—he shows his true colors by resorting to blackmail to finally gain control of Wickfield’s business. He is, in short, motivated almost exclusively by greed, selfishness and self-aggrandizement.
            From everything we have read about Ted Cruz, he is quite similar. The man seems to have no working morals—except for those he pretends to revere as a fundamentalist Christian conservative. When he saw that there was a chance to elevate his stature in the Senate by threatening to shut down the government to defund Obamacare, he simply ignored the damage it would do to his own party and to his own colleagues in favor of his personal agenda. Everyone knew, and he allegedly knew as well, that his plan to threaten Democrats and Obama himself with a government shutdown if they refused to cancel the health care law was doomed to fail from the outset. And yet he persisted in holding the government hostage for sixteen days, until finally his colleagues, sensing that they were going to suffer an even bigger loss in public support than they did the last time they threw this sort of tantrum, caved in and overruled him. In response to which Cruz publicly berated them all as wimps (in contrast to himself, of course). This is one of the reasons Speaker Boehner called him “Lucifer in the flesh.” It is why New York Congressman Peter King vowed he would “take cyanide” if Cruz gets the nomination. King was eloquent about why he “hates” Cruz:
“If you come up with a strategy that’s going to shut down the government of the United States and you have no way of winning, you’re either a fraud or you’re totally incompetent, so he can have his choice as to what he is,” he frostily told Piers Morgan. At other times, he (King) has said, “He’s a false leader; he’s led people down a false path here,” and “Ted Cruz has decided to be the center of his own universe, to live in his own world.”
And finally, King told Wolf Blitzer when Cruz announced his campaign for president in 2015: “He’s shown no qualifications, no legislation passed, no leadership, and he has no real experience…So to me, he’s just a guy with a big mouth and no results.”
            In sum, Ted Cruz, apparently driven by the indoctrination of his father (himself a fundamentalist preacher) that he would be great, will coldly betray anyone and any group in order to advance what he considers to be his destiny. Calling him a reptile is to insult a whole species (even Satanists have repudiated the association with Cruz, arguing that calling him Satan is an insult to them as well). Calling him anything is an insult to whatever one calls him. Uriah Heep, Flem Snopes (the scabrous arriviste from Faulkner’s novels), snake, rodent—none really does justice to the living, breathing pus bag that is Ted Cruz. I had a friend once who coined the most vivid epithet I’ve ever heard to describe people such as Cruz, when he described one of our fellow editors at Harcourt Brace as having “halitosis of the soul.” That would seem to be the type that Cruz epitomizes—even granting that it’s a type more common among hypocrite politicians than in any other ‘profession.’ It is a type that seems impervious to rebuke or criticism or insult. It is a type of human being—if one can really call such types ‘human’—that is so besotted with its own slime that it hardly knows it is being spat upon.
            And yet: this is the naked opportunist who is willing to cashier his whole party to be president of the United States, and who is actually succeeding in getting former enemies like Sen. Lindsey Graham to support him, so desperate are they to stop Donald Trump. The entire nauseating spectacle is enough to make one forget politics and all concern with politics forever. Except for the fact that the outcome of this presidential contest will matter so deeply to so many people, to so much of the planet. How can it possibly be ignored?
            This is really the larger question that Ted Cruz raises. Politics is said to be the “art of the possible.” But when democratic politics throws up from its depths such loathsome creatures as Ted Cruz, and offers them for our consideration, it forces us to wonder whether another system—no matter how corrupt—could possibly be worse. History is full of examples of idiot kings, of vicious opportunists who have used their power to devastate countless nations. If his past is any indication, Ted Cruz bids fair—if he should ever get such power—to stand with the worst of them. So what does that say about democracy? What does that say about the voice of the people—millions of whom are even now lining up to support him? It is terrifying to contemplate.

Lawrence DiStasi

Friday, April 15, 2016

Water Pirates

I have read about the coming water wars before. In fact, a few days ago, a PBS News Hour report on the water crisis in India brought home some of the horror of what is happening in one of the world’s most populous countries. There is almost no drinkable water left for poor people. The water they can get from the most common public sources is polluted to a horrifying degree. One entrepreneur has come up with a partial solution: with a kind of credit card, Indians can buy water that has been purified through reverse osmosis. But as always, if you have no money (as the Flint, MI water debacle proved), you’re out of luck.
            Now comes a piece in Reader Supported News (“We’re Running out of Water and It’s Causing Countries to Fall Into Chaos,” 4/15/2016, originally in Newsweek) that lays out the problem worldwide. Nathan Halverson first points out that many of the conflicts in the Middle East have their origins in drought and the drying up of aquifers. Both Yemen and Syria are in the midst of wars that have turned large parts of their populations into refugees, and in both countries the cause can be at least partly attributed to failing water supplies. Agencies like the CIA are preparing for even more chaos, predicting that desperate people in countries running out of water, and therefore out of food, will become more and more prone to riots, violence, and migration driven by hunger and thirst.
            But the part of the piece that got to me most was the portion devoted to Saudi Arabia. I’m already more sick of this modern kingdom made rich from oil than of almost any other nation. Saudi Arabia, in fact, was the origin of most of the 9/11 hijackers—financed by either the monarchy itself or wealthy Saudis or both—and is now not only supporting the fanatics of ISIS but also waging almost single-handedly (with United States military equipment) the deadly assault on Yemen’s Houthi rebels. Their indiscriminate bombing of civilians has outraged most of the world (except, of course, the Saudis’ U.S. suppliers; war is, after all, good for business). Here, however, we learn about another bit of Saudi chicanery that has to do with water. It begins with the revelation of a

classified U.S. cable from Saudi Arabia in 2008 [which] shows that King Abdullah directed Saudi food companies to search overseas for farmland with access to fresh water and promised to subsidize their operations. The head of the U.S. Embassy in Riyadh concluded that the king’s goal was “maintaining political stability in the Kingdom.”

Ah, what can’t be justified in the name of “maintaining stability.” But the details are really the key here, and to get those one needs to follow the link to a site called Reveal News, part of the Center for Investigative Reporting.
            In two articles published by Reveal News in 2015, we learn about the background to this Saudi need to find farmland overseas. The Saudis, flush with their oil money, several decades ago decided that they could grow wheat in their desert lands, the only requirement being water. And they had water deep underground, in a huge aquifer. By drilling into the aquifer—deeper each year of course; they were drilling as much as a mile deep near the end—they were able to not only feed their own population, but to export tons of wheat to the world market. So, beginning in the late 1970s, “Saudi landowners were given free rein to pump the aquifers so that they could transform the desert into irrigated fields” (www.revealnews.org/article/what-california-can-learn-from-saudi-arabias-water-mystery).  Within a very short time, Saudi Arabia—a desert—became one of the world’s premier wheat exporters (the sixth largest), its wealthy landowners growing wheat as well as forage for their dairy industry, and growing richer in the process.       
            The problem was, the farmers couldn’t or wouldn’t control their need for water, so by the 1990s, they were pumping 5 trillion gallons of precious aquifer water to the surface for irrigation each year. The Tayama Oasis, once a green source of water and health that had sustained humans for millennia, “was drained in one generation.” In other words, modern greed and the modern global market totally bankrupted a critical source of water, and thereby life, almost overnight. It was at this point that Saudi Arabia’s King Abdullah decided to act. He stopped the domestic wheat growing, and directed and subsidized his food companies to find farmland overseas. And where did they find it? You guessed it, in Arizona. Last year, in fact, Almarai, Saudi Arabia’s largest dairy company, “bought 9,600 acres of land in a desert” in Arizona, and “converted it into hay fields to feed…its cows back home.” So while, technically, the Saudis aren’t bottling and shipping American water back to Saudi Arabia, they are in fact doing so by shipping the alfalfa that requires all that water back to their now-depleted desert. In doing so, they are also greatly increasing the chance that the aquifers in Arizona, not to mention the Colorado River, which is the source of much of the water now used by the entire Southwest, will soon be depleted as well.
            In fact, the aquifers in California (which bears a good deal of resemblance to Saudi Arabia in that its Central Valley is a kind of desert that likewise relies on pumped water rather than rainfall) are predicted to dry up in the very near future. Though California is luckier than Saudi Arabia in having another source of farm water—the snowpack in the Sierras, whose melt comes to the Central Valley in canals—its days as the primary exporter of vegetables, milk, and meat seem numbered. The water, especially the water from aquifers, cannot last at the current rate of its exploitative farming of water-intensive crops like almonds. The key thing about aquifers, of course, is that they take eons to fill; simple rainfall won’t replace what’s been squandered for years. And this makes the problem global. As the first article points out, it’s not just Saudi Arabia and California facing this crisis. I’ve mentioned India earlier. China is also at risk. An Earth Policy Institute study estimated that China feeds about 130 million of its billion people by “overpumping and depleting its sinking aquifers.” When those aquifers run out, China will need to rely on foreign sources of food and water to feed those 130 million people, who will then be competing with the 30 million Saudi Arabia now has to feed. China has already purchased Smithfield Foods, America’s largest pork producer. All that pork, in turn, requires a lot of water to grow the grain the pigs eat, which means that, in effect, China, like Saudi Arabia, is importing water from the United States.
            And this is just the beginning.
            All of which is to say that, though smart humans have always thought they could triumph over natural limits with intelligence and ingenuity and technology, there are some limits that even the brainy ape can’t quite overcome. In the coming years, we may find out that water—that signature element that virtually defines earth and the life it sustains—constitutes one very big limit indeed.

Lawrence DiStasi

Sunday, March 20, 2016

Stone Age Brains

I have just finished reading a fascinating book that helps explain the Trump phenomenon (though not in an encouraging way). It’s called Political Animals: How Our Stone Age Brain Gets in the Way of Smart Politics, by Rick Shenkman. The thesis is fairly simple, though a bit startling: basically, we humans retain a brain that, despite outward appearances and our professed allegiance to reason, operates in a way suited to our stone-age, hunter-gatherer ancestors of the Pleistocene (the age that lasted roughly from 2.5 million BCE to about 10,000 years ago). Given the rate of evolutionary change, that means that the mere 10,000 years from the Stone Age to complex civilizations isn’t nearly long enough for us to have evolved brains more suited to our current physical and social environment. As evolutionary psychologists Leda Cosmides and John Tooby put it in Shenkman’s book: “Our modern skulls house a stone age mind” (xvi). This means that in political situations, most voters do not behave as rationally as we like to think. All the labor to craft political messages embodying truth and fact has—for the majority of people—very little or no impact. Rather, most voters are moved by events, by the way a candidate looks, and by biases they stick to with alarming persistence. They also use their brains, which Daniel Kahneman proposes work on basically two systems—the fast-thinking System 1 (mostly instinctive) and the slower-thinking System 2 (reasoning)—in an essentially stone-age way. They make quick judgments (System 1) that completely bypass reasoning or fact or information and rely on instinctive, mostly visual cues.
            Shenkman starts out with an analysis, recently done by Christopher Achen, of the election of 1916 in which Woodrow Wilson ran for re-election. In that summer, there were several shark attacks on swimmers at the New Jersey shore. Wilson won the election, but in the two towns—Spring Lake and Beach Haven—where the shark attacks occurred, the President’s support dropped by nine to eleven points. It was the same effect, Shenkman points out, that the Great Depression had on New Jersey voters in 1932. What happened? The huge drop was due to the fact that the voters felt threatened, regardless of the fact that Wilson had nothing whatever to do with it. Just the threat led voters to vote against the incumbent, Woodrow Wilson. And it wasn’t only in 1916 New Jersey. Achen and a colleague then analyzed the Florida vote in Bush v. Gore in 2000, to take into consideration negative weather events like drought and flood, and came up with the same startling pattern: “voters suffering from either floods or droughts registered a strong bias against incumbents.” In 2000, according to Achen, roughly 2.7% of the electorate, or about 2.8 million people “voted against Gore because their states were too dry or too wet” (xxiv). And this pattern was found to operate as far back as 1896: simple events that felt threatening to voters, regardless of whether incumbents could do anything about them or not, were blamed on incumbents. In short, politics in large part involves not what candidates promise to do or have done; it’s about how our stone-age brains are working at the time of an election.
            Now we have an election in which a billionaire named Trump is conducting a campaign that has most political observers scratching their heads. How can he be winning people over? How can his simple, and simple-minded, message—“We’re going to be great again. We’re going to win, win, win. We’re going to build a wall and keep immigrants out.”—possibly persuade voters that this man is even remotely suitable, much less minimally prepared, to be the most powerful leader in the world? The answer lies in those stone-age brains. In those instant, System 1 opinions. And lest we be too quick to condemn those who fall for this nonsense as “stupid,” Shenkman makes the important point that it is not ‘stupidity’ but ‘ignorance’ that is the problem. Being ignorant means lacking the information needed to make an informed decision. And why are most Americans ignorant? Because they aren’t interested enough to pay attention—and this, again, has to do with those brains suited to the stone age.
            The Pleistocene, that is, was marked by humans who gathered in groups of 150 individuals, more or less. Why 150? It appears to be the optimum size of a group that the human brain can keep track of (the ratio of neocortex size to manageable group size that yields this figure is known as Dunbar’s number). Our brains, like the brains of all primates, evolved their size to be social—to be able to keep track of and relate to and dominate as many other people as possible. That, in modern evolutionary thinking, is why human brains evolved to be so large. The brain size that evolution apparently favored was the size that could keep relatively solid track of 150 individuals. The problem in the modern world is that almost no one lives in a group or village of 150 people anymore. We live in megalopolises that number in the millions, and our concerns extend even further, to millions of our allies and essentially the entire globe. But our brains are still operating at the 150-person level. So most of us simply cannot be bothered with all it takes to be well-informed. Instead, we get impressions from photographs, from TV ads or interviews or debates. And what the research shows is that an alarming number of people decide almost instantly who is suitable and who is not, and once they’ve decided, stick to that first opinion. How fast are these key decisions made? Shenkman cites the research of a psychologist named Alexander Todorov, who sought to find out. Todorov showed subjects a still photo of a political figure. He discovered, first of all, that it takes just 1/10 of a second to “draw an inference about someone’s traits.” One-tenth of a second. Given more time, subjects just grow more confident in the opinion they’ve come up with. Even more startling, Todorov found that we begin to form opinions about people from a photo in a mere 33 milliseconds, and that “we finish forming an opinion by 167 milliseconds” (62). A millisecond is a thousandth of a second! This is faster than the blink of an eye, which takes at least 300 milliseconds. And needless to say, such opinions are formed subconsciously, before a person even knows he’s decided.
            So we should not be surprised that Trump supporters like the way he looks (people generally favor candidates with square jaws in times of trouble), or the simple way he sounds either. Because another series of studies shows that people don’t favor the candidate who seems smart or well-informed, but rather the candidate who makes them (the voters) feel smart. That is to say, according to social scientist Howard Gardner, stories are the gold standard for a politician: they “constitute the single most powerful weapon in the leader’s literary arsenal” (135). And the best stories in this regard are simple ones, ones that represent the binary world view (good vs. evil; dark vs. light) of typical 5-year-olds. Why are these stories best? Because everyone can understand them. They make voters feel smart (‘I understand the story and hence the complex problems of the world around me’ is the idea). Ronald Reagan knew this. So his solution to the nuclear threat everyone feared came to be known by the simple name of a popular film. Star Wars: the magical shield that would make us invulnerable to nukes. It was classic Good vs. Evil. America vs. the Evil Empire of the Russians. It was a brilliantly simple (and simpleminded) story designed to comfort those who were worried, and make them feel smart. U.S.A., U.S.A., we’re invulnerable, invincible.
            Now we have Donald Trump doing something similar. Worried about ISIS? We’ll bomb the shit out of them, not worrying about collateral damage like some politically correct egghead. Worried about immigrants taking your jobs, your country? We’ll build a wall on the border and deport the 11 million who’ve snuck in previously. Worried about your jobs going to China? We’ll just bring ‘em all back by force, threatening the foreigners, demanding the corporations do it or leave. It’s all simple and simple-minded, and those who are disaffected from the political process, from eggheads who are too afraid or too politically correct to “tell it like it is,” flock to his message and defend it and their choice against all contrary information or mistakes. He’s our guy, he’s got the balls to do what he says, he will save us from the evil (pick one: Russians, Muslims, terrorists, Ragheads, Wetbacks, Blacks, Chinks, etc etc.) ones who have taken our country from us.
            Stone-age brains. It’s oddly fitting when you think about it. Instead of us bombing the wogs (the North Vietnamese) back to the Stone Age, as General LeMay once put it, we find ourselves—at the apex of the modern world—following a philandering huckster down the path to Armageddon on the strength of our not-so-sophisticated-after-all Pleistocene brains.

Lawrence DiStasi

Friday, March 4, 2016

Scalia Disgraziato

At the risk of beating a dead horse, I’d like to address the accepted public narrative about Antonin Scalia and his vaunted reverence for the U.S. Constitution. Repeatedly we are told that this man may have been sharp and sometimes cruel in his opinions and dissents and queries to petitioners, but it was always in the service of his deep and abiding respect for the great founding document of the United States. Most of us take this at face value, having too little understanding of constitutional law and too little time to look into either the law or Scalia’s many rulings to judge its validity. But Renata Adler, a renowned journalist who has written for the New Yorker and many other publications (and is by no means ‘liberal,’ skewering 60s leftists mercilessly), needed only one of Scalia’s major decisions—it was actually a concurrence in which he and the Supreme Court, in Bush v. Gore, essentially handed the election of 2000 to George W. Bush—to demonstrate that the armor of originalism in which Scalia cloaked himself was so full of holes it might as well have been cheesecloth, or something more scatological. The article, collected in a new book of Adler’s pieces entitled After the Tall Timber (New York Review Books: 2015), is titled “Irreparable Harm,” and first appeared in The New Republic on July 30, 2001. What Adler concludes is that the Supreme Court’s decision to stop the hand counting of votes in Florida, thus granting Bush’s petition for a “stay” and thereby handing him the election, was “the most lawless decision in the history of the Supreme Court.” In the end, this may have been most fitting: not only was George W. Bush the most lawless president in our history, he was made president by an equally lawless Supreme Court under Chief Justice William Rehnquist.
            But to get back to Scalia. First, we must know that the decision in Bush v. Gore was made by the Supreme Court per curiam, which means it was an unsigned decision by the whole Court, with no single justice writing the decision (and thus taking responsibility for it), but with “concurrences” by Chief Justice Rehnquist and Justices Scalia and Thomas. Scalia’s concurrence is what Adler goes after most severely, though she slams Rehnquist’s words as well. She first points out that, historically, a “stay” was granted only in an emergency so dire that allowing someone to continue doing the act at issue threatened “irreparable harm” to the petitioner—harm that could not be undone. It also had to be the case that granting the “stay” would not harm the public interest. Thus, in his concurrence, this was the issue that the great Justice Scalia addressed. If the manual counting of votes in Florida continued, he wrote, it

“does in my view threaten irreparable harm to the petitioner [i.e. Bush], and to the country, by casting a cloud upon what he claims to be the legitimacy of his election.”

A quick look at what Scalia has written will explain why Renata Adler jumps on this like the proverbial dog on a bone. Scalia doesn’t write that the irreparable harm will strike the petitioner due to any objective or legal merit in his case. The alleged “irreparable harm” will come from what the petitioner [Bush] “claims to be the legitimacy” of his election. And the harm will take the form of “casting a cloud” over this claimed or alleged or premature (the vote count was ongoing) legitimacy. Adler’s scorn can hardly be contained: “Well there it is,” she writes:

The irreparable harm of “casting a cloud.” In the long and honorable tradition of injunctions and stays, this “irreparable injury” is a new one. Not just a cloud, but a cloud on “what he claims to be the legitimacy” of what he is claiming. By that standard, of course, every litigant in every case should be granted an injunction to halt the proceeding that offends him: the prosecutor casts a cloud on a claim of innocence; the civil plaintiff, a cloud on the defendant’s claim that he has already paid him. And of course vice versa, the defendants casting clouds on plaintiffs and prosecutors. The whole adversary system consists of a casting of clouds (Adler 185; emphasis added).

In other words, what Scalia and his fellow justices have done is to essentially undermine the entire justice system of the United States and most of the world. That is because if this case were taken as a precedent, then every plaintiff and every defendant could start claiming that his opponent’s claim, if granted, would cause his own claim (of innocence or legitimacy) irreparable harm and should thus be stopped! (‘Your claim that I owe you money would irreparably harm my claim that I don’t.’) And this decision—whose actual consequences have been so catastrophic for both the United States and the world (think only of Bush nominating both Samuel Alito and John Roberts to the Supreme Court; of Bush invading Iraq and throwing the entire world into turmoil; of Bush presiding over the collapse of Wall Street and world financial markets)—was made by and on behalf of those conservatives who have ranted endlessly about their respect for the rule of law and the Constitution’s original intent and the sanctity of legal precedent. 
            But Adler isn’t through yet. Legal precedent is an equally fundamental issue she goes after, because the decision in Bush v. Gore contains a final element of judicial bullshit. That is, in order to limit the institutional damage the Court seems to know it is causing, the Supremes added this little disclaimer:

“Our consideration is limited to the present circumstances, for the problem of equal protection in election processes generally presents many complexities.”

            This sentence drives Adler completely apoplectic. That’s not only because it makes almost no sense, but also because what it apparently says is that the decision in Bush v. Gore is not to be taken as a precedent for other cases. For this one case, the justices assert, precedent is wiped out, null and void. Adler really goes after this one. “If this were so,” she says (i.e. if precedent could simply be eliminated):

it would undermine, at one stroke, the whole basis of American and Anglo-Saxon law. That each case has precedential value, must have precedential value, is the bedrock of our system of justice. Otherwise, each case can be decided ad hoc, at the caprice of judges—non-elected, federal judges with lifelong tenure. The Constitution and even the Magna Carta would be superseded, the justices would be kings (488).

            That pretty much says it all. The conservative Supreme Court—the court that embodies the conservative objection to “activist judges” like those on the Warren and Burger Courts that decided Brown v. Board of Education and Roe v. Wade and handed down all those ‘liberal’ guarantees of due process that allow criminals to flourish—that court, with Scalia’s concurrence in the lead, had just shattered the principle of precedent: the “bedrock of our system of justice.”
            No wonder Adler calls it “the most lawless decision in the history of the Court.” No wonder she concludes that the Rehnquist Court, by taking the decision about who would be President of the United States away not only from the voters (whose decision the count was trying to determine) but also from those whom the Constitution directs to make the decision if it remains in doubt—the elected U.S. Congress, or even the chief executive of the state in question—had, in seizing power this way, also undermined the sacred (especially to them) separation of powers. The Supreme Court in Bush v. Gore, that is, had usurped the Constitution and taken on the mantle of kings and despots. And the bitter irony of this is contained in Adler’s headnote. It is a quote from Antonin Scalia’s scathing dissent in Morrison v. Olson in 1988, and reads in part: “Without a secure structure of separated powers, our Bill of Rights would be worthless.” As it turned out, Scalia’s and the Court’s decision did, in fact, within a very short time, make the Bill of Rights worthless. It made a whole lot more worthless as well, when it selected George W. Bush as President by lawlessly, unconstitutionally stopping the manual vote count in Florida.
            So next time you hear encomiums about Antonin Scalia’s reverence for the law, for the Constitution, think about whether he more aptly deserves the old raspberry. Or the moniker he might get in Italian: disgraziato.

Lawrence DiStasi