Thursday, July 28, 2016

Death of a Comrade

I’m not sure what I want to write about it, about the death yesterday of my old friend, Gian; except that I feel the need to write something. It brings to mind the opening piece I wrote for Una Storia Segreta (Heyday: 2001), which I titled after a line in Prospero Cecconi’s notebook, “Morto il camerata Protto.” Cecconi was referring to the death of his friend Giuseppe Protto, when both were domestic internees (yes, so-called ‘enemy aliens’ of Italian descent were interned during World War II after being judged “potentially dangerous”) imprisoned at Camp Forrest, TN. Cecconi, too, felt the need to write something in his clandestine notebook, though all he could bring himself to write was that simple line, Morto il camerata Protto; ‘my comrade Protto is dead.’ It was enough; years later, we can feel the pain, the loss, the loneliness in that lone line.
            Now, my friend Gian is dead. È morto lui. And though I have more time and more skill with language to write volumes about it, there really is very little to add. He’s dead. My friend Gian, whom I’ve known since the mid 1970s, and with whom I’ve laughed and joked and written and studied and cooked and celebrated our common heritage—we organized a little group we called the circolo in the early 1990s; and would gather once a month to cook together (they were sumptuous feasts) and laugh together and reminisce about our Italian parents and childhoods and the foods we used to eat—my friend Gianni is gone.
            At first I took it rather philosophically. Yes, I knew he was ill and in hospital, where I got to see him a week or two ago. Yes, I knew he had been placed on nothing but palliative care and was certain to slip into oblivion sooner rather than later. Yes, I had been expecting the call for days. And yes, I have more or less accepted the fact of death, the fact that we all die, that nothing is more certain than the death which is a necessity of our existence and often a blessing. But when it came, something internal shifted. I didn’t even notice it at first. I busied myself with finding some mementos I could contribute to an expected memorial service: some of his drawings, some writings about him and his vintage kitchen and vintage 1940s décor and vintage humor, a search that occupied most of the afternoon yesterday. But in the night I began to realize that I was grieving, albeit not in the way we think of as grieving: no tears, no depression to speak of, no laments about the futility of life or the too-early end of this one, or how I would miss him. No. There was mainly this sense of drift. I suddenly felt unmoored. It was as if an anchor in my very life had come loose—but not a literal anchor; some inner anchor that was more like a void or an eraser that had left me, or part of me, vacant. Adrift. Easily blown away. This happens more, perhaps, when one is older and the friends and relatives that remain get fewer and farther between. I don’t know. All I know is that Gianni was my close friend, someone I could always count on to be in my imagined gallery of people to speak to or places like his unique house and kitchen to go to, to sit and drink a companionable glass of wine and complain or joke or laugh or cogitate about the follies of the world with. For instance, there was the time I had been on my zen walk and was heading home through Berkeley, and simply dropped in for a quick rest and a cup of coffee, and he immediately saw something different in me, some spiritual light in me that no one else would have seen, much less valued.
            And now that real space, that imagined space is gone. Empty. The weight of it, that’s what strikes me most. The weight it provided in my life, the ballast that kept things secure and at least partly known—which is what a friend is, what a relative is, a known weight or quantity to keep one firmly in place—was gone; is gone. He is no more. Though I can still conjure up his looks and his speech patterns and his laughter, the man himself is no more. The weight of him. The solidity of him. The actual belly and blood of him.
            And how peculiar it really is, this sense of another. We can imagine it sometimes. We can still find the outlines in our image drawer in the mind. But we know, after death, that it is only an image. It has no flesh to it. No bones to it. No scent or feel to it. No response to it because it can’t talk back. It can’t provide the answering weight of itself to our own presence because it has no presence anymore. And presence—what a person actually is in the flesh—though it’s almost impossible to express, is something we know. Know without any reflection or reason that that’s what a person is, really. That presence. And it is not captured in drawings or photos or films or any medium but itself. Part of it can be captured, the part that is analogous to what we can conjure on our mental screens. But the real fullness of it, the living presence of another human or animal or tree or flower—that is never, cannot be ever captured in any of our media. We are fooled that it is. We are fooled into thinking that we really know those we see on TV or on our computer screens or in our smartphones. We don’t. All we know is shadows, poor bereft shadows that have no weight, no depth, no life. Which is what we’re left with when someone dies. Shadows. I still have the shadow of Gianni. But his presence, his weight, his laugh, his life—that is gone forever. And in some terrible way, it makes me lighter, more fleeting, more adrift.
            That is what we grieve. We grieve, I grieve the loss of that unique, indispensable, never-to-be-repeated presence that was Gian Banchero. That will never come again. So simple: È morto lui. So commonplace: È morto lui. And yet so deeply, unfathomably vacant, empty, weightless, gone.

Lawrence DiStasi

Tuesday, July 26, 2016

The Gene and its Discontents

Notwithstanding its beautifully rendered history of how scientists finally, after 2,500 years of speculation, discovered and named the “gene” as the mechanism of heredity (Darwin had no idea of this mechanism, speculating about tiny things he called “gemmules”), for me the most fascinating parts of The Gene, by Siddhartha Mukherjee (Scribner: 2016), are the materials on eugenics. Originated by Francis Galton (Darwin’s cousin) in the late 19th century, “eugenics” refers to the idea that humans should select the “best” genes from among human populations, selectively advance those “good” genes, and eliminate the “bad” ones to produce a race of “perfect” humans. In a lecture at the London School of Economics in 1904, Galton proposed that eugenics “had to be introduced to the national consciousness like a new religion.” Arguing that it was always better to be healthy than sick, and a ‘good’ rather than a ‘bad’ specimen of one’s kind, he proposed that mankind should be engaged in selectively breeding the best, the good, the strong. As Mukherjee quotes him: “If unsuitable marriages from the eugenic point of view were banned socially…very few would be made” (p. 73), the mechanism to promote ‘suitable marriages’ being a kind of golden studbook from which the “best” men and women could be chosen to breed their optimal offspring. No less a figure than H.G. Wells agreed with Galton, as did many others in England who even then were expressing fear that the inferior working classes would out-breed the better classes. Galton founded the Eugenics Review in 1909 to further advance his ideas, but died in 1911, before he could really get eugenics going in England. Other countries, however, notably Germany and the United States, were already taking steps to follow Galton’s lead. Indeed, at the first International Conference on Eugenics, held in London in 1912, one of the main presenters was an American named Bleecker Van Wagenen. Van Wagenen spoke enthusiastically about efforts already underway in the United States to eliminate “defective strains” (of humans), one of which involved confinement centers—called “colonies”—for the genetically unfit. These were the target of committees formed to consider the sterilization of ‘unfit’ humans such as epileptics, criminals, deaf-mutes, and those with various ‘defects’ of the eyes, bones, and mind (schizophrenics, manic depressives, the generally insane). As Van Wagenen suggested,

Nearly ten percent of the total population…are of inferior blood, and they are totally unfitted to become the parents of useful citizens…In eight of the states of the Union, there are laws, authorizing or requiring sterilization (77).

            Van Wagenen was not kidding. The United States continued its misreading of Darwin and its enthusiasm for sterilizing the ‘unfit’ well into the 20th century, and it was not just the lunatic fringe that was involved. Mukherjee cites a famous case that came before the Supreme Court in 1927, Buck v. Bell. This case concerned one Carrie Buck, a Charlottesville, Virginia woman whose mother, Emma Buck, had been placed in the Virginia State Colony for Epileptics and the Feebleminded after she was accused of immorality, prostitution and having syphilis. In fact, Emma Buck was simply a poor white woman with three children who had been abandoned by her husband. No matter; she was judged ‘unfit’ and, with her mother confined, little Carrie was placed in a foster home, was removed from school by her foster parents to work, and at age 17 became pregnant. Her foster parents, John and Alice Dobbs, then had her committed to the same State Colony on the grounds of feeblemindedness and promiscuity, where Carrie gave birth in March 1924 to a daughter, Vivian. But having been declared mentally incompetent, Carrie was unable to stop the Dobbses from adopting her baby. (One reason the Dobbses may have wanted the baby was that it later turned out that Carrie’s pregnancy was the result of a rape by their nephew.) Carrie was quickly scheduled to be sterilized, and the Supreme Court case of Buck v. Bell was brought to test the sterilization law—the 1924 Virginia Sterilization Act—to which Carrie Buck was subject, being already in a state institution for the feebleminded. Astonishingly, with the ‘great’ Oliver Wendell Holmes writing for the majority, the Supreme Court voted 8 to 1 that the Sterilization Act did not violate the U.S. Constitution’s due process provisions—since Carrie Buck had been given a hearing, and since she was already confined to a state institution. Mukherjee cites some of the now-infamous ruling by Holmes:

It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes…Three generations of imbeciles are enough (83-4).

In accordance with the Supreme Court’s ruling, on October 19, 1927, Carrie Buck was sterilized by tubal ligation. The fact that her daughter Vivian—the ‘third generation imbecile’ Holmes referred to—had performed adequately in the school she attended, being of decidedly average intelligence, did not save her; nor, for that matter, did it save Carrie’s sister Doris, who was also sterilized, without her knowledge, when she had her appendix removed. After this, sterilization was free to spread in the United States; in 1927, for instance, the great state of Indiana revised an earlier sterilization law to cover “confirmed criminals, idiots, imbeciles and rapists,” with other states following suit. Pre-marital genetic fitness tests became widespread, as did Better Babies contests at State Fairs. With the help of practical ‘genetics,’ America was out to produce a race of perfect humans fitted to its already ‘perfect’ political system and ‘perfect’ society.
            The logical next step in the eugenics movement came, of course, in Nazi Germany. In 1933, Mukherjee tells us, the Nazis enacted the Law for the Prevention of Genetically Diseased Offspring, aka the Sterilization Law. Its premises were borrowed directly from America’s own program: “Anyone suffering from a hereditary disease can be sterilized by a surgical operation,” the diseases to include mental deficiency, schizophrenia, epilepsy, depression, blindness, deafness, and other serious deformities (121). Any cases in dispute were referred to a Eugenics Court, whose rulings allowed for no appeal. With films like Das Erbe (The Inheritance, 1935) propagandizing in its favor, the law became a grim model of efficiency, with 5,000 adults being sterilized each month by 1934. And as with their other better-known programs, the Nazis moved smoothly and efficiently to the next step—euthanasia. A Scientific Registry of Serious Hereditary and Congenital Illnesses was set up, devoted to euthanizing (i.e., killing) those labeled defective in order to permanently ‘purify’ the gene pool. The Nazis coined a macabre euphemism to justify all this, perverting Socrates’ famous dictum that “the unexamined life is not worth living” into its grim opposite: the euthanized were characterized as having lebensunwertes Leben, ‘lives unworthy of living.’ Though at first the targets were limited to children under three, soon the net was extended to adolescents, then juvenile delinquents, and finally, in October 1939, to adults, with Jews at first conveniently labeled “genetically sick.” Typically, the Nazis set aside a villa, No. 4 Tiergartenstrasse in Berlin, as the official headquarters of their euthanasia program, which eventually became known as Aktion T4, after its street address. Mukherjee at this point gives us one of his trademark elegant sentences:

But it is impossible to separate this apprenticeship in savagery from its fully mature incarnation; it was in this kindergarten of eugenic barbarism that the Nazis learned the alphabets of their trade….The dehumanization of the mentally ill and physically disabled (“they cannot think or act like us”) was a warm-up act to the dehumanization of Jews (“they do not think or act like us”) (125, my emphasis).

            There is other fascinating material in this altogether fascinating book, but I will leave most of that to other readers to discover. What I should like to stress is what Mukherjee himself stresses about genes, the genetic code, and eugenics. First, that genes, contrary to common perceptions, are not blueprints that specify every element of an organism. Rather, they are like recipes: just as a recipe provides instructions for the process of cooking something, genes provide instructions for the process of building an organism. And as with a recipe, chance or even intentional events can produce all sorts of variants. The chance event par excellence, of course, is the mutation. The problem is that humans, especially those seduced by the prospect of either eliminating “bad” mutations or selecting for the “best” ones, misinterpret what mutations are and how they function in evolution. Citing the realization of Dr. Victor McKusick, Mukherjee makes the critical distinction that he wants everyone to grasp—that a mutation is a “statistical entity, not a pathological or moral one.” A mutation doesn’t imply something bad, like disease, nor even a gain or loss of function:

In a formal sense, a mutation is defined only by its deviation from the norm (the opposite of “mutant” is not “normal” but “wild type”—i.e. the type or variant found more commonly in the wild). A mutation is thus a statistical, rather than normative, concept. A tall man parachuted into a nation of dwarfs is a mutant, as is a blond child born in a country of brunettes—and both are “mutants” in precisely the same sense that a boy with Marfan syndrome is a mutant among non-Marfan, i.e., “normal,” children (264).

This distinction is critical, especially as regards the benighted attempts to create perfect humans or a race of normal humans. What we call “normal” is merely that which seems to be fitted to a given time, place, and conditions. To try to select for this “normalcy” is to completely misunderstand what genetics and evolution tell us. The “fittest” are not those who have won some sort of evolutionary or genetic race that is good for all time. They are simply those who may have turned out to be well-adapted to a given set of environmental and social circumstances. The worst conclusion one could draw from such “fitness” would be a) to decide to select only for those adaptations and exclude all others; or b) to try to interfere in genomes and eliminate all genetic variants in the vain hope that humans could be bred free of all illness or ‘unfitness.’ Conditions inevitably change. We have no idea what conditions might eventuate that would require some of the variants that we would like to prune out of existence—and prune is the accurate word here, leading us, as it does, to our modern mania to favor certain varieties of, say, apples or corn or wheat, while discarding the thousands of varieties that have evolved over centuries. This is a kind of ‘vegetable eugenics’ that many botanists have warned could leave the world without staple crops in the event of a pathogen that wipes out the now-dominant varieties. In short, a diverse gene pool is an absolute necessity for evolution to proceed.
            Yet despite the disfavor that eugenics has encountered in our time, the kind of thinking that fosters it is far from dead. Mukherjee cites a case from 1969, in which a woman named Hetty Park gave birth to a daughter with polycystic kidney disease, leading to the child’s rapid death. Park’s obstetrician thereupon assured her that the disease was not genetic, and that there was no reason she should not have another healthy child. Park conceived again, but sadly the same result ensued, whereupon Park sued her obstetrician for bad advice, and won. The court ruled that “the right of a child to be born free of [genetic] anomalies is a fundamental right.” Mukherjee points out that “this was eugenics reincarnated.” In other words, the court had ratified an expectation that the particular genetic mutation that caused harm to the Park family violated their rights—in effect, that that mutation should not exist. In the coming world of gene manipulation, we can expect that many mutations that now are classed as “abnormal” will be similarly classified and excised from existence. But as Mukherjee reminds us again and again, if we can expect anything, we can expect that conditions will certainly change. What appears “normal” now may one day be considered to have had only temporary value, suited to a very specific time and place. As Mukherjee notes at the end of his book, “Normalcy is the antithesis of evolution” (481). That is, though we have come to distrust and despise “mutations” that compromise what we consider ‘normal,’ evolution absolutely requires them, requires a gene pool that is as varied and diverse as it can be. Mutations are the lifeblood of such diversity, the bank on which evolution relies to adapt to ever-new circumstances. And equally important, evolution does not proceed according to human wants or needs or the wants or needs of any organism. Evolution proceeds according to what works, what is adaptable to a given circumstance at a given point in time. There is no good or bad adaptation. There is no good or bad mutation. There is no “normal,” much less “best,” genome or genetic code. No one can ever know what might be needed. So before humans go about eliminating that which appears negative or useless in any given era, they should think twice about eliminating precisely that which might one day prove salvational. Here is a passage Mukherjee cites towards the end of his book:

“Gene editing,” the stem cell biologist George Daley noted, “raises the most fundamental issues about how we are going to view our humanity in the future and whether we are going to take the dramatic step of modifying our own germ line and in a sense take control of our genetic destiny, which raises enormous perils for humanity” (479).

Siddhartha Mukherjee uses the history of eugenics as an object lesson: those ‘enormous perils’ of humans ‘modifying our own germ line’ are perils not just for humans, but for all the life on this planet.

Lawrence DiStasi

Friday, July 22, 2016

TrumpSpeak

My title, as many of you will recognize, is a variant of the word “Newspeak” from George Orwell’s dystopian novel, 1984. Whether one should credit Donald Trump with coining a new form of speech may be questionable, but watching his performance last night, I was struck not so much by the laughable misrepresentation of almost all his alleged “facts” (if you want a good rundown of how each of Trump’s ‘factoids’ was grossly exaggerated, de-contextualized, or outright false, see the Washington Post piece here: https://www.washingtonpost.com/news/fact-checker/wp/2016/07/22/fact-checking-donald-trumps-acceptance-speech-at-the-2016-rnc/), as by his speech patterns. (In case you’ve forgotten, one of the reasons fact-checking doesn’t matter much for a Trump audience has to do with their ‘stone-age brains.’ Briefly, most people employ a quick, instinctive estimate, done in milliseconds, of a politician’s looks and/or manner, and completely bypass the reasoning process behind the information he delivers. This accords with the stone-age brains most of us still work with in interpersonal relations.)
            So, for now, let’s bypass the howlers Trump spouted in his overly long but factually empty speech, and attend instead to the patterns of rhetoric he used. To begin with, the man seems to be mostly driven—both in his domestic critiques and his foreign ones—by the notion of “getting a good deal.” This would figure, since his life seems to have been devoted to deal-making in the high-risk world of (mostly) Manhattan real estate. It is a world dominated by con men and hucksters who are always out to screw the naïve or the unwary. The New Yorker, therefore, must always be on his guard to make sure he’s not being screwed. This applies to all New Yorkers in all areas of life, but especially to those engaged in the dog-eat-dog world of real estate developing. Accordingly, Donald Trump’s rhetoric is full of critiques of his predecessors like Hillary and Obama and Bill Clinton for “not getting a good deal.” In his eyes, they gave away the store in the Iran nuclear deal; they gave away the store in Libya and Syria and Russia and China and especially in trade deals like NAFTA and the upcoming TPP. In short, previous political leaders succumbed to the cardinal sin in Trump’s world: they didn’t negotiate hard or cleverly enough; weren’t willing enough to play hardball; weren’t willing enough to talk tough and walk away and threaten and harangue. Now, of course, Trump has no way of knowing this; he wasn’t there; has never been engaged in any diplomatic activity or anything remotely political; and certainly is not about to consider the way that the United States has totally dominated and exploited almost every relationship it has entered in the post-World War II years. No. All he’s willing to bray about is how weak the nation has become, i.e. how it can no longer dictate the terms of every agreement due to its position as the biggest, baddest, most powerful nation on the globe. So he claims that he, the great real estate wheeler-dealer, will be able to make ‘better deals’—even, presumably, with those shirkers at home who want a free lunch.  
            And that brings us to the second noticeable rhetorical pattern. Trump never explains exactly how he’s going to accomplish all this. All he does is, first, exaggerate the problem—we’re besieged by criminals and loafers domestically and by terrorists from abroad, our cities are falling apart, our industry has all left for cheaper shores due to bad trade deals, cops are being murdered at the highest rate ever—and then assert that he’s the one who, with his superior deal-making ability, will fix the problem. Crime will end. Immigration will end. Terrorism will end. Globalization will end. Inner-city poverty will end. And he, Donald Trump, will end it.
            But how? These are complex, difficult problems that Republicans and Democrats alike have been promising to solve for decades. Not for Trump. The language is simple, the problems are simple, the solution is simple: Put Trump in Charge. And soon, trillions of dollars will be pouring into the nation’s coffers, taxes will be far lower, saving everyone more trillions, roads will be built, infrastructure will be modernized, onerous regulations will disappear, freeing up our energy sources (never mind the pollution or global warming) and pouring in even more trillions, and America Will Be Great Again.
            It is simple. And it is simpleminded. And the stone-age brains crowding the Republican Convention could not cheer loud enough or stomp hard enough or chant USA! USA! USA! often enough to roar their approval. Their devotion, even. Their lord and savior was saying it. He was saying it with confidence and certainty and with his jaw jutting out like some latter-day Benito Mussolini, and they were ecstatic (as Mussolini’s crowds often were). He would talk tough. He would be tough. He would just take those over-educated fancy-nancy diplomats and bureaucrats by the throat, saying ‘fuck your reasoning and diplomacy and equity,’ and force them to give him a good deal. And if they didn’t, he’d bomb the shit out of them.
            And that’s it. After one of the longest acceptance speeches in convention history, Donald Trump managed to say virtually nothing but the same posturing, simple-minded crap he’s been spouting throughout his primary campaign. Leaving the rest of us, the ones searching for some sort of program or plan or logic to his meandering speech, to wonder: how can they swallow this infantile pap? How can they not see that this guy has no capacity for any thought that’s longer than a sentence or two? Did you notice that? He never stayed with one subject for any sustained length of time: it was all quick cuts, as in a commercial. Crime in the streets. Shooting cops. Terrorists. Hillary and Libya, Iraq, Syria, Egypt. NAFTA. China. Back to high unemployment. Obamacare. It reminded me of what was revealed in Jane Mayer’s recent article in the New Yorker, in which she interviewed Trump’s ghostwriter Tony Schwartz (he wrote The Art of the Deal for Trump)—i.e., that Trump had no capacity whatever to focus on anything for longer than a minute or two. Trying to interview Trump, said Schwartz, was like trying to interview a chimp with ADHD (my metaphor). The man had no capacity to concentrate at all, so Schwartz ended up following Trump around, listening in on phone calls and interactions and inspections, to scare up material for the book. The other thing Schwartz noticed—after Trump threatened him with a lawsuit and demanded that he return all the royalties Schwartz had earned from the bestseller—is that Trump’s famously thin skin demands that he instantly attack anyone who criticizes him. We all saw that in this spring’s Republican debates. What Schwartz reminds us is how frightening this quality would be in a President:

“The fact that Trump would take time out of convention week to worry about a critic is evidence to me not only of how thin-skinned he is, but also of how misplaced his priorities are,” Schwartz wrote. He added, “It is axiomatic that when Trump feels attacked, he will strike back. That’s precisely what’s so frightening about his becoming president.” (Jane Mayer, “Donald Trump Threatens the Ghostwriter of ‘The Art of the Deal’”, New Yorker, July 20, 2016.)

            Donald Trump, in short, gave one of the most consistently alarmist acceptance speeches in American political history last night. But what we should truly be alarmed about is ever ceding the enormous responsibility and power of the American presidency to a man who is so ill-equipped—emotionally, mentally, and morally—to handle it. For if, with the help of a gang of speechwriters, he is unable or unwilling to put together a cogent argument that at least attempts to fill in some of the missing spaces of TrumpSpeak, then every American with an ounce of sense should be terrified about how those missing spaces might eventually take some reckless, cataclysmic shape.

Lawrence DiStasi

Friday, July 8, 2016

From Chilcot to ISIS

The bombshell in Britain in recent days has been the long-awaited (seven years in the making) report by Sir John Chilcot condemning Britain’s role in the 2003 invasion of Iraq. Most Britons, like most Americans, have long since concluded that the invasion was a disaster. But though the report fails to assign legal culpability (which many Britons who lost loved ones in the invasion had hoped it would), it does roast former prime minister Tony Blair pretty thoroughly. It says, in part, that his

 “judgements about the severity of the threat posed by Iraq’s weapons of mass destruction—WMD—were presented with a certainty that was not justified” and “Despite explicit warnings, the consequences of the invasion were underestimated….It is now clear that policy on Iraq was made on the basis of flawed intelligence and assessments. They were not challenged, and they should have been.”

It also explicitly condemns Blair (known in Britain as ‘Bush’s poodle’) for blindly following the lead of President Bush, citing a letter Blair wrote in July 2002 promising that “I will be with you whatever…” According to the report, this constituted Blair’s only success: appeasing George W. Bush.
            That the report took seven years to appear is in part attributed (by a 2003 report in London’s Independent cited in Alternet’s account of the Chilcot release) to a “fierce battle” waged by the U.S. State Department and the White House as early as 2003 to block release of the report because it allegedly contained “classified information.” Whether the release of the report in 2003 would have saved lives, either British or Iraqi, is not known, but it might at least have caused some re-evaluation of the Bush administration’s rationale for the invasion, which in turn might have led to Bush’s defeat in the 2004 election. Instead, of course, we got four more years of the worst presidency in history.
            If this were the end of it, the Iraq war blunder would still count as a horror costing millions of lives, but not as grave or extended a one as it subsequently turned out to be. For the current plague of ISIS attacks in Iraq, Syria and now throughout the Middle East and the world stems directly from the hubris and secrecy of the Bush Administration during that time. This is made clear in a recent (first aired May 17, and again, when I saw it, on July 5) Frontline documentary: The Secret History of ISIS (http://www.pbs.org/wgbh/frontline/film/the-secret-history-of-isis/). What the documentary reveals is how ISIS was able to thrive and grow through a series of blunders—mainly driven by “optics”—regarding its first leader, one Abu Musab al Zarqawi. We learn that Zarqawi was known to the CIA even before the invasion in 2003: according to Nada Bakos, a CIA analyst charged with looking into his background, Zarqawi was a tough kid who grew up in a tough neighborhood in Jordan, one who appeared on his way to a lifetime in prison as a thug, pimp, and general hardass covered with tattoos. But one stint in prison radically changed him: he became a jihadist, a holy warrior; and to demonstrate his zeal, he actually removed his tattoos by using a razor blade to cut off his outer layer of skin. After that, he left Jordan for Kandahar in Afghanistan, determined to join up with Osama bin Laden. But bin Laden ignored this wannabe from Jordan, and in 2002 Zarqawi saw a chance to strike out on his own, this time in Iraq. He set himself up near the Iran/Iraq border and began building his den of crazies. Fortunately, the CIA had an informant in Zarqawi’s camp and saw him as a definite threat in the event of an invasion, particularly since his group was apparently trying to build chemical and biological weapons. Sam Faddis, the CIA officer assigned to the case, therefore formed a plan to take him out and forwarded the attack plan to the White House for approval.
            But the White House, in the person of VP Dick Cheney and his aide Scooter Libby, wanted no part of the takeout, especially before the big invasion, so Cheney and Libby drove to CIA headquarters to undermine the agency’s information. From their aggressive questioning, it was clear that the White House had more in mind than simple worry about a strike that might pre-empt its war plans. They had concocted a narrative of Saddam Hussein’s al Qaeda connections and involvement in 9/11 as a big part of their casus belli. And when the CIA said there was no connection, it was clear that Cheney and Libby badly wanted there to be one. This would eventually lead to Colin Powell’s memorable speech at the UN, in which the Secretary of State besmirched his reputation by accepting the White House’s script—which he, uncharacteristically, read verbatim. And though the White House appeared to follow protocol by sending the script to the CIA for vetting, Nada Bakos testifies in the documentary that the White House simply ignored the CIA’s corrections and stayed with its required script. As Colin Powell authoritatively put it:

“…there’s a sinister nexus between Iraq and terrorist networks. Iraq today harbors a deadly terrorist network headed by Abu Musab al Zarqawi, an associate and collaborator of Osama bin Laden and his lieutenants.”

When confronted in the Frontline documentary about this clear fabrication in his UN speech, Colin Powell claims that his memory now is vague, but insists that his references to Zarqawi were unimportant to his general case. The truth is that a full seven minutes of the Powell speech were devoted to Zarqawi, who is mentioned no fewer than 21 times, thus firmly connecting Iraq and Saddam to the terrorist network that had already attacked the United States on 9/11. Not incidentally, Powell’s speech also transformed Zarqawi into a major terrorist directing a worldwide terror organization. It is almost as if Colin Powell created Zarqawi, and ISIS, at that very moment.
            From this point, everything that the United States did played into Zarqawi’s hands. First came shock and awe, tearing apart a nation. Then came Paul Bremer, the moron placed in charge of the Coalition Provisional Authority, who not only dismantled the entire governmental structure of Iraq but then fired the entire military, leaving some quarter of a million experienced soldiers without a job or means of livelihood. Zarqawi wasted no time in recruiting thousands of these Sunni ex-soldiers, and they today form a major portion of the ISIS forces. Even General David Petraeus testifies in the documentary that the effect of Bremer’s move was “devastating” and planted the seeds of the insurgency. Zarqawi’s attacks began almost immediately, with devastating car bombs that turned Baghdad and the rest of Iraq into a charnel house of raging sectarian war. That he planned to do this was clear from a letter Zarqawi wrote laying out his plans: he wanted Iraq torn apart by sectarian conflict, leaving it vulnerable to his more ambitious plan to create a caliphate. Bombing the UN headquarters added to the chaos, because both the UN and the NGOs that might have provided some protection and order immediately fled Iraq.
            It was at this point that Nada Bakos sent a briefing document to the White House saying specifically that Zarqawi was responsible for the major attacks and was looking to foment a civil war. It got to Scooter Libby, who then called Bakos and summoned her to his office, clearly to pressure her to change her main conclusion, i.e., that there was an insurgency in Iraq that threatened the entire American project. It was that word, insurgency, that the White House found toxic. It implied that the Iraqi people weren’t completely overjoyed about the American invasion. Again, it was the optics that the White House wanted to change. So the administration, especially Donald Rumsfeld in his press conferences, ridiculed news reports that focused on the alleged chaos—which officials vociferously denied. The denial, of course, made it impossible to combat the insurgency, which was allowed to grow unhindered.
            Zarqawi made the most of such denial. He instituted a reign of terror that had never been seen before, beheading American Nicholas Berg on camera to establish his credentials as a slaughterer of epic proportions (one of his monikers was the Sheikh of the Slaughterers). And though even Osama bin Laden tried to slow him down, objecting to the killing of Muslims by other Muslims, Zarqawi’s response was to blow up one of the most sacred Shia sites in Iraq, the Golden Dome of Samarra. This was the final straw for Shias, and all-out sectarian war ensued—exactly what Zarqawi wanted. Shortly thereafter, he showed himself on camera firing an American automatic weapon to emphasize his power and ruthlessness, as well as his plan to set up an Islamic state as the first step in forming a global caliphate.
            We know the rest. Even though Abu Musab al Zarqawi was finally killed in a U.S. airstrike in 2006, his fiendish methods and plans have been continued by his even more ruthless successor, Abu Bakr al Baghdadi. What’s most disturbing is that all of this—the destruction of Iraq in the first place, the refusal to take out Zarqawi when the CIA wanted to, the idiocy of disbanding and setting adrift a quarter million potential fighters from the former Iraqi army, the mania to sanitize and justify the whole lunacy—all of it might have been prevented if saner heads had prevailed. But of course, that is what marks the late lamented Bush Administration: lunacy and hubris (and an optimistic savagery) from top to bottom. At this point—with so many lives lost or ruined, and the Middle East in unprecedented chaos—all we can do is hope we shall never see its like again.

Lawrence DiStasi

Saturday, July 2, 2016

'Killer App' Addendum

Scanning my college alumni magazine, I came across a piece by Judith Hertog called “A Monitored State.” Since it relates closely to my earlier blog, Killer App, I thought its report might be useful here as a gloss on that piece. “A Monitored State” describes Dartmouth professor Andrew Campbell’s experiment monitoring student behavior via the smartphones that virtually all students carry and use constantly. A paper he wrote described how smartphone sensor data “contain such detailed information about a user’s behavior that researchers can predict the user’s GPA (grade point average) or identify a user who suffers from depression or anxiety.” In this study, called Student Life, 48 student volunteers allowed Campbell’s team to gather a stream of data via an app installed on their smartphones. The app “tracked and downloaded information from each phone’s microphone, camera, light sensor, GPS, accelerometer and other sensors” and then uploaded it to a database. By analyzing the data, Campbell’s researchers were able to record details about each student’s location, study habits, parties attended, exercise programs, and sleep patterns. For at least two students, Campbell was even able to see signs of depression: “I could see they were not interacting with other people, and one was not leaving his room at all,” Campbell said. Both failed to show up for finals, whereupon Campbell gave them incompletes and encouraged them to return in the fall to complete their courses successfully. What Campbell draws from this is that, in the future, not only will universities be able to intervene to help students in such situations, but such information will be available in real time to monitor everything, including the state of every student’s mental well-being.
            Campbell has also collaborated with brain science colleagues “to discover how smartphone sensor data can be combined with information from fMRI scans” in order to eventually create apps that not only identify mental problems but also “intervene before a breakdown occurs.” In fact, in a follow-up phase of his study, he got student volunteers to submit to fMRI scans and wear a Microsoft smart band that collected body signals like heart rate, body temperature, sleep patterns, and galvanic skin response—all associated with stress. Thus, more than simple behaviors, today’s technologies can (and already do) detect, grossly at least, an individual’s state of mind. One of Campbell’s colleagues predicts that, in addition to identifying which individuals are “most susceptible to weight gain,” the smartphone of the future will be able to warn when “its owner enters a fast-food restaurant.”
            The potential threat from all these technologies has not been lost on Campbell and his colleagues. His collaborator, Prof. Todd Heatherton, is already worried about a future determined by the constant collection of the data monitored by smartphones and its use by companies such as insurance underwriters to determine who gets insurance and how much they pay for it. Heatherton was also shocked by how casual students were about sharing such personal data for his study. But clearly, this generation is already used to sharing just about everything on apps like Find Friends, which broadcasts one’s location to everyone in one’s network. For Heatherton and others, this raises important questions about the ethics of all this technology and how far it can be used to monitor every detail of our lives. James Moore, a Dartmouth philosophy professor specializing in ethics, worries about how information about a person’s entire life could be used by governments wanting, for just one example, to monitor those on welfare, or by totalitarian governments that could use such data to keep potentially rebellious populations under rigid control.
            Campbell himself worries about the same thing, hoping that legislation will be forthcoming that will at least give individuals ownership of their own data (now being used by Google and many others for commercial purposes and more). People need to think about this, he says, and realize that “we are turning into a monitored state.” Or perhaps already are.
            Even George Orwell couldn’t have imagined such an easily ‘big-brothered’ state—and all thanks to those adorable smartphones.  

Lawrence DiStasi