Thursday, July 28, 2016

Death of a Comrade


I’m not sure what I want to write about it, about the death yesterday of my old friend, Gian; except that I feel the need to write something. It brings to mind the opening piece I wrote for Una Storia Segreta (Heyday: 2001), which I titled after a line in Prospero Cecconi’s notebook, “Morto il camerata Protto.” Cecconi was referring to the death of his friend Giuseppe Protto, when both were domestic internees (yes, so-called ‘enemy aliens’ of Italian descent were interned during that war after being judged “potentially dangerous”) imprisoned at Camp Forrest, TN during World War II. Cecconi, too, felt the need to write something in his clandestine notebook, though all he could bring himself to write was that simple line, Morto il camerata Protto; ‘my comrade Protto is dead.’ It was enough; years later, we can feel the pain, the loss, the loneliness in that lone line.
            Now, my friend Gian is dead. È morto lui. And though I have more time and more skill with language to write volumes about it, there really is very little to add. He’s dead. My friend Gian, whom I’ve known since the mid 1970s, and with whom I’ve laughed and joked and written and studied and cooked and celebrated our common heritage—we organized a little group we called the circolo in the early 1990s; and would gather once a month to cook together (they were sumptuous feasts) and laugh together and reminisce about our Italian parents and childhoods and the foods we used to eat—my friend Gianni is gone.
            At first I took it rather philosophically. Yes, I knew he was ill and in hospital where I got to see him a week or two ago. Yes, I knew he had been placed on nothing but palliative care and was certain to slip into oblivion sooner rather than later. Yes, I had been expecting the call for days. And yes, I have more or less accepted the fact of death, the fact that we all die, that nothing is more certain than the death which is a necessity of our existence and often a blessing. But when it came, something internal shifted. I didn’t even notice it at first. I busied myself with finding some mementos I could contribute to an expected memorial service, some of his drawings, some writings about him and his vintage kitchen and vintage 1940s décor and vintage humor that kept me busy most of the afternoon yesterday. But in the night I began to realize that I was grieving, albeit not in the way we think of as grieving: no tears, no depression to speak of, no laments about the futility of life or the too-early death of this life, or how I would miss him. No. There was mainly this sense of drift. I suddenly felt unmoored. It was as if an anchor in my very life had come loose—but not a literal anchor; some inner anchor that was more like a void or an eraser that had left me, or part of me, vacant. Adrift. Easily blown away. This happens more, perhaps, when one is older and the friends and relatives that remain get fewer and farther between. I don’t know. All I know is that Gianni was my close friend, someone I could always count on to be in my imagined gallery of people to speak to or places like his unique house and kitchen to go to, to sit and drink a companionable glass of wine and complain or joke or laugh or cogitate about the follies of the world with. For instance, there was the time I had been on my zen walk and was heading home through Berkeley, and simply dropped in for a quick rest and a cup of coffee and he immediately saw something different in me, some spiritual light in me that no one else would have seen much less valued.
            And now that real space, that imagined space is gone. Empty. The weight of it, that’s what strikes me most. The weight it provided in my life, the ballast that kept things secure and at least partly known—which is what a friend is, what a relative is, a known weight or quantity to keep one firmly in place—was gone; is gone. He is no more. Though I can still conjure up his looks and his speech patterns and his laughter, the man himself is no more. The weight of him. The solidity of him. The actual belly and blood of him.
            And how peculiar it really is, this sense of another. We can imagine it sometimes. We can still find the outlines in our image drawer in the mind. But we know, after death, that it is only an image. It has no flesh to it. No bones to it. No scent or feel to it. No response to it because it can’t talk back. It can’t provide the answering weight of itself to our own presence because it has no presence anymore. And presence—what a person actually is in the flesh—though it’s almost impossible to express, is something we know. Know without any reflection or reason that that’s what a person is, really. That presence. And it is not captured in drawings or photos or films or any medium but itself. Part of it can be captured, the part that is analogous to what we can conjure on our mental screens. But the real fullness of it, the living presence of another human or animal or tree or flower—that is never, cannot be ever captured in any of our media. We are fooled that it is. We are fooled into thinking that we really know those we see on TV or on our computer screens or in our smartphones. We don’t. All we know is shadows, poor bereft shadows that have no weight, no depth, no life. Which is what we’re left with when someone dies. Shadows. I still have the shadow of Gianni. But his presence, his weight, his laugh, his life—that is gone forever. And in some terrible way, it makes me lighter, more fleeting, more adrift.
            That is what we grieve. We grieve, I grieve the loss of that unique, indispensable, never-to-be-repeated presence that was Gian Banchero. That will never come again. So simple: È morto lui. So commonplace: È morto lui. And yet so deeply, unfathomably vacant, empty, weightless, gone.

Lawrence DiStasi

Tuesday, July 26, 2016

The Gene and its Discontents


Notwithstanding its beautifully rendered history of how scientists finally, after 2,500 years of speculation, discovered and named the “gene” as the mechanism of heredity (Darwin had no idea of this mechanism, speculating about tiny things he called “gemmules”), for me the most fascinating parts of The Gene, by Siddhartha Mukherjee (Scribner: 2016), are the materials on eugenics. Originated by Francis Galton (Darwin’s cousin) in the late 19th century, “eugenics” refers to the idea that humans should try to select the “best” genes from among human populations, selectively advancing the “good” genes and eliminating the “bad” ones to produce a race of “perfect” humans. In a lecture at the London School of Economics in 1904, Galton proposed that eugenics “had to be introduced to the national consciousness like a new religion.” Arguing that it was always better to be healthy than sick, and a ‘good’ rather than a ‘bad’ specimen of one’s kind, he proposed that mankind should be engaged in selectively breeding the best, the good, the strong. As Mukherjee quotes him: “If unsuitable marriages from the eugenic point of view were banned socially…very few would be made” (p. 73), the mechanism to promote ‘suitable marriages’ being a kind of golden studbook from which the “best” men and women could be chosen to breed their optimal offspring. No less a figure than H.G. Wells agreed with Galton, as did many others in England who even then were expressing fear that the inferior working classes would out-breed the better classes. Galton founded the Eugenics Review in 1909 to further advance his ideas, but died in 1911, before he could really get eugenics going in England. Other countries, notably Germany and the United States, were already taking steps to follow Galton’s lead. Indeed, at the first International Conference on Eugenics, held in London in 1912, one of the main presenters was an American named Bleecker Van Wagenen. Van Wagenen spoke enthusiastically about efforts already underway in the United States to eliminate “defective strains” (of humans), one of which involved confinement centers—called “colonies”—for the genetically unfit. These were the target of committees formed to consider the sterilization of ‘unfit’ humans such as epileptics, criminals, deaf-mutes, and those with various ‘defects’ of the eyes, bones, and mind (schizophrenics, manic depressives, the generally insane). As Van Wagenen suggested,

Nearly ten percent of the total population…are of inferior blood, and they are totally unfitted to become the parents of useful citizens…In eight of the states of the Union, there are laws, authorizing or requiring sterilization (77).

            Van Wagenen was not kidding. The United States continued its misreading of Darwin and its enthusiasm for sterilizing the ‘unfit’ well into the 20th century, and it was not just the lunatic fringe that was involved. Mukherjee cites a famous case that came before the Supreme Court in 1927, Buck v. Bell. The case concerned one Carrie Buck, a Charlottesville, Virginia woman whose mother, Emma Buck, had been placed in the Virginia State Colony for Epileptics and the Feebleminded after being accused of immorality, prostitution, and having syphilis. In fact, Emma Buck was simply a poor white woman with three children who had been abandoned by her husband. No matter; she was judged ‘unfit,’ and with her mother confined, little Carrie was placed in a foster home, was removed from school by her foster parents to work, and at age 17 became pregnant. Her foster parents, John and Alice Dobbs, then had her committed to the same State Colony on the grounds of feeblemindedness and promiscuity, and there, in March 1924, Carrie gave birth to a daughter, Vivian. Having been declared mentally incompetent, Carrie was unable to stop the Dobbses from adopting her baby. (One reason the Dobbses may have wanted the baby was that Carrie’s pregnancy, it later turned out, was the result of a rape by the Dobbses’ nephew.) Carrie was quickly scheduled to be sterilized, and the Supreme Court case of Buck v. Bell was brought to test the sterilization law—the 1924 Virginia Sterilization Act—to which Carrie Buck was subject, being already in a state institution for the feebleminded. Astonishingly, with the ‘great’ Oliver Wendell Holmes writing the majority opinion, the Supreme Court voted 8 to 1 that the Sterilization Act did not violate the U.S. Constitution’s due process provisions—since Carrie Buck had been given a hearing, and since she was already confined to a state institution. Mukherjee cites some of Holmes’s now-infamous ruling:

It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes…Three generations of imbeciles are enough (83-4).

In accordance with the Supreme Court’s ruling, on October 19, 1927, Carrie Buck was sterilized by tubal ligation. The fact that her daughter Vivian—the ‘third-generation imbecile’ Holmes referred to—had performed adequately in the school she attended, being of decidedly average intelligence, did not save Carrie; nor, for that matter, did it save her sister Doris, who was also sterilized, without her knowledge, when she had her appendix removed. After this, sterilization was free to spread in the United States; in 1927, for instance, the great state of Indiana revised an earlier sterilization law to cover “confirmed criminals, idiots, imbeciles and rapists,” with other states following suit. Pre-marital genetic fitness tests became widespread, as did Better Babies contests at state fairs. With the help of practical ‘genetics,’ America was out to produce a race of perfect humans fitted to its already ‘perfect’ political system and ‘perfect’ society.
            The logical next step in the eugenics movement came, of course, in Nazi Germany. In 1933, Mukherjee tells us, the Nazis enacted the Law for the Prevention of Genetically Diseased Offspring, aka the Sterilization Law. Its premises were borrowed directly from America’s own program: “Anyone suffering from a hereditary disease can be sterilized by a surgical operation,” the diseases to include mental deficiency, schizophrenia, epilepsy, depression, blindness, deafness, and other serious deformities (121). Any cases in dispute were referred to a Eugenics Court, whose rulings allowed for no appeal. With films like Das Erbe (The Inheritance, 1935) propagandizing in its favor, the law became a grim model of efficiency, with 5,000 adults being sterilized each month by 1934. And as with their other better-known programs, the Nazis moved smoothly and efficiently to the next step—euthanasia. A Scientific Registry of Serious Hereditary and Congenital Illnesses was set up, devoted to euthanizing (i.e., killing) ‘defectives’ in order to ‘purify’ the gene pool. The Nazis coined a euphemism to justify all this, perverting Socrates’ famous dictum about “the unexamined life not being worth living” into its macabre opposite: the euthanized were characterized as having lebensunwertes Leben, ‘lives unworthy of living.’ Though at first the targets were limited to children under three, soon the net was extended to adolescents, then juvenile delinquents, and finally, in October 1939, to adults, with Jews at first conveniently labeled “genetically sick.” Typically, the Nazis set aside a villa, No. 4 Tiergartenstrasse in Berlin, as the official headquarters of their euthanasia program, which became known as Aktion T4, after its street address. Mukherjee at this point gives us one of his trademark elegant sentences:

But it is impossible to separate this apprenticeship in savagery from its fully mature incarnation; it was in this kindergarten of eugenic barbarism that the Nazis learned the alphabets of their trade….The dehumanization of the mentally ill and physically disabled (“they cannot think or act like us”) was a warm-up act to the dehumanization of Jews (“they do not think or act like us”) (125; my emphasis).

            There is other fascinating material in this altogether fascinating book, but I will leave most of that to other readers to discover. What I should like to stress is what Mukherjee himself stresses about genes, the genetic code, and eugenics. First, that genes, contrary to common perceptions, are not blueprints that form every element of an organism. Rather, they are like recipes: just as a recipe provides instructions for the process of cooking something, genes provide instructions for the process of building an organism. And as with a recipe, lots of chance or even intentional events can produce all sorts of variants. The chance event par excellence, of course, is the mutation. The problem is that humans, especially those seduced by the prospect of either eliminating “bad” mutations or selecting for the “best” ones, misinterpret what mutations are and how they function in evolution. Citing the realization of Dr. Victor McKusick, Mukherjee makes the critical distinction that he wants everyone to grasp—that a mutation is a “statistical entity, not a pathological or moral one.” A mutation doesn’t imply something bad, like disease, nor even a gain or loss of function:

In a formal sense, a mutation is defined only by its deviation from the norm (the opposite of “mutant” is not “normal” but “wild type”—i.e. the type or variant found more commonly in the wild). A mutation is thus a statistical, rather than normative, concept. A tall man parachuted into a nation of dwarfs is a mutant, as is a blond child born in a country of brunettes—and both are “mutants” in precisely the same sense that a boy with Marfan syndrome is a mutant among non-Marfan, i.e., “normal,” children (264).

This distinction is critical, especially as regards the benighted attempts to create perfect humans or a race of normal humans. What we call “normal” is merely that which seems to be fitted to a given time, place, and conditions. To try to select for this “normalcy” is to completely misunderstand what genetics and evolution tell us. The “fittest” are not those who have won some sort of evolutionary or genetic race that is good for all time. They are simply those who may have turned out to be well-adapted to a given set of environmental and social circumstances. The worst conclusion one could draw from such “fitness” would be a) to decide to select only for those adaptations and exclude all others; or b) to try to interfere in genomes and eliminate all genetic variants in the vain hope that humans could be bred free of all illness or ‘unfitness.’ Conditions inevitably change. We have no idea what conditions might eventuate that would require some of the variants we would like to prune out of existence—and prune is the accurate word here, leading us, as it does, to our modern mania for favoring certain varieties of, say, apples or corn or wheat, while completely crowding out the thousands of varieties that have evolved over centuries. This is a kind of ‘vegetable eugenics’ that many botanists have warned could leave the world without staple crops in the event of a pathogen that wipes out the now-dominant varieties. In short, a diverse gene pool is an absolute necessity for evolution to proceed.
            Yet despite the disfavor that eugenics has encountered in our time, the kind of thinking that fosters it is far from dead. Mukherjee cites a case from 1969, in which a woman named Hetty Park gave birth to a daughter with polycystic kidney disease, leading to the child’s rapid death. Park’s obstetrician thereupon assured her that the disease was not genetic, and that there was no reason she should not have another healthy child. Park conceived again, but sadly the same result ensued; whereupon Park sued her obstetrician for bad advice, and won. The court ruled that “the right of a child to be born free of [genetic] anomalies is a fundamental right.” Mukherjee points out that “this was eugenics reincarnated.” In other words, the court had ratified an expectation that the particular genetic mutation that caused harm to the Park family violated their rights—in effect, that that mutation should not exist. In the coming world of gene manipulation, we can expect that many mutations now classed as “abnormal” will be similarly classified and excised from existence. But as Mukherjee reminds us again and again, if we can expect anything, we can expect that conditions will certainly change. What appears “normal” now may one day be considered to have had only temporary value, suited to a very specific time and place. As Mukherjee notes at the end of his book, “Normalcy is the antithesis of evolution” (481). That is, though we have come to distrust and despise “mutations” that compromise what we consider ‘normal,’ evolution absolutely requires them, requires a gene pool that is as varied and diverse as it can be. Mutations are the lifeblood of such diversity, the bank on which evolution relies to adapt to ever-new circumstances. And equally important, evolution does not proceed according to human wants or needs, or the wants or needs of any organism. Evolution proceeds according to what works, what is adaptable to a given circumstance at a given point in time. There is no good or bad adaptation. There is no good or bad mutation. There is no “normal,” much less “best,” genome or genetic code. No one can ever know what might be needed. So before humans go about eliminating whatever appears negative or useless in any given era, they should think twice: they may be eliminating precisely that which might one day prove to be our salvation. Here is how Mukherjee puts it towards the end of his book:

“Gene editing,” the stem cell biologist George Daley noted, “raises the most fundamental issues about how we are going to view our humanity in the future and whether we are going to take the dramatic step of modifying our own germ line and in a sense take control of our genetic destiny, which raises enormous perils for humanity” (479).

Siddhartha Mukherjee uses the history of eugenics as an object lesson: the ‘enormous perils’ of humans ‘modifying our own germ line’ are perils not just for humans, but for all life on this planet.

Lawrence DiStasi

Friday, July 22, 2016

TrumpSpeak


My title, as many of you will recognize, is a variant of the word “Newspeak” from George Orwell’s dystopian novel, 1984. Whether one should credit Donald Trump with coining a new form of speech may be questionable, but watching his performance last night, I was struck not so much by the laughable misrepresentation of almost all his alleged “facts” (if you want a good rundown of how each of Trump’s ‘factoids’ was grossly exaggerated, de-contextualized or outright lied about, see the Washington Post piece here: https://www.washingtonpost.com/news/fact-checker/wp/2016/07/22/fact-checking-donald-trumps-acceptance-speech-at-the-2016-rnc/) as by his speech patterns. (In case you’ve forgotten, one of the reasons fact-checking doesn’t matter much for a Trump audience has to do with their ‘stone-age brains.’ Briefly, most people employ a quick, instinctive estimate, done in milliseconds, of a politician’s looks and/or manner, and completely bypass the reasoning process behind the information he delivers. This accords with the stone-age brains most of us still work with in interpersonal relations.)
            So, for now, let’s bypass the howlers Trump spouted in his overly long but factually empty speech, and attend instead to the patterns of rhetoric he used. To begin with, the man seems to be mostly driven—both in his domestic critiques and his foreign ones—by the notion of “getting a good deal.” This would figure, since his life seems to have been devoted to deal-making in the high-risk world of (mostly) Manhattan real estate. It is a world dominated by con men and hucksters who are always out to screw the naïve or the unwary. The New Yorker, therefore, must always be on his guard to make sure he’s not being screwed. This applies to all New Yorkers in all areas of life, but especially to those engaged in the dog-eat-dog world of real estate developing. Accordingly, Donald Trump’s rhetoric is full of critiques of his predecessors like Hillary and Obama and Bill Clinton for “not getting a good deal.” In his eyes, they gave away the store in the Iran nuclear deal; they gave away the store in Libya and Syria and Russia and China and especially in trade deals like NAFTA and the upcoming TPP. In short, previous political leaders succumbed to the cardinal sin in Trump’s world: they didn’t negotiate hard or cleverly enough; weren’t willing enough to play hardball; weren’t willing enough to talk tough and walk away and threaten and harangue. Now, of course, Trump has no way of knowing this; he wasn’t there; has never been engaged in any diplomatic activity or anything remotely political; and certainly is not about to consider the way that the United States has totally dominated and exploited almost every relationship it has entered in the post-World War II years. No. All he’s willing to bray about is how weak the nation has become, i.e. how it can no longer dictate the terms of every agreement due to its position as the biggest, baddest, most powerful nation on the globe. So he claims that he, the great real estate wheeler-dealer, will be able to make ‘better deals’—even, presumably, with those shirkers at home who want a free lunch.  
            And that brings us to the second noticeable rhetorical pattern. Trump never explains exactly how he’s going to accomplish all this. All he does is, first, exaggerate the problem—we’re besieged by criminals and loafers domestically and by terrorists from abroad, our cities are falling apart, our industry has all left for cheaper shores due to bad trade deals, cops are being murdered at the highest rate ever—and then assert that he’s the one who, with his superior deal-making ability, will fix the problem. Crime will end. Immigration will end. Terrorism will end. Globalization will end. Inner-city poverty will end. And he, Donald Trump, will end it.
            But how? These are complex, difficult problems that Republicans and Democrats alike have been promising to solve for decades. Not for Trump. The language is simple, the problems are simple, the solution is simple: Put Trump in Charge. And soon, trillions of dollars will be pouring into the nation’s coffers, taxes will be far lower, saving everyone more trillions, roads will be built, infrastructure will be modernized, onerous regulations will disappear, freeing up our energy sources (never mind the pollution or global warming) and pouring in even more trillions, and America Will Be Great Again.
            It is simple. And it is simpleminded. And the stone-age brains crowding the Republican Convention could not cheer loudly enough or stomp hard enough or chant USA! USA! USA! often enough to roar their approval. Their devotion, even. Their lord and savior was saying it. He was saying it with confidence and certainty and with his jaw jutting out like some latter-day Benito Mussolini, and they were ecstatic (as Mussolini’s crowds often were). He would talk tough. He would be tough. He would just take those over-educated fancy-nancy diplomats and bureaucrats by the throat, saying ‘fuck your reasoning and diplomacy and equity,’ and force them to give him a good deal. And if they didn’t, he’d bomb the shit out of them.
            And that’s it. After the longest speech in convention history, Donald Trump managed to say virtually nothing but the same posturing, simple-minded crap he’s been spouting throughout his primary campaign. Leaving the rest of us, the ones searching for some sort of program or plan or logic to his meandering speech, to wonder: how can they swallow this infantile pap? How can they not see that this guy has no capacity for any thought that’s longer than a sentence or two? Did you notice that? He never stayed with one subject for any sustained length of time: it was all quick cuts, as in a commercial. Crime in the streets. Shooting cops. Terrorists. Hillary and Libya, Iraq, Syria, Egypt. NAFTA. China. Back to high unemployment. Obama care. It reminded me of what was revealed in Jane Mayer’s recent article in the New Yorker where she interviewed Trump’s ghostwriter Tony Schwartz (he wrote The Art of the Deal for Trump)—i.e. that Trump had no capacity whatever to focus on anything for longer than a minute or two. Trying to interview Trump, said Schwartz, was like trying to interview a chimp with ADHD (my metaphor). The man had no capacity to concentrate at all, so Schwartz ended up following Trump around, listening in on phone calls and interactions and inspections, to scare up material for the book. The other thing Schwartz noticed—after Trump threatened him with a lawsuit and demanded that he return all the royalties Schwartz had earned from the bestseller—is that Trump’s famously thin skin demands that he instantly attack anyone who criticizes him. We all saw that in this Spring’s Republican debates. What Schwartz reminds us is how frightening this quality would be in a President: 

“The fact that Trump would take time out of convention week to worry about a critic is evidence to me not only of how thin-skinned he is, but also of how misplaced his priorities are,” Schwartz wrote. He added, “It is axiomatic that when Trump feels attacked, he will strike back. That’s precisely what’s so frightening about his becoming president.” (Jane Mayer, “Donald Trump Threatens the Ghostwriter of ‘The Art of the Deal’”, New Yorker, July 20, 2016.)

            Donald Trump, in short, gave one of the most consistently alarmist acceptance speeches in American political history last night. But what we should truly be alarmed about is ever ceding the enormous responsibility and power of the American presidency to a man who is so ill-equipped—emotionally, mentally, and morally—to handle it. For if, with the help of a gang of speechwriters, he is unable or unwilling to put together a cogent argument that at least attempts to fill in some of the missing spaces of TrumpSpeak, then every American with an ounce of sense should be terrified about how those missing spaces might eventually take some reckless, cataclysmic shape.

Lawrence DiStasi

Friday, July 8, 2016

From Chilcot to ISIS


The bombshell in Britain in recent days has been the long-awaited (seven years in the making) report by Sir John Chilcot condemning Britain’s role in the 2003 invasion of Iraq. Most Britons, like most Americans, have long since concluded that the invasion was a disaster. But though the report fails to assign legal culpability (which many Britons who lost loved ones in the invasion hope to get), it does roast former prime minister Tony Blair pretty thoroughly. It says, in part, that his

 “judgements about the severity of the threat posed by Iraq’s weapons of mass destruction—WMD—were presented with a certainty that was not justified” and “Despite explicit warnings, the consequences of the invasion were underestimated….It is now clear that policy on Iraq was made on the basis of flawed intelligence and assessments. They were not challenged, and they should have been.”

It also explicitly condemns Blair (known in Britain as ‘Bush’s poodle’) for blindly following the lead of President Bush, citing a letter Blair wrote in July 2002 promising that “I will be with you whatever…” According to the report, this was Blair’s only success: appeasing George W. Bush.
            That the report took seven years to appear is in part attributed (by a 2003 report in London’s Independent cited in Alternet’s account of the Chilcot release) to a “fierce battle” waged by the U.S. State Department and the White House as early as 2003 to block release of the report because it allegedly contained “classified information.” Whether the release of the report in 2003 would have saved lives, either British or Iraqi, is not known, but it might at least have caused some re-evaluation of the Bush administration’s rationale for the invasion, which in turn might have led to Bush’s defeat in the 2004 election. Instead, of course, we got four more years of the worst presidency in history.
            If this were the end of it, the Iraq war blunder would still count as a horror costing millions of lives, but not as grave or as extended a horror as it subsequently turned out to be. For the current plague of ISIS attacks in Iraq, Syria and now throughout the Middle East and the world stems directly from the hubris and secrecy of the Bush Administration during that time. This is made clear in a recent (first aired May 17, and again, when I saw it, on July 5) Frontline documentary: The Secret History of ISIS (http://www.pbs.org/wgbh/frontline/film/the-secret-history-of-isis/). What the documentary reveals is how ISIS was able to thrive and grow through a series of blunders—mainly driven by “optics”—regarding its first leader, one Abu Musab al Zarqawi. We learn that Zarqawi was known to the CIA even before the invasion in 2003: according to Nada Bakos, a CIA analyst charged with looking into his background, Zarqawi was a tough kid who grew up in a tough neighborhood in Jordan, one who appeared on his way to a lifetime in prison as a thug, pimp, and general hardass covered with tattoos. But one stint in prison radically changed him: he became a jihadist, a holy warrior; and to demonstrate his zeal, he actually removed his tattoos by using a razor blade to cut off his outer layer of skin. After that, he left Jordan for Kandahar in Afghanistan, determined to join up with Osama bin Laden. But bin Laden ignored this wannabe from Jordan, and in 2002 Zarqawi saw a chance to strike out on his own, this time in Iraq. He set himself up near the Iran/Iraq border and began building his den of crazies. Fortunately, the CIA had an informant in Zarqawi’s camp and saw him as a definite threat in the event of an invasion, particularly as Zarqawi’s group was apparently trying to build chemical and biological weapons. CIA officer Sam Faddis, assigned to the case, therefore formed a plan to take him out, and forwarded the attack plan to the White House for approval.
            But the White House, in the person of VP Dick Cheney and his aide Scooter Libby, wanted no part of the takeout, especially before the big invasion, so Cheney and Libby drove to the CIA to undermine its information. From their aggressive questioning, it was clear that the White House had more in mind than simple worry about a strike that might pre-empt its war plans. They had concocted a narrative concerning Saddam Hussein’s al Qaeda connection and involvement in 9/11 as a big part of their casus belli. And when the CIA said there was no connection, it was clear that Cheney and Libby badly wanted there to be one. This would eventually lead to Colin Powell’s memorable speech at the UN, in which the Secretary of State besmirched his reputation by accepting the White House’s script—which he, uncharacteristically, read verbatim. And though the White House appeared to follow protocol by sending the script to the CIA for vetting, Nada Bakos testifies in the documentary that the White House simply ignored the CIA’s corrections and stayed with its required script. As Colin Powell authoritatively put it:

“…there’s a sinister nexus between Iraq and terrorist networks. Iraq today harbors a deadly terrorist network headed by Abu Musab al Zarqawi, an associate and collaborator of Osama bin Laden and his lieutenants.”

When confronted in the Frontline documentary about this clear fabrication in his UN speech, Colin Powell claims that his memory now is vague, but insists that his references to Zarqawi were unimportant to his general case. The truth is that a full seven minutes of the Powell speech were devoted to Zarqawi, who is mentioned no fewer than 21 times, thus firmly connecting Iraq and Saddam to the terrorist network that had already attacked the United States on 9/11. Not incidentally, Powell’s speech also transformed Zarqawi into a major terrorist directing a worldwide terror organization. It is almost as if Colin Powell created Zarqawi, and ISIS, at that very moment.
            From this point, everything that the United States did played into Zarqawi’s hands. First came shock and awe, tearing apart a nation. Then came Paul Bremer, the moron placed in charge of the Coalition Provisional Authority, who not only dismantled the entire governmental structure of Iraq, but then fired the entire military, leaving some quarter of a million experienced soldiers without a job or means of livelihood. Zarqawi wasted no time in recruiting thousands of these Sunni ex-soldiers, and they today form a major portion of the ISIS forces. Even General David Petraeus testifies in the documentary that the effect of Bremer’s move was “devastating” and planted the seeds of the insurgency. Zarqawi’s attacks began almost immediately, with devastating car bombs that turned Baghdad and the rest of Iraq into a charnel house of raging sectarian war. That he planned to do this was clear from a letter Zarqawi wrote laying out his plans: he wanted Iraq torn apart by a sectarian conflict that would leave it vulnerable to his more ambitious plan to create a caliphate. Bombing the UN headquarters added to the chaos, because both the UN and the NGOs that might have provided some protection and order immediately fled Iraq.
            It was at this point that Nada Bakos sent a briefing document to the White House saying specifically that Zarqawi was responsible for the major attacks and was looking to foment a civil war. It got to Scooter Libby, who then called Bakos and summoned her to his office, clearly to pressure her to change her main conclusion, i.e., that there was an insurgency in Iraq that threatened the entire American project. It was that word, insurgency, that the White House found toxic. It implied that the Iraqi people weren’t completely overjoyed about the American invasion. Again, it was the optics that the White House wanted to change. So the White House, especially Donald Rumsfeld in press conferences, ridiculed news reports that focused on the alleged chaos—chaos whose existence they vociferously denied. The denial, of course, made it impossible to combat the insurgency, which was allowed to grow unhindered.
            Zarqawi made the most of such denial. He instituted a reign of terror that had never been seen before, beheading the American Nicholas Berg on camera to establish his credentials as a slaughterer of epic proportions (one of his monikers was the Sheikh of the Slaughterers). And though even Osama bin Laden tried to slow him down, objecting to the killing of Muslims by other Muslims, Zarqawi’s response was to blow up one of the most sacred Shia sites in Iraq, the Golden Dome of Samarra. This was the final straw for Shias, and all-out sectarian war ensued—exactly what Zarqawi wanted. Shortly thereafter, he showed himself on camera firing an American automatic weapon to emphasize his power and ruthlessness, as well as his plan to set up an Islamic state as the first step in forming a global caliphate.
            We know the rest. Even though Abu Musab al Zarqawi was finally killed in a U.S. airstrike in 2006, his fiendish methods and plans have been continued by his even more ruthless successor, Abu Bakr al Baghdadi. What’s most disturbing is that all of this—the destruction of Iraq, the refusal to take out Zarqawi when the CIA wanted to, the idiocy of disbanding and setting adrift a quarter million potential fighters from the former Iraqi army, the mania to sanitize and justify the whole bit of lunacy in the first place—all of it might have been prevented if saner heads had prevailed. But of course, that is what marks the late lamented Bush Administration: lunacy and hubris (and an optimistic savagery) from top to bottom. At this point—with so many lives lost or ruined, and the Middle East in unprecedented chaos—all we can do is hope we shall never see its like again.

Lawrence DiStasi

Saturday, July 2, 2016

'Killer App' Addendum

Scanning my college alumni magazine, I came across a piece by Judith Hertog called “A Monitored State.” Since it relates closely to my earlier blog, Killer App, I thought its report might be useful here as a gloss on that piece. “A Monitored State” describes Dartmouth professor Andrew Campbell’s experiment monitoring student behavior via the smartphones that virtually all students carry and use constantly. A paper he wrote described how smartphone sensor data “contain such detailed information about a user’s behavior that researchers can predict the user’s GPA (grade point average) or identify a user who suffers from depression or anxiety.” In this study, called Student Life, 48 student volunteers allowed Campbell’s team to gather a stream of data via an app installed on their smartphones. The app “tracked and downloaded information from each phone’s microphone, camera, light sensor, GPS, accelerometer and other sensors” and then uploaded it to a database. By analyzing the data, Campbell’s researchers were able to record details about each student’s location, study habits, parties attended, exercise programs, and sleep patterns. For at least two students, Campbell was even able to see signs of depression: “I could see they were not interacting with other people, and one was not leaving his room at all,” Campbell said. Both failed to show up for finals, whereupon Campbell gave them incompletes and encouraged them to return in the fall to complete his and their other courses successfully. What Campbell draws from this is that, in the future, not only will universities be able to intervene to help students in such situations, but such information will be available in real time to monitor everything, including the state of every student’s mental well-being.
            Campbell has also collaborated with brain science colleagues “to discover how smartphone sensor data can be combined with information from fMRI scans” in order to eventually create apps that not only identify mental problems but also “intervene before a breakdown occurs.” In fact, in a follow-up phase of his study, he got student volunteers to submit to fMRI scans and to wear a Microsoft smart band that collected body signals like heart rate, body temperature, sleep patterns, and galvanic skin response—all associated with stress. Thus, more than simple behaviors, today’s technologies can (and already do) detect, grossly at least, an individual’s state of mind. One of Campbell’s colleagues predicts that, in addition to identifying which individuals are “most susceptible to weight gain,” the smartphone of the future will be able to warn when “its owner enters a fast-food restaurant.”
            The potential threat from all these technologies has not been lost on Campbell and his colleagues. His collaborator, Prof. Todd Heatherton, is already worried about a future determined by the constant collection of the data monitored by smartphones, and its use by companies (insurance underwriters, for instance) to determine who gets insurance and how much they pay for it. Heatherton was also shocked by how casual students were about sharing such personal data for his study. But clearly, this generation is already used to sharing just about everything on apps like Find Friends (an app that broadcasts one’s location to everyone in one’s network). For Heatherton and others, this raises important questions about the ethics of all this technology and how far it can be used to monitor every detail of our lives. James Moor, a Dartmouth philosophy professor specializing in ethics, worries about how information about a person’s entire life could be used by governments wanting, for just one example, to monitor those on welfare. Or by totalitarian governments that could use such data to keep potentially rebellious populations under rigid control.
            Campbell himself worries about the same thing, hoping that legislation will be forthcoming that will at least give individuals ownership of their own data (now being used by Google and many others for commercial purposes and more). People need to think about this, he says, and realize that “we are turning into a monitored state.” Or perhaps already are.
            Even George Orwell couldn’t have imagined such an easily ‘big-brothered’ state—and all thanks to those adorable smartphones.  

Lawrence DiStasi

Wednesday, June 29, 2016

Brexit's Tectonics

Like just about everyone else on the planet, I have been trying to sort out my reactions to Brexit—the British vote in favor of exiting the European Union. And what occurred to me even at the very time I heard it was this: maybe it’s a necessary warning sign to the neoliberal powers-that-be that globalization, not democracy, is what’s run amok. Far from being just a protest vote against the influx of “foreigners” and migrants—in other words, the racist reaction from the great unwashed of the British lower classes—it may well go far deeper. It may be, that is, a cry of the heart from those who do not want to be homogenized in the great likeness machine of global corporatocracy that seeks to make everyone a stamped-out cog in the consumer-exploiting Walmarts of the world. That’s what occurred to me almost instantly when I heard the news. Nationhood may be anachronistic or even dangerous in this ever-more-connected world, but it’s also one of the few things that has a chance of keeping different sections of the globe unique. And what we need now is more of it, not less—more distinctiveness in separate populations, more distinctiveness in dress, language, buildings, customs, ways of doing and being. Skyscrapers in Dubai, no matter how marvelous the technical skill they demonstrate, simply strike one as completely out of touch with their surroundings. We need people and populations that are more in touch with their surroundings, more unique to the particular flora and fauna in which they arise. And this may be what Brexit and the Trump phenomenon in our own country are more deeply about.
            We hear about the chaos in the financial markets: the British pound dropping like a stone, the stock market here and elsewhere dropping similarly, the financial and economic mavens predicting more and more dire outcomes from uncertainty. And though no one wants to see another financial crash, what we need to do is understand that perhaps this is precisely what’s needed to wake these guys up. Just consider what the financial wheeling and dealing of the past few decades has led to: conditions of inequality in both Britain and the United States that are almost unprecedented. The corporate CEOs, the banksters, the hedge fund managers are making obscene amounts of money and living like oriental potentates, while the working slobs have been going steadily backwards. More and more people lose their jobs to foreign countries whose workers slave away at wages that make competition impossible. More and more corporations rush to have their goods made in these foreign factories, shifting them whenever another country offers yet lower wages. And the gulf between the very wealthy few running things and the masses of impoverished working stiffs racing to the bottom grows ever wider. There is a professional class which manages to stay reasonably solvent—the bank managers, the professoriat, the politicos. But they hew to the party line of whoever’s in power, Conservatives or Labourites, Democrats or Republicans, and maintain their insider edge regardless of who’s got the reins (see Thomas Frank’s recent exposé of the Democratic Party, Listen, Liberal). Meantime, those trying to catch up find themselves always deeper in debt, even the middle classes who have to incur a lifetime of debt to afford a college education. So there’s a logic to chaos in financial markets. These usurers should find themselves in chaos. They should find themselves at the bottom of a pit. But of course, they usually don’t. And those who land in the pit are the suckers who buy into the myths of progress and globalization and trade deals making everyone richer, and all the other myths about the benefits of trade we’re constantly sold.
            What occurs to me, then, is that this isn’t about politics so much as economics. And there is a difference. I, for one, am in favor of the United Nations and attempts to keep the violence of the world under reasonable control. This requires that nation states give up some of their sovereignty, which always elicits protests and anguish from the breast-beaters on the Right. But by and large, with some notable exceptions such as Israel’s continuing occupation and ethnic cleansing in Palestine, the system has worked fairly well. Invasions by one aggressive state of another’s territory have pretty much been limited—though not entirely eliminated. The condemnation attaching to naked aggression such as we saw in the 1930s and before has made such ventures too costly to most nations’ global reputations. This, again, is not to say that such aggression has been totally foreclosed, but it has, for the most part, been priced too high for most nations to incur lightly. The loss of sovereignty is worth the gain in peace (or at least accommodation).
            In the economic sphere, however, the situation is almost diametrically opposed, and it is not nations that are at issue so much as trans-national corporations. This is mainly the result of trade agreements like NAFTA and the still-unratified TPP: in the economic sphere, the major offenders are not nation-states but corporations and the aptly named ‘vulture capitalists,’ many of which have simply transcended national boundaries. Indeed, the terms of trade agreements in a globalized world have meant that national sovereignty has become subservient to corporate rights—the right to make a profit. One example says this loud and clear: TransCanada, the corporation that had planned the Keystone XL oil pipeline from Canada through the United States, has just filed suit demanding $15 billion in compensation for the “expected profits” it stood to make had the pipeline been approved. This accords with the boilerplate language in such trade deals: corporations have been essentially granted the right to “expect” profits from planned ventures in any nation they choose, and if such plans come into conflict with a nation’s determination to prevent the despoliation of its territory or people, then too bad. The corporation has prior rights here—a right to sue for damages to its profit—while a nation has no right to prevent damage to its land or water or environment—or to the globe itself in the case of global warming. This, to me, is about as outrageous as capitalism gets. The underlying notion is that profit is sacrosanct, and takes precedence over considerations of human health or the health of the nation and planet itself.
            This, I think, is what is really at issue in the Brexit vote and the Trump/Sanders phenomenon in the United States. The people who are being crushed by the depredations of big corporations—which pursue profit anywhere and everywhere, no matter the damage to the nations in which they reside—have begun, if only dimly, to catch on. Perhaps they don’t see beyond the slogans and xenophobia. Perhaps they can’t or won’t articulate what is really at the heart of their malaise. But on some level they understand. This is why they want to “take back their country.” What they want is some control, or someone they elect to control the monsters called corporations and financial institutions that seem able to roll over anything in their way with no consequences. What they want is some control over the obscene redistribution of wealth upwards that has taken place in the past half-century. What they want is something like fairness in the way their lives are disbursed, some more direct connection to what they know, to what they can see, rather than some distant insider decision-making that is invisible to them. What they want is some indication that their vote can have some effect on the levers of power despite the fact that they are not mega-rich. Brexit  for once gave many in England that indication. And my guess—and my hope—is that more Brexits are on the way—especially if the powers that be do not wake up to the earthquake that has just struck them.

Lawrence DiStasi
           

Wednesday, June 1, 2016

Killer App


I have just finished reading Sherry Turkle’s recent book, Reclaiming Conversation: The Power of Talk in a Digital Age (Penguin: 2015). Being without a smartphone, and never having texted in my life, I found Turkle’s research into what smartphones are doing to young people (and to many of their parents) shocking. Consider some stats first: a) average Americans check their smartphones every 6-1/2 minutes (actually, college students in one of Turkle’s classes say that they can sometimes go 3 minutes without a phone check, but the more likely limit is 2 minutes!); b) fully one-fourth of American teens connect to a device within 5 minutes of waking (80% sleep with their phones); c) most teenagers send about 100 texts every day; d) 44 percent of teens never “unplug” at all. Now let me quote what Turkle says about the power these smartphones have to enslave us: “It (the smartphone) is not an accessory. It’s a psychologically potent device that changes not just what you do but who you are” (319). Keep that in mind: using a smartphone changes who you are, it changes your brain, it changes how you behave, it changes how you talk and relate to and treat other people—and mostly not for the better. This is the sum and substance of Turkle’s book (she is a professor of sociology and psychology at MIT who specializes in the effects of technology on modern life). Though it may not be too late (her title implies that we can, if we are determined to, ‘reclaim conversation’), Turkle’s research shows that things have gone very far indeed.
            Let me cite just a few of the examples Turkle provides. First of all are the rules that young people now live by—and here I should say that I found myself at first contemptuous of, and then feeling deep sympathy for, these kids (to me, even the thirty-somethings are kids who have grown up with technology) whose interactions, like dinner or dating, are now governed by their devices and the “apps” they use on them. The rules are ubiquitous and bizarre, but clearly necessary. There’s the “rule of two or three” at meals: students Turkle interviewed at one college make sure that at least two people in a group of seven or so at dinner are NOT on their phones before they allow themselves to check theirs; if fewer than two are paying attention to the conversation, it doesn’t work. As Eleanor says of observing the rule, “It’s my way of being polite.” The corollary is that conversations, even at dinner, even among friends, are fragmented (and hence “lighter” of necessity): everyone is more interested in checking what might be on their phones, or who might be texting and require an immediate response, than in the people they’re actually with—much less what they’re saying.
            And this gets to one of the major points in the book. The ability to converse is atrophying among smartphone users. Family members at dinner constantly check their phones. Kids in class and on dates and at parties check their phones. And much of the checking involves texting—the major form of communication among phone users. That is, people don’t “talk” to each other on their phones; they “text” each other. And there are strict rules among friends who text—and many kids count as many as 100 texting friends in their circle. If someone texts you with an ‘emergency’ (for teens, every slight is an emergency), you have at most 5 minutes to respond. If you don’t comply within that time limit, then you risk losing that friend because your delay in responding is taken as an insult. So kids with phones tend to be hypervigilant—they don’t want to miss an important text, which is why they can’t stand not checking their phones every 2 minutes. The other reason they can’t stand not checking their phones is what they call FOMO: Fear of Missing Out. Something better than what’s happening here and now might be going on. FOMO haunts even those who are at parties, or in bed with a partner! One of Turkle’s informants described being at a party, but being compelled to check her phone (everyone was doing this for the same reason) to see if a friend was at another party that might be hotter. A college student described being in bed with a guy, who got up to go to the bathroom—which impelled her to take out her phone to check her Tinder app to see what men in her area might be interested in meeting, and more. Her comment: “I have no idea why I did this—I really like the guy…I want to date him, but I couldn't help myself. Nothing was happening on Facebook; I didn’t have any new emails” (38). A recent grad named Trevor told Turkle about his college graduation party where “people barely spoke” but “looked at their phones.” And this was okay because
Everyone knew that when they got home they would see the pictures of the party. They could save the comments until then. We weren’t really saying good-bye. It was just good-bye until we got to our rooms and logged onto Facebook (138).

In other words, life is not what’s happening in reality, face to face; it’s what gets reported on Facebook. Likewise, conversation doesn’t happen by talking face to face; that’s too risky; one might say something rash or erroneous. Conversation is what happens on Gchat or when texting—where one can edit one’s responses (or breakup messages) and make them perfect. Real conversation is just too fraught with uncertainty, with emotion, with risk, with the mess that is human life.
            This is serious, America. These machines are changing the way human beings interact. They are changing the way humans feel about each other, literally changing whether they can feel for each other at all. And that is another of Turkle’s major points here. Based on her research and consultations with middle schools in her area, she points out that without the give and take of face-to-face conversation, many young people are losing no less than the defining human capacity of empathy. Clifford Nass, a researcher at Stanford, looked specifically into the emotional capacity of the university’s freshmen. He compared the emotional development of women who characterized themselves as “highly connected” to that of women spending less time online, and found that the former had a weaker ability to identify the feelings of other people (which is what empathy involves), and actually felt less accepted by their peers. As Turkle summarizes it, “Online life was associated with a loss of empathy and a diminished capacity for self-reflection” (41). And no wonder. Texting has become the substitute for having to look someone in the eye, for having to see emotions in their faces and bodies, especially when we have to discuss something that might be stirring or painful. Face-to-face conversation is “too risky”—that’s how most young people put it. Another person’s response to you might get too emotional. And it’s not just teenagers. Mothers and whole families now have fraught family discussions on Gchat so as to avoid possible eruptions of emotion or words that hurt. It takes the risk out of family dynamics, they say. One never has to face someone yelling. But what is being lost? That is Turkle’s question. And her answer is that, essentially, our human-ness is being lost. Children “are being deprived,” she says, “not only of words but of adults who will look them in the eye.” And as countless volumes of research have shown, eye contact is vital to “emotional stability and social fluency: deprived of eye contact, infants become agitated, then withdrawn, then depressed” (108). As to empathy, it seems to be more or less out the window. Turkle quotes teachers at a middle school she consults with:
When they hurt each other, they don’t realize it and show no remorse. When you try to help them, you have to go over it over and over with them, to try to role-play why they might have hurt another person. And even then, they don’t seem sorry. They exclude each other from social events, parties, school functions, and seem surprised when others are hurt…They are not developing that way of relating where they listen and learn how to look at each other and hear each other (164).

When one looks at how romance and other interactions are handled, one can see why. Turkle, for example, describes the NOTHING gambit. This refers to not responding to a flirtatious text. Just silence, nothing. One girl calls it “a way of driving someone crazy…you don’t exist.” And then the proper way to respond to nothing is to pretend, in turn, that it didn’t happen. Because trying to text again saying “Why don’t you get back to me” is simply “not cool.” It’s being a loser. So is responding too quickly to a text. One young man, Ryan, says, for example, that if a woman responds to his text immediately, it might be good, but it might also mean “She’s psycho, man” (188).
            What a terrible burden this must be. The weight of wanting to know whether the other person is interested has always been a cause of anxiety in romantic encounters. But at least when the brushoff happens face to face, something is settled; a human interaction is, literally, faced. Here, nothing is. All simply dissolves in nothingness. You don’t exist. And having to wonder whether even someone you apparently have a connection with might be checking out Tinder or some other app for a better party or a better partner (there are always dozens of ‘partners’ available on Tinder): that must be agonizing.
            Of course, when technology becomes a problem—as Turkle suggests it is—there are always those who count on more technology to solve the problem. Robots seem to be the current solution of choice among MIT engineers. Here is where Turkle takes her story in the end, and it is not encouraging. Apparently, AI engineers are working hard to design robots that can actually provide the “eye contact” and “human” conversation that we are no longer getting from our apps. Speculation is rife that robots will soon be able to perform the daunting caretaking tasks that a fast-growing older population requires. Not enough humans to do the dirty work? Design robots to provide what’s lacking, even as babysitters. And some primitive robots already among us give us data on how that is working out. In one encounter, Turkle describes what happened when a 12-year-old girl named Estelle was brought in to interact with a robot named Kismet. Kismet ‘listens’ attentively to Estelle (elsewhere, Turkle notes that the “feeling that no one is listening to me” plays a large part in what we try to solve with technology), simulates human facial expressions, pretends deep interest in what Estelle says. All goes well until a glitch in Kismet’s program leads to a break in the contact, and Kismet turns away. Estelle is deeply disappointed and shows it by eating cookies voraciously. When pushed to explain, she laments tearfully that Kismet didn’t like her; the robot turned away. The researchers explain that it was simply a technical problem and had nothing to do with her, but Estelle is not consoled. She thinks it’s her failure that Kismet doesn’t “like” her (the “like” on Facebook has become the standard for judging ourselves and our appeal). Another instance involves a young girl named Tara who, in all her actions, is the “perfect child.” But her mother notices that she sometimes talks to Siri, Apple’s voice assistant. And when she does, she vents all the anger on Siri that she has suppressed elsewhere. Tara compartmentalizes, in other words: she has to be perfect with people, but can be angry with Siri. Turkle comments: “if Tara can ‘be herself’ only with a robot, she may grow up believing that only an object can tolerate her truth” (347).
            Is this the place we're getting to, one where only machines can tolerate us? This would be a brave new world, indeed. And the grim truth seems to be that some of us are already there. Sherry Turkle wants us to change. She wants us to “reclaim conversation” with each other, before it’s too late. She draws hope from the fact that the human brain is capable of changing. She has seen this in summer camps where children are prevented from having devices of any kind, and where, after a short time, they do reclaim their human interest in each other and in what’s around them. But she has a warning as well, especially as regards our attempts to fill the well of human loneliness with robots (or machines of any kind). This is because engineers have already shown that they can build toys for children (Furbies, etc.) that feign feelings and that, by getting the child to care for them, instill in the child feelings of attachment to what is a lifeless object, a machine. This takes advantage of a well-known phenomenon: humans tend to impute human feelings to that which seems human—like talking apps that simulate conversation, like our computers. And so, one of her conclusions goes like this:
Nurturance turns out to be a “killer app.” Once we take care of a digital creature or teach or amuse it, we become attached to it, and then behave “as if” the creature cares for us in return (352).

            Killer app indeed. There are many of them already, busily doing their grim work as human surrogates. For, as Turkle writes, “Now we have to ask if we become more human when we give our most human jobs away” (362). It ought to be clear what Turkle thinks about this, and what I think. The question is, do enough others care to think about it? How about you (having been reached, of course, electronically)?

Lawrence DiStasi