Saturday, June 30, 2012

Affordable Care for Whom?

So we now have the official word from the reactionary Supreme Court, in the person of John Roberts, no less: President Obama’s signature piece of legislation, the Affordable Care Act, passes constitutional muster. It does so not, as expected, because it legally regulates interstate commerce (Roberts said it did not, because it tried to punish people for not buying something—a precedent that would allow ‘do-gooders’ to force everyone to eat organic foods), but because it qualifies under Congress’s authority to tax. That is, forcing people who don’t buy health insurance to pay a tax (not a penalty) on their income tax form was somehow OK. I fail to see how this is any different from enforcing the activity under the Commerce Clause, but then I’m not a lawyer. Indeed, what I have always wondered is why Democrats have never made the argument that seems most logical to me: if Americans can be forced to buy automobile insurance (it is a condition of getting a driver’s license and is rationalized in the same way as the health insurance idea: if some drivers do not have insurance, the cost of their accidents has to be borne by everyone else, an unfair burden), then why can’t they be forced to buy health insurance for the same reasons? But they never did, until Friday, when Steny Hoyer finally mentioned this precedent on the News Hour.
            Still, no matter the rationale, it seems inarguable that this unlikely decision by the conservative Chief Justice was salvational for Barack Obama’s chances for re-election. Without it, he would have been a one-term president for sure, and we would be left with that forged-in-plastic excuse for a human, Mitt Romney, as president. Given the damage already done by the Supremes nominated by Bush, the chance for another reactionary to pack the court with still more political operatives would have been catastrophic. It still might be. And even if Obama wins, we’re still stuck with a health care “reform” that does one thing above all: it preserves the privatized system of health care—citizens of the richest nation in the world forced to buy half-assed care from huckster insurance companies—that is the shame of the modern world. Fully 18% of the American economy is spent on health care, and the care overall puts us at the level of the poorest third-world countries—37th overall, I think. By comparison, Canadians, who have a single-payer system, spend half of what we do on health care that covers everyone, and get superior treatment whose effectiveness and efficiency are gauged by an average life expectancy far greater than ours. And the Affordable Care Act will now force millions more to sign on to the ridiculous system that makes profit-making its top corporate priority, and in effect transfers more billions from the people to the HMOs and their CEOs.
            The only hope is that once this ‘improved’ health care system is in place, making it possible for millions more to be covered, the defects in the system—especially its continuing rise in cost—will become ever more apparent, and a single-payer system, aka Medicare For All, will become possible. And what will that take? For one thing, it is going to take a shift in the language used by advocates. This is what still astonishes anyone paying attention to the discussion in America, not only over health care, but over the economic situation in general. That is, the word “poor” has become verboten to politicians on both sides of the aisle. This is most noticeable among Democrats. Does Obama ever mention that health care is desperately needed by those who can’t afford it—the poor? Not on your life. He mentions only the middle class. The middle class needs relief. The middle class needs health care. The middle class needs jobs. The middle class kids who go to college need protection from a rise in interest rates on their loans.
            Whatever happened to the poor and working classes? Have they all graduated to the middle class? Have they been outsourced to some third world country, like everything else? Are there no longer any humans living below the poverty line or in inner cities? And if there are, why have they become—especially for this African American president whose election was seen as so paradigm-shifting, so emotionally liberating by precisely those people—invisible? Unmentionable, even by the party that owes its strength and its moral leverage to its historic pact with and for them? 
            The truth, of course, is that the poor and working classes are still with us—their condition steadily worsening, especially in the climate of economic penury resulting from the 2007 financial collapse (just for reminders: there are 20 million Americans with incomes below half the poverty level; 6 million living only on food stamps; unions losing benefits on every front). The truth is that they are, as many have pointed out, in danger of becoming a permanent underclass due to the outsourcing of those manufacturing enterprises that once provided them real jobs and improved living conditions. The bitter truth is that they are in even more serious danger of being wiped out as a class, their purchase on the American promise weakening to the point where their providers have no prospects, their children are abandoned in declining schools, their young men are slammed into prisons and third-class citizenship for minor offenses. The final truth is that their vote—when they are not stricken from voter rolls by Republican sleight of hand and outright fraud—is becoming as meaningless as their (in)ability to buy elections.
            It is for this reason that no one—not Obama, not Democrats running for office, not social architects of reform—mentions them anymore. Mentioning the poor or working classes has become equivalent to mentioning Marxism or socialism or atheism or, god forbid, abortion. It just isn’t done by the better sort. Except, of course, when Republicans refer to them obliquely as an unbearable burden—as when they lament that the government, with Obamacare, is going to have to subsidize their health care, to the point where it will cost trillions of dollars we can’t afford. Or when states insist that they will refuse the new expansion of Medicaid (again, for the poor), even though the Feds give them the money to finance it! It’s only for those immoral deadbeats after all.
            Thus an entire class is in the process of being not merely marginalized—we can’t afford the poor anymore—but wiped out. One wonders, grimly, how long it will take, and whether anyone (of the better sort) will notice, or lift a finger to prevent it, as it happens.

Lawrence DiStasi

Tuesday, June 26, 2012

Land Grab in Africa...Princess Grab in America

There really is no obvious connection between the two elements in my title, but both are on my mind so we’ll see if they come together.
            Africa first. The continent whence we all came (and not that long ago; most archeologists now think it was a mere 50,000 years ago), Mommy Africa, has been the object of colonial exploitation for centuries, so you’d think that in this “post-colonial era” our continent of origin might be given a break. Not a chance. Now that the slave catchers and British plantation owners and apartheiders have been given the boot, the exploiters simply line up for the next, slightly more subtle phase: getting title to the farmland. The facilitators are the post-colonial governments, all desperately in need of cash and development. The victims are, as usual, the poor farmers trying to eke out a living on small plots they’ve always depended on for subsistence. The villains are big multinational corporations seeking to cash in on what they see as the next big profit-generator: food in a world moving towards mass starvation. So the smart money is going towards the buying up or long-term leasing of farmland in areas already half starving themselves—sub-Saharan Africa, in countries like Sierra Leone and Ethiopia. That it will put small farmers and their families out of business, push millions more of these victims into large cities or refugee camps where they will be unable to work or feed themselves and provide the next images of babies with swollen stomachs and flies feeding on their eyes—this doesn’t seem to bother or deter these captains of finance. Their only goal is to find the next big cash cow, and food-growing is apparently where they think it’s at.
            The Oakland Institute is at the forefront of trying to draw attention to the developing bonanza (or catastrophe, depending on which view you take). Its website features several articles drawing attention to what it calls the “growing global concern about the race for prime farmland in the world’s poorest countries.” The article “Sierra Leone: Local Resistance Grows as Investors Snap up Land” (Guardian, April 11, 2012), for example, points out that between 2000 and 2010, the rush for land claimed upwards of 200 million hectares of land, mostly in sub-Saharan Africa, including 17% of Sierra Leone’s arable land. Another article, reporting on a conference sponsored by the Oakland Institute itself, filled in what kinds of commodities the new industrial plantations grow for export: sugarcane for ethanol, crude palm oil, rubber, and large-scale rice production. Speakers at the conference decried these developments, pointing out that just as small farm holdings (they employ about 3.5 million people—roughly 2/3 of the population) were beginning to come back after Sierra Leone’s long civil war, their recovery is being threatened by the land grab. As one farmer, who has lost her cropland to the Swiss investor Addax Bioenergy, lamented:
 “How are we going to get food security if you give all the upland land to the investors? We beg you to listen to us….We are suffering because we have nowhere to go. You come out from war, build a house and now when you speak out, they lock you up.”
A landowner and member of parliament, Sheka Musa Sam, referred to the recent lease of 6,500 hectares of land in his district by Socfin Agricultural Company:
“There is no way we can just sit down for 50 years without getting a living. We need to come together and form a united front. We can’t let them make us slaves on our own land. This evil thing will make the poor people even poorer.”
What Musa Sam and others referred to were the endless problems caused by the land deals:
            the overwhelming negative impacts on women who lose their livelihoods and food production; the effect on children’s education who have to drop out of school because their mothers can no longer pay their school fees; increased hunger, rising food prices and despoiled water supplies; the devastating environmental effects of the investors’ operations; and also concerns about the way the industrial plantations shred the social fabric of rural communities, causing marriage breakdowns, unwanted teenage pregnancies, increased incidence of sexually transmitted diseases, and even the loss of self-esteem when one loses one’s self-employment.
In short, what is being reproduced in Africa are all the ills of industrial farming and industrial development we have seen globally.
            The Oakland Institute recently wrote a letter to President Obama urging him to discuss the issue of the land grab in a May 19 meeting he had with Ethiopian Prime Minister Meles Zenawi. Whether the letter did any good is anyone’s guess, but we can imagine that concern for African farmers was not nearly as high on the U.S. priority list as keeping Ethiopia from becoming a ‘safe haven’ for terrorists. What was probably also high on that list was making Africa equally ‘safe’ for genetically-engineered seed from the likes of Monsanto. For although this President has direct roots in Africa, he can never forget what another president (the great Calvin Coolidge) once memorably said:
            …the chief business of the American people is business.
            Which forms a nice segue into the second topic at hand: the business of Princess-making. This is the preoccupation of Berkeley writer Peggy Orenstein—who calls it the Princess Industrial Complex. Her recent book, Cinderella Ate My Daughter, lays out the idea. What Orenstein, herself the mother of a young girl, has found over several years is that American girls are everywhere bombarded with a self-image that seems cute and harmless on the surface, but in reality masks its own kind of grab: a grab for the minds of children and the dollars of parents so eager to indulge their daughters in the fantasy that they are innocent royals as to be blind to the gross commercialism in which they are partaking and to the underlying dynamics of this fantasy. In a recent article on the subject, “Dodging Disney in the Delivery Room” (NPR, Feb. 9, 2011), Orenstein quotes from a NY Times report that
            “Disney has begun sending sales reps into 580 hospitals nationwide. The reps are offering new moms, within hours of giving birth, a free Disney Cuddly Bodysuit for their babies if they sign up for e-mail alerts from […]. The idea is to encourage mothers to infuse their infants with brand loyalty as if it is mother's milk.”
Yes. You read that right: now, not even the delivery room is safe from the predatory sales pitches of corporate America. This sales “push” is meant to do two things: first, to duplicate the success among preschool girls of the Disney Princess line, launched in 2000 and now earning a staggering $4 billion annually from a product line numbering no fewer than 26,000 items! And second, to rectify what Disney execs noticed as a gap in their sales: little girls were not becoming consumers until preschool, “resulting in a good three years of potential revenue loss.” If Disney could get expectant mothers thinking about Disney outfits in the delivery room or earlier, it would be, according to one Disney exec, a “home run.” No wonder. The Advertising Educational Foundation sees 1-year-old infants as an “informed, influential and compelling audience,” able at 12 months to recognize brands and be “strongly influenced” by advertising! Orenstein rightly counters this, pointing out that “studies show that children under 8 years old can’t distinguish between ads and entertainment. Until then, they don’t fully comprehend that advertising is trying to sell them something…” But tell that to the predatory bastards in the executive suites and the ad agencies.
            Orenstein has a lot more to say about the perils of the Princess Industrial Complex, specifically that beneath the cute pink that is pushed upon girls everywhere, even in supermarkets, there is a cultural demand: you are what you look like, what people who look at you think of you. Therefore, what you look like as a girl, as a princess in cute pink dresses, and later as a teen and woman obsessed with appearance, should be your constant, your sole preoccupation in life. Never mind your ability to run and jump, never mind your ability to read or calculate or think, never mind your ability to carve out your own, decent path in life. You are, you had better be, a clean, pretty and pink object, period. Orenstein also points out what is less obvious: that beneath this preoccupation with pink innocence lies a parental fear of their daughters growing up, of adulthood, of adult sexuality. It’s of course a mixed message, because on the one hand girls are being schooled to think of themselves as pretty and desirable, but on the other are being schooled in maintaining an impossible innocence. The same dual appeal, of course, lies at the heart of the Disney world in general: a mania to represent historical eras, but stripped of all threat and dirt and reality, rendered innocent and clean as they may exist in fantasy, but never in reality. And the rubes just love it.
            What can never be forgotten, though, is the main objective here: making money off innocence or the promise of innocence, making a profit from the impulse to escape reality and dwell in the land of Cinderella (her wicked stepmother and sisters cleansed of the reality of the original fairy tale) and the handsome prince who will come to rescue her.            
            Think about it next time you are tempted to buy a princess-themed present for the delightful little girl in your life. And about how perhaps, just perhaps, the African land grab and the American princess grab are cut from the same cloth after all.  
Lawrence DiStasi

Saturday, June 23, 2012

Hooked on the Net

A history teacher on the Frontline documentary “Digital Nation” put the problem succinctly: “these kids (his high school students) are natives in this world; we (pre-computer types) are immigrants.” That’s more or less the way I feel. The computer is essentially a typewriter to me. The internet helps in the way a library used to: it provides a quick source for necessary facts, but it is not the place where I live. For the internet generation, however—the one that has been using computers almost since birth—it is where they live. And more and more of them cannot imagine a world without that constant connection. Whether via iPhones or laptops or game consoles or iPods, or all of these and more at the same time, the internet generation now in schools like your neighborhood grammar school or high school or MIT is constantly connected, multitasking at all hours of the day and night, including when they’re in class, eating with friends, or studying at home or in the library. They’re googling, chatting, texting, posting on Facebook or Twitter, indulging in their online second life, listening to their playlists, and doing their homework all at the same time.
            There have been many warnings about this, and about what the constant distraction might be doing not only to the competence, but to the brains of these kids. But there are also those who pooh-pooh the warnings and insist that the world in which the current generation will have to function requires them to be adept at multiple tasks and “problem-solving” rather than the old style of memorizing facts and dates. Still, a segment with a teenager in South Korea, where the government has actually had to step in and institute a program to teach children not to misuse the internet, is sobering. One teen named Kim got so addicted to constant computer gaming that his mother sent him to one of the Internet Rescue Camps where teen internet addicts learn to do outdoors-y things with their peers—like actually communicating in person. Kim wasn’t so sanguine about the camp, though. He said he was obsessing about computer gaming constantly, and couldn’t wait to get home so he could get back online. Others have not been so lucky: several teens actually died after spending 50 hours gaming without a break for either food or water.
            Neuroscientists have begun to seriously investigate what all this constant screen-interaction (average kids spend over 50 hours a week on digital media while real devotees—i.e. addicts—spend considerably more) is doing to youthful brains. Where most young people are certain that their multitasking is very efficient, Prof. Clifford Nass at Stanford had doubts even before his research: “a great deal of research has already shown us that the brain can only attend to one thing at a time,” Nass said. His subsequent research on the efficiency of multitaskers (these were young college kids who regularly work on five or six things at once) showed that, contrary to their own assessment that they were efficient multitaskers, most were significantly slower at simple tasks than normal, and basically terrible at every aspect of multitasking. Nass concluded by saying, “We worry that this may be creating people who are unable to think well and clearly.” Another research project, by Dr. Gary Small of UCLA, gave a more graphic picture—neuroimages of some brains reading, compared to other brains doing google searches. At first the results seemed comforting: the brains of those “googling” showed about twice as much brain activity as those reading a book, thus prompting initial news reports that net surfing is good for the brain, and makes us smarter. But Dr. Small cautioned that in brain imaging, “bigger is not necessarily better.” In fact, Dr. Small pointed out, one could well argue that smaller is better, because, as with a well-trained athlete, the fit body needs to use less energy to do a given physical task than the unfit one. The same applies to brains: reading a book, for a good reader, requires less brain effort to get information (and even pleasure and relaxation).
            Being a book man, of course, I would agree. Reading a book is far easier on the eyes, and far more cozy, than staring at a screen all day jumping from one graphic stimulant to another. But those who have been reared on computers do not agree; they’re bored with books. Said one high school student named Greg: “I never read books. If there were 27 hours in a day, I’d read Hamlet maybe. But there aren’t.” Greg, therefore, like most of his peers, gets his “information” about a play like Romeo and Juliet from the internet sites that, in a few paragraphs, provide him with the plot and all he needs for a test. Other sites provide pre-written essays on any aspect of any literary work. That leaves students ample time for all the other joys of the net like Facebooking and texting and gaming. Prof. Mark Bauerlein, of Emory University, has written about such kids and their generation in a book he titled The Dumbest Generation. He cites a study by the Chronicle of Higher Education that refers to them as ‘the new bibliophobes.’ What the study found was that while use of the internet can raise reading skills in the early grades, as students get more wired, their reading and thinking skills deteriorate. As Professor Bauerlein noted, their writing skills are equally atrocious. Most of his colleagues, he said, agree, noting that less than 10% of students come to college actually prepared to write, and/or think. The aforementioned Professor Nass, of Stanford, was more specific: “Instead of writing an essay, they write paragraphs. They’re good for one or two paragraphs. Then they’re distracted and onto Facebook or some other online destination.” The result, he said, is that there are no connections between paragraphs, no sense of the big picture. Several students actually admitted that this was true of their own writing and the way they go about it.
            Of course, there are always the apologists: a principal of a public school in the Bronx claimed to have turned his school around by equipping every student with a laptop, insisting that this is what these kids will need when they enter the ‘real world.’ An even grander apologist was Marc Prensky, CEO of a digital company called “Games2Train.” He opined that he didn’t at all agree that the book “is the best way to train people for the 21st century,” adding that alarmists also cried havoc when print first replaced memorized epics and when mechanical printing first made books available to the masses.
            Perhaps. Perhaps the digital way of the internet really is opening the world to a more democratic, better-informed, more equitable society of instantly-linked brains. But some cautions might be in order here as well. One commenter on the Frontline blog had this to say:

            Facebook and the internet in general, a potentially momentous tool for pushing society forwards, seems to actually be limiting activism. As my peers noted, it's easy to browse Facebook, find a cause, and join a group, and simply state that you support the fight against cancer, or say that America should not go to war. These groups could be a potential starting point for activism, mobilizing and connecting people with common values and causes, but from what I've seen, usually nothing moves beyond simply joining a group.

            A more deeply thought-out criticism comes from Joshua Sperber, in a piece called “We’re All Porn-stars Now.” What Sperber suggests is that the internet has actually co-opted all of us, getting us to internalize the ethics and behaviors of our capitalist masters. This results in profits for those who own the sites and tools that we love—Google and Facebook and even online dating sites—by entrapping us into furnishing the information about ourselves which allows them to sell advertising. This is, of course, why all those “free” sites elicit our information—to provide the means for advertising to be targeted at specific audiences likely to succumb to their appeal. And of course, thousands of sites are direct peddlers of every kind of product from foods to clothes to porn to willing dates ready to provide all of what we desire. Sperber notes, in fact, that dating sites have recently become more profitable than porn sites… This effectively means that people who post pictures on dating sites, or engage in amateur porn, are “giving it away for free.” More accurately, given their extraction of profit through user input and advertisements, as well as the fees that many sites charge, online dating sites establish a relationship of reverse prostitution. Notwithstanding their offer of “efficiency” (which makes their promise of “romance” oxymoronic) and exhibitionism, the material basis of the relationship is that you pay to work for them.
It’s hilarious, really. The internet was supposed to be the essence of “freedom.” Information free, entertainment free, social networking free, the whole world connected in this free, open flow of human togetherness and sharing and learning. And yet, according to Sperber’s account of it, we’ve all become colluders with the money-makers and hucksters, as when we go on Yelp and criticize the service at a restaurant, thus placing the blame for a bad experience on the poor waiter or waitress, while the employer who pays that person a minimum wage gets off scot-free. And by “marketing” ourselves online—on Facebook or LinkedIn or even Craigslist—we are unknowingly making ourselves into the very thing we say we deplore: commodities. Here’s how Sperber puts it:
…we attempt to make money marketing ourselves online not merely as laborers but as aspiring capitalists, trying to extract surplus value from any conceivable trade, skill, or gimmick. Selling one’s personality, purpose, and essence, the division of labor has been seemingly resolved online: we own the (would-be) means of production, which actually means that we have become utterly commodified.
In short, we don’t have the Internet. The Internet has us, and it’s virtually impossible these days to get away. We’re hooked, cooked, and addicted to our own beloved technology. As my father used to say, “What a revolting development this is.”
(Postscript: since I wrote this on June 20, my internet connection has failed. After nearly an hour talking to an AT&T rep, I was told my modem was kaput and no longer able to make the connection to online consciousness. So I’m waiting, bereft of my connection, for a new modem to be shipped by UPS overnight, the whole thing to cost me nearly $100 and a couple of days’ inability to function as I’ve been brainwashed to expect. Not sure whether this is the revenge of the Internet Gods re: the above critique, but one way or the other, they really have us by the short hairs.)
(PPS: It’s Saturday and I’ve just gotten back online. Should anyone be surprised that, in fact, there was nothing wrong with my modem—I learned this after finally getting the new one working—but rather with some connection upstream of the modem, which AT&T repair fixed? Aarrggh!)
Lawrence DiStasi

Saturday, June 16, 2012

Luck and Just Desserts

Some of you have probably heard about the writer Michael Lewis’s speech at Princeton’s graduation ceremony. He seems to have shocked a lot of people by asserting that luck, sheer, blind luck, plays a bigger role in most people’s lives—particularly those who are “successful”—than either they or anyone else likes to admit. He gave several wonderful examples of this. In his own life, for instance, he graduated from Princeton with a degree in Art History, a degree that qualified him for essentially nothing. But after some fumbling around and going to law school (like many others who have no idea what to do), he happened to be sitting next to the wife of the CEO of Salomon Brothers, and that dinner led to her convincing her husband to hire him. Whereupon he was, with no qualifications whatever, put in charge of Salomon’s newly expanding derivatives desk—which in turn led to a contract to write about this potentially hazardous field in a book called Liar’s Poker. The book became a best-seller and Lewis’s writing career was launched. All thanks to the luck of sitting next to the right person at dinner.
            Lewis adds a couple more examples—his book Moneyball, for instance, which, contrary to most impressions of the book, has an underlying theme:

            The [baseball] players were misvalued. And the biggest single reason they were misvalued was that the experts did not pay sufficient attention to the role of luck in baseball success. Players got credit for things they did that depended on the performance of others… Players got blamed and credited for events beyond their control.  Where balls that got hit happened to land on the field, for example.

In truth, I have often wondered why more attention is not given to this in sports commentary in general. It’s really so obvious. The ball, any ball, takes funny bounces. Rebounds fall right into the hands of some player far from the basket. A pitcher makes a mistake and throws a fast ball right down the middle, resulting in a home run; or a pathetic dribbler goes for a base hit while a smoking line drive goes straight to a fielder for an out. A football is tipped by a receiver and falls right into the arms of a waiting defender, who runs it in for a touchdown. All pure luck. And yet we are bombarded with ecstatic comments about how much greater, how much more talented this player is than that one, or this team than all others. Lewis also described the cookie experiment at UC Berkeley, in which teams of three students were told they were to solve a difficult problem (cheating on campus, for example); but in truth the experiment was about them, to see what would happen when, midway through their deliberations, a tray with four cookies was given to them as a break. Each would eat a cookie, yes, but what would happen to the fourth cookie? As predicted, the leader, chosen totally at random, grabbed the cookie and ate it with self-righteous gusto. The conclusion: any leader (pure luck) feels absolutely entitled to the extra cookie simply by virtue of his dominant position.
            This is Lewis’s point: those who are blessed by life’s fortune always feel that they deserve it. As he puts it regarding Moneyball: “Don’t be deceived by life’s outcomes…(they) have a huge amount of luck baked into them.”
            Most people, as Lewis points out, cannot accept this. I had some experience with this resistance when I was teaching college freshmen. These were among the most privileged freshmen in the world, so I assigned for their very first in-class essay a simple subject: ‘Would you rather be blessed with more luck or with more brains?’ At first, they couldn’t even understand the question. Luck or brains? Is that even an issue? When I finally explained it a bit, they set out to write, and the results were predictable: forget luck, we’d choose brains (forgetting, of course, that an ample brain is also luck). We’ve been taught the brain thing all our lives: be smart, work hard and you’ll be rewarded. What’s luck got to do with it? This is partly because in the general culture, if people even admit that luck plays any role, it’s usually with the caveat that success via luck goes to those who are prepared by previous exhaustive work for the lucky break. A scientist may stumble upon an experimental mistake that turns out to be a miracle cure, but of course it was all his years of work that prepared him uniquely to see it. But is this the whole story? Every example Lewis cites suggests otherwise. Dumb luck plays a major role in success—especially, as Lewis knew firsthand, on Wall Street.
            And, of course, it’s more than just Wall Street. It’s our very lives. How much did any of us have to do with being born—the lucky product of one tiny sperm managing to make it on one particular night through the gauntlet of flesh and acids to penetrate one particular egg that happened to fall that very cycle and result, after countless fortunate happenings, in gestation and implantation and survival of all the possible accidents that might snuff out a life, any life, at any moment, to emerge whole into the air and be able to breathe? And how much did any of us have to do with being blessed by the absence of earthquakes and fires and wars and plagues and major diseases and infinite numbers of smaller catastrophes capable of ending a small life at any moment, then to be nurtured in a way that did not pulverize us into self- or other-destruction but allowed our organism to thrive and be articulate enough to reach adulthood with a reasonable chance of success? The nuns in parochial school used to emphasize this continually: how lucky you are to be who you are, and how you have done nothing to deserve such blessings and so should be thankful to god and contribute to others who are less fortunate. And then they would pass around the donation basket for the missions; this much, at least, the nuns and the Catholic Church had right. Because this was Michael Lewis’s point in his speech to those Princeton graduates. You are the lucky ones. You, through no fault of your own, are the massively blessed ones—graduates of a prestigious university, through which you will meet other lucky ones who will enable you to increase your luck and wealth in this, the wealthiest, least dangerous society the world has ever seen. And you will no doubt, some of you, be among those who grab that cookie and eat it with gusto, convinced that you fully deserve it. And you do not. No more than anyone else.
Because like the cookie group leader, your “status is nothing but luck.” You therefore ought to remember the many other unlucky ones who—despite what our society now pretends and promotes—contribute to and make possible every bit of luck and success you will manage to seize as your own.
            It was an amazing message to present to these privileged grads. And it should be promoted far and wide in this nation at this very moment. Because the pernicious message about success and luck that we have been brainwashed with virtually since this nation’s founding is that those who succeed materially are the chosen ones—God’s elect who deserve everything they get, whose material success is in fact a sign of their election (i.e. to their heavenly reward). And those who fail—why, they are the lazy ones, the undeserving ones, the losers who are not only a burden on all others, but who are likely, their failure being a sign of it, doomed to everlasting perdition as well. And the message has been made ever more pernicious since the conservative backlash starting in the 1980s with that movie-trained salesman of success, Ronald Reagan. For he was the one who sold America on the nasty message of conservative Republicans: taxes are unfair; the wealthy deserve to keep every bit of their wealth because they’ve earned it, unlike the poor, who, coddled by government handouts, want to take it from them. No, they shout. Let those who win life’s lottery keep what’s theirs. And let those who lose suffer their just deserts. The ethic of Wall Street in recent years has reinforced this message and amplified it and brought us into an era of inequality and cruelty more suited to a banana republic than a constitutional democracy, and thence to the brink of collapse—whence all those ‘self-made banksters’ suddenly found they had need for government handouts after all. With only this difference: as life’s elect, they were sure they deserved it. They were the leaders after all, the lucky ones at the summit of American society, and so possessed of an inalienable right to all of life’s cookies.
            Well, we shall see. We shall see if Michael Lewis’s little speech has any resonance. We shall see if the constant message of every religion and every wisdom tradition and even every political party—that luck in life’s lottery entails an obligation to those who not only are not lucky, but without whom the lucky could not even aspire to their privileged position; the unlucky including all those unseen producers from the bacteria onward whose feverish work digesting and photosynthesizing makes life possible to begin with—can ever be taken seriously. We shall see if the lucky few come to their senses before they are brutally forced to do so by those at the bottom who finally realize that they have the only strength that matters: the strength of right to their fair share of the earth, the strength of real deserts that have been denied them for generations, the strength of numbers of those who have always done the work and have always been scorned for it and have nothing more to lose. We shall see.

Lawrence DiStasi

Friday, June 8, 2012

America Diabetica

A segment on the PBS Newshour on June 6 presented some alarming cases and statistics concerning sugar. A 16-year-old teen from Pueblo, Colorado was featured as one of millions of American teenagers now exhibiting type 2 diabetes—a disease that used to be limited to older adults. This vastly overweight girl was said to be socially isolated (she is homeschooled) and sedentary, with a working mother who cannot be around to monitor her food intake. The result is that she spends her time gorging on typical American junk. Doctors and professionals at the Centers for Disease Control are alarmed, since their projections now show not only that 1 of every 4 American teens already has type 2 diabetes or prediabetes (in diabetes, insulin is either not sufficient to break down glucose, or the cells ignore the insulin that’s produced); but worse, that 1 of every 3 children born in the year 2000 will develop the disease! That’s one-third of the country. A Dr. Zeitler laid out the grimmer prospects: whereas in adults with diabetes, the average time from diagnosis to a first major cardiovascular event—heart attack, the need for bypass surgery—is about 15 to 20 years:
Everything that we have seen so far suggests that these kids have a progression rate that's at least as quick, if not a little faster, which means that (for) this kid who has their onset of diabetes at 15, we may be looking at their first major cardiovascular event by the time they're 35. (this PC use of plural pronouns drives me nuts, but that’s what he said. LDS).
Read that again. Kids with diabetes now will be having heart attacks at 35, if they’re lucky. And if the projections are correct, that means 1 of every 3 Americans will be taxing our already overburdened health-care system with more surgeries than ever. Not to mention the other complications of diabetes—the loss of toes and limbs—and the cost of lifelong insulin therapy.

            This trend, of course, has been accelerating since at least the 1970s and probably since WWII. And the culprit is not hard to find: sugar consumption. One statistic I found showed that since 1983 alone, the average American consumption of sugar has risen every year to the point in 1999 (the year of the study) where it reached 158 pounds yearly per person—a 30% jump in 16 years. That may be about the time when the geniuses who operate the American food industry discovered that until-then-unusable surplus corn could at last turn a profit—by being made into high-fructose corn syrup. And what to do with all that corn syrup? Why, lace every imaginable American processed food with it—especially our beloved soft drinks. The result is that by 2009, the American Heart Association was noting that average Americans were now consuming 22 teaspoonfuls of added sugar a day (that was average; teenagers 14 to 18 were consuming 34 teaspoonfuls of added sugar per day). Compare this to the recommended limit of 9 teaspoons for men and 6 for women. And again, we’re not even counting naturally-occurring sugars such as lactose in milk or fructose in fruit; we’re talking added sugar—“the sweeteners and syrups that are added to foods during processing, preparation or at the table.” As in killer foods like cakes and cookies and puddings, plus the sugar in all our favorite goodies like ketchup and snacks and Thai foods and McDonald’s fries and chicken tenders and those super-sized soft drinks New York City has recently tried to limit (to 16 ounces; whereupon, from the outcry by the restaurant industry, you’d think mother’s milk was being rationed), and the city of Richmond, CA has recently proposed taxing, with the tax revenues going to fund sports programs for sedentary kids (again, with aggressive campaigns by the beverage industry to oppose the tax as more “government interference” in American lives).

            No wonder we’re a nation of sugar-crazed, overweight diabetics.

            This brings to mind the larger question of carbohydrates themselves. Because while it was not long ago that runners and other athletes were recommending “carb loading,” there has more recently emerged a growing chorus of food gurus who insist that ever since humans invented agriculture, the major portion of our diet that comes from grains and other carbohydrates like tubers has led to a plethora of diseases like diabetes and arthritis. I heard one of these guys on the radio the other day, and he was quite convincing. His book, Neanderthin, argues that based on his own experience (and apparently little or no research), the optimum diet for a human is the “paleo” diet: mostly meats, plus tubers that you can dig with a stick, and fruits and veggies that can be eaten raw; all of these being foods that the guts of our hunter-gatherer ancestors allegedly evolved to process. At first glance, this seems reasonable. Grains are indeed inedible in their natural state. They convert quickly to sugars when eaten processed and cooked. And many of them have compounds that can be toxic. But looked at more closely, the easy notion that hunter-gatherers, like chimpanzees, were natural carnivores who got most of their calories from meats doesn’t quite compute. It turns out that chimps get more of their calories from fruits and tubers and insects than from meats—a once-a-month rarity, according to Jane Goodall. Humans, similarly, are not naturally-evolved carnivores at all—our teeth are useless for tearing meat from a carcass or bringing down prey, and our jaws are designed for grinding rather than for tearing meat and swallowing hunks whole like carnivores. Nor are our guts designed with enough carnivore-type acids to easily process raw meat.

            In short, most of us need carbohydrates all right. The only question is what kind, and in what proportion, we should eat them. The answer seems fairly simple. We should get about half of our carbs from ‘good’ carbs—the kind with lots of fiber. These are the carbohydrates that get absorbed slowly into our systems, thus avoiding those harmful “sugar spikes.” Sadly, these are the carbohydrates that Americans tend to shun: whole grains, vegetables, fruits, and beans. But shunned or not, we need carbs that are not processed or refined ahead of time, carbs that come in their natural garments rather than in glitzy packaging designed to appeal to ignorant children. Because it’s the processing—the bleaching of flour, the polishing of rice, the mass production of easy meals that require only a minute in the microwave or that come in cardboard containers from the take-out counter—this processing is what powers the rapid ‘bad’ carbohydrate train to sugarland. And thus should be avoided. What is wanted are foods that have texture, substance, that require chewing and time digesting: brown rice and whole wheat and leafy vegetables and all the fruits and nuts loaded with the fiber that slows down that sugar train. The benefit is that slow carbs avoid the peaks and valleys in blood sugar levels that lead to diabetes; and, as a bonus, tend to lower serum cholesterol in the blood. In sum, roughly half of our calories from good carbs, some fat, and up to 35% protein (from meat, eggs, milk products, etc.) is the generally recommended balance.

            The trouble, of course, is that in our industrial-food marketplace, this is a balance harder to achieve than ever. Practically all food these days is processed (more profit)—if it’s not genetically engineered to be resistant to poisonous pesticides and herbicides (i.e. to allow poisons to be used with abandon). Still, the advantaged among us can still manage to find such a balance—if we can resist the easy fix of the microwave or the takeout counter, that is. For the disadvantaged, though, it’s a different story—and they are the ones most at risk for diabetes. The neighborhoods they inhabit, such as those in Richmond, CA, have been stripped of real stores, of supermarkets or even old-style mom-and-pop groceries carrying at least a few fresh fruits and vegetables. Instead, they are left in ‘food deserts’ to rummage among packaged foods in liquor stores or fast-food restaurants peddling the worst fat-laced processed crap American ingenuity can package for them in bright colors fit for TV commercials. Super-sized drinks. Fat-laced, artificially-colored mystery meat stuffed between mushy-soft white-flour buns. Desserts as the logical extension of sweetened and fat-laden French fries. Some of these teens slurp down a super-sized coke with a package of chips for breakfast, and similar junk food all day long. All of it promoted 24-7 on TV and billboards as the product of mom-centered, community-creating, fun-fostering purveyors of patriotic America. When in truth, the whole American food perplex, full of exotic choices, is the essence of the great shill, the great deception, the great epidemic, the logical apotheosis of diseased capitalism we now have and will have even more exclusively in what is to come: our own America Diabetica.

            And lest you hadn’t noticed, it’s all a perfect emblem for the corruption in this increasingly dysfunctional system, where the poor are targeted by the richest corporations on whom to dump their profit-making garbage; and any attempt to rein in this gross exploitation and outright murder is pilloried as “excessive regulation” of the “free” market by an overly intrusive government.  That is, government takes the hit, and corporations reap the profits from this sorry-ass spectacle: the richest society in planetary history mindlessly eating and polluting itself into an early grave.  All the while braying to itself and the world about its precious “freedom.”

Lawrence DiStasi

Friday, June 1, 2012

Quiet Wars

The New York Times has been doing a bang-up job on President Obama these days, the latest being David Sanger’s June 1 article, “Obama Order Sped Up Wave of Cyberattacks Against Iran.” This follows an earlier piece in the May 29 Times describing the President’s hands-on approach to drone attacks, noting in particular that Obama early on took direct control of the targeted assassination program started by W, insisting that he (Obama) have the last word in selecting who should die via drone (“Secret ‘Kill List’ Proves a Test of Obama’s Principles and Will,” by Jo Becker and Scott Shane). What is not clear is whether the Obama administration is approving these pieces, on the theory that the publicity confers ‘street cred’ on the mild-mannered compromiser, demonstrating that under that mild exterior lies a fierce warrior implementing all kinds of mayhem on America’s terrorist enemies. Forget secret prisons. Forget torture and all the attendant messy publicity. With drones—and Obama has increased the use of drones by a huge percentage since taking office, especially in Pakistan and now in Yemen—the President can just pick out a face from those presented to him, approve it as a target, and the drones do the rest—focusing on the alleged terrorist and dropping a missile on his home, car, hideout, or wherever he can be located. No risk of American boys being shot down or exposed. No tearful widows of fallen soldiers to console. As to collateral damage, an even bigger No problema. Wives and children obliterated by missile are fair game; as for the others, the CIA has solved the problem by simply declaring that any grown male killed in a drone attack can be considered a terrorist. After all, what other kind of male would be in the vicinity of a known terrorist?

            It’s a lovely little formula. We send out our drones to attack specified targets already selected by (presumably) on-the-ground spies, and in a puff of smoke away goes one more of our alleged enemies. It’s the perfect type of warfare for our squeamish age. No flag-draped, tearful coffins on our side, no broken brains or limbs to have to deal with for years, and no messy trials where terrorists might be able to justify themselves. Just take them out from the sky via smart un-manned vehicles and you’re done. It’s the Israelis who perfected this targeted assassination mode, of course, and who no doubt sold it to their BFFs, Us. Same type of enemy suspects. Same solution: kill and ask questions later. No evidence needed.

            Now we are finding out that this type of cyber warfare (and drones qualify as cyber warfare, no question about that, being operated by young cyber warriors directing them via computer keyboards in distant places like upstate New York or Colorado) has another strand. We had heard about the Stuxnet virus when it suddenly went public sometime in 2010. It wasn’t supposed to go public, of course. It was created by US and Israeli intelligence starting in 2006, code-named Olympic Games (these guys have a sense of humor, no doubt about it), in the hopes that the Bush Administration could sabotage Iran’s nuclear enrichment program by getting a super-bug into the computers that run their Natanz centrifuges. Apparently, when Bush left office, he urged Obama to keep at least two of his programs going, this being one of them. Obama did so, and put it on steroids. Stuxnet was then said to have successfully caused hundreds of Iran’s centrifuges to self-destruct, before it somehow slipped out of Natanz and onto the internet. Oops. More recently, we have learned of yet another, more general cyber attack via computer virus, this one called Flame. It too has gone public, causing some consternation, since like all viruses, these things spread beyond what their creators might originally have intended. But the main emotion has been gloating, since once again, our super-smart American techies, along with their fiendishly smart Israeli counterparts, have been able to sabotage those dumb A-rabs, and wreak havoc on their primitive cyber systems. We, the public, are no doubt meant to feel comforted by this. After all, while the messy hot wars are being wound down, these clean, clever little cyber wars have been ramping up with all possible speed to keep us safe in consumer heaven.

            The problem, as even David Sanger has admitted, is that little old blowback problem. Because if our techies can come up with computer viruses (and eager beavers in government are now urging the same kinds of attacks on North Korea, on China, and anywhere else we have ‘enemies’), won’t those in the Arab, Korean, and Chinese world soon be able to do the same? And if we have taken off the gloves by attacking another nation’s key facilities, won’t American cyber facilities—far more numerous and perhaps far more vulnerable—also become targets sooner rather than later? Do ya think? Here’s how Sanger puts it:

            Mr. Obama has repeatedly told his aides that there are risks to using — and particularly to overusing — the weapon. In fact, no country’s infrastructure is more dependent on computer systems, and thus more vulnerable to attack, than that of the United States. It is only a matter of time, most experts believe, before it becomes the target of the same kind of weapon that the Americans have used, secretly, against Iran.

                  Well, yeah. And if the United States has now gone totally drone in targeting and killing its putative enemies in Pakistan and Yemen and Somalia and god knows where else, isn’t the same thing going to come back to the United States? Because one thing is certain: once a technology is developed, it quickly becomes part of the arsenal of all states, not just those who invent it. We’ve been learning this lesson since at least the invention of gunpowder. Drones and cyberwarfare will be no different. And with respect to drones, we also now know that these weapons from hell are already coming back to haunt us. For one thing, they simply make war much easier, and for the reasons already stated: no messy foot soldiers to deal with, no deaths in combat to have to apologize for, no nasty wounds to have to take care of. The drone does it all from its invulnerable position in the sky. If it gets shot down, no problem. Just buy another one. And send it out to kill its target, rain or shine, rough terrain or smooth, impregnable enemy fortress or open battlefield. The traditional petty concerns of normal warfare—much less petty concerns about guilt or innocence—no longer apply. Just kill the “bad guys” (and there’s never a shortage of those) and be done with it. For another, these drone things have been selling like hot cakes domestically as well. That’s right. Police departments all over America—New York and L.A. and Chicago already have a supply—are now lining up to stockpile their own drones for use against domestic ‘lawbreakers.’ Because drones make great spies—able to aim their cameras into any walled-in yard or junkyard or forest, able to track bad guys into the most remote hideouts. And in a pinch, they make great assassins too: just send them up to kill whoever has shot first. 
The drone could probably even document its reasons for implementing a kill: take a video, for example, of some dark assailant who resisted it, and then, voila, it simply had to retaliate to ‘protect’ itself.

            You get the picture. We’re all in the cyber pot now. And why not? We’ve gotten used to helicopters patrolling our skies. Why not drones? You got something to hide? Something on your computer to hide from Flame? Some conversation on your cell phone or words in your email you’d rather not reveal to Big Brother?

            And it’s all coming from our Brother in the White House. What could be bad?

Lawrence DiStasi