Friday, September 30, 2011

Greed Be Good

In 1965, Alan Greenspan wrote:

“It is precisely the greed of the businessman, or more precisely, his profit-seeking, which is the unexcelled protector of the consumer” (Madrick, 228).


This should really be the epitaph inscribed on the tombstone of the American economy. Far from ‘protecting’ consumers, the greed that has defined American business and especially Wall Street these last 40 years has decimated the economy, loaded businesses with debt, put millions of Americans out of work, and transferred huge chunks of American industry to foreign countries such as China. Therein lies the theme of Jeff Madrick’s crucial book, The Age of Greed (Knopf: 2011). To read it, with its portraits of banksters and junk bond traders and acquisition specialists and CEOs of America’s largest corporations, is to learn of chicanery, conniving and contempt for average Americans on such a scale as to sometimes deceive the reader into thinking he is reading Dante’s Inferno. Such characters—some of the mightiest names in corporate and political America in the latter years of the 20th Century, names like Rubin and Weill and Reagan and Greenspan and Friedman and Milken and Boesky and Welch—do deserve a poet like Dante to fix them in an appropriate level of pain and torment. While Madrick is not that poet, he does a creditable enough job of this to sicken even the most cynical reader, for his is the tale of the outright looting and crippling of the American industrial might (along with its workers) that was once the envy of the world.

The book begins with the general proposition that while industry and transportation and communications and retailing were once the foundations of American wealth and prosperity, “by the 1990s and 2000s, financial companies provided the fastest path to fabulous wealth for individuals” (24). And where government was once seen as a needed supporter and regulator of such enterprises, Milton Friedman’s economic doctrines, put into saleable form by Ronald Reagan and Alan Greenspan, turned government into the enemy. As Friedman wrote, “The fact is that the Great Depression, like most other periods of severe unemployment, was produced by government (mis)management rather than by the inherent instability of the private economy.” The answer to all problems, in this tortured view, lay not in government actions to help those who needed it, but in reducing government and lowering taxes so as to (allegedly) make the poor better off, eliminate inequality and discrimination, and lead us all to the promised free-market land. As noted above, Alan Greenspan believed wholeheartedly in these and other theories (especially those espoused by his friend Ayn Rand), and Ronald Reagan became the shill for selling such pie-in-the-sky nonsense to the American public. As with his sales work for General Electric, Reagan marketed the kool-aid more successfully than anyone could have anticipated. As governor of California, he blamed welfare recipients for the state government’s financial problems: “Welfare is the greatest domestic problem facing the nation today and the reason for the high cost of government.” When he got to the national stage with inflation rampant, he hit government profligacy even harder. “We don’t have inflation because the people are living too well,” he said. “We have inflation because government is living too well” (169).

All this was coupled with his mantra that getting back to the kind of “rugged individualism” that had made America (and himself) great required reducing taxes. And reduce he did. The top rate on the highest earners stood at 70% when he took office; he first signed the 1981 Kemp-Roth bill to reduce it to 50%, and then, with the 1986 Tax Reform Act, reduced it even further to 28%. Meanwhile, the bottom rate for the poorest Americans was raised from 11% to 15%, while earlier, Reagan had also raised the payroll tax (for Social Security) from 12.3% to 15.3%. This latter raise, it should be noted, coupled with the provision that only wages up to $107,000 would be taxed for Social Security, meant that “earners in the middle one-fifth of Americans would now pay nearly 10% of their income in payroll taxes, while those in the top 1% now paid about 1-1/2%” (170). And what Reagan never mentioned about his “rugged individualism” was that he had been made wealthy by the rich men who cajoled him to run for office: his agent arranged for 20th Century Fox to buy Reagan’s ranch for $2 million (he had paid only $65,000 for it), giving him a tidy profit with which to buy another ranch that also doubled in price when he sold it.
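
A rough calculation shows why the payroll-tax structure Madrick describes falls so unevenly. The wage figures below are my own illustration (only the 15.3% rate, the $107,000 cap, and the “about 1-1/2%” result come from the figures above): because Social Security tax applies only to wages up to the cap, the effective rate shrinks as income rises past it.

\[
\text{effective payroll-tax rate}(Y) \;=\; 0.153 \times \frac{\min(Y,\ \$107{,}000)}{Y}
\]

A $50,000 wage earner, entirely below the cap, pays the full rate on every dollar of wages, while a $1,000,000 earner pays 0.153 × 107,000 / 1,000,000 ≈ 1.6%, roughly the “about 1-1/2%” Madrick reports for the top 1%.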

But such tales treat only the enablers. It is when we get to the actual hucksters of this story that things get interesting (or nauseating, depending on your point of view). The basic scheme went like this: find a company that is undervalued—often because it has managed its assets so well that it has cash on hand—and acquire it, using debt to finance the takeover. Then make money—and I mean millions and billions—on all the steps involved in the takeover, including the debt service, the legal fees, and the rise (or fall) in the stock price. For in the age of greed that Madrick documents, the stock price was all. Anything that pushed the stock price of a company up was good. Anything that pushed it down was bad (unless you were one of those smart guys like hedge-fund ace George Soros who worked the “shorts”). And of course, the best way to get a company’s stock price to go up was to increase profits. And the best way to do that was not to innovate or develop better products, but to slash costs, i.e. fire workers. Here is how Madrick puts it:

American business was adopting a business strategy based on maximizing profits, large size, bargaining power, high levels of debt, and corporate acquisitions…Cutting costs boldly, especially labor costs, was a central part of the strategy. (187)


What began to happen in the 1980s and into the 1990s was that all companies, no matter how successful, became targets of the ruthless merger mania that replaced normal business improvements. Lawyers like Joe Flom and takeover artists like Carl Icahn and T. Boone Pickens could spot an undervalued or low-stock-price company (the process reminds one of wolves spotting a young or lame deer in a herd) to take over, using borrowed money to finance it (90% of the purchase price; a rough sketch of that arithmetic appears below). The borrowing then demanded that the newly merged company cut costs in order to service the huge debt required for the merger—which in turn required firing workers. If a company did not want to be taken over, the only way to avoid that fate was to get its stock price to rise, and this, too, required the firing of workers. In either case, the workers took the hit. But the CEOs running the merged ventures, often sweethearted into selling by generous gifts of stock, “usually made a fortune.” As Madrick notes, in 1986, Macy’s CEO Ed Finkelstein arranged a private buyout of his firm for $4.5 billion and became the “envy of fellow CEOs” (174). Like many other mergers, however, this one drained what had been one of America’s most successful retail operations, and Macy’s went bankrupt in 1992. Madrick concludes:

The allegiance of business management thus shifted from the long-term health of the corporations, their workers, and the communities they served, to Wall St. bankers who could make them personally rich... (173)
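
To see why the borrowing itself forced the firings, a quick and purely illustrative calculation helps. Only the 90% debt financing comes from the account above; the purchase price and interest rate are assumed round numbers of my own:

\[
\underbrace{0.90 \times \$1\ \text{billion}}_{\text{borrowed}} \;\times\; \underbrace{10\%}_{\text{assumed interest}} \;\approx\; \$90\ \text{million per year in debt service}
\]

A target company that never had to generate an extra $90 million a year in cash suddenly must, and the fastest place to find that cash was the payroll.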


In the process, of course, the Wall Street bankers and leveraged buyout (LBO) firms like Kohlberg Kravis Roberts who arranged the buys and the financing took in obscene amounts of money. So did risk arbitrageurs (who invest in prospective mergers and acquisitions, angling to buy before the stock price rises on the rumor of a merger) like Ivan Boesky. Earning $100 million in one year alone (1986, when he was Wall Street’s highest earner), Boesky needed inside information to buy early, and got into the little habit of paying investment bankers for that information, i.e. tips on upcoming deals. Unfortunately for him, he got caught in his banner year because one of his informants (Dennis Levine of Drexel Burnham) was arrested and agreed to name those he had tipped off. Boesky was one (the deal was to pay Levine 5% of his profits for early information on a takeover), and he too was subpoenaed in the fall of 1986. Boesky immediately agreed to finger others (agreeing to wear a wire at meetings), and nailed Martin Siegel, also with Drexel, who, in turn, kept the daisy chain of ratting out associates going by naming Robert Freeman, an arbitrageur at Goldman Sachs. Nice fellows. Boesky ended up serving three years in prison, but he fingered an even bigger fish, Michael Milken. Milken, then the wealthiest and most ruthless Wall Streeter of all, who made his money in junk bonds (risky, high-interest bonds used to ‘rescue’ companies in trouble), was sentenced to 10 years in jail (reduced to two for good behavior) for securities violations, plus $1.3 billion in fines and restitution. He’d made so much money, though, that he and his family still had billions, including enough to start a nice foundation for economic research, to commemorate his good name in perpetuity.

There are, of course, lots of other admirable characters in this tale, but one in particular deserves mention: Jack Welch, the revered CEO of General Electric. This is because Welch’s reign at GE typifies what greed did to a once-great American institution, the very one that Ronald Reagan shilled for in a more innocent age, the one that brought the Gipper to the attention of the big money boys. Welch made enormous profits for GE (by 2000, the year before he left, GE earnings had grown by 80 times, to more than $5 billion), and for himself, but he didn’t do it the “old-fashioned way,” i.e. by developing new and better products. He did it by shifting the emphasis at GE from production to finance. Welch saw the value of this early:

“My gut told me that compared to the industrial operations I did know, the business (i.e. GE Capital) seemed an easy way to make money. You didn’t have to invest heavily in R&D, build factories, or bend metal…” (191)


To give an idea of how this works, Madrick points out that “in 1977, GE Capital…generated $67 million in revenue with only 7,000 employees, while appliances that year generated $100 million and required 47,000 workers” (191). Welch did the math (it is spelled out below). It didn’t take him long to sell GE’s traditional small-appliance (housewares) business to Black & Decker, outraging most employees, though not many of them were left to protest: in his first two years, Welch laid off more than 70,000 workers, nearly 20% of his work force, and within five years, about 130,000 of GE’s 400,000 workers were gone. Fortune magazine admiringly labeled him the “toughest boss in America.” And by the time he left the company in 2001, GE Capital Services had spread from North America to forty-eight countries, with assets of $370 billion, making GE the most highly valued company in America. The only problem was that, with the lure of money and profits so great, GE Capital acquired a mortgage brokerage (Welch was no mean takeover artist himself) and got into subprime lending. In 2008, GE’s profits, mostly based on its financial dealings, sank like a stone, and its stock price dropped by 60%. Welch, the great prophet of American competition, now had to witness his company being bailed out by the Federal Deposit Insurance Corporation: because GE owned a small federally chartered bank, the FDIC guaranteed nearly $149 billion of GE’s debt. So after turning a U.S. industrial giant into a giant bank, the man Fortune magazine named “manager of the century” also succeeded in turning it into a giant welfare case. Perhaps there’s a lesson here somewhere.
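
The “math” Welch did, promised above, is easy to reconstruct from the 1977 figures Madrick cites (the per-employee division is mine, but it uses only those numbers):

\[
\frac{\$67\ \text{million}}{7{,}000\ \text{employees}} \approx \$9{,}600\ \text{per employee at GE Capital}
\qquad\text{vs.}\qquad
\frac{\$100\ \text{million}}{47{,}000\ \text{employees}} \approx \$2{,}100\ \text{per employee in appliances}
\]

Roughly four and a half times the revenue per worker, with no factories to build: the logic of the shift from production to finance in a single line.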

There’s more in this disturbing book—such as the fact that Wall Streeters not only attacked corporations in takeovers, they also attacked governments (George Soros’ hedge fund attacked the British pound in 1992 and Asian currencies in 1997, causing crises in both places and, ultimately, cutbacks in government programs for the poor)—but the story is the same. During several decades of Wall Street financial predation, insider trading, and more financial chicanery than most of us can even dream of, the high-rolling banksters made off with trillions of dollars, and most others (including union pension funds) lost their shirts. Madrick quotes John Bogle, founder of the Vanguard Group, concerning the bust of the high-tech IPO bubble: “If the winners raked in some $2.275 Trillion, who lost all the money?...The losers, of course, were those who bought the stocks and who paid the intermediation fees…the great American public” (332). The same scenario was played out again and again, in derivatives trading, in the housing boom, in the mortgage-backed securities boom, in the false evaluations of stock analysts like Jack Grubman, in the predatory mergers and subprime shenanigans of Citigroup CEO Sandy Weill, and on and on, all with an ethic perfectly expressed in an email, made public by the SEC, commenting on how ‘the biz’ was now run:

“Lure the people into the calm and then totally fuck ‘em” (334).

That’s essentially the story here. And the sad ending, which most of us haven’t really digested yet, is that the very vipers who cleverly and maliciously calculated each new heist, and who made off with all the money while destroying the economy, then got federal guarantees and loans that came to more than $12 trillion (that’s trillion) to “save the country.” They now lobby for “austerity” and “leaner government” and fewer “wasteful social programs” like Social Security and Medicare, and for fewer regulations, so that their delicate business minds can feel safe enough to invest again. So that they can save us all again with their unfettered greed.

In which case, I’ll sure feel protected. Won’t you?

Lawrence DiStasi

Tuesday, September 27, 2011

Avatars and Immortality

Anyone who has read or heard even a little history knows that the dream of immortality has existed among humans for a very long time. Most of these dreams have been debunked in recent years (though not all: witness the Christian fundamentalist notion of the “rapture,” or the Islamic fundamentalist notion of a heaven full of virgins awaiting the martyrs who blow themselves and others up), now that even the Roman Catholic Church has pretty much abandoned its notion of an afterlife in fire for those who’ve been ‘bad’ (whether Catholics still believe in a blissful Heaven for those who’ve been ‘good’ remains unclear to me).

What’s astonishing is that this dream of living forever now exists in the most unlikely of places—among computer geeks and nerds who mostly profess atheism. It exists, that is, in two places: virtual reality, and the transformation of humans into cyborgs (though cyborgs don’t specifically promise immortality, they do promise to transform humans into machines, which is a kind of immortality—see Pagan Kennedy, “The Cyborg in Us All,” NY Times, 9.14.11). If you can create an avatar—a virtual computerized model—of yourself (as has been done for Orville Redenbacher, so that, though dead, he still appears in his popcorn commercials), you can in some sense exist forever. The title of the avatar game on the internet, “Second Life,” reveals this implicitly. So does the reaction of volunteers whom Jeremy Bailenson studied for a Stanford experiment purporting to create avatars that could be preserved forever. When the subjects found out that the science to create immortal avatars of themselves didn’t yet exist, many screamed their outrage. They had invested infinite hope in being among the first avatar-based immortals.

Before dismissing this as foolish dreamery, consider how far this movement has already gone. Right now, the video games that most kids engage in (my grandson has a Wii version of Star Wars in which he ‘becomes’ Lego-warrior avatars who destroy everything in sight) “consume more hours per day than movies and print media combined” (Jeremy Bailenson and Jim Blascovich, Infinite Reality: Avatars, Eternal Life, New Worlds, and the Dawn of the Virtual Revolution, Morrow: 2011, p. 2). The key point about this, moreover, is that countless neuroscience experiments have proved that “the brain doesn’t much care if an experience is real or virtual.” Read that again. The brain doesn’t care whether an experience is “only virtual.” It reacts in much the same way as it does to “reality.”

Frankly, until I read Infinite Reality, all of this had pretty much passed me by. I had read about virtual-reality helmets such as the kind used to train pilots, but I had no idea that things had gone so far. I had no idea that millions of people sign up for the online site called “Second Life” (I tried; it seemed impossibly complex and stupid to me), and invest incredible amounts of time and emotional energy setting up an alternate personality (avatar) that can enter the website’s virtual world and interact in any way imaginable with other people’s avatars. Needless to say, most people equip their avatars with qualities they would like to have, or have wondered about having. Then they go looking for people (avatars) with whom to experiment in a wished-for interaction. The most common interaction, not surprisingly, seems to be sex with another avatar, or several others; but there’s also a lot of wheeling and dealing to gain wealth and prestige. Talk about “be all that you can be!”

Still, the really interesting stuff happens when you get into a virtual laboratory. Whereas “Second Life” takes place on a flat computer screen, virtual reality really comes into its own when you don a headset that can simulate real scenes in 3D fidelity so convincing that when people approach a simulated pit in front of them, they invariably recoil (even though they’re “really” walking on a level floor). While virtual reality of this kind is expensive today, there can be little question that it will soon become commonplace. Rather than spending tons of money traveling to China, say, one will be able to go there “virtually,” without having to endure the travails of travel, including bothersome other people. What makes this eerie is that video games are already working with this kind of VR, and creating avatars. In game systems like the Wii, Move, and Kinect (a long way from Pong), the game computer can already “track” a user’s physical movements and then “render” a world incorporating those movements into a virtual tennis scene that is authentic in all necessary details. So,
In a repetitive cycle, the user moves, the tracker detects that movement, and the rendering engine produces a digital representation of the world to reflect that movement…when a Wii tennis player swings her hand, the track wand detects the movement and the rendering engine draws a tennis swing. (p. 44)


As Bailenson notes, “in a state of the art system, this process (of tracking and rendering the appropriate scene from the point of view of the subject) repeats itself approximately 100 times a second.” Everything in the virtual scene appears smooth and natural, including, in the game “Grand Theft Auto,” an episode where players can “employ a prostitute and then kill her to get their money back.” And remember, the brain reacts to all this in the same way it does when it is “really” happening.
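
For readers who want a concrete picture of the tracking-and-rendering cycle Bailenson describes, here is a minimal sketch in Python. The function names and the fake pose data are placeholders of my own, not anything from the book or from a real VR engine; the point is only the shape of the loop, repeating on the order of 100 times a second.

```python
import time

TICK_HZ = 100  # Bailenson: the cycle repeats roughly 100 times per second


def read_tracker():
    """Poll the motion tracker for the user's current pose (hypothetical stand-in)."""
    # A real system would read head- and hand-tracking hardware here.
    return {"head": (0.0, 1.7, 0.0), "hand": (0.3, 1.2, 0.4)}


def render_scene(pose):
    """Redraw the virtual world from the tracked point of view (stand-in for a rendering engine)."""
    print(f"rendering frame for pose {pose}")


def run_loop(seconds=0.05):
    """The repetitive cycle: the user moves, the tracker detects it, the engine redraws."""
    frame_time = 1.0 / TICK_HZ
    end = time.time() + seconds
    while time.time() < end:
        pose = read_tracker()    # the tracker detects the movement
        render_scene(pose)       # the rendering engine reflects it in the scene
        time.sleep(frame_time)   # pace the loop to roughly 100 Hz


if __name__ == "__main__":
    run_loop()  # run a few frames, just to show the cycle
```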

The implications for a psychologist like Bailenson are profound. Short people, for example, who adopt a tall avatar for themselves, show definite improvements in their self-image, even after they’ve left the avatar behind. They also show improvements in competition: in real games held afterwards, the person whose avatar was taller became a more successful negotiator. Those who fashion a trim, beautiful avatar show the same rise in self-esteem. Bailenson also notes the importance of people’s attributions of “mind” or reality to inanimate objects like computers, and this includes avatars. In one experiment, subjects were shown a real person named Sally, and then her avatar disfigured with a birthmark (neurophysiological studies show that interacting with a “stigmatized other,” even someone with a birthmark, causes a threat response). After four or five minutes interacting with Sally’s disfigured avatar, subjects displayed the heart-rate response indicating threat—even though they knew the real Sally had no birthmark. And the games sold to consumers keep getting more sophisticated in this regard. In the Sony PlayStation game THUG 2 (over 1 million copies sold in the U.S.), players can upload their photos onto the face of a character, and then have their “clones” perform amazing feats of skateboarding, etc. They can also watch them performing actions not under their control. This brings up the question of the effect of watching one’s “doppelganger” (a character with one’s appearance) do something in virtual reality. It appears to be profound: the more similar a virtual character is to the person observing, the more likely the observer is to mimic that character. This can be positive: watching a healthy person who seems similar can lead a person to adopt healthy behavior. But other possibilities are legion. Bailenson mentions the commercial ones:

…if a participant sees his avatar wearing a certain brand of clothing, he is more likely to recall and prefer that brand. In other words, if one observes his avatar as a product endorser (the ultimate form of targeted advertising), he is more likely to embrace the product. (119)


In short, we prefer what appears like us. Experiments showed that even subjects who knew their faces had been placed in a commercial still expressed preference for the brand after the study ended. Can anyone imagine that most corporations aren’t already planning for what could be a bonanza in narcissistic advertising?

More bizarre possibilities for avatars, according to Bailenson and Blascovich, seem endless. In the brave new world to come, “wearing an avatar will be like wearing contact lenses.” And these avatars will be capable not only of ‘seeing’ virtual objects and ‘feeling’ them (using ‘haptic’ devices), but of appearing to walk among us. More ominously, imposters can “perfectly re-create and control other people’s avatars,” as has already happened with poor old Orville Redenbacher. Tracking devices—which can see and record every physical movement you make—make this not only possible, but inevitable. Everyone, with all physical essentials, will be archived.

All of this makes the idea of “the real world” rather problematic. Of course, neuroscience has already told us that the ‘world’ we see and believe in is really a model constructed by our brains, but still, this takes things several steps beyond that. For if, in virtual reality, “anybody can interact with anybody else in the world, positively or negatively,” then what does it mean to talk about “real” experience? If “everything everybody does will be archived,” what does privacy mean?

At the least, one can say this: a brave new world is already upon us (think of all those kids with video games; think of how much time you already spend staring at your computer screen), and you can bet that those with an eye to profiting from it are already busy, busy, busy. One can also say, take a walk in the real outdoors with real dirt, grass, trees, worms, bugs, and the sweet smell of horseshit; it may soon be only a distant memory.

Lawrence DiStasi

Friday, September 9, 2011

The Spirit of Capitalism

I have lately been reading Max Weber’s seminal work, The Protestant Ethic and the Spirit of Capitalism, and it illuminates a great deal about the spirit of our times—a spirit that has been termed The Age of Greed by Jeff Madrick in his recent book of that name. And while what Madrick describes is really America’s transformation over the last 40 years from an industrialized society to a financialized one, his book doesn’t address the origins that interest me here. Weber was interested in those origins. His question was not only ‘why do people work to begin with’ (primary cultures had no concept called “work” at all and only exerted themselves periodically in war or in short-term hunting and gathering), but, more relevant to his time, ‘why do people in modern society identify themselves as laborers?’ How was it possible for western culture to transform itself from a traditional culture, where labor hardly existed except as part of a manorial household, to a post-1600s capitalist society where free laborers are yoked to paying jobs in capitalistic enterprises? More specifically, how could a state of mind that Weber finds best illustrated in Ben Franklin (a penny saved is a penny earned; time is money; credit is money—i.e. it is a duty to increase one’s capital) come to be adopted by whole societies when, in the Middle Ages and before, that state of mind would “have been proscribed as the lowest sort of avarice”? As sinful greed?

To illustrate how remarkable this change is, Weber compares traditional laborers with modern laborers. A farm owner, for example, who pays his workers at a piece-rate (like modern farm workers paid at so much per bushel), thinks to increase production by increasing rates. This works with modern workers, but when applied to traditional laborers, the increased rate backfires. The traditional worker, that is, not only does not increase his work rate, he decreases it—he works more slowly so as to still earn the same daily amount. As Weber summarizes it, “the opportunity of earning more was less attractive than that of working less.” Thus the attitude of traditionalism:
A man does not “by nature” wish to earn more and more money, but simply to live as he is accustomed to live and to earn as much as is necessary for that purpose. (60)

Weber then devotes his entire book to explaining how Protestantism, especially the Calvinist branch of the Reformation, changed this traditionalist attitude towards work. While a “surplus population which it can hire cheaply” is necessary for capitalism to develop and thrive, so, he says, is a “developed sense of responsibility.” That is, for capitalism to work, “labour must…be performed as if it were an absolute end in itself, a calling.” Far from being natural, or even the product of high or low wages, this attitude “can only be the product of a long and arduous process of education” (62). And the educating body was, originally at least, Protestantism. It is important to note that this education in work did not, at least at first, involve an education in greed, much less enjoyment. To the contrary, Weber makes clear that the essential ingredient, in the beginning, involved a kind of asceticism—not the asceticism of the monastery, but an asceticism in the world. To make labor a calling, that is, meant making labor an obligation in the service of God, of salvation. One was schooled in the idea that hard and constant work was an end in itself, the way of salvation for the average person, and that saving the money one earned was part of that obligation. In order to save, of course, one had to be frugal, buying only what was absolutely necessary. The asceticism that had been the mark of the otherworldly Catholic monastery, that is, was brought into the world. So one worked, one saved (“a penny saved is a penny earned”) and one eventually prospered. It is a commonplace that in the American colonies during the Puritan period (Boston, etc.), these essential elements were merged in such a way that prospering in business became synonymous with salvation—or rather, prospering became a sign of salvation. This is because though election (salvation) or damnation was pre-determined by God, the actual judgment was uncertain, and this uncertainty was almost intolerable. One’s prosperity thus became a sign, a way for the uncertainty to be resolved. The opposite was also true: poverty became a sign of damnation, making the poor doubly damned—both in this world and the next. The sad truth is that many Americans still maintain these essential attitudes.

Work as a calling then, work as a duty, and success in work as a sign of salvation are the essential elements of the Protestant ethic. They are also the essential elements of the spirit of capitalism. As Weber puts it,
the expansion of modern capitalism is not in the first instance a question of the origin of the capital sums which were available…but, above all, of the development of the spirit of capitalism (68).


This is not to say that Protestantism ignored the dangers of wealth. Weber cites the writings of Richard Baxter as illustrative. And there, the key to this danger involved idleness and the temptations of the flesh it exposed one to. As Weber interprets Baxter, “Waste of time is thus the first and in principle the deadliest of sins….Loss of time through sociability, idle talk, luxury, more sleep than is necessary for health…is worthy of absolute moral condemnation” (157). A person was thus led to work constantly, to save what he earned, never to enjoy the fruits of his labor, but rather to invest those savings, as an entrepreneur, in new opportunities for more work (and wealth). So while the ethic frowned on wealth and the luxuries it fostered, it at the same time had the “psychological effect of freeing the acquisition of goods from the inhibitions of the traditionalist ethic. It broke the bonds of the impulse of acquisition in that it not only legalized it, but looked upon it as directly willed by God” (171).

Weber ends his work with the ironic contradiction involved in this religiously inspired ethic. He quotes John Wesley, the co-founder of Methodism, as follows:
“I fear, wherever riches have increased, the essence of religion has decreased in the same proportion. Therefore I do not see how it is possible, in the nature of things, for any revival of true religion to continue long. For religion must necessarily produce both industry and frugality, and these cannot but produce riches. But as riches increase, so will pride, anger, and love of the world in all its branches….So, although the form of religion remains, the spirit is swiftly vanishing away.” (175)


The Protestant entrepreneur, in this way, won not only the “feeling of God’s grace” for doing his duty in getting rich, but also a supply of “sober, conscientious, and unusually industrious workmen, who clung to their work as to a life purpose willed by God.” This ethic also comforted the capitalist entrepreneur with the assurance that the “unequal distribution of the goods of this world was a special dispensation of Divine Providence.” For had not Calvin himself said that ‘only when people, i.e. the mass of laborers and craftsmen, were poor did they remain obedient to God’ (177)? He had. So low wages themselves had been rationalized and justified by the divine.

The Protestant ethic, in sum, according to Weber, not only sanctified labor as a calling enabling a worker to be certain of his election, it also legalized, for the capitalist, the “exploitation of this specific willingness to work.” A Daily Double if there ever was one.

It takes little to see how these attitudes and rationalizations are still in use today. America sanctifies capitalism as literally the manifestation of both God’s will and the natural order of things. American media also lionizes those entrepreneurs who, at least according to their own myth, raise themselves by their own bootstraps to become rich—to become “elect” in modern society’s terms. Finally, American capitalism rationalizes the unequal distribution of wealth and goods in this world as simply the workings of natural or divine laws with which mere humans cannot quarrel.

To Max Weber’s credit, he ends his study with a scathing reminder that though this ethic began in the cloak of saintliness, its apotheosis in industrial capitalism became “an iron cage.” Had he known about capitalism’s most recent metamorphosis into an ongoing financial heist creating ever more inequality, his critique would have been far more savage.

Lawrence DiStasi