
Saturday, July 2, 2016

'Killer App' Addendum

Scanning my college alumni magazine, I came across a piece by Judith Hertog called “A Monitored State.” Since it relates closely to my earlier blog, Killer App, I thought its report might be useful here as a gloss on that piece. “A Monitored State” describes Dartmouth professor Andrew Campbell’s experiment monitoring student behavior via the smartphones that virtually all students carry and use constantly. A paper he wrote described how smartphone sensor data “contain such detailed information about a user’s behavior that researchers can predict the user’s GPA (grade point average) or identify a user who suffers from depression or anxiety.” In this study, called Student Life, 48 student volunteers allowed Campbell’s team to gather a stream of data via an app installed on their smartphones. The app “tracked and downloaded information from each phone’s microphone, camera, light sensor, GPS, accelerometer and other sensors” and then uploaded it to a database. By analyzing the data, Campbell’s researchers were able to record details about each student’s location, study habits, parties attended, exercise programs, and sleep patterns. For at least two students, Campbell was even able to see signs of depression: “I could see they were not interacting with other people, and one was not leaving his room at all,” Campbell said. Both failed to show up for finals, whereupon Campbell gave them incompletes and encouraged them to return in the fall to complete his and other courses successfully. What Campbell draws from this is that, in the future, not only will universities be able to intervene to help students in such situations, but such information will be available in real time to monitor everything, including the state of every student’s mental well-being.
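For readers curious about the mechanics, here is a minimal sketch of the kind of sensor-sampling-and-storage loop the article describes. It is my own illustration, not Campbell’s Student Life code: every function, field, and value in it (read_sensors, collect, the canned GPS coordinates) is invented for the example, and a real app would read the phone’s sensors through platform APIs rather than returning fixed numbers.

```python
import json
import sqlite3
import time

# A minimal sketch with invented stand-ins for the phone's sensors; a real
# app would read the microphone, GPS, accelerometer, and light sensor
# through the phone's platform APIs.

def read_sensors():
    """Return one (entirely made-up) snapshot of sensor readings."""
    return {
        "timestamp": time.time(),
        "gps": (43.7044, -72.2887),        # placeholder coordinates
        "ambient_light_lux": 120,          # placeholder light level
        "accelerometer": (0.0, 0.0, 9.8),  # phone lying still
        "mic_level_db": 35,                # rough loudness, not raw audio
    }

def collect(db_path="sensor_sketch.db", samples=5, interval_s=1.0):
    """Sample the sensors periodically and store readings for later upload."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, payload TEXT)")
    for _ in range(samples):
        reading = read_sensors()
        db.execute("INSERT INTO readings VALUES (?, ?)",
                   (reading["timestamp"], json.dumps(reading)))
        db.commit()
        time.sleep(interval_s)
    db.close()

if __name__ == "__main__":
    collect()
```

Even a toy loop like this makes the point concrete: a few dozen lines are enough to turn a phone into a continuous behavioral log.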
            Campbell has also collaborated with brain science colleagues “to discover how smartphone sensor data can be combined with information from fMRI scans” in order to eventually create apps that not only identify mental problems but also “intervene before a breakdown occurs.” In fact, in a follow-up phase of his study, he got student volunteers to submit to fMRI scans and to wear a Microsoft smart band that collected body signals like heart rate, body temperature, sleep patterns, and galvanic skin response—all associated with stress. Thus, more than simple behaviors, today’s technologies can (and already do) detect, grossly at least, an individual’s state of mind. One of Campbell’s colleagues predicts that in addition to being able to predict which individuals are “most susceptible to weight gain,” the smartphone of the future will be able to warn when “its owner enters a fast-food restaurant.”
            The potential threat from all these technologies has not been lost on Campbell and his colleagues. His collaborator, Prof. Todd Heatherton, already worries about a future determined by the constant collection of the data monitored by smartphones, and its use by companies such as insurance underwriters to determine who gets insurance and how much they pay for it. Heatherton was also shocked by how casual students were about sharing such personal data for his study. But clearly, this generation is already used to sharing just about everything on apps like Find Friends (an app that broadcasts one’s location to everyone in one’s network). For Heatherton and others, this raises important questions about the ethics of all this technology and how far it can be used to monitor every detail of our lives. James Moor, a Dartmouth philosophy professor specializing in ethics, worries how information about a person’s entire life could be used by governments wanting, for just one example, to monitor those on welfare, or by totalitarian governments that could use such data to keep potentially rebellious populations under rigid control.
            Campbell himself worries about the same thing, hoping that legislation will be forthcoming that will at least give individuals ownership of their own data (now being used by Google and many others for commercial purposes and more). People need to think about this, he says, and realize that “we are turning into a monitored state.” Or perhaps already are.
            Even George Orwell couldn’t have imagined such an easily ‘big-brothered’ state—and all thanks to those adorable smartphones.  

Lawrence DiStasi

Wednesday, June 1, 2016

Killer App

I have just finished reading Sherry Turkle’s recent book, Reclaiming Conversation: The Power of Talk in a Digital Age (Penguin: 2015). Being without a smartphone, and never having texted in my life, I found Turkle’s research into what smartphones are doing to young people (and to many of their parents) shocking. Consider some stats first: a) average Americans check their smartphones every 6-1/2 minutes (actually, college students in one of Turkle’s classes say that they can sometimes go 3 minutes without a phone check, but the more likely limit is 2 minutes!); b) fully one-fourth of American teens connect to a device within 5 minutes of waking (80% sleep with their phones); c) most teenagers send about 100 texts every day; d) 44 percent of teens do not “unplug” ever. Now let me quote what Turkle says about the power these smartphones have to enslave us: “It (the smartphone) is not an accessory. It’s a psychologically potent device that changes not just what you do but who you are” (319). Keep that in mind: using a smartphone changes who you are; it changes your brain, it changes how you behave, it changes how you talk and relate to and treat other people—and mostly not for the better. This is the sum and substance of Turkle’s book (she is a professor of sociology and psychology at MIT who specializes in the effect of technology on modern life). Though it may not be too late (her title implies that we can, if we are determined to, ‘reclaim conversation’), Turkle’s research shows that things have gone very far indeed.
            Let me cite just a few of the examples Turkle provides. First of all are the rules that young people now live by—and here I should say that I found myself at first contemptuous of, and then feeling deep sympathy for, these kids (to me, even the thirty-somethings are kids who have grown up with technology) whose interactions like dinner or dating are now governed by their devices and the “apps” they use on them. The rules are ubiquitous and bizarre, but clearly necessary. There’s the “rule of two or three” at meals: students Turkle interviewed at one college make sure that at least two people in a group of seven or so at dinner are NOT on their phones before they allow themselves to check theirs; with fewer than two paying attention, the conversation wouldn’t work. As Eleanor says of observing the rule, “It’s my way of being polite.” The corollary is that conversations, even at dinner, even among friends, are fragmented (and hence “lighter” of necessity): everyone is more interested in checking what might be on their phones, or who might be texting and require an immediate response, than in the people they’re actually with—much less what they’re saying.
            And this gets to one of the major points in the book. The ability to converse is atrophying among smartphone users. Family members at dinner constantly check their phones. Kids in class and on dates and at parties check their phones. And much of the checking involves texting—the major form of communication among phone users. That is, people don’t “talk” to each other on their phones; they “text” each other. And there are strict rules among texting friends—many kids count as many as 100 texting friends in their circle. If someone texts you with an ‘emergency’ (for teens, every slight is an emergency), you have at most 5 minutes to respond. If you don’t comply within that time limit, then you risk losing that friend because your delay in responding is taken as an insult. So kids with phones tend to be hypervigilant—they don’t want to miss an important text, which is why they can’t stand not checking their phones every 2 minutes. The other reason they can’t stand not checking their phones is what they have acronymed FOMO: Fear of Missing Out. Something better than what’s happening here and now might be going on. FOMO haunts even those who are at parties, or in bed with a partner! One of Turkle’s informants described being at a party, but being compelled to check her phone (everyone was doing this for the same reason) to see if a friend was at another party that might be hotter. A college student described being in bed with a guy, who got up to go to the bathroom—which impelled her to take out her phone to check her Tinder app to see what men in her area might be interested in meeting, and more. Her comment: “I have no idea why I did this—I really like the guy…I want to date him, but I couldn't help myself. Nothing was happening on Facebook; I didn’t have any new emails” (38). A recent grad named Trevor told Turkle about his college graduation party where “people barely spoke” but “looked at their phones.” And this was okay because
Everyone knew that when they got home they would see the pictures of the party. They could save the comments until then. We weren’t really saying good-bye. It was just good-bye until we got to our rooms and logged onto Facebook (138).

In other words, life is not what’s happening in reality, face to face; it’s what gets reported on Facebook. Likewise, conversation doesn’t happen by talking face to face; that’s too risky; one might say something rash or erroneous. Conversation is what happens on Gchat or when texting—where one can edit one’s response (or breakup messages) and make them perfect. Real conversation is just too fraught with uncertainty, with emotion, with risk, with the mess that is human life.
            This is serious, America. These machines are changing the way human beings interact. They are changing the way humans feel about each other, literally changing whether they can feel for each other at all. And that is another of Turkle’s major points here. Based on her research and consultations with middle schools in her area, she points out that without the give and take of face-to-face conversation, many young people are losing no less than the defining human capacity of empathy. A Stanford researcher, Clifford Nass, looked specifically into the emotional capacity of freshmen there. He compared the emotional development of women who characterized themselves as “highly connected” to that of women spending less time online, and found that the former had a weaker ability to identify the feelings of other people (which is what empathy involves), and actually felt less accepted by their peers. As Turkle summarizes it, “Online life was associated with a loss of empathy and a diminished capacity for self-reflection” (41). And no wonder. Texting has become the substitute for having to look someone in the eye, for having to see emotions in their faces and bodies, especially when we have to discuss something that might be stirring or painful. Face-to-face conversation is “too risky”—that’s how most young people put it. Another person’s response to you might get too emotional. And it’s not just teenagers. Mothers and whole families now have fraught family discussions on Gchat so as to avoid possible eruptions of emotion or words that hurt. It takes the risk out of family dynamics, they say. One never has to face someone yelling. But what is being lost? is Turkle’s question. And her answer is that, essentially, our human-ness is being lost. Children “are being deprived,” she says, “not only of words but of adults who will look them in the eye.” And as countless volumes of research have shown, eye contact is vital to “emotional stability and social fluency: deprived of eye contact, infants become agitated, then withdrawn, then depressed” (108). As to empathy, it seems to be more or less out the window. Turkle quotes teachers at a middle school she consults with:
When they hurt each other, they don’t realize it and show no remorse. When you try to help them, you have to go over it over and over with them, to try to role-play why they might have hurt another person. And even then, they don’t seem sorry. They exclude each other from social events, parties, school functions, and seem surprised when others are hurt…They are not developing that way of relating where they listen and learn how to look at each other and hear each other (164).

When one looks at how romance and other interactions are handled, one can see why. Turkle, for example, describes the NOTHING gambit. This refers to not responding to a flirtatious text. Just silence, nothing. One girl calls it “a way of driving someone crazy…you don’t exist.” And then the proper way to respond to nothing is to pretend, in turn, that it didn’t happen. Because trying to text again saying “Why don’t you get back to me” is simply “not cool.” It’s being a loser. So is responding too quickly to a text. Ryan says, for example, that if a woman responds to his text immediately, it might be good, but it might also mean “She’s psycho, man” (188).
            What a terrible burden this must be. The weight of always wanting to know if the other person is interested has always been a cause of anxiety in romantic encounters. But at least when the brushoff happens face to face, something is settled; a human interaction is, literally, faced. Here, nothing is. All simply dissolves in nothingness. You don’t exist. And having to wonder if even someone you apparently have a connection with is possibly checking out Tinder or some other app for a better party or a better partner (there are always dozens of ‘partners’ available on Tinder)--that must be agonizing.
            Of course, when technology becomes a problem—as Turkle suggests it is—there are always those who count on more technology to solve the problem. Robots seem to be the current solution of choice among MIT engineers. Here is where Turkle takes her story in the end, and it is not encouraging. Apparently, AI engineers are working hard to design robots that can actually provide the “eye contact” and “human” conversation that we are no longer getting from our apps. Speculation is rife that robots will soon be able to perform the daunting caretaking tasks that a fast-growing older population requires. Not enough humans to do the dirty work? Design robots to provide what’s lacking, even as babysitters. And there is data from some primitive robots already among us about how it’s working. In one encounter, Turkle describes what happened when a 12-year-old girl named Estelle was used as a subject to interact with a robot named Kismet. Kismet ‘listens’ attentively to Estelle (elsewhere, Turkle notes that the “feeling that no one is listening to me” plays a large part in what we try to solve with technology), simulates human facial expressions, pretends deep interest in what Estelle says. All goes well until a glitch in Kismet’s program leads to a break in the contact, and Kismet turns away. Estelle is deeply disappointed and shows it by eating cookies voraciously. When pushed to explain, she laments tearfully that Kismet didn’t like her; the robot turned away. The researchers explain that it was simply a technical problem and had nothing to do with her, but Estelle is not consoled. She thinks it’s her failure that Kismet doesn’t “like” her (the “like” on Facebook has become the standard for judging ourselves and our appeal). Another instance involves a young girl named Tara who, in all her actions, is the “perfect child.” But her mother notices that she sometimes talks to Siri, the talking app developed by Apple. And when she does, she vents all the anger on Siri that she has suppressed elsewhere. Tara compartmentalizes, in other words: she has to be perfect with people, but can be angry with Siri. Turkle comments thusly: “if Tara can ‘be herself’ only with a robot, she may grow up believing that only an object can tolerate her truth” (347).
            Is this the place we're getting to, one where only machines can tolerate us? This would be a brave new world, indeed. And the grim truth seems to be that some of us are already there. Sherry Turkle wants us to change. She wants us to “reclaim conversation” with each other, before it’s too late. She draws hope from the fact that the human brain is capable of changing. She has seen this in summer camps where children are prevented from having devices of any kind, and where after a short time, they do reclaim their human interest in each other and what’s around them. But she has a warning as well, especially as regards our attempts to fill the well of human loneliness with robots (or machines of any kind). This is because engineers have already shown that they can build toys for children (Furbies, etc.) that feign feelings, and that, by getting the child to care for them, instill feelings of attachment in the child for what is a lifeless object, a machine. This takes advantage of a well-known phenomenon: humans tend to impute human feelings to that which seems human—like talking apps that simulate conversation, like our computers. And so, one of her conclusions goes like this:
Nurturance turns out to be a “killer app.” Once we take care of a digital creature or teach or amuse it, we become attached to it, and then behave “as if” the creature cares for us in return (352).

            Killer app indeed. There are many of them already, busily doing their grim work as human surrogates. For, as Turkle writes, “Now we have to ask if we become more human when we give our most human jobs away” (362). It ought to be clear what Turkle thinks about this, and what I think. The question is, do enough others care enough to think about it? How about you (having been reached, of course, electronically)?

Lawrence DiStasi

Tuesday, September 27, 2011

Avatars and Immortality

Anyone who has read or heard even a little history knows that the dream of immortality has existed among humans for a very long time. Most of these dreams (though not all, as the Christian fundamentalist notion of the “rapture” and the Islamic fundamentalist notion of a heaven full of virgins awaiting the martyrs who blow themselves and others up both prove) have been debunked in recent years, when even the Roman Catholic Church has pretty much abandoned its notion of an afterlife in fire for those who’ve been ‘bad’ (whether Catholics still believe in a blissful Heaven for those who’ve been ‘good’ remains unclear to me).

What’s astonishing is that this dream of living forever now exists in the most unlikely of places—among computer geeks and nerds who mostly profess atheism. It exists, that is, in two places: virtual reality, and the transformation of humans into cyborgs (though cyborgs don’t specifically promise immortality, they do promise to transform humans into machines, which is a kind of immortality—see Pagan Kennedy, “The Cyborg in Us All,” NY Times, 9.14.11). If you can create an avatar—a virtual computerized model—of yourself (as has been done for Orville Redenbacher, so that, though dead, he still appears in his popcorn commercials), you can in some sense exist forever. The title of the avatar game on the internet, “Second Life,” reveals this implicitly. So does the reaction of volunteers whom Jeremy Bailenson studied for a Stanford experiment purporting to create avatars that could be preserved forever. When the subjects found out that the science to create immortal avatars of themselves didn’t yet exist, many screamed their outrage. They had invested infinite hope in being among the first avatar-based immortals.

Before dismissing this as foolish dreamery, consider how far this movement has already gone. Right now, the video games that most kids engage in (my grandson has a Wii version of Star Wars in which he ‘becomes’ Lego-warrior avatars who destroy everything in sight) “consume more hours per day than movies and print media combined” (Jeremy Bailenson and Jim Blascovich, Infinite Reality: Avatars, Eternal Life, New Worlds, and the Dawn of the Virtual Revolution, Morrow: 2011, p. 2). The key point about this, moreover, is that countless neuroscience experiments have proved that “the brain doesn’t much care if an experience is real or virtual.” Read that again. The brain doesn’t care whether an experience is “only virtual.” It reacts in much the same way as it does to “reality.”

Frankly, until I read Infinite Reality, all of this had pretty much passed me by. I had read about virtual-reality helmets such as the kind used to train pilots, but I had no idea that things had gone so far. I had no idea that millions of people sign up for the online site called “Second Life” (I tried; it seemed impossibly complex and stupid to me), and invest incredible amounts of time and emotional energy setting up an alternate personality (avatar) that can enter the website’s virtual world and interact in any way imaginable with other people’s avatars. Needless to say, most people equip their avatars with qualities they would like to have, or have wondered about having. Then they go looking for people (avatars) with whom to experiment in a wished-for interaction. The most common interaction, not surprisingly, seems to be sex with another avatar, or several others; but there’s also a lot of wheeling and dealing to gain wealth and prestige. Talk about “be all that you can be!”

Still, the really interesting stuff happens when you get into a virtual laboratory. Whereas “Second Life” takes place on a flat computer screen, virtual reality really comes into its own when you don a headset that can simulate real scenes in 3D fidelity so real that when people approach a simulated pit in front of them, they invariably recoil (even though they’re “really” walking on a level floor). While virtual reality of this kind is expensive today, there can be little question that it will soon become commonplace. Rather than spending tons of money traveling to China, say, one will be able to go there “virtually,” without having to endure the travails of travel, including bothersome other people. What makes this eerie is that video games are already working with this kind of VR, and creating avatars. In games like Pong, Wii, Move, and Kinect, the game computer can already “track” a user’s physical movements and then “render” a world incorporating those movements into a virtual tennis scene that is authentic in all necessary details. So,
In a repetitive cycle, the user moves, the tracker detects that movement, and the rendering engine produces a digital representation of the world to reflect that movement…when a Wii tennis player swings her hand, the track wand detects the movement and the rendering engine draws a tennis swing. (p. 44)


As Bailenson notes, “in a state of the art system, this process (of tracking and rendering the appropriate scene from the point of view of the subject) repeats itself approximately 100 times a second.” Everything in the virtual scene appears smooth and natural, including, in the game “Grand Theft Auto,” an episode where players can “employ a prostitute and then kill her to get their money back.” And remember, the brain reacts to all this in the same way it does when it is “really” happening.
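Stripped to its essentials, the cycle Bailenson describes is just a tight loop: read the tracker, redraw the world, repeat roughly a hundred times a second. The sketch below is my own illustration of that loop, not any real game engine’s API; track_motion, render_scene, and run_loop are placeholder names, and the “tracking” here simply returns a fixed position.

```python
import time

def track_motion():
    """Placeholder tracker: a real system would read the wand or camera
    and return the user's current hand position."""
    return (0.0, 0.0, 0.0)

def render_scene(hand_position):
    """Placeholder rendering engine: a real one would redraw the virtual
    world so the avatar's racket follows the tracked hand."""
    pass

def run_loop(hz=100, seconds=1.0):
    """Repeat the track-then-render cycle roughly `hz` times per second."""
    frame_budget = 1.0 / hz
    end = time.time() + seconds
    while time.time() < end:
        start = time.time()
        position = track_motion()   # 1. detect the user's movement
        render_scene(position)      # 2. redraw the world to reflect it
        # sleep off whatever remains of this frame's time budget
        time.sleep(max(0.0, frame_budget - (time.time() - start)))

if __name__ == "__main__":
    run_loop()
```

At a hundred cycles per second the redrawn world feels continuous, which is precisely why the brain treats it as if it were real.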

The implications, for a psychologist like Bailenson, are profound. Short people, for example, who adopt a tall avatar for themselves, show definite improvements in their self-image, even after they’ve left the avatar behind. They also show improvements in competition: in real games held afterwards, the person whose avatar was taller became a more successful negotiator. Those who fashion a trim, beautiful avatar show the same rise in self-esteem. Bailenson also notes the importance of people’s attributions of “mind” or reality to inanimate objects like computers, and this includes avatars. In one experiment, subjects were shown a real person named Sally, and then her avatar disfigured with a birthmark (neurophysiological studies show that interacting with a “stigmatized other,” even someone with a birthmark, causes a threat response). After four or five minutes interacting with Sally’s disfigured avatar, subjects displayed the heart-rate response indicating threat—even though they knew the real Sally had no birthmark. And the games sold to consumers keep getting more sophisticated in this regard. In the Sony PlayStation game THUG 2 (over 1 million sold in the U.S.), players can upload their photos onto the face of a character, and then have their “clones” perform amazing feats of skateboarding, etc. They can also watch them performing actions not under their control. This brings up the question of the effect of watching one’s “doppelganger” (a character with one’s appearance) do something in virtual reality. It appears to be profound: the more similar a virtual character is to the person observing, the more likely the observer is to mimic that character. This can be positive: watching a healthy person who seems similar can lead a person to adopt healthy behavior. But other possibilities are legion. Bailenson mentions the commercial ones:

…if a participant sees his avatar wearing a certain brand of clothing, he is more likely to recall and prefer that brand. In other words, if one observes his avatar as a product endorser (the ultimate form of targeted advertising), he is more likely to embrace the product. (119)


In short, we prefer what appears like us. Experiments showed that even subjects who knew their faces had been placed in a commercial still expressed a preference for the brand after the study ended. Can anyone imagine most corporations aren’t already planning for what could be a bonanza in narcissistic advertising?

More bizarre possibilities for avatars, according to Bailenson and Blascovich, seem endless. In the brave new world to come, “wearing an avatar will be like wearing contact lenses.” And these avatars will be capable of not only ‘seeing’ virtual objects and ‘feeling’ them (using ‘haptic’ devices), but of appearing to walk among us. More ominously, imposters can “perfectly re-create and control other people’s avatars” as has already happened with poor old Orville Redenbacher. Tracking devices—which can see and record every physical movement you make—make this not only possible, but inevitable. Everyone, with all physical essentials, will be archived.

All of this makes the idea of “the real world” rather problematic. Of course, neuroscience has already told us that the ‘world’ we see and believe in is really a model constructed by our brains, but still, this takes things several steps beyond that. For if, in virtual reality, “anybody can interact with anybody else in the world, positively or negatively,” then what does it mean to talk about “real” experience? If “everything everybody does will be archived,” what does privacy mean?

At the least, one can say this: a brave new world is already upon us (think of all those kids with video games; think of how much time you already spend staring at your computer screen), and you can bet that those with an eye to profiting from it are already busy, busy, busy. One can also say, take a walk in the real outdoors with real dirt, grass, trees, worms, bugs, and the sweet smell of horseshit; it may soon be only a distant memory.

Lawrence DiStasi