Obscured by Crowds: Clay Shirky’s Cognitive Surplus

In The Young & The Digital (Beacon, 2009), Craig Watkins points out an overlooked irony in our switch from television screens to computer screens: We gather together around the former to watch passively, while we individually engage with the latter to actively connect with each other. This insight forms the core of Clay Shirky’s Cognitive Surplus: Creativity and Generosity in a Connected Age (Penguin, 2010). Shirky argues that the web has finally joined us in a prodigious version of McLuhan’s “global village” or Teilhard de Chardin’s “Noosphere,” wherein everyone online merges into one productive, creative, cooperative, collective consciousness. If that seems a little extreme, so are many of Shirky’s claims. The “cognitive surplus” marks the end of the individual literary mind and the emergence of the Borg-like clouds and crowds of Web 2.0.

Okay, not exactly, but he does argue for the potential of the cognitive collective. So, Wot’s… Uh, the deal?

Is Clay Shirky the new Seth Godin? I had yet to read anything of his that didn’t echo things I’d already read by David Weinberger or Howard Rheingold (or Marshall McLuhan, of course), and I hoped Cognitive Surplus would finally break the streak. Well, it does, and it doesn’t. As Shirky put it in his previous book, Here Comes Everybody (Penguin, 2008), “society doesn’t change when people adopt new tools; it changes when people adopt new behaviors.” This time around he argues that we adopt new behaviors when provided with new opportunities, which, by my estimation, are provided by new tools — especially online.

Steve Jobs once said that the computer and the television would never converge because we choose one when we want to engage and the other when we want to turn off. The problem with Shirky’s claims is that he never mentions this disparity of desire. A large percentage of people, given the opportunity or not, do not want to post things online, create a Facebook profile, or engage in any number of other web-enabled sharing activities. For example, I do not like baseball. I don’t like watching it, much less playing it. If all of a sudden baseballs, gloves, and bats were free, and every home were equipped with a baseball diamond, my desire to play baseball would not increase. Most people do not want to comment on blog posts, video clips, or news stories, much less create their own, regardless of the tools or opportunities made available to them. Cognitive surplus or not, its potential remains just that without the collective desire to put it into action.

Shirky’s incessant lolcat bashing and his insistence that we care more about “public and civic value” instead come off as “net” elitism at its worst. The wisdom of crowds, in James Surowiecki’s phrase, doesn’t necessarily lead to the greater good, whatever that is. You can’t argue for bringing brains together and then expect them to “do right.” Are lolcats stupid? Probably, but they’re certainly not ushering in the end of Western civilization. It’s still less popular to be smart than it is to be a smartass, but that’s not the end of the world, online or off-. The crowd is as wise as the crowd does. Glorifying it as such, as Jaron Lanier points out in You Are Not a Gadget (Knopf, 2010), is just plain wrong-headed.

The last chapter, “Looking for the Mouse,” is where Shirky shines, though. [Although its namesake echoes a story by Jaron Lanier from a 1998 Wired article about children being smarter and expecting more from technology. Lanier wrote, “My favorite anecdote concerns a three-year-old girl who complained that the TV was broken because all she could do was change channels.” Shirky’s version involves a four-year-old girl digging in the cables behind a TV, “looking for the mouse.”] His ability in this last chapter to condense vast swaths of knowledge into a set of tactics for new media development is stunning compared to the previous 180 pages. Perhaps he is the new Seth Godin after all.

References:

Lanier, J. (1998, January). “Taking Stock.” Wired, 6.01.

Lanier, J. (2010). You Are Not a Gadget: A Manifesto. New York: Knopf.

Shirky, C. (2010). Cognitive Surplus: Creativity and Generosity in a Connected Age. New York: Penguin.

Surowiecki, J. (2005). The Wisdom of Crowds. New York: Anchor.

Watkins, S. C. (2009). The Young & The Digital. Boston: Beacon Press.

Operation: Mindcrime — Inception

In his book Speaking into the Air (University of Chicago Press, 1999), John Durham Peters points out that even if telepathy — presumably the only communication context more immediate than face-to-face interaction — were to occur, a problem would remain: How would one know who sent the message? How would one authenticate or clarify the source? Planting an idea undetected into another’s mind, subconsciously in this case, is the central concept of Christopher Nolan’s Inception. [Warning: I will do my best to spoil it below.]

Looking down on empty streets, all she can see
Are the dreams all made solid
Are the dreams all made real

All of the buildings, all of those cars
Were once just a dream
In somebody’s head
— Peter Gabriel, “Mercy Street”

The meta-idea of planting an idea in someone’s mind, known to some as memetic engineering, is not new; however, conceptualizing the particulars of doing it undetected is. Subconscious cat-burglar Dominic Cobb (Leonardo DiCaprio) specializes in extracting information from slumbering vaults. After a dream-within-a-dream heist-gone-wrong, he’s offered a gig planting something in one: an idea that will grow to “transform the world and rewrite all the rules.” Cobb reminds me of Alex Gardner (Dennis Quaid) in the 1984 movie Dreamscape. Gardner is able to enter the dreams of others and alter their outcomes and thereby the outcomes of “real” situations. Cobb and his team do the same by creating and sharing dreams with others. The ability to share dreams — or to enter other worlds together via dreams, computer networks, hallucinations, mirrors, lions, witches, wardrobes, what-have-you — seems to be a persistent human fantasy. Overall, Nolan does a fine job adding to that canon of stories.

Cognitive linguist George Lakoff gets theory-checked mid-film when Cobb’s partner Arthur (Joseph Gordon-Levitt — standing in for Heath Ledger?) explains inception with the “don’t think of an elephant” ploy. What are you thinking about right now? Exactly. The problem is that you know why you’re thinking that right now. Successful inception requires that you think you thought of the idea yourself, independent of outside influence. It’s the artificial insemination of an original thought, “pure inspiration” in Cobb’s terms.

For better or worse, this concept (which takes the entire first act to establish), its mechanics (designer sedatives to sleep, primitive “kicks” to wake up), and the “big job” (a Lacanian catharsis culminating in the dismantling of a global empire) are just the devices that might enable the estranged Cobb to return home to his children. His late wife Mal (Marion Cotillard — standing in for Brittany Murphy?), or rather his projection thereof, haunts his dreams, jeopardizing his every job. Mal is a standout character and performance in a cast of (mostly; see below) strong characters and performances. She is beautiful and scary, and she maintains an emotional gravity intermittently missing in this often weightless world. She is the strange attractor that tugs the chaos along. Whenever the oneiric ontology of Inception feels a bit too free-floating, Mal can always be counted on to anchor it in anger and affect.

The first time through, I thought that over-explaining the “idea” idea was the movie’s one flaw, finding myself thinking, “Okay, I get it” over and over. The second time through, though, I homed in on it: The one thing preventing the concept from fully taking hold in the holiest of holies in my head is Ellen Page. Sure, she ably carried the considerable weight of Hard Candy (2005) and manhandled the tomboyish Juno (2007) to breakout success (admittedly with Michael Cera’s help), but her character and performance in Inception are the splitting seam that unstitches the dream into so many threads of sober consciousness. She’s supposed to be a brilliant architect yet simultaneously unaware of the ins-and-outs of inception and extraction, but she only believably excels at the latter. Where Keanu Reeves’ bumbling and understated Neo made The Matrix (1999) work by asking questions and pulling the viewer into the second world, Page’s clueless Ariadne drags us, the pace, and the other actors down. With the inexperienced patron Saito’s (Ken Watanabe) cues and clues to guide us through the intricacies of dream-theft, Ariadne is rendered all but unnecessary. She’s mostly redundant.

The seed of every story is a conceit, an unrealistic event or idea that the rest of the story sets out to explain. Those who survive a loved one’s suicide can never really know why he or she did it. The living can always see another option. If nothing else, Inception succeeds in explaining the suicide of a completely rational person, but I think it succeeds at much more than that.

Note: This post greatly benefited from discussions with and thoughts from Jessy Helms, Cynthia Usery, and Matt Morris.

What Means These Screens? Two More Books

Every once in a while our reliance on technology initiates a corrective or at least a thorough reassessment. In a sort of Moore’s Law of agentic worry, the intervals seem to be shortening as fast as the technology is advancing, and the latest wave is upon us.

Sometimes these assessments are stiflingly negative and sometimes they are uselessly celebratory. Jaron Lanier’s recent book flirts with the former, while other current thinkers lean toward the latter. For instance, where Clay Shirky sees the book as an inconvenience born of an era characterized by a lack of access, Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains (W. W. Norton, 2010) laments the attempt to shred books’ pages into bits and scatter them all over the internet, decontextualizing great paragraphs, sentences, phrases, and words. Apparently Shirky would rather read War and Pieces than War and Peace.

For all of its astute observations and well-argued points, The Shallows sometimes exhibits a strange disparity between what Carr hesitates to claim and what he states as common knowledge. For example, he asserts outright that language is not a technology (p. 51) – a claim I not only disagree with but find rather bold – yet hedges when saying that the book is the medium most resistant to the influence of the internet (p. 99) – a claim that seems pretty obvious to me. Books, as a medium and as an organizing principle, just do not lend themselves to the changes the digital revolution hath wrought on other media. Neither their form nor their fragmentation makes nearly as much sense.

When we do research, we rarely read an entire book. We scour indices and tables of contents for the relevant bits. As Howard Bloom gleefully explains in his contribution to this year’s summer reading list:

…if you prefer playing video games to plowing through a thousand pages of Joyce’s Odysseus and falling out of your beach chair with periodic bouts of sleep, I highly recommend the Google Book Search e-approach, deep dives into the minds of philosophers you would normally never think of sampling between games of badminton.

As much as I’d love to be able to run a digitally enabled quick-search on all the books on my bookshelf, that doesn’t mean I don’t want the option of pulling one down in its entirety once in a while. The same could be said for the fragmentation of the album as the organizing principle for music. It doesn’t take a 19th century librarian to see that preferring the excerpts and snippets of research is not the same thing as never wanting a book to read. This is the thick thicket, as Matt Schulte would call it, of digitizing books.

Carr’s point, though, is not just the dissolution of our books, but the dissolution of our minds. He claims that the manifold fragments and features of the web are preventing us from concentrating for a book-length spell, much less from wanting to. As clear as his argument reads and as solid as his research seems (Carr assembled a firm foundation of writing history and media ecology on which to build), it’s difficult not to take the very point of it as so much pining for a previous era. He’s careful to blunt that point by praising the web’s usefulness and analyzing his own tech habits just enough to soften the prickly parts of his argument. It’s a seductive read in spite of itself.

I thoroughly enjoyed all of The Shallows, but the last chapter, “A Thing Like Me,” is the most frustrating twenty-odd pages I’ve read in some time. Not because it was bad, but because it was so dead-on in tune with my recent thoughts on media and minds. It was a lengthy and weighty I-wish-I’d-written-that experience. Damn you, Nicholas Carr!

Speaking of things I wish I’d written, Tom Bissell’s Extra Lives: Why Video Games Matter (Pantheon, 2010) is a perfect model of how to write about something totally geeky, maintain the things that make it geeky, and still make it accessible to anyone. When I was a gamer, a self-identification I wouldn’t feel comfortable using even in jest today, there wasn’t such a category. Playing video games was a subset of the larger “nerd” label. Given my hiatus from said world, I should’ve been outmoded by Bissell’s admittedly narrow focus on recent console games, a focus he admits runs the “danger of seeming, in only a few years, as relevant as a biology textbook devoted to Lamarckism.” Thankfully, what this book’s subject matter lacks in breadth, Bissell’s intelligence, insight, writing, and wit make up for in spades.

Adult indulgence in video games raises questions about maturity and responsibility in the adult, but it also raises questions about the games themselves. Bissell explores some of both, but mostly the latter. He thoroughly refutes Roger Ebert’s recent claim that video games can never be art (Ebert has since retracted his statement), snags insider insights via interviews with several top game designers, makes fun of Resident Evil‘s deplorable dialog, and descends into the depths of addiction and abuse — on the screen and IRL — with Grand Theft Auto IV. It’s a thumb-blistering journey through the screen and into the machine, and, in spite of its candor and seriousness, it’s damn funny.

What I can say for very few recent books, I can say for The Shallows and Extra Lives: They are as entertaining and funny as they are provocative and informative. Simply put, they are good reads. Carr and Bissell should be proud.