How Soon is Now? The Perpetual Present

When I was growing up, the year 2000 was the temporal touchstone everyone used to mark the advances of modern life. Oh, by then we’d be doing so many technologically enabled things: Cars would fly and run on garbage, computers would run everything, school wouldn’t exist. We were all looking forward, and Y2K gave us a point on the horizon to measure it all by. When it came and went without incident, we were left with what we had in the present. In Present Shock: When Everything Happens Now (Current, 2013), Douglas Rushkoff argues that the flipping of the calendar to the new millennium turned our focus from the future to the never-ending now. “We spent the latter part of the 20th Century leaning towards the year 2000, almost obsessed with the future, the dot-com boom, the long boom, and all that,” he tells David Pescovitz. “It was a century of movements with grand goals, wars to end wars, and relentless expansionism. Then we arrived at the 21st, and it was as if we had arrived.”

“We spent centuries thinking of hours and seconds as portions of the day,” he continues. “But a digital second is less a part of a greater minute, and more an absolute duration, hanging there like the number flap on an old digital clock.” A digital clock is good at accurately displaying the time right now, but an analog clock is better at showing you how long it’s been since you last looked. Needing, wanting, or having only the former is what present shock is all about. It’s what Rushkoff calls elsewhere “a diminishment of everything that isn’t happening right now — and the onslaught of everything that supposedly is.” As the song goes, when you say it’s gonna happen “now,” well, when exactly do you mean?

Michael Leyton (1992) calls us all “prisoners of the present” (p. 1), like runners on a temporal treadmill. He argues that “all cognitive activity proceeds via the recovery of the past through objects in the present” (p. 2), and those objects often linger longer than they once did thanks to recording technologies. In 1986 Iain Chambers described the persistence of the present through such media, writing,

With electronic reproduction offering the spectacle of gestures, images, styles, and cultures in a perpetual collage of disintegration and reintegration, the ‘new’ disappears into a permanent present. And with the end of the ‘new’ – a concept connected to linearity, to the serial prospects of ‘progress’, to ‘modernism’ – we move into a perpetual recycling of quotations, styles, and fashions: an uninterrupted montage of the ‘now’ (p. 190).

Needless to say, the situation has only been exacerbated by the onset of the digital. In one form or another, Rushkoff has been working on Present Shock his whole career. In it he continues the critical approach he’s sharpened over his last several books. Where Life, Inc. (Random House, 2009) tackled the corporate takeover of culture and Program or Be Programmed (OR Books, 2010) took on technology head-on, Present Shock deals with the digital demands of the now. Much of the dilemma is due to the update culture of social media. No one reads two-week-old Tweets or month-old blog posts. If it wasn’t posted today, in the last few hours, it disappears into irrelevance. And if it’s too long, it doesn’t get read at all. These are not rivers or streams; they’re puddles. All comments, references, and messages, and no story. The personal narrative is lost. It’s the age of “tl;dr.” The 24-hour news cycle, a present made up of the past, and advertising that interrupts everything are also all about right now, but our senses of self may be the biggest victims.

“Even though we may be able to be in only one place at a time,” Rushkoff writes, “our digital selves are distributed across every device, platform, and network onto which we have cloned our virtual identities” (p. 72). Our online profiles give us an atemporal agency whereby we are there but not actually present. On the other side, our technologies mediate our identities by anticipating or projecting a user. As Brian Rotman (2008) writes, “This projected virtual user is a ghost effect: an abstract agency distinct from any particular embodied user, a variable capable of accommodating any particular user within the medium” (p. xiii). Truncated and clipped, we shrink to fit the roles the media allow.

Mindfulness is an important idea-cum-buzzword in the midst of all this digital doom. Distraction may be just attention to something else, but what if we’re stuck in a permanently distracted present with no sense of the past and no time for the future? If you’ve ever known anyone who truly lives in the moment, nothing matters except that moment. It’s the opposite of The Long Now, what Rushkoff calls the “Short Forever.” Things only have value over time. Citing the time binding of Alfred Korzybski, the father of general semantics, Rushkoff illustrates how we bind the histories of past generations into words and symbols. The beauty is that we can leverage the knowledge of that history without going through it again. The problem is that without a clear picture of the labor involved, we risk mistaking the map for the territory.

James Gleick summed it up nicely when he told me in 1999, “We know we’re surrounding ourselves with time-saving technologies and strategies, and we don’t quite understand how it is that we feel so rushed. We worry that we gain speed and sacrifice depth and quality. We worry that our time horizons are foreshortened — our sense of the past, our sense of the future, our ability to plan, our ability to remember.” Well, here we are. What now?

The existence of this book proves we can still choose. In the last chapter of Present Shock, Rushkoff writes,

…taking the time to write or read a whole book on the phenomenon does draw a line in the sand. It means we can stop the onslaught of demands on our attention; we can create a safe space for uninterrupted contemplation; we can give each moment the value it deserves and no more; we can tolerate uncertainty and resist the temptation to draw connections and conclusions before we are ready; and we can slow or even ignore the seemingly inexorable pull from the strange attractor at the end of human history (pp. 265-266).

We don’t have to stop or run; we can pause and slow down. Instant access to every little thing doesn’t mean we have to forsake attended access to a few big things. Take some time, read this book.

References:

Chambers, Iain. (1986). Popular Culture: The Metropolitan Experience. New York: Routledge.

Leyton, Michael. (1992). Symmetry, Causality, Mind. Cambridge, MA: The MIT Press.

Morrissey, Steven & Marr, Johnny (1984). How Soon is Now? [Recorded by The Smiths]. On Hatful of Hollow [LP]. London: Rough Trade.

Rotman, Brian. (2008). Becoming Beside Ourselves: The Alphabet, Ghosts, and Distributed Human Being. Durham, NC: Duke University Press.

Rushkoff, Douglas. (2013). Present Shock: When Everything Happens Now. New York: Current.

Metaphors Be With You: Slinging the Slang Online

Marshall and Eric McLuhan’s Laws of Media (1988) opens with the claim that each of our artifacts is “a kind of word, a metaphor that translates experience from one form to another” (p. 3). That a man of letters would build the laws of media on a linguistic premise is not surprising. It was McLuhan (1951), after all, who pointed out that advertising employs the same strategies as poetry. If we treat software (specifically microblogging platforms) and cities as artifacts, the emergent form seems to be the evolution of language itself: causal, casual language. New slang migrates from urban areas to online services.

A few of Eisenstein et al.’s linguistically linked cities.

Georgia Tech’s Jacob Eisenstein and his colleagues have been studying the conflation of urban populations, microblogging, and the evolution of language. Jim Giles of New Scientist reports one such study:

After collecting the data, the team built a mathematical model that captures the large-scale flow of new words between cities. The model revealed that cities with big African American populations tend to lead the way in linguistic innovation.

Slang that would normally remain isolated in one urban area until picked up by some mass medium or transmitted by traveling users is now narrowcast via networks. Innovators of utterances share their new words without ever seeing one another’s cities.

Though one can scarcely discuss the transgressions of language, poetry, and the city without mentioning Guy Debord and the Situationists, Michel de Certeau is perhaps the most famous theorist to conflate the urban and the linguistic. “The act of walking is to the urban system what the speech act is to language or to the statements uttered,” he writes (1984, p. 97). “Walking affirms, suspects, tries out, transgresses, respects, etc., the trajectories it ‘speaks’. All the modalities sing a part in this chorus, changing from step to step, stepping in through proportions, sequences, and intensities which vary according to the time, the path taken and the walker” (p. 99). These thoughts of walking in the city, which, incidentally, is the name of the chapter from which they are cited, evoke the language of appropriation, allusion, remix. De Certeau continues elsewhere:

Our society has become a recited society, in three senses: it is defined by stories (récits, the fables constituted by our advertising and informational media), by citations of stories, and by the interminable recitation of stories (p. 186).

In other words, we make meaning by appropriating (see also Jenkins, 1992; 2006). William Gibson (2005) writes, “Today’s audience isn’t listening at all–it’s participating. Indeed, audience is as antique a term as record, the one archaically passive, the other archaically physical. The record, not the remix, is the anomaly today. The remix is the very nature of the digital.” Slang is not necessarily remix, but it often involves the appropriation of utterances that once meant something else, a recontextualization of their meaning. The use and evolution of slang operates on the same basic premise of sampling and remix, as well as that of metaphor.

The widespread dissemination of pop culture is nothing new. As Todd Gitlin writes in his book Media Unlimited (Metropolitan Books, 2001), “Poetry and song migrated across Europe hand to hand, mouth to ear to mouth. Broadsheets circulated. From the second half of the fifteenth century on, Gutenberg’s movable type made possible mass-printed Bibles and a flood of instructional as well as scurrilous literature. Even where literacy was rare, books were regularly read aloud” (p. 27). Gutenberg’s printing press represents what McLuhan (1964) referred to as the first assembly line, one of repeatable, linear text, and it made large-volume printed information a personal, portable phenomenon. But it was the advent of the telegraph that brought forth the initial singularity in the evolution of information technology. As James Carey (1988) observed, the telegraph separated communication from transportation. As news on the wire, information could thereafter spread and travel free from its human progenitors. Information was thus commoditized. Liberated from books and newspapers, new slang and ideas have since become a larger part of our culture than physical products.

The telegraph is now so antiquated in the landscape of communication technology that simply bringing it up in a serious manner seems almost silly. It’s quite literally like using a word that has fallen out of favor. Words are metaphors, and metaphors are expressions of the unknown in terms of the known. Once a new word is known, it becomes assimilated into the larger language system. The same transition occurs in the evolution of technology: Once a device has obsolesced into general usage, we forget its original impact. The technological “magic” dissipates.

Slang is verbal violence on new psychic frontiers.
It is a quest for identity. — Marshall McLuhan (1970)

In an interview we did several years ago, Paul D. Miller pointed out that McLuhan once said that “the forces of language in an electronic context would release the ‘Africa Within'” (quoted in Christopher, 2007, p. 244). As Eisenstein and his colleagues seem to have found, our tribes come together online, and language evolves from streets to Tweets.

References:

Carey, James W. (1988). Communication as Culture: Essays on Media and Society. New York: HarperCollins.

Christopher, Roy. (2007). Paul D. Miller a.k.a. DJ Spooky: Subliminal Minded. In R. Christopher (Ed.), Follow for Now: Interviews with Friends and Heroes. Seattle, WA: Well-Red Bear, pp. 235-245.

De Certeau, Michel. (1984). The Practice of Everyday Life. Berkeley, CA: University of California Press.

Eisenstein, Jacob, O’Connor, Brendan, Smith, Noah A., & Xing, Eric P. (2012, October 23). Mapping the geographical diffusion of new words. Retrieved November 24, 2012 from http://arxiv.org/abs/1210.5268

Gibson, William. (2005, July). God’s Little Toys. WIRED, 13.7.

Giles, Jim. (2012, November 17). Twitter Shows Language Evolves in Cities. New Scientist, 2891.

Gitlin, Todd. (2001). Media Unlimited: How the Torrent of Images and Sounds Overwhelms Our Lives. New York: Metropolitan Books.

Jenkins, Henry. (1992). Textual Poachers: Television Fans & Participatory Culture. New York: Routledge.

Jenkins, Henry. (2006). Convergence Culture: Where Old and New Media Collide. New York: New York University Press.

McLuhan, Marshall. (1951). The Mechanical Bride. New York: Vanguard Press.

McLuhan, Marshall. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.

McLuhan, Marshall. (1970). Culture is Our Business. New York: Ballantine Books.

McLuhan, Marshall & McLuhan, Eric (1988). Laws of Media: The New Science. Toronto, Canada: University of Toronto Press.

——–

This piece is another of my many early rough drafts that I’m working on extending elsewhere. Thanks to Brian McFarland for links and correspondence. Apologies to Carrie Fisher for the title.

Ian Bogost: Worthwhile Dilemmas

Partially fueled by Jane McGonigal’s bestselling Reality is Broken (Penguin, 2011), “gamification”—that is, turning mostly menial tasks into games through a system of points and rewards—became the buzzword of 2011 and diluted and/or stigmatized videogame studies on many fronts. Gaming ungamed situations is not all bad, though. Brian Eno and Peter Schmidt’s Oblique Strategies (1975) were tactics for gaming a stalled creative process. In an interview with Steven Johnson, Eno explained, “The trick for me isn’t about showing people how to be creative as though they’ve never been like that before, but rather trying to find ways of recontacting the natural playfulness and curiosity that most people were born with.” Gamification becomes a problem when it becomes exploitative.

Enter one of the most outspoken, prolific, and creative videogame scholars working today. Ian Bogost is a professor at Georgia Tech and co-founded the videogame design company Persuasive Games. Among his many books are Unit Operations: An Approach to Videogame Criticism (MIT Press, 2008), Persuasive Games: The Expressive Power of Videogames (MIT Press, 2010), and How to Do Things with Videogames (University of Minnesota Press, 2011), as well as A Slow Year: Game Poems (Open Texture, 2010), the latter of which includes four videogames and many meditative poems about the Atari 2600. His latest is Alien Phenomenology, or What It’s Like to Be a Thing (University of Minnesota Press, 2012), which calls for an object-oriented approach to things as things and for thinkers to also become makers.

Roy Christopher: While reading How to Do Things with Videogames, it occurred to me that videogames really are the medium of the now. They encompass so much of everything else our media does and is. Was this part of your point and I just need a late pass?

Ian Bogost: Maybe it would be more accurate to say that videogames are the least recognized medium of the now. In the book—in the first chapter even—I argue against the conceit that games have not achieved their potential. That’s true of course, but what medium has achieved its potential? But in that context I was speaking against researchers, critics, and designers who talk about everything videogames are not, but could be: akin to film, or novels, or textbooks, or what have you. The book tries to show that videogames are already a great many things, from art to pornography to work to exercise.

But all that said, videogames are hardly a dominant medium. What is instead? Some might say “the Internet,” but that’s wrong too, although the reasons it is wrong are surprising. As Marshall McLuhan taught us, media contain other media. But weirdly, even though we access the Internet on computers, the former actually has relatively little to do with the latter. The Internet contains writing, images, moving images, sound—all “traditional” media in common parlance. McLuhan’s idea of the Global Village was meant to rekindle the senses overlooked thanks to the age of print, and in that sense TV and the Internet have succeeded in realizing that vision. But the result turns out to be just the same as TV and radio and print, except any of us can create the equivalent of a publisher or a broadcaster.

Videogames, by contrast, have different properties than these other media. They model the way something works rather than describing or showing it; they offer an experience of making choices within that model rather than an audiovisual replay of it; and they situate that model within a simulated world. Now, to be sure, that sort of approach is very “now” in the sense that we SHOULD be interested in the complex, paradoxical interrelations of the moving parts in a system. But at the end of the day, it’s just easier to watch cat videos on YouTube and spout one-liners onto Twitter. In some sense, videogames both are and aren’t other media. They do what other media do—and some things they do not—but they do them differently.

RC: The idea of attaching rewards to menial tasks is understandable, but the current buzz around gamification seems to miss much of the point by filtering out what’s actually good about games. You’ve been quite vocal about the ills of this trend. What are we to do?

IB: If videogames both have and haven’t arrived as a mature medium, then the proponents of gamification want to pretend that the work is done and now we can settle in to the task of counting the profits. The basics of this phenomenon are simple enough: marketers and consultants need to surf from trend to trend, videogames are appealing and seductive but complex and misunderstood, so the simple directive to apply incentives to all our experiences both satisfies the economic rationalists and ticks off the “game strategy” box for organizations.

The irony, not lost on many, is that as virtual incentives like points and reward programs have risen, so tangible incentives have gone into decline. We used to provide material incentives in the form of things like compensation, benefits, perks, and so forth. Now we use JPEGs and 32-bit Integers.

In fact, just as I was writing this response, a friend told me about a novella someone wrote that appears to be an introduction to gamification. It’s called “I’ll Eat This Cricket for a Cricket Badge,” written by a marketing consultant with the improbably-parodic-sounding name Darren Steele. The description reads, “This is the story of Lara, a senior director at Albatron Global. Today she learns she has 24 hours to prepare for a once-in-a-decade meeting with ‘The Brotherhood,’ the triumvirate of terror that founded the company.” Imagine if these gamification shills spent even a fraction of the energy and creativity they devote to swindling on the earnest implementation of worthwhile ideas. In fact, I can’t even tell if the novella is serious or not, the world has become that ambiguous.

As with most things, knowing what to do about it is harder than mere critique. And in that respect, it’s always dangerous to fight against marketers and consultants. Though often stupid, they are also very smart. Or better yet, they often use their savvy to appear stupid or simplistic, so that we’ll let them into our homes and our minds.

In that respect, one possible strategy of opposition is to infiltrate the consultancies and corporations themselves. To create our own highly leveraged solutions-oriented roll-out for it-doesn’t-matter-what service. It’s too laborious and time-consuming to convince people to make games in earnest, so to combat gamification we need to seed a distraction, a new trend that will dissipate this one. Media theory as consultancy counter-terrorism.

RC: A set of tactics like Brian Eno and Peter Schmidt’s “Oblique Strategies” seems a better tack for bringing gaming ideas into other areas of creative problem solving.

IB: Eno and Schmidt’s Oblique Strategies were originally meant to spur ideas for artists, but now we see similar idea cards being used in design and business too (the famous design firm IDEO released something similar a few years back). And given our Facebook-status and Twitterified media ecosystem, there seems to be a strong interest in aphoristic world views. And for that matter, Jesse Schell developed a series of cards around his theory of game design, which he calls “lenses” in a textbook called The Art of Game Design. So there are some precedents for bits-and-pieces idea generation around games.

But there’s a chicken-egg problem at work here too. In order to be susceptible to the surprising solutions of idea generation, you still have to be conversant enough in those ideas to give them life. For example, many of the phrases on the original Oblique Strategies cards are meant for musicians (the deck’s original creative context), and if you are not a musician, it’s hard to imagine understanding how to “mute and continue” or “left channel, right channel, centre channel” unless you were already well-versed in musical concepts. Admittedly, these are pretty basic ideas, basic enough that even a layperson can grasp them, but that’s only because the experience of recorded music is so universal. The basics are shared as a literacy. But that literacy had to come from somewhere, and until the literacy is developed for games, design tools for their increased application will remain mired in ignorance. To use games, we must know games, but to know them we must have used them.

This is why progress will be stochastic. In How to Do Things With Videogames I argue that games will have arrived through incremental examples altering, increasing, changing our ideas of what games can do. I didn’t use this language there, but it’s a kind of accretion, in which the medium grows bit by bit over time, eventually developing a larger and larger gravity. This process is both recursive and compounded, in the sense that individual successes feed back on our overall comfort and knowledge, becoming candidates for the kind of idea generation that Oblique Strategies exemplifies.

RC: Cow Clicker is like your hit song that won’t stop playing. People missing the point only seemed to prove it further. Even with its persistence, did you accomplish what you set out to do?

IB: Cow Clicker is so much bigger than me now, it’s not even possible to know if it did what I set out for it to do, or if that’s even a desirable outcome. There’s an Internet adage called Poe’s Law, which says that it’s often difficult or even impossible to tell the difference between extremism and its parody. It was originally coined in relation to discussions of evolution within Christian forums, but it’s been generalized since: a parody of something extreme can be mistaken for the real thing. And if a real thing sounds sufficiently extreme, it can be mistaken for parody.

The best example of this phenomenon these days is The Onion. There’s a whole website, literallyunbelievable.org, that collects reactions from readers who mistake Onion articles for the real deal, such as the fuming reactions from folks who took seriously headlines like “Planned Parenthood Opens $8 Billion Abortionplex.” And then on the flip side, it’s become common to hear people say of undeniably real headlines, “Is this an Onion article?” The lines between reality and absurdity have blended.

So, it’s clear that Cow Clicker is far weirder than my original intentions. Rather than reflect more on whether or not I succeeded, I’ve started asking other questions. What happened? is certainly one of them, and I’m not sure I’ll ever wrap my head around it. Perhaps more interesting: What can I learn from it? or even What’s next for Cow Clicker? The latter question just terrifies me, because I’ve tried so hard to distance myself from the madness that running the game entailed. But it’s also short-sighted. After all, Cow Clicker was popular. It still is. People like clicking on cows! What can I do with that observation, what can I make that takes that lesson in a direction unburdened by the concerns of obsession and enframing? Is it even possible? In any case, I’m not giving anything away when I say that I don’t think I’m done with Cow Clicker yet. Or better, I don’t think Cow Clicker is done with me.

RC: Videogames inform most of your work, including your new title, Alien Phenomenology. Tell us about your foray into object-oriented ontology and its link with videogames.

IB: Object-oriented ontology seems like an obvious match for media studies. Any scholar or creator of media interested in the “thingness” of their objects of study has something to gain from OOO. In addition to (or even instead of) studies of political economy and reception, we can add studies of the material history and construction of computational devices. In other words, “materialism” need not retain only its Marxist sense, but also its realist one: not just political economy, but also just stuff.

I suspected there would be productive connections with object-oriented philosophy, and I remember waiting for Graham Harman’s Tool-Being: Heidegger and the Metaphysics of Objects (Open Court) to be published in 2002 so I could read it and apply it in my dissertation. I’d been following the emergence and growth of speculative realism with interest, but from afar.

Then two things happened. First, I started thinking about the idea of a “pragmatic” speculative realism, one that would embrace some of the first principles devised by the movement’s true philosophers, but that would put them to use in the service of specific objects, looking beyond human experience. That thought had been in my head since 2005 or so.

The second thing was the Atari. Several years ago, I learned how to program the 1977 Atari Video Computer System (VCS), the console that made home videogame play popular. Nick Montfort and I were working on a book on the platform (Racing the Beam; MIT Press, 2009), about the relationship between the hardware design of the Atari VCS and the creative practices that its designers and programmers invented in those early days of the videogame. The Atari featured a truly unique custom graphics and sound chip called the Television Interface Adapter (TIA). It made bizarre demands on game makers: instead of preparing a screen’s worth of television picture all at once, the programmer had to make changes to the data the TIA sent to the television in tandem with the scanline-by-scanline movement of the television’s electron beam. Programming the Atari feels more like plowing a field than like drawing a picture.
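That scanline-by-scanline constraint can be sketched as a toy model. The Python below is purely illustrative (real TIA programming is 6502 assembly with strict cycle counting, and all the names and sizes here are invented): the “frame” exists only as the values a position register happens to hold during each sweep of the beam, not as a stored picture.

```python
# Toy model of "racing the beam": there is no frame buffer, so the
# picture is produced by changing a register between scanlines,
# in step with the electron beam's vertical movement.
# (Illustrative sketch only, not actual TIA/6502 code.)

SCANLINES = 8   # a real NTSC frame has 192 visible scanlines
WIDTH = 16      # stand-in for the beam's horizontal sweep

def render_frame(sprite_x_per_line):
    """Build the frame line by line, updating the 'register' that
    holds the sprite's horizontal position before each sweep."""
    frame = []
    for line in range(SCANLINES):
        # The program must write the register *between* scanlines;
        # once the beam starts sweeping, the value is committed.
        sprite_x = sprite_x_per_line[line]
        row = ["#" if x == sprite_x else "." for x in range(WIDTH)]
        frame.append("".join(row))
    return frame

# A diagonal "sprite": one register write per scanline draws a slope
# that no single static register value could produce.
frame = render_frame(list(range(SCANLINES)))
for row in frame:
    print(row)
```

The point of the model is the farming rhythm Bogost describes: the program plows the screen one furrow at a time, and anything it wants on a given line must be in the register before that line is drawn.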

As I became more and more familiar with this strange system, I couldn’t help but feel enchanted by its parts as much as its output. Sure, the Atari was made by people in order to entertain other people, and in that sense it’s just a machine. But a machine and its components are also something more, something alive, almost. I found myself asking, what is it like to be an Atari, or a Television Interface Adapter, or a cathode ray tube television? The combination of that media-specific call to action and my broader interest in object-oriented ontology catalyzed the project that became Alien Phenomenology, a book about using speculation to understand the experience of things, of what it’s like to be a thing.

RC: What’s coming up next for you?

IB: There’s a concept in sales, the sales funnel. It’s a structured approach to selling products and services that helps salespeople move opportunities from initial contact through closing by structuring that process in a number of elements. Those might include securing leads, validating leads, identifying needs, qualifying prospects, developing proposals, negotiating, closing the sale, of course, and then managing and retaining the client.

In sales, it’s always best to keep the contacts and leads elements at the top of the funnel very full, because those opportunities will winnow away through attrition, disinterest, loss, and other factors. You tend to have far fewer proposals and negotiations than you do contacts.

I often think about my upcoming creative work through a similar kind of structure. The “creative funnel,” we might call it. We can even use some of the same language: leads, opportunities, commitments, publishing, and support, or something like that. In any case, I tend to throw a whole lot of stuff at the wall (lead and opportunities), because I know that far fewer of those ideas will actually be realized.

In the leads and opportunities column, I’m currently working with my co-editor Nick Montfort to support a number of new books in the Platform Studies series, the series we began with Racing the Beam. Those include both popular and esoteric game consoles and microcomputers. As for my own writing, I’m trying to identify which of a number of books I’ll pursue next… I’ve got one planned on game criticism (a series of critical pieces on specific games), one on games and sports, one on Apple, a book on McLuhan and metaphysics (with Levi Bryant), the crazy kernel of a follow-up to Alien Phenomenology, and a book on play that I would call my attempt at a Malcolm Gladwell-style trade book. Who knows which if any of those will ever come to fruition.

As for commitments, Levi and I are finishing a collection called New Realisms and Materialisms, which we hope will paint a very broad portrait of the different ways of thinking that take those names, applied to a variety of domains, from philosophy to art, architecture to ecology. I’m also desperate to make some new games… I’ve got a small iOS puzzle game in the works, and a larger, weirder piece that should open at the Jacksonville Museum of Contemporary Art in the fall of 2012 and see a general release shortly thereafter.

And I’m closing, if you will, on a big game infrastructure project, the Game-O-Matic authoring system. It was funded by the Knight Foundation two years ago as a tool to help journalists quickly and easily make games about current events without specialized game design or programming knowledge, and it’s just about to release into beta. The system is sort of magical: it takes a concept map (a diagram of nouns with verbs connecting them) and turns it into a playable game. Folks can sign up to use it for free.
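The concept-map idea can be sketched in miniature. Everything in the snippet below (the class, the verb-to-mechanic table, the example nouns) is invented for illustration and implies nothing about Game-O-Matic’s actual internals; it just shows how noun-verb-noun triples might map onto stock game mechanics.

```python
# Hypothetical sketch of a concept-map-to-game pipeline: nouns as
# nodes, verbs as directed edges, each verb mapped to a mechanic.
# All names and the mechanic table are made up for illustration.

from dataclasses import dataclass, field

@dataclass
class ConceptMap:
    edges: list = field(default_factory=list)  # (noun, verb, noun) triples

    def add(self, subject, verb, obj):
        self.edges.append((subject, verb, obj))

    def to_rules(self):
        """Translate each verb into a stock mechanic (invented table)."""
        mechanics = {
            "chases": "enemy pursuit",
            "collects": "pickup items",
            "avoids": "hazard dodging",
        }
        return [
            f"{s} -> {mechanics.get(v, 'generic interaction')} -> {o}"
            for (s, v, o) in self.edges
        ]

cmap = ConceptMap()
cmap.add("reporter", "chases", "scoop")
cmap.add("reporter", "avoids", "deadline")
rules = cmap.to_rules()
for rule in rules:
    print(rule)
```

The design intuition being modeled is that a journalist only supplies the diagram; the system, not the author, owns the translation from verbs to playable mechanics.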

I’m currently struggling to take seriously my own idea of “carpentry,” the practice of making things that do theory (described in Alien Phenomenology). I’m trying to expand my theoretical output beyond books, but I still love reading and writing, so I hope I’ll end up with an interesting menagerie of new little creatures over the next few years.

References:

Bogost, Ian. (2011). How to Do Things with Videogames. Minneapolis, MN: The University of Minnesota Press.

Bogost, Ian. (2012). Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis, MN: The University of Minnesota Press.

Eno, Brian & Mills, Russell, with Rick Poynor. (1986). More Dark Than Shark. London: Faber & Faber.

Eno, Brian & Schmidt, Peter. (1975). Oblique Strategies: Over One Hundred Worthwhile Dilemmas. London: Brian Eno/Peter Schmidt.

Harman, Graham. (2002). Tool-Being: Heidegger and the Metaphysics of Objects. Chicago, IL: Open Court.

Johnson, Steven. (2011). The Innovator’s Cookbook: Essentials for Inventing What’s Next. New York: Riverhead.

McGonigal, Jane. (2011). Reality Is Broken: Why Games Make Us Better and How They Can Change the World. New York: Penguin.

Montfort, Nick & Bogost, Ian. (2009). Racing the Beam: The Atari Video Computer System. Cambridge, MA: MIT Press.

Mindfulness and the Medium

Over forty years ago, media philosopher Walter Ong wrote that the “advent of newer media alters the meaning and relevance of the older. Media overlap, or, as Marshall McLuhan has put it, move through one another as do galaxies of stars, each maintaining its own basic integrity but also bearing the marks of the encounter ever after” (1971, p. 25). That is, a new technology rarely supplants its forebears outright but instead changes the relationships between existing technologies. During a visit to Georgia Tech’s Digital Media Demo Day, Professor Janet Murray told me that there are two schools of thought about the onset of digital media. One is that the computer is an entirely new medium that changes everything; the other is that it is a medium that remediates all previous media. It’s difficult to resist the knee-jerk theory that the computer is both: an entirely new medium that remediates all previous media, thereby changing everything. But none of it is quite that simple. As Ted Nelson would say, “everything is deeply intertwingled” (1987, passim).

Inventing the Medium: Principles of Interaction Design as a Cultural Practice (MIT Press, 2012), Murray’s first book since 1997’s essential Hamlet on the Holodeck (MIT Press), is a wellspring of knowledge for designers and practitioners alike. Unifying digital media under a typology of “representational affordances” (i.e., computational procedures, user participation, navigable space, and encyclopedic capacity), Murray provides applicable principles for digital design of all kinds — from databases (encyclopedic capacity) to games (the other three) and all points in between. There’s also an extensive glossary of terms in the back (a nice bonus). Drawing on the lineage of Vannevar Bush, Joseph Weizenbaum, Ted Nelson, Seymour Papert, and Donald Norman, as well as Murray’s own decades of teaching, research, and design, Inventing the Medium is as comprehensive a book as one is likely to find on digital design and use. I know I’ll be referring to it for years to come.

“Mindfulness” illustration by Anthony Weeks.

Designers can’t go far without grappling with the way a new medium not only changes but also reinforces our uses and understandings of the current ones. For example, the onset of digital media extended the reach of literacy by reinforcing the use of writing and print media. No one medium or technology stands alone. They must be considered in concert. Moreover, to be literate in the all-at-once world of digital media is to understand its systemic nature, the inherent interrelationship and interconnectedness of all technology and media. As Ong put it, “Today, it appears, we live in a culture or in cultures very much drawn to openness and in particular to open-system models for conceptual representations. This openness can be connected with our new kind of orality, the secondary orality of our electronic age…” (1977, p. 305). “Secondary orality” reminds one of the original names of certain technologies (e.g., “horseless carriage,” “cordless phone,” “wireless” technology, etc.), as if the real name for the thing is yet to come along.

These changes deserve an updated and much more nuanced consideration given how far they’ve proliferated since Ong’s time. Net Smart: How to Thrive Online (MIT Press, 2012) collects Howard Rheingold’s thoughts about using, learning, and teaching via networks from the decades since Ong and McLuhan theorized technology’s epochal shift. Rheingold’s account is as personal as it is pragmatic. He was at Xerox PARC when Bob Taylor, Douglas Engelbart, and Alan Kay were inventing the medium (see his 1985 book, Tools for Thought), and he was an integral part of the community of visionaries who helped create the networked world in which we live (he coined the term “virtual community” in 1987). In Net Smart, his decades of firsthand experience are distilled into five easy-to-grasp literacies: attention, participation, collaboration, crap detection (critical consumption), and network smarts — all playfully illustrated by Anthony Weeks (see above). Since 1985, Rheingold has been calling our networked, digital technologies “mind amplifiers,” and it is through that lens that he shows us how to learn, live, and thrive together.

These two books are not only thoughtful, they are mindful. The deep passion of the authors for their subjects is evident in the words on every page. A bit ahead of their time, Walter Ong and Marshall McLuhan gave us a vocabulary to talk about our new media. With these two books, Janet Murray and Howard Rheingold have given us more than words: They’ve given us useful practices.

References:

McLuhan, Marshall. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.

Murray, Janet. (2012). Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: The MIT Press.

Nelson, Ted. (1987). Computer Lib/Dream Machines. Redmond, WA: Tempus Books.

Ong, Walter J. (1971). Rhetoric, Romance, and Technology: Studies in the Interaction of Expression and Culture. Ithaca, NY: Cornell University Press.

Ong, Walter J. (1977). Interfaces of the Word: Studies in the Evolution of Consciousness and Culture. Ithaca, NY: Cornell University Press.

Ong, Walter J. (1982). Orality and Literacy: The Technologizing of the Word. New York: Routledge.

Rheingold, Howard. (1985). Tools for Thought: The History and Future of Mind-Expanding Technology. New York: Simon & Schuster.

Rheingold, Howard. (2012). Net Smart: How to Thrive Online. Cambridge, MA: The MIT Press.

Hip-Hop Theory Talk

I’ve been working on a new book called Hip-Hop Theory: The Blueprint to 21st Century Culture about how Hip-hop culture prefigures many of the forms and norms of the now. I gave the following talk to my class at The University of Texas at Austin; it shows me fumbling through some of the major concepts from the book [runtime: 37:01]:

Here’s a brief overview of the book:

The many innovations of Hip-hop now undergird our Western culture, from appropriated technology and reinvented language to street art and advertising, as well as the intertextual nature of our ever-more-connected mass media and communication. The DJ’s innovative use of the turntable prefigured sampling technology, made the sample a viable currency of music making, and made sampling itself the battleground of creative work and copyright law. To wit, technologically enabled cutting and pasting are now preeminent practices not only for musicians but also for filmmakers, designers, and storytellers: culture creators of all kinds. Graffiti artists’ repainting of the urban scenery with images and letters prefigured the ubiquity of street-styled advertising. This book is about the many ways that the foundations of Hip-hop appropriation (allusion and creative language use, as well as technology and self-reference) inform the new millennium, and how an understanding of Hip-hop culture is also an understanding of 21st-century culture.

Thank you (and my classes) for indulging me. I’ll post more on this project as it develops.

 

Publish or Be Published: Beyond the TED Problem

Publishing has its problems. Academic publishing has its own as well, and in turn public intellectualism has problems. With the rise of ebooks, self-publishing, blogging (oh, how I loathe that term), and the like, all of this seems to be coming to a head. I have chosen a path that attempts to eschew these issues. This is not to say that I am above academic publishing, but to say that I am not interested in being read by such a small audience. I am also not necessarily interested in scientific rigor as such. Interesting ideas to me come from many sources, and those are rarely academic journals (I’m more of a Feyerabendian than a Popperian). No offense to those who pursue that path, but it’s not mine. Today, Cory Doctorow posted a piece to bOING bOING about the problem, and The Guardian chimed in as well. Steven Shaviro has been very vocal about the issue, having run into it specifically with Oxford University Press, writing,

I was asked to sign a contract for an essay I have written, which is scheduled to appear in an edited collection. Let’s leave aside the fact that I wrote the essay — it was solicited for this collection — in summer 2010, and yet it will not appear in print until 2013. I think that the glacial pace of academic publishing is a real problem. But that is not what is bothering me at the moment…

What’s bothering him is that the piece would have been “work-for-hire,” and that the contract stipulated terms as follows:

WORK-FOR-HIRE. The Contributor acknowledges that the Publisher has commissioned the Contribution as a work-for-hire, that the Publisher will be deemed the author of the Contribution as employer-for-hire, and that the copyright in the Contribution will belong to the Publisher during the initial and any renewal or extended period(s) of copyright. To the extent, for any reason, that the Contribution or any portion thereof does not qualify or otherwise fails to be a work-for-hire, the Contributor hereby assigns to the Publisher whatever right, title and interest the Contributor would otherwise have in the Contribution throughout the world.

Shaviro continues,

I found this entirely unbelievable, and unacceptable. Since when has original academic writing been classified as “work-for-hire”? It is possible, I suppose, that things like writing encyclopedia essays might be so categorized; but I have never, in my 30 years in academia, encountered a case in which primary scholarship or criticism was so classified. Is this something widespread, but which I simply haven’t heard about? I’d welcome information on this score from people who know more about the academic publishing situation than I do. But it seems to me, at first glance, that the Press is upping the ante in terms of trying to monopolize “intellectual property,” by setting up an arrangement that both cuts off the public from access and denies any rights to the henceforth-proletarianized “knowledge worker” or producer. I am unwilling to countenance such an abridgment of my ability to make the words that I have written more freely available.

In an update on the situation, Shaviro adds,

 I don’t think I have permission to actually reproduce the words of the editor from OUP, so I will paraphrase. What he basically said was that traditional publication agreements are insufficient because they only give presses “limited sets of rights.” In other words, he was openly confessing that OUP seeks complete and unlimited control over the material that they publish. The justification he gave for this was that old neoliberal standby, “flexibility” — OUP is seeking to do all sorts of digital distribution, and if rights are limited then they may not be able to control new forms of distribution that arise due to technological changes. Of course, the mendaciousness of this claim can be seen by the fact that, as was confirmed to me by one of the people involved in putting together the volume, the “work-for-hire” provision was in place long before the Press even got the idea of supplementing physical publication of the volume with a (no doubt password-protected and expensive-to-access) website.

I have exactly one piece “published” in an academic journal. It was a book review. It was due on November 15, 2008, and appeared in the September 2010 issue of the journal — two years later. As much as I am thankful for the opportunity (my master’s thesis advisor Brian H. Spitzberg had passed the chance on to me), and I know that’s a normal publication period, it was a freaking book review. Why would I ever pursue that avenue again? My friend Alex Burns has a great post on how academia kills writing, which is a great fear of mine: I want to write books, and I want to write books that people actually want to read.

Alex Reid has an excellent post about why academics keep writing books that no one wants to read, which is because academics largely write books in the pursuit of tenure, not in the pursuit of an audience. Ian Bogost calls this “vampire publishing.” Their shared concern draws an important distinction between writing to be read and writing to have written (a distinction my professor at UT, Katie Arens, has drawn as well). In academia, there’s a strong push toward the latter. Bogost writes,

The reason there is no irony in my simultaneous support of Alex’s position and my continued participation in scholarly publishing is quite simple: people actually want to read my books. They buy them, both in print and electronic format. And I’ve tried very hard as an author to learn how to write better and better books, books that speak to a broader audience without compromising my scholarly connections, books that really ought to exist as books. Imagine that!

The problem doesn’t stop there though. As a scholar who pursues nonacademic or para-academic routes to publication, I am appalled at how insanely bad some of the channels outside of academia have gotten. Case in point: TED. TED, the “Technology, Entertainment, Design” conference originally envisioned by Richard Saul Wurman, has been watered down to the point of self-parody. If they hadn’t once done great things, this wouldn’t matter, but a once visionary site of Big-Idea exchange has become the Starbucksification of public intellectualism, what Benjamin Bratton calls, “the Thomas Friedman of Megachurch Infotainment.” If the following doesn’t make you lose your shit, then you should probably stop reading this post-haste [runtime: 3:47]:


“John Boswell, of the ‘Symphony of Science’, came to TED2012 and made this remix of the speakers onstage.” It’s a TED-sponsored promotional video! It’s not a parody, it’s a self-parody! (Have you ever seen the Bank of America “One Bank” video?) TED, once the bastion of non-academic public intellectualism, is now this. SMFH.

The problem — the real problem — is that there should be a gate-keeping function in scholarship, but the gates currently in place are failing us. TED’s former elitism wasn’t necessarily the answer, but their new openness is total, indisputable crap. Couple that with the aforementioned problems of academic publishing, and you’ve got yourself a crisis — a big one.

My main gripe with all of this is that Big Name people basically copyright ideas via TED (Bogost calls it “American Idol for non-fiction trade books”). I’m all for openness, and I pretty well only synthesize the ideas of others (and I do my damnedest to cite and give credit where it’s due; I am self-conscious about it to a fault), but I’ve seen this happen so many times: One person spends years developing idea X and then one of The Chosen mentions X in a TED Talk™, and then it’s their idea. That is a problem.

Unfortunately, I don’t have a solution. If I did, this would be a very different piece. I have chosen to do what I do and hope for the best. I know many others who’ve resolved to do the same. None of this is to shit on those who do academic publishing or hope to do so, but we need to realize that the system is broken and that the alternatives are not much better. Here’s hoping we all find ways to get our ideas out there.

—–

Apologies to Doug Rushkoff for my bastardization of his book title for the name of this piece, and many thanks to Steven Shaviro, Alex Burns, Ian Bogost, and Alex Reid for sharing their thoughts.

David Preston: Hacking High School

After a decade of teaching at the university level, David Preston decided to stop ignoring the ills we all know haunt those halls and dropped back to high school. He’s now trying to reform a place that desperately needs it. I got the chance to participate in a discussion with his literature and composition classes, thanks to David, Ted Newcomb, and Howard Rheingold, all of whom are hacking education in various ways. I can tell you with no reservations that David is making the difference. I want to keep this introduction as brief as possible and just let him tell you about it. Some men just want to watch the world learn.

Roy Christopher: What drove you from the hallowed hells of academia to teaching high school?

David Preston: (Hang on, let me hop up on my soapbox) Every generation thinks school can’t get any worse, but somehow we manage. When I was a kid I hated school but loved learning (and still do), so when I graduated I thought I could liberate the other inmates by learning about the institution and how to fix it. After college I wrote about schools as a journalist and then I went back for a master’s and a Ph.D. in education. But in grad school I discovered the politics, how difficult it is to ask pressing questions without incurring the wrath of well-funded powers-that-be. Eventually I figured there wasn’t enough lipstick for this institutional pig and found my way into management consulting, where I worked with executives and organizations on learning and planning. Even though I was making good money and keeping my hand in by teaching courses at UCLA, the idea of school nagged at me because I could see the trend worsening. Really smart, highly motivated students and executives told me how completely unprepared they were for life after graduation—and these were the successful people! Today’s students have it even worse. They don’t learn about their own minds, they don’t learn about how they fit in the larger scheme of things, they don’t learn how to use the tools available to them, and they don’t learn the basics of how to manage their bodies or their money. Forget the achievement gap and the union versus reform sideshow—even the best prep school curricula are designed for a world that no longer exists (if it ever did). Once upon a time the American high school diploma signified that a person had the tools to be self-sufficient; now it’s like one of those red deli counter tickets that tells you to line up at the recruiter’s office or financial aid. And the worst part is, today’s students know all this because technology allows them to see the world for themselves. They don’t have to be told that school is an irrelevant exercise in obedience.

I’ve been critical of school since watching my first grade teacher pull kids’ hair for getting math problems wrong, but after 9/11 I thought about the issue differently. I reflected on how our thinking influences the world we’re living in and the future we’re creating for ourselves. Whatever big-picture issue you care about—the environment, the economy, human rights, politics—is defined by how people think and communicate about it. And the institution ostensibly in charge of helping people learn to think and communicate is fucked. So, when a friend of mine suggested in 2004 that I take a “domestic Peace Corps” sabbatical and offered me an opportunity to teach high school courses, I turned him down immediately. But over the next couple of weeks I realized that you never hear anything about education policy from inside the classroom, and I’d get to be an embedded anthropologist. Boots on the ground. I wanted to find out what today’s students are actually like (they’re not the Digital Natives you read about!) and what actually goes on in school on the days they don’t give tours. I may have been fantasizing about Hunter S. Thompson riding with Hell’s Angels or Jane Goodall hanging with chimps when I said yes to going back inside the belly of the beast.

I taught at the country’s fourth-largest high school in LA. It had a year-round calendar with three tracks to accommodate five thousand students, most of whom didn’t carry books because they didn’t want to get jumped on the way home. But this one student, Zolzaya Damdinsuren, came into my class during a sweaty summer school afternoon and made me an offer I couldn’t refuse. This is a whole other story, but the bottom line is that I spent a month in western China, Tibet, and Mongolia with Zolzaya and his family, and the experience changed me. By the time I returned I had decided not to return to my consulting practice. Instead I resolved to create learning solutions that would help people whether they were in school or not. I moved to California’s central coast and I’ve been hacking education ever since.

RC: Tell me about your current education project, the one you’ve been piloting for a while now.

DP: I’m helping students build a massively multiplayer online learning network. I started with the students in my high school classes. Initially, 100 students created 100 blogs and learned about online security, privacy, filter bubbles, search, online business models, and how to use social media to curate and broadcast information. We reached out to authors, we conducted a flash mob research project that created a mindmap out of a William Gibson interview in 24 hours, and we held video conferences with illustrious celebrities such as yourself. That was fall semester. Now we’re reaching out to recruit a study group of 20,000-50,000 people to prepare for the AP English Literature & Composition exam using both synchronous and asynchronous platforms. This is a proof of concept: the ultimate goal is to create an online exchange that offers the resources and tools people need to acquire information, demonstrate mastery, and build a portfolio of work. In five years I want to see a teacher make a million dollars, not because of some collective bargaining agreement, but because she’s that good. Maybe she’s an author, maybe she’s a mechanic. I want to create a model of community in which learning is an economic driver. I think the outcome will be a competitive market of entrepreneurs, job candidates and creatives who aren’t just eager to tell you what they can do, but eager to show you what they’ve already done.

RC: What insights have you found doing this work?

DP: Until about two years ago I was focusing on interdisciplinary curriculum and information-referenced assessment models as ways to extend what I could offer students. But basically these were just ways of remixing the standard curriculum and providing more formative feedback to learners. Even my use of social media was essentially limited to conserving paper, helping absentees, and trying to make the same old lessons seem more engaging or entertaining.

You see that sort of thing all over the Web. Blended learning, virtual schooling, online lessons, LMS, SIS—some of the ideas and applications are really cool, but it’s all essentially Skinner’s Box 2.0. It’s what happens when anything good gets sucked into the school policy meat grinder. Apple in the world = Think different. Apple in school = Electronic textbooks. Peter Drucker said the worst thing management can do is the wrong thing more efficiently. Standardizing and streamlining is great if you’re starting with something of quality, but otherwise incremental change makes the problem worse because it reinforces the idea that change is impossible. You can’t lose twenty pounds by eating one less Twinkie a day. You have to radically, fearlessly redesign from purposeful scratch. That’s how evolutionary adaptation works: one day there’s no fin, then the water rises and—Whoa!—everybody who’s still alive and reproducing has fins. So I gave up trying to tweak the finless and started thinking more about where we are trying to swim. This took the form of a simple question: What does it take to be an educated global citizen in the 21st century?

The real opportunity of the Internet is creating a network that takes on its own momentum, grows, and exponentially increases its value. In fact, I think at this point network theory has a greater payoff in learning than learning theory does. The really cool part is that as the network grows and gains experiences, it also changes purpose and direction. School isn’t built to tolerate that, which I think is a big issue, considering the need for innovation in this country.

It’s exciting to be a part of something so dynamic. In too many places learners are forced to wait for an institution, or a government, or an economic sector to get its act together and do right by them. Learners don’t have to wait for Superman. They are Superman.

RC: Well, one of the things I wonder is where the funding comes from. That still seems to be a major problem with education reform, and I’m not just talking about funding for technology and other resources, but funding for teachers: One of the main reasons interesting and innovative people avoid teaching in high school is because there’s so much more money to be made elsewhere. How do we fund this revolution?

DP: Learning needs to become the economic driver. We need a learning environment in which learners and mentors select each other, co-create interdisciplinary curricula and demonstrate mastery in ways that translate to the broader economy and life in our culture. Such an open market would allow learning innovators to create revenue streams that feed communities and align compensation with perceived value and performance: if you suck you starve, if you rock you make bank. This is happening already. In Korea, teacher Rose Lee is known as the “Queen of English.” She makes over $7 million a year. If clients are willing to invest that much in university prep, imagine what they’ll do for top-shelf professionals who can prepare the next generation for economic success without needing the university at all. Creating a new economic sector around learning makes mentoring a much more dynamic and potentially lucrative endeavor than teaching ever was.

Until that exists, though, it’s still possible to integrate coursework and network once learners get the basics of the Internet and online privacy/security. It doesn’t take much money for an individual teacher to offer online learning opportunities. I started off guerrilla style. Everything I’m currently using with students is available for free to anyone who has access to the Internet—and every student has access to the Internet. It drives me crazy when I hear well-meaning adults suggest that we not work online with students because not everyone has a computer at home. We read books with students, and some of my students don’t have those at home either. This is Problem Solving 101. If you don’t have a computer at home you have an access problem. That would be a cruel proposition if the problem wasn’t super easy, but we are surrounded by solutions. Go to a friend’s; go to the computer center or library; spend $3 at the copy store. If an entire community is impacted to the point that an individual really can’t access the Internet, document the case that supports getting the community connected. Agitate. Citing lack of Internet access in 2012 is an admission of defeat that suggests a lack of determination and imagination.

RC: What are you up to off-campus?

DP: For the last six months I have been neck-deep in the work I’m doing with students. Writing curriculum, reading blogs, and replying to messages around the clock seven days a week. It’s insane. I’ve never worked harder as a teacher or had more fun. Now I’m documenting the process and starting to promote it. I’m writing a white paper, starting a blog, designing the system architecture for the learning exchange, consulting, and speaking about the proof of concept. Next event is the CUE conference in Palm Springs on March 15.

It’s hard to overstate the importance of liberating learning from school. Our present is competitive and our future is uncertain. My old mentor used to say that in chaos there is profit, but success in 2012 is not for the passive, weak, or risk-averse. Intellectual and financial freedom isn’t something that can be given to you. You have to take it.

Fresh Prints: Digitization and Its Discontents

When John Naisbitt was researching his best-selling book Megatrends (1982), he had a file system of shoe boxes. The shoe boxes were labeled according to major trends he had spotted in local newspapers from across the country and filled with the actual clips from those papers. Not only has the all-encompassing web rendered this method of research obsolete, but in light of the web’s ubiquity (especially to the so-called “digital natives” who’ve grown up with it), the method now sounds downright silly.

A fraction of Kevin Kelly's library.

Kevin Kelly has a lot of books, and like me, he works with them, adds to them, uses them. But he’s ready to leap into a future without them in their current form. Calling us “People of the Screen” (not his most original idea), he writes on his website,

I work with books. I wrestle with them, play with them, mark them, write in them, dog-ear them, talk to them. I use them. But my books on paper, as gorgeous as they look, are usually bimbos. I can’t search them, clip them, cut and paste their best parts, share their highlights, or my marginalia, link them to my other books, or continue our conversation for very long. That’s why I am moving to digital books as fast as I can.

I have to admit to finding this somewhat troubling. Not so much the move to digital books, which I’ve been toying with myself, but the enthusiasm with which Kelly touts the move. I maintain that the move to digital makes sense for other media–music and movies, where the media themselves require no more than speakers and a screen, respectively–but that books are an example of good design. Compact discs and DVDs are not examples of good design. A cassette tape or a video tape is not an example of good design. For music, the iPod is an example of good design, one that is far better than any previous music device. There’s no carrying anything else along (e.g., CDs or cassettes). There’s no flipping of the tape, or rewinding or fast-forwarding to find that perfect track. The music just flows, like words on a page.

We’ve discussed these transitions at length in terms of organizing principles, but what we’re really talking about here, especially in the case of the printed word, is delivery systems. The book, as cumbersome and intractable as Kelly’s attitude sees it, is an example of good design. Books are built to last, their batteries don’t run down, most of them are extremely portable in small numbers, and they exist just fine without screens. This last point is one I’ve been thinking on a lot lately. As much as I do not lament the past inconveniences of flipping over a record or rewinding a cassette tape, I am more and more aware of how the computer has devoured all of our media activities, and part of my anxiety about the leap to bits is the fervor with which we’re putting everything on a screen. I’ve been looking for things that don’t require screens: riding bicycles, skateboarding, walking, face-to-face conversations, and so on. Reading books is still among these activities, but the screen’s threat to that activity troubles me. This cartoon from Reddit user Gordondel illustrates the point:

And this one (source unknown) speaks to the very speed of our increasingly digitized culture, in contrast to the analog methodology of John Naisbitt above:

Again, I do not lament the change in music, especially where discovery is concerned. It’s the best it has ever been for a music fan like myself, and for years I’ve wanted the ability to search my bookshelves with the same ease that I search for music, both new and on my hard drives. I have also discussed this shift on this site ad nauseam, as well as invited my music friends to discuss it here. When it comes to what I do — that is, synthesizing the ideas of others into (hopefully) new insights, like a DJ mixing records (I like to think, in my grander moments) — there is no question that digitizing makes sense. Though, as Alex Burns noted in a recent email to me, citing ebooks has yet to be formalized (i.e., there are no page numbers), tools like DevonThink and Steven Johnson’s Findings work wonders for locating quotations, citations, and connecting tasty morsels among digitized texts. Our libraries are limited by the selection of books that exist in the digital future Kelly is cheerleading; they just aren’t there yet. The printed word still carries its own inherent DRM by dint of resisting digitization in a way that other media do not. Where we easily rip(ped) our CDs and DVDs to hard drives and co-located clouds, no one is rushing through their bookshelves with the same fervor. This changes the power structure of the format shift.

To that point, earlier today, Jay Babcock posted a link to an interview with journalist and Free Ride (Doubleday, 2011) author Robert Levine by Ben Watt, DJ, label head, and musician/songwriter with Everything but the Girl. In light of the SOPA/PIPA crisis, their discussion is germane and deserves a wide readership. Digital-versus-analog discussions inevitably turn to the internet, and furthering the distinction between music and text above, Levine states,

I have a contract with Random House: They gave me an advance that represents a risk to them, since many books don’t sell very well, and they take most of the revenue on each sale to compensate them for that risk. If you pirate my book, I don’t lose all that much money directly, but it definitely affects my ability to get another deal and ultimately — because working on something for two years costs money — write another book. Random House is my partner. Like all partners, authors and publishers have differences of opinion — the former want higher royalties and the latter don’t. But commercial-scale piracy hurts both. As to whether authors and musicians should have publishers or labels, that’s a separate issue.

It’s always more complex than we think. Digitization often undermines our ideas of intellectual property (it should be noted that the file-sharing site MegaUpload was shut down while I wrote this piece). Levine continues, “the fact that barriers to entry have come down is what’s great about the Internet, and the fact that piracy is rampant is what’s wrong with the Internet, and I think we need to separate them.” The question then becomes: How do we move forward in one way without moving backward in another?

That aside, after debating the all-or-nothing, digital divide of books, I purchased my latest e-reader because I wanted the option of ebooks. Let’s face it, a lot of books are cheaper in digital form. I had to debate the divide remembering that some of my favorite movies have yet to become available on DVD, but once we all decide that we’d rather have ebooks than book-books (what I call “The Tyranny of Adoption”), the latter will go the way of the CD, DVD, and LP.

Recently I was contemplating my next ‘zine project, an archaic practice the physicality of which I still find rewarding in both process and product (much like shopping in brick-and-mortar record and book stores), and I was thinking of making it available for e-readers as well. One of the first things that occurred to me was the lack of a two-page spread in that format. In ‘zines, magazines, and books, the fold between signatures, between pages, provides a landscape view of two pages at once. This expanse of visual real estate simply doesn’t exist on an e-ink or tablet screen. Much like the one-sidedness of the MP3, the ebook is all fronts.

Let me stop here and attempt to gather the threads unraveled above:

  • Digitization is not inherently a bad thing.
  • Some media thrive in strictly digital format. Others need more nuanced modes of delivery.
  • (That is, some things do not need to be on screens.)
  • Wanting searchable book content does not mean not wanting books.
  • We decide what works for us.
  • No matter what, we still need to reconcile intellectual property with digitization (IP with IP).

New devices and media formats, whether we’re designing them or adopting them, curate our culture. We have to think cumulatively about these changes and decide what we want. Book culture has served us well, and we might be ready to let go of it in its current form (reactions to yesterday’s Wikipedia blackout in protest of SOPA certainly do not support literary culture as we know it). Let’s just be mindful of the culture we’re creating.

————–

One for Fun: While I was writing this piece, Jason Kottke posted the video below of John Scalzi’s thirteen-year-old daughter Athena seeing an LP record for the first time [runtime: 1:41]. One cannot help imagining the same fate for books:

[YouTube video: ibfx4AFlgH4]

————–

Acknowledgements: To be fair to Kevin Kelly, his original post was about digital publishing, and I agree with his points and enthusiasm for that. Given my ebook anxiety, I couldn’t help but take his massive analog library as an opportunity to discuss the readers’ side of the issue. Thanks are due to Dr. Martha Lauzen, who told me the John Naisbitt story during my master’s degree days studying with her at San Diego State University. Gratitude is also due to Alex Burns, Jay Babcock, Steven Johnson, Jason Kottke, Dave Allen, David Ewald, and Lily Brewer for sharing links, lively discussion, and correspondence.

Headroom for Headlines: News in the Now

It might be un-American to admit it, but I think the funniest thing about The Onion is the headlines. No offense to the rest of that great publication, but I rarely read past the blurb at the top. I’m not alone in this practice. When it comes to an information diet, our news is largely a headline-driven enterprise.

In 2006 Jakob Nielsen found that browsers of online content read pages in an F-shape, concluding that they don’t read your website at all. They scan it. That means that most people who even visited this page have already stopped reading.

Images from Jakob Nielsen’s eye-tracking study.

The irony of using The Onion as an example is that an onion, when used as a metaphor, is a thing of many layers. It is only by peeling away those layers that one arrives at the elusive something obscured by them. I realize that many won’t consider The Onion a viable news source, but as an example, it works in the same way that The Daily Show does. Viewers of that show tend to be among the most-informed of publics, but it’s not because of the show. It’s analogous to the child growing up in a house full of books. A child who grows up with books in the house tends to be smarter, but it’s not because of the books. The books — and by analogy the show — are the third factor in the correlation. Parents who have books in their house tend to be smarter, and smarter parents have smarter children. Daily Show viewers tend to already be more informed before watching the show. I submit that the same can be said of readers of The Onion.

Back to the onion as metaphor: If we only observe the onion’s peel, we miss out on the something inside. So, if we’re only reading headlines, how informed are we? Status updates, Twitter streams, and Google search results only add to the pithy reportage we consume. Part of the problem is economic. Breaking headlines are much cheaper and easier to produce than in-depth follow-up stories (see Burns & Saunders, 2009), but part of it is us: We’ve chosen this form of media.

I’m admittedly not much of a news hound. In spite of my love of magazines, if you’ve read — or scanned — any of this website, you know I tend to read more books than anything else. I’m also not lamenting any sort of “death of print” sentiment or trying to rehash the arguments of Nicholas Carr’s The Shallows. I once called Twitter “all comments, no story,” and I’m just frustrated at finding out about things but never finding out more about them. If “the internet is the largest group of people who care about reading and writing ever assembled in history,” as Clay Shirky once said, then what is it that we are reading?

The Onion and The Daily Show make preaching to the choir an understatement, but if The Long Tail taught us anything, wasn’t it exactly that? Find your audience and serve them (Thank you for reading this far).

References:

Anderson, Chris. (2006). The Long Tail: Why the Future of Business Is Selling Less of More. New York: Hyperion.

Burns, Alex & Saunders, Barry. (2009). Journalists As Investigators and ‘Quality Media’ Reputation. Record of the Communications Policy & Research Forum 2009, 281-297.

Carr, Nicholas. (2010). The Shallows: What the Internet is Doing to Our Brains. New York: W.W. Norton & Co.

Nielsen, Jakob. (2006, April 17). F-Shaped Pattern for Reading Web Content. Alertbox: Current Issues in Web Usability.

Bring the Noise: Systems, Sound, and Silence

In our most tranquil dreams, “peace” is almost always accompanied by “quiet.” Noise annoys. From the slightest rattle or infinitesimal buzz to window-wracking roars and earth-shaking rumbles, we block it, muffle it, or drown it out whenever possible. It is ubiquitous. Try as we might, cacophony is everywhere, and we’re the cause in most cases. Keizer (2010) points out that, besides sleeping (for some of us), reading is ironically the quietest thing we do. “Written words were meant to evoke heard speech,” he writes, “and were considered inadequate until they did so, like tea leaves before the addition of hot water” (p. 21). Reading silently was subversive.

We often speak of noise as the opposite of information. In the canonical model of communication conceived in 1949 by Claude Shannon and Warren Weaver, which I’ve been trying to break away from, noise is anything in the system that disrupts the signal or the message being sent.
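Shannon made that relationship precise. In his framework, noise sets a hard ceiling on how much information a channel can carry, a limit captured by his channel-capacity theorem:

\[
C = B \log_2\!\left(1 + \frac{S}{N}\right)
\]

where \(C\) is the channel capacity in bits per second, \(B\) the bandwidth in hertz, and \(S/N\) the ratio of signal power to noise power. More noise, less message.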

If you’ve ever tried to talk on a cellphone in a parking garage, find a non-country station on the radio in a fly-over state, or follow up on a trending topic on Twitter, then you know what this kind of noise looks like. Thanks to Shannon and Weaver (and their followers; e.g., Friedrich Kittler, among many others), it’s remained a mainstay of communication theory since, privileging machines over humans (see Parikka, 2011). Well before it was a theoretical metonymy, noise was characterized as “destruction, distortion, dirt, pollution, an aggression against the code-structuring messages” (Attali, 1985, p. 27). More literally, Attali conceives noise as pain, power, error, murder, trauma, and youth (among other things) untempered by language. Noise is wild beyond words.

The two definitions of noise discussed above — one referring to unwanted sounds and the other to the opposite of information — are mixed and mangled in Hillel Schwartz’s Making Noise: From Babel to the Big Bang and Beyond (Zone Books, 2011), a book that rebelliously claims to have been written to be read aloud. Yet, he writes, “No mere artefacts of an outmoded oral culture, such oratorical, jurisprudence, pedagogical, managerial, and liturgical acts reflect how people live today, at heart, environed by talk shows, books on tape, televised preaching, cell phones, public address systems, elevator music, and traveling albums on CD, MP3, and iPod” (p. 43). We live not immersed in noise, but saturated by it. As Aden Evens put it, “To hear is to hear difference,” and noise is indecipherable sameness. But one person’s music is another’s noise — and vice versa (Voegelin, 2010) — and age and nostalgia can eventually turn one into the other. In spite of its considerable heft (over 900 pages), Making Noise does not see noise as music’s opposite, nor does it set out for a history of sound, stating that “‘unwanted sound’ resonates across fields, subject everywhere and everywhen to debate, contest, reversal, repetition: to history” (p. 23).

Wherever we are, what we hear is mostly noise. When we ignore it, it disturbs us. When we listen to it, we find it fascinating.
— John Cage

The digital file might be infinitely repeatable, but that doesn’t make it infinite. Chirps in the channel, the remainders of incomplete communiqués surround our signals like so much decimal dust, data exhaust. In Noise Channels: Glitch and Error in Digital Culture (University of Minnesota, 2011), Peter Krapp finds these anomalies the sites of inspiration and innovation. My friend Dave Allen is fond of saying, “There’s nothing new in digital.” To that end, Krapp traces the etymology of the error in machine languages from analog anomalies in general, and the extremes of Lou Reed’s Metal Machine Music (RCA, 1975) and Brian Eno’s Discreet Music (EG, 1975) in particular, up through our current binary blips and bleeps, clicks and clacks — including Christian Marclay’s multiple artistic forays and Cory Arcangel’s digital synesthesia. This book is about both forms of noise as well, paying due attention to the distortion of digital communication.

There is a place between voice and presence where information flows. — Rumi

Another one of my all-time favorite books on sound is David Toop’s Ocean of Sound (Serpent’s Tail, 2001). In his latest, Sinister Resonance: The Mediumship of the Listener (Continuum Books, 2010), he reinstates the human as an inhabitant on the planet of sound. He does this by analyzing the act of listening more than studying sound itself. His history of listening is largely composed of fictional accounts, of myths and make-believe. Sound is a spectre. Our hearing is a haunting. From sounds of nature to psyops (though Metallica’s “Enter Sandman” is “torture-lite” in any context), the medium is the mortal. File Sinister Resonance next to Dave Tompkins’ How to Wreck a Nice Beach (Melville House, 2010) and Steve Goodman’s Sonic Warfare (MIT Press, 2010).

And how can we expect anyone to listen if we are using the same old voice? — Refused, “New Noise”

Life is loud, death is silent. Raise hell to heaven. Make a joyous noise unto all of the above.

———-

My thinking on this topic has greatly benefited from discussions with, and lectures and writings by, my friend and colleague Josh Gunn.

References and Further Resonance:

Attali, J. (1985). Noise: The Political Economy of Music. Minneapolis, MN: University of Minnesota Press.

Evens, A. (2005). Sound Ideas: Music, Machines, and Experience. Minneapolis, MN: University of Minnesota Press.

Goodman, S. (2010). Sonic Warfare. Cambridge, MA: MIT Press.

Hegarty, P. (2008). Noise/Music: A History. New York: Continuum Books.

Keizer, G. (2010). The Unwanted Sound of Everything We Want: A Book About Noise. Philadelphia, PA: Public Affairs.

Krapp, P. (2011). Noise Channels: Glitch and Error in Digital Culture. Minneapolis, MN: University of Minnesota Press.

Parikka, J. (2011). Mapping Noise: Techniques and Tactics of Irregularities, Interception, and Disturbance. In E. Huhtamo & J. Parikka (Eds.), Media Archaeology: Approaches, Applications, and Implications. Berkeley, CA: University of California Press.

Refused. (1998). “New Noise” [performed by Refused]. On The Shape of Punk to Come: A Chimerical Bombination in 12 Bursts (Sound recording). Örebro, Sweden: Burning Heart Records.

Schwartz, H. (2011). Making Noise: From Babel to the Big Bang and Beyond. New York: Zone Books.

Shannon, C.E., & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.

Sterne, J. (2003). The Audible Past: Cultural Origins of Sound Reproduction. Durham, NC: Duke University Press.

Tompkins, D. (2010). How to Wreck a Nice Beach. Brooklyn, NY: Melville House.

Toop, D. (2010). Sinister Resonance: The Mediumship of the Listener. New York: Continuum Books.

Voegelin, S. (2010). Listening to Noise and Silence: Towards a Philosophy of Sound Art. New York: Continuum Books.