Sharing Music: Kick Out the Spam…

I spent my undergraduate years working at record stores. Not surprisingly, the lulls behind the counter were largely spent talking about and sharing music. We’d all bring in our small CD cases, each stocked with a dozen or so discs for the shift. There was a lot of judging and clowning, but even more sharing and putting each other on to new sounds.

When I first got an iPod in 2003, I thought the practice would continue. Around the time that I procured my refurbished player, my friend Chang came out to San Diego on tour with dälek. Before a show one day, he was hanging out with some of his old college friends, one of whom had a new boyfriend. Chang snagged the dude’s iPod from her and judged her new beau on the merits of his mp3s. Maybe this happens more often than I’m aware, but in my experience it’s the rarity. Ironically, our listening experiences tend to be as insular as the devices that facilitate them.

When the Walkman first came out, it was intended for sharing: The first models had two headphone jacks, and I distinctly remember the first one I listened to having both. But when the initial numbers came back and Sony found that no one was sharing the devices, they changed tack. Weheliye (2005) writes that in the ads, “couples riding tandem bicycles and sharing one Walkman were replaced by images of isolated figures ensnared in their private world of sound” (p. 135). And so it has gone, each of us to his or her own.

There is research on the matter, though. Under the term “playlistism,” studies aim to highlight the links between music and identity via the practice of sharing playlists. Assuming that we compile playlists to represent our identities, sharing them should show how we present ourselves through music. Citing Brown, Sellen, & Geelhoed (2001), Valcheva (2009) found that sharing via peer-to-peer networks “confounded the traditional way of possessing and sharing music, and thus instigating a shift, on one hand, towards a citizen/leech styled community where music sharing interaction tends to be anonymized.” We don’t use P2P spaces to share in a traditional sense. In contrast, “[P]laylistism is underpinned by the practice of capturing and contributing one’s ‘music personality’ in the form of playlists that are either published online or shared through portable devices.” As one article put it, “We are what we like” (Gelitz, 2011).

Now that we listen more from the cloud and less as a crowd, the streaming services have adopted a stance of “social integration.” Just as Foursquare automatically sends your location to your social networks when you check in somewhere, Spotify sends along the song you’re listening to. While Spotify doesn’t require that you share your listening, it does require you to have a Facebook account. Some online publications have adopted the practice as well, letting all of your friends know what you’ve been reading. The trend is troubling. Social integration is the opposite of sharing. Sharing implies intention, and if your playlists are being broadcast without your curation, well, then they’re just spam in the streams of those who follow or friend you. It’s analogous to signing friends up for newsletters they might not want or adding their numbers to telemarketers’ call lists. There is nothing social about it.

I believe sharing music is a powerful practice. I wouldn’t know about most of the bands I listen to or have ever listened to if it weren’t for the friends who shared them with me. Sharing via automation does not make things social. Real sharing requires attention and intention. No algorithm can replicate that.

References:

Brown, B., Sellen, A., & Geelhoed, E. (2001). Music sharing as a computer supported collaborative application. In Proceedings of ECSCW 2001. Bonn, Germany: Kluwer Academic Publishers.

Gelitz, C. (2011, March/April). You are what you like. Scientific American Mind.

Valcheva, M. (2009). Playlistism: A means of identity expression and self-representation. Report on research conducted within the “Mediatized Stories” project at the University of Oslo.

Weheliye, A. G. (2005). Phonographies: Grooves in Sonic Afro-Modernity. Durham, NC: Duke University Press.

Headroom for Headlines: News in the Now

It might be un-American to admit it, but I think the funniest thing about The Onion is the headlines. No offense to the rest of that great publication, but I rarely read past the blurb at the top. I’m not alone in this practice. When it comes to an information diet, our news is largely a headline-driven enterprise.

In 2006, Jakob Nielsen found that browsers of online content read pages in an F-shape, concluding that they don’t read your website at all: They scan it. That means that most people who even visited this page have already stopped reading.

Images from Jakob Nielsen’s eye-tracking study.

The irony of using The Onion as an example is that an onion, when used as a metaphor, is a thing of many layers. It is only by peeling away those layers that one arrives at the elusive something obscured by them. I realize that many won’t consider The Onion a viable news source, but as an example, it works in the same way that The Daily Show does. Viewers of that show tend to be among the most informed of publics, but it’s not because of the show. It’s analogous to the child growing up in a house full of books. A child who grows up with books in the house tends to be smarter, but it’s not because of the books. The books, and by analogy the show, are markers of a third factor in the correlation: Parents who keep books in their house tend to be smarter, and smarter parents tend to have smarter children. Likewise, Daily Show viewers tend to already be more informed before they ever watch the show. I submit that the same can be said of readers of The Onion.

Back to the onion as metaphor: If we only observe the onion’s peel, we miss out on the something inside. So, if we’re only reading headlines, how informed are we? Status updates, Twitter streams, and Google search results only add to the pithy reportage we consume. Part of the problem is economic. Breaking headlines are much cheaper and easier to produce than in-depth follow-up stories (see Burns & Saunders, 2009), but part of it is us: We’ve chosen this form of media.

I’m admittedly not much of a news hound. In spite of my love of magazines, if you’ve read (or scanned) any of this website, you know I tend to read more books than anything else. I’m also not lamenting any sort of “death of print” or trying to rehash the arguments of Nicholas Carr’s The Shallows. But I once called Twitter “all comments, no story,” and I’m frustrated at finding out about things yet never finding out more about them. If “the internet is the largest group of people who care about reading and writing ever assembled in history,” as Clay Shirky once said, then what is it that we are reading?

The Onion and The Daily Show make preaching to the choir an understatement, but if The Long Tail taught us anything, wasn’t that it? Find your audience and serve them. (Thank you for reading this far.)

References:

Anderson, Chris. (2006). The Long Tail: Why the Future of Business Is Selling Less of More. New York: Hyperion.

Burns, Alex & Saunders, Barry. (2009). Journalists As Investigators and ‘Quality Media’ Reputation. Record of the Communications Policy & Research Forum 2009, 281-297.

Carr, Nicholas. (2010). The Shallows: What the Internet Is Doing to Our Brains. New York: W.W. Norton & Co.

Nielsen, Jakob. (2006, April 17). F-Shaped Pattern for Reading Web Content. Alertbox: Current Issues in Web Usability.

Bring the Noise: Systems, Sound, and Silence

In our most tranquil dreams, “peace” is almost always accompanied by “quiet.” Noise annoys. From the slightest rattle or infinitesimal buzz to window-wracking roars and earth-shaking rumbles, we block it, muffle it, or drown it out whenever possible. It is ubiquitous. Try as we might, cacophony is everywhere, and we’re the cause in most cases. Keizer (2010) points out that, besides sleeping (for some of us), reading is ironically the quietest thing we do. “Written words were meant to evoke heard speech,” he writes, “and were considered inadequate until they did so, like tea leaves before the addition of hot water” (p. 21). Reading silently was subversive.

We often speak of noise as the opposite of information. In the canonical model of communication conceived in 1949 by Claude Shannon and Warren Weaver, which I’ve been trying to break away from, noise is anything in the system that disrupts the signal or the message being sent.
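To make the engineering sense of the term concrete, here is a minimal toy sketch in Python (my own illustration, not anything drawn from the sources cited here) of a channel that disrupts a message in transit:

    import random

    def noisy_channel(bits, flip_prob=0.1):
        """A toy Shannon-Weaver channel: each bit of the message is
        corrupted (flipped) in transit with probability flip_prob."""
        return [bit ^ 1 if random.random() < flip_prob else bit for bit in bits]

    message = [random.randint(0, 1) for _ in range(1000)]  # the sender's signal
    received = noisy_channel(message)                      # what the receiver gets
    errors = sum(m != r for m, r in zip(message, received))
    print(f"{errors} of {len(message)} bits disrupted between sender and receiver")

Crank flip_prob up toward 0.5 and what arrives is indistinguishable from chance; that loss is precisely what Shannon’s framework was built to quantify.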

If you’ve ever tried to talk on a cellphone in a parking garage, find a non-country station on the radio in a fly-over state, or follow up on a trending topic on Twitter, then you know what this kind of noise looks like. Thanks to Shannon and Weaver (and their followers; e.g., Friedrich Kittler, among many others), it has remained a mainstay of communication theory ever since, privileging machines over humans (see Parikka, 2011). Well before it was a theoretical metonymy, noise was characterized as “destruction, distortion, dirt, pollution, an aggression against the code-structuring messages” (Attali, 1985, p. 27). More literally, Attali conceives of noise as pain, power, error, murder, trauma, and youth (among other things) untempered by language. Noise is wild beyond words.

The two definitions of noise discussed above, one referring to unwanted sounds and the other to the opposite of information, are mixed and mangled in Hillel Schwartz’s Making Noise: From Babel to the Big Bang and Beyond (Zone Books, 2011), a book that rebelliously claims to have been written to be read aloud. Yet, he writes, “No mere artefacts of an outmoded oral culture, such oratorical, jurisprudence, pedagogical, managerial, and liturgical acts reflect how people live today, at heart, environed by talk shows, books on tape, televised preaching, cell phones, public address systems, elevator music, and traveling albums on CD, MP3, and iPod” (p. 43). We live not immersed in noise, but saturated by it. As Aden Evens put it, “To hear is to hear difference,” and noise is indecipherable sameness. But one person’s music is another’s noise, and vice versa (Voegelin, 2010), and age and nostalgia can eventually turn one into the other. In spite of its considerable heft (over 900 pages), Making Noise does not cast noise as music’s opposite, nor does it set out to be a history of sound, stating instead that “‘unwanted sound’ resonates across fields, subject everywhere and everywhen to debate, contest, reversal, repetition: to history” (p. 23).

Wherever we are, what we hear is mostly noise. When we ignore it, it disturbs us. When we listen to it, we find it fascinating.
— John Cage

The digital file might be infinitely repeatable, but that doesn’t make it infinite. Chirps in the channel, the remainders of incomplete communiqués, surround our signals like so much decimal dust, data exhaust. In Noise Channels: Glitch and Error in Digital Culture (University of Minnesota Press, 2011), Peter Krapp finds these anomalies the sites of inspiration and innovation. My friend Dave Allen is fond of saying, “There’s nothing new in digital.” To that end, Krapp traces the etymology of the error in machine languages from analog anomalies in general, and the extremes of Lou Reed’s Metal Machine Music (RCA, 1975) and Brian Eno’s Discreet Music (EG, 1975) in particular, up through our current binary blips and bleeps, clicks and clacks, including Christian Marclay’s multiple artistic forays and Cory Arcangel’s digital synesthesia. This book is about both forms of noise as well, paying due attention to the distortion of digital communication.

There is a place between voice and presence where information flows. — Rumi

Another of my all-time favorite books on sound is David Toop’s Ocean of Sound (Serpent’s Tail, 2001). In his latest, Sinister Resonance: The Mediumship of the Listener (Continuum Books, 2010), he reinstates the human as an inhabitant on the planet of sound. He does this by analyzing the act of listening more than studying sound itself. His history of listening is composed largely of fictional accounts, of myths and make-believe. Sound is a spectre. Our hearing is a haunting. From sounds of nature to psyops (though Metallica’s “Enter Sandman” is “torture-lite” in any context), the medium is the mortal. File Sinister Resonance next to Dave Tompkins’ How to Wreck a Nice Beach (Melville House, 2010) and Steve Goodman’s Sonic Warfare (MIT Press, 2010).

And how can we expect anyone to listen if we are using the same old voice? — Refused, “New Noise”

Life is loud, death is silent. Raise hell to heaven. Make a joyous noise unto all of the above.

———-

My thinking on this topic has benefited greatly from discussions with, and lectures and writings by, my friend and colleague Josh Gunn.

References and Further Resonance:

Attali, J. (1985). Noise: The Political Economy of Music. Minneapolis, MN: University of Minnesota Press.

Evens, A. (2005). Sound Ideas: Music, Machines, and Experience. Minneapolis, MN: University of Minnesota Press.

Goodman, S. (2010). Sonic Warfare. Cambridge, MA: MIT Press.

Hegarty, P. (2008). Noise/Music: A History. New York: Continuum Books.

Keizer, G. (2010). The Unwanted Sound of Everything We Want: A Book About Noise. Philadelphia, PA: Public Affairs.

Krapp, P. (2011). Noise Channels: Glitch and Error in Digital Culture. Minneapolis, MN: University of Minnesota Press.

Parikka, J. (2011). Mapping Noise: Techniques and Tactics of Irregularities, Interception, and Disturbance. In E. Huhtamo & J. Parikka (Eds.), Media Archaeology: Approaches, Applications, and Implications. Berkeley, CA: University of California Press.

Refused. (1998). “New Noise” [performed by Refused]. On The Shape of Punk to Come: A Chimerical Bombination in 12 Bursts (Sound recording). Örebro, Sweden: Burning Heart Records.

Schwartz, H. (2011). Making Noise: From Babel to the Big Bang and Beyond. New York: Zone Books.

Shannon, C.E., & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.

Sterne, J. (2003). The Audible Past: Cultural Origins of Sound Reproduction. Durham, NC: Duke University Press.

Tompkins, D. (2010). How to Wreck a Nice Beach. Brooklyn, NY: Melville House.

Toop, D. (2010). Sinister Resonance: The Mediumship of the Listener. New York: Continuum Books.

Voegelin, S. (2010). Listening to Noise and Silence: Towards a Philosophy of Sound Art. New York: Continuum Books.

remixthebook: Guest Post and Tweeting

In 1997, I wrote a piece about turntablism for Born Magazine called “Band of the Hand.” Years later, I wrote a related piece for Milemarker’s now-defunct Media Reader magazine, called “war@33.3: The Postmodern Turn in the Commodification of Music.” I’ve been revisiting, remixing, and revising these two pieces ever since. I eventually combined them and posted them here, but I’ve also written other things that spin off from their shared trajectories.

This week, I am proud to be guest-tweeting for Mark Amerika’s remixthebook (University of Minnesota Press, 2011). In addition, I posted a piece on the remixthebook site. remixthebook and its attendant activities situate the mash-up as a defining cultural activity in the digital age. With that in mind, I tried to go back to the writings above and update them using pieces of relevant things I’ve written since. If you will, my post is a metamix of thoughts and things I’ve written about remix over the past decade and a half, pieces which also represent material from my other book-in-progress, Hip-hop Theory: The Blueprint to 21st Century Culture. It’s a sample-heavy essay that aims to illustrate the point.

Here are a few excerpts:

Culture as meaning-making requires participation. In addition to the communication processes of encoding and decoding, we now participate in recoding culture. Using allusions in our conversation, writing, and other practices engages us in culture creation as well as consumption. The sampling and remixing practices of Hip-hop exemplify this idea more explicitly than any other activity. Chambers wrote, “In readily accessed electronic archives, in the magnetic memory banks of records, films, tapes and videos, different cultures can be revisited, re-vived, re-cycled, re-presented” (p. 193). Current culture is a mix of media and speech, alluded to, appropriated from, and mixed with archival artifacts and acts.

We use numerous allusions to pop culture texts in everyday discourse, what Roth-Gordon calls “conversational sampling.” Allusions, even as direct samples or quotations, create new meanings. Each form is a variation of the one that came before. Lidchi wrote, “Viewing objects as palimpsests of meaning allows one to incorporate a rich and complex social history into the contemporary analysis of the object.” It is through use that we come to know them. Technology is not likely to slow its expanse into every aspect of our lives and culture, and with it, the reconfiguration of cultural artifacts is not likely to ebb. Allusions, in the many forms discussed above and many more yet to come, are going to become a larger and larger part of our cultural vocabulary. Seeing them as such is the first step in understanding where we are headed.

Rasmussen wrote, “there is no ‘correct’ way to categorise [sic] the increasing diversity of communication modes inscribed by the media technologies. Categories depend on the nature of the cultural phenomena one wants to investigate.” Quotation, appropriation, reference, and remix comprise twenty-first-century culture. From our technology and media to our clothes and conversations, ours is now a culture of allusion. As Schwartz so poetically put it: “Whatever artists do, they are held in the loose but loving embrace of artists past.” Would that it were so.

The whole post is here.

Many thanks to Mark Amerika and Kerry Doran for the opportunity and to everyone else for joining in on the fun. Here’s the trailer for the project [runtime: 1:21]:

https://www.youtube.com/watch?v=iXnBVn_OS90

For the Nerds: Bricks, Blocks, Bots, and Books

I used to solve the Rubik’s Cube — competitively. I never thought much of it until I, for some unknown reason, was recently compelled to tell a girl that story. I now know how nerdy it sounds. The girl and I no longer speak.

Erno Rubik among his Cubes.
Some of the things I grew up doing, I knew were nerdy (e.g., Dungeons & Dragons, LEGOs, computers, etc.). Others were just normal. Looking back on them or still being into them, one sees just how nerdy things can be. In a recent column on his SYFFAL site, my man Tim Baker serves the nerds some venom. Nailing several key aspects of the issue, Baker writes,

Thanks to the proliferation of information on the internet anyone can be an expert in anything, well a self-presumed expert. The problem is that people are choosing to become experts in things that might carry a certain cultural currency in fringe groupings but have no real world value. Comic books and niche music scenes are great, and add to the spice of life but no matter how often the purveyors of such scenes repeat the mantra, they are by no means important. They are entertaining and enjoyable but fail to register on Maslow’s hierarchy of needs. So while cottage industries have popped up allowing those who are verbose enough to make a case that Led Zeppelin is essential to who we are, it does not change the fact that these experts are dabbling in the shallow end of the pool.

Now, if you know me, you know that I’m the last person to be promoting anything resembling growing up, but I will agree that since the widespread adoption of the web, nerd culture often gets completely out of hand. It’s also treated as a choice you can make, but as every true nerd knows, we’re born, not made. As my friend Reggie Hancock puts it, citing the most recent nerd icon to end all nerd icons, Tina Fey:

Tina Fey is, unabashedly, a nerd. It’s not a badge of honor she wears, but a stink of reality. She’s not a nerd because she likes Star Wars and did an independent study of comedy in junior high school, Tina Fey likes Star Wars and did an independent study because she’s a nerd. It’s not a persona she assumes, she didn’t live with a dumb haircut for years on purpose, but because Tina Fey was born a nerd, lives as a nerd, and will die a nerd.

To the cheers and glee of nerdkind everywhere, John Baichtal and Joe Meno have edited a collection of ephemera regarding every adult’s favorite plastic blocks. The Cult of LEGO (No Starch Press, 2011) covers the blocks’ history, how-to, and hi-tech.

Nerd touchstones like comics, movies, LEGO-inspired video games (including Star Wars, of course), Babbage’s Difference Engine, and Turing machines are covered inside, as well as the LEGO font, image-to-brick conversions, home brick-printing, Douglas Coupland, brick artists, record-setting builds, and robots — Mindstorms, LEGO’s programmable robot line, by far the most sophisticated of the LEGO enclaves. Here’s the book trailer [runtime: 1:43]:

https://www.youtube.com/watch?v=CByAKmKC4zQ

If you want to build stuff with more than just plastic bricks, O’Reilly’s magazine, Make: Technology on Your Time, is the grown-up nerd’s bible. Volume 28 (October 2011) is all about toys and games. There’s a pumpkin catapult, a kinda-creepy, semi-self-aware stuffed bear, a silly copper steamboat, a giant bubble blower… It’s all here — and much more. Check the video below [runtime: 2:18].

So, whether you know someone who dweebs over Arduinos, has fits over RFIDs, or just loves to build stuff, Make is the magazine. It gets no nerdier. Also, check out the Maker Shed (nerd tools and supplies galore) and Maker’s Notebooks (my favorite thing from this camp).

https://www.youtube.com/watch?v=eU4GuSx3Z4Y

Oh, and if you can’t solve the Cube, there’s a LEGO Mindstorms Rubik’s Cube solver on page 245 of The Cult of LEGO. The machine takes an average of six minutes. For the record, my fastest time was 52 seconds.

Get on it, nerds.

David Preston’s Literature & Composition Class Talk

On November 2nd, I was invited to talk to Dr. David Preston’s Literature and Composition class via Blackboard Collaborate and Howard Rheingold’s Rheingold University. Here’s a screen capture of that talk [Warning: It’s long. Runtime: 1:02:21]. Topics include a few of my projects, the web, Advent Horizons, collaborative learning, and technology in the classroom and in the lives of the youth.

Many thanks to Ted Newcomb and Howard Rheingold for hooking this up, to David Preston and his students for their time, attention, and participation, and to Linda Burns for saving the video. This was a great opportunity and a humbling and inspiring experience.

Follow for Now is Now Available at BookPeople

Yep, nearly five years after its release, Follow for Now is now available at BookPeople in Austin, Texas. As you can see in the photo below, it’s in the General Science section, and I am quite proud.

It’s also in Cyberculture & History and, right now, in the New Arrivals.

So, if you’re in Austin and don’t have a copy, stop by and get yours.

Many thanks to Michael McCarthy and everyone at BookPeople for their support. And to you for yours.

Touching Screens: Digital Natives and Their Digits

Since I attempted to brand and explicate the Advent Horizon idea, the following clip has been circulating online. “The new generation is growing up with more digital than print media,” declares The Huffington Post. “They play with their parents’ smartphones, tablets, laptops. We guess it’s only natural that they examine items that don’t respond to touch — and then move on to the things that do.” Danny Hillis once said that technology is the name we give to things that don’t work yet. I think this baby would disagree with that statement wholesale [runtime: 1:26]:

https://www.youtube.com/watch?v=aXV-yaFmQNk

Though I find the sentiment that Steve Jobs “coded a part of her OS” a bit much, this clip reminds me of a story by Jaron Lanier from the January 1998 issue of Wired about children being smarter and expecting more from technology. Lanier wrote, “My favorite anecdote concerns a three-year-old girl who complained that the TV was broken because all she could do was change channels.” Clay Shirky tells a similar story in Cognitive Surplus (Penguin, 2010). His version involves a four-year-old girl digging in the cables behind a TV, “looking for the mouse.”

Without mutual engagement and accountability across generations, new identities can be both erratically inventive and historically ineffective. — Etienne Wenger

These are all early examples of a new Advent Horizon being crossed. The touchscreen, the latest ubiquitous haptic device, is here to stay. To those who are growing up with it, everything else seems “broken” — much like a TV “that only changes channels” to a native computer user. We become what we behold.

Why am I always looking at life through a window?
— Charlie Gordon in Flowers for Algernon by Daniel Keyes

The screen is already the most seductive of technologies. Think about how much time you spend staring at one screen or another. Iain Chambers (1994) writes, “In the uncanny property of the computer to present a ‘world picture’ we confront the boundary set by the screen, the tinted glass that lies between the apparently concrete world and the simulated one of ethereal lights” (p. 64). We want to get in there so bad. Think of the persistent dream of entering the screen and the machine: Neuromancer, TRON, Snow Crash, Lawnmower Man, Videodrome, and even Inception, among many, many others. It has a mythology all its own.

To that end, we’ve gone from wearing the goggles and gloves of most virtual reality systems to using our bodies as input devices via the sensors of Wii and Kinect, bringing the machine into the room. Where our machines’ portability used to be determined by the size of the technology available, the size of our devices is now dictated by the size of our appendages. We can make cellphones and laptops smaller, but then we wouldn’t be able to hold them or press their buttons individually, a limitation that the touchscreen is admittedly working around gracefully. Still, we have to design at human scale. These are the thresholds of our being with our technology.

The Machine is not the environment for the person; the person is the environment for the machine. – Aviv Bergman

The long-range question is not so much what sort of environment we want, but what sort of people we want. – Robert Sommer

We have to think carefully and cumulatively about what we design. Technology curates culture. Technology is a part of our nature. How will we control it? The same way we do our lawns or our weight: Sometimes we will; sometimes we won’t, but we have to remember that we’re not designing machines. We’re designing ourselves.

References:

Chambers, I. (1994). Migrancy, Culture, Identity. New York: Routledge.

Christopher, R. (2007). Brenda Laurel: Utopian Entrepreneur. In R. Christopher (Ed.), Follow for Now: Interviews with Friends and Heroes. Seattle, WA: Well-Red Bear.

Keyes, D. (1966). Flowers for Algernon. New York: Harcourt.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.

Shirky, C. (2010). Cognitive Surplus: How Technology Makes Consumers into Collaborators. New York: Penguin.

Sommer, R. (2007). Personal Space: The Behavioral Basis of Design. Bristol, England, UK: Bosko Books.

Wenger, E. (1998). Communities of Practice: Learning, Meaning, and Identity. New York: Cambridge University Press.

————-

And I say peace to Friedrich Kittler (1943-2011).

Not Great Men: The Human Microphone Effect

The passing of Steve Jobs has sent millions of people into reflection and reverie, and it raises questions about the possibility of repeating his vision and success. “Will there ever be another Steve Jobs?” asks one publication. While another contrarily claims that he “was not god,” still others iconize him, call him a tech-messiah, and lament his passing with something just short of worship. As agnostic as I’ve been computer-wise, I’ve always been a fan of the man, but does the death of Steve Jobs mark the end of a human era, the end of the singular genius, the lone visionary, the thought leader? In some ways, I am compelled to answer affirmatively, but to give Jobs all the credit is to do him and others like him a disservice. As Bonnie Stewart put it, “I fully agree that Steve Jobs left us a legacy. But it is not to be him.” We are the reason he was the last of his kind.

The connectivity of the web has all but killed the archetype of the singular visionary leader. Online, we connect to share with each other, not to listen to a single voice. It’s not necessarily the death of the grand narrative and the birth of postmodernism; it’s more the onset of postMODEMism. Ever since we started modulating and demodulating our ideas, information, and identities, our heroes have been in harm’s way. The web is more about processes and projects than products. The web is inherently a collaborative space. Authorship does not equal ownership. We’re in this together.

In spite of recent reports, the creative class is very real, and, as Scott Smith pointed out, is the larger part of the masses currently occupying Wall Street. The creative class is still here, but like the creative genius, no one owes us a living. We have to make our own way, and we will.

Unlike others, I don’t think the Big Idea is dead either. I think our collaborative, networked thinking makes it more difficult to see the collaborative origins of the singular innovation. If ideas are networks, then big ideas are big networks. Even Jobs brought to market previously existing, networked ideas: “He saw what technologies were on the verge of being possible — and what technologies consumers were ready to accept,” Josh Bernoff wrote when Jobs stepped down as Apple CEO in August. “There could have been no iPhone without the habits created by iPods and Blackberry, no Mac without Apple and IBM PCs embraced by those who came before… Apple doesn’t make flash memory, microprocessors, touchscreens, or, for the most part, websites. It just puts them all together.” Toward the end of this 1996 interview with Steve Jobs on Wall Street Week with Louis Rukeyser [runtime: 4:32], Jobs talks about the sheer openness of the internet and how no single company can ever contain it [the internet bit starts around 3:15]. “We’re going to see innovation contain it,” he says.

https://www.youtube.com/watch?v=SaJp66ArJVI

No weak men in the books at home
The strong men who have made the world
History lives in the books at home
The books at home

It’s not made by great men

The past lives on in your front room
The poor still weak the rich still rule
History lives in the books at home
The books at home

It’s not made by great men
— Gang of Four, “Not Great Men”

It’s downright eerie watching these ideas collide in realtime on the choppy live-feed of Slavoj Žižek addressing the protestors of Occupy Wall Street today, as they respond in unison: “You don’t need a genius to be your leader.” This call-and-response is called “The Human Microphone,” and it is used due to restrictions on amplified sound in the public space of New York City. In an ironic mix of collaborative leadership, collective allegiance, communication technology, and the lack thereof, The Human Microphone is the perfect metaphor for the death of the hero. There is no “one for all” anymore. History’s not made by great men. As Bonnie Stewart concludes, “So maybe in this new world order, we should stop touting those who are ‘crazy enough to be geniuses’ — which is a romantic notion, even if it is sometimes true, like with Jobs — and reward those who are best able to share and innovate in teams.”

The good news for all is that collaboration makes each of us bigger. Find the folks that empower you to do more, to be more, and avoid the ones who don’t. As the Hopi once put it, “We are the ones we’ve been waiting for.”

————

Here’s a clip of an odd yet amazing cover of Gang of Four’s “Not Great Men” by an appropriately all-female Japanese percussion group [runtime: 4:09]:

https://www.youtube.com/watch?v=K19jPwpP5XY

————

Many thanks to my friend Dave Allen for sharing links and the Japanese Gang of Four cover clip, to Mike Schandorf for sharing the Žižek live-feed, and to my friend and collaborating champion Heather Gold for sharing the Steve Jobs clip. Onward together.

Drawing Lines in Time: The Advent Horizon

Significant advances in technology are disruptive. They are beginnings. They are bifurcations. They are the initial conditions from which our media is born. As Jean Cocteau once put it, “The public does not like dangerous profundities; it prefers surfaces” (1972, p. 316). Feared and disparaged at first, technological contrivances are eventually welcomed in and change our world. They literally change our minds. They change our relationship with our world and with each other. Not unlike learning new words, every new advance is a new addition to our media lexicon. Our media vocabulary includes those technologies with which we feel facile or familiar. Cocteau continues, “As a matter of fact, the public likes to ‘recognize’ the familiar. It hates to be disturbed. It is shocked by surprises” (p. 315), and no one states the matter more clearly than Barry Brummett:

Every new technology is feared, is compared unfavorably to the one before, and is misunderstood, especially in the early years of its inception. We simply have fewer anxieties about computers, for instance, now than we did during their introduction into the global market and culture (p. 172).

One of the ideas in my talk “Disconnecting the Dots: How Our Devices are Divisive,” as well as in my book-in-progress The Medium Picture, is the line we draw at the edge of our comfort zone with new technologies. It’s a line we draw as individuals as well as a society at large. I call it the Advent Horizon. I was pushed to explain it further by David Burn:

@davidburn Two key phrases from #Geekend presentations this week: Advent Horizon and Interchange Zero c/o @RoyChristopher and @sethpriebatsch #brainy

We feel a sense of loss when we cross one of these lines. From the Socratic shift from speaking to writing (see Wolf, 2007), to the transition from writing to typing, we’re comfortable — differently on an individual and collective level — in one of these phases. As we adopt and assimilate new devices, our horizon of comfort drifts further out while our media vocabulary increases. Any attempt to return to a so-called “Natural State” is a futile attempt to get back across the line we’ve drawn for ourselves.

Evidence that we’ve crossed one of these lines isn’t difficult to find. Think about the resurgence of vinyl record sales, or the way we teach computer animation. The former is an analog totem from a previous era; the latter is analog scaffolding for the digital world (what Bob Greenberg calls “analog drudgery”). Fans of vinyl records are either clinging to their youth or celebrating the only true music format that ever mattered. A vinyl record is a true document of a slice of time.

I visited Full Sail University in Orlando, Florida, last summer. In their animation and game design programs, students take illustration (with pencils and paper), flipbook-style animation (with paper and lightboxes), and 3D modeling (real-world 3D; sculpture with clay and other materials) before they ever sit down at a computer. Clinging to a previous era and having to back up to learn something new: These are evidence that an Advent Horizon has been crossed.

Each generation is born during a certain technological era, between these lines we draw. We are imprinted by the media technology with which we grow up. For instance, there has always been a television in my world. When I was born, it was there. In contrast, my parents remember when the first TV arrived in their house. William Gibson tells the story:

The only memory I have of a world prior to media is of standing in a peanut field on a farm in Tennessee, looking down the hill at a black, 1950s, sort of, late ’40s panel truck, driving along the road.

One of the next earliest memories is of my father bringing home this wooden, box-like thing, with a cloth grille on the front, and a little round, circular television screen, which, I believe, we had for some time prior to there actually being any broadcast to receive.

And then there was a test pattern. I think the test pattern preceded any actual broadcast for several weeks, and the test pattern itself was only available briefly, at scheduled times. And people… neighbors, would come, and they would look at this static, non-moving pattern on the screen that… promised something.

And then television came.

As Alan Kay once said, “Technology is anything that was invented after you were born” (quoted in Kelly, 2010, p. 235). I have never known a world without television, and my students have never known — or don’t remember — a world without computers, the web, or cellular phones. Perhaps they will cross a line of comfort when implants become the norm for their children, but the world before wireless connectivity means nothing to them.

————
Here’s the relevant clip from my talk in Boston, thanks to David Burn [runtime: 1:37]:

https://www.youtube.com/watch?v=JZuKwpU8PtQ

By the way, the “L” and “B” story at the beginning of this clip was a secret message to my girlfriend, who became my fiancée on this trip to Boston. Here’s to connecting our dots, Lily Brewer.

————

Many thanks to Sloane Kelley, Jake and Miriam Hodesh, and the rest of my Geekend family, as well as David Burn for the push on this idea.

References:

Brummett, Barry. (2008). A Rhetoric of Style. Carbondale, IL: Southern Illinois University Press.

Cocteau, Jean. (1972). Cocteau’s World: An Anthology of Writings by Jean Cocteau. Margaret Crosland (Ed.). New York: Dodd, Mead & Company.

Kelly, Kevin. (2010). What Technology Wants. New York: Penguin.

Neale, Mark. (Director). (2000). William Gibson: No Maps for These Territories [Motion picture]. London: Docurama.

Wolf, Maryanne. (2007). Proust and the Squid: The Story and Science of the Reading Brain. New York: Harper.