David Preston: Hacking High School

After a decade of teaching at the university level, David Preston decided to stop ignoring the ills we all know haunt those halls and dropped back to high school. He’s now trying to reform a place that desperately needs it. I got the chance to participate in a discussion with his literature and composition classes, thanks to David, Ted Newcomb, and Howard Rheingold, all of whom are hacking education in various ways. I can tell you with no reservations that David is making the difference. I want to keep this introduction as brief as possible and just let him tell you about it. Some men just want to watch the world learn.

Roy Christopher: What drove you from the hallowed hells of academia to teaching high school?

David Preston: (Hang on, let me hop up on my soapbox) Every generation thinks school can’t get any worse but somehow we manage. When I was a kid I hated school but loved learning (and still do), so when I graduated I thought I could liberate the other inmates by learning about the institution and how to fix it. After college I wrote about schools as a journalist and then I went back for a master’s and a Ph.D. in education. But in grad school I discovered the politics, how difficult it is to ask pressing questions without incurring the wrath of well-funded powers-that-be. Eventually I figured there wasn’t enough lipstick for this institutional pig and found my way into management consulting, where I worked with executives and organizations on learning and planning. Even though I was making good money and keeping my hand in by teaching courses at UCLA, the idea of school nagged at me because I could see the trend worsening. Really smart, highly-motivated students and executives told me how completely unprepared they were for life after graduation—and these were the successful people! Today’s students have it even worse. They don’t learn about their own minds, they don’t learn about how they fit in the larger scheme of things, they don’t learn how to use the tools available to them, and they don’t learn the basics of how to manage their bodies or their money. Forget the achievement gap and the union versus reform sideshow—even the best prep school curricula are designed for a world that no longer exists (if it ever did). Once upon a time the American high school diploma signified that a person had the tools to be self-sufficient; now it’s like one of those red deli counter tickets that tells you to line up at the recruiter’s office or financial aid. And the worst part is, today’s students know all this because technology allows them to see the world for themselves. They don’t have to be told that school is an irrelevant exercise in obedience.

I’ve been critical of school since watching my first grade teacher pull kids’ hair for getting math problems wrong, but after 9/11 I thought about the issue differently. I reflected on how our thinking influences the world we’re living in and the future we’re creating for ourselves. Whatever big-picture issue you care about—the environment, the economy, human rights, politics—is defined by how people think and communicate about it. And the institution ostensibly in charge of helping people learn to think and communicate is fucked. So, when a friend of mine suggested in 2004 that I take a “domestic Peace Corps” sabbatical and offered me an opportunity to teach high school courses, I turned him down immediately. But over the next couple of weeks I realized that you never hear anything about education policy from inside the classroom, and I’d get to be an embedded anthropologist. Boots on the ground. I wanted to find out what today’s students are actually like (they’re not the Digital Natives you read about!) and what actually goes on in school on the days they don’t give tours. I may have been fantasizing about Hunter S. Thompson riding with Hell’s Angels or Jane Goodall hanging with chimps when I said yes to going back inside the belly of the beast.

I taught at the country’s fourth-largest high school in LA. It had a year-round calendar with three tracks to accommodate five thousand students, most of whom didn’t carry books because they didn’t want to get jumped on the way home. But this one student, Zolzaya Damdinsuren, came into my class during a sweaty summer school afternoon and made me an offer I couldn’t refuse. This is a whole other story, but the bottom line is that I spent a month in western China, Tibet, and Mongolia with Zolzaya and his family, and the experience changed me. By the time I returned I had decided not to return to my consulting practice. Instead I resolved to create learning solutions that would help people whether they were in school or not. I moved to California’s central coast and I’ve been hacking education ever since.

RC: Tell me about your current education project, the one you’ve been piloting for a while now.

DP: I’m helping students build a massively multiplayer online learning network. I started with the students in my high school classes. Initially, 100 students created 100 blogs and learned about online security, privacy, filter bubbles, search, online business models, and how to use social media to curate and broadcast information. We reached out to authors, we conducted a flash mob research project that created a mindmap out of a William Gibson interview in 24 hours, and we held video conferences with illustrious celebrities such as yourself. That was fall semester. Now we’re reaching out to recruit a study group of 20,000-50,000 people to prepare for the AP English Literature & Composition exam using both synchronous and asynchronous platforms. This is proof-of-concept: the ultimate goal is to create an online exchange that offers the resources and tools people need to acquire information, demonstrate mastery, and build a portfolio of work. In five years I want to see a teacher make a million dollars, not because of some collective bargaining agreement, but because she’s that good. Maybe she’s an author, maybe she’s a mechanic. I want to create a model of community in which learning is an economic driver. I think the outcome will be a competitive market of entrepreneurs, job candidates and creatives who aren’t just eager to tell you what they can do, but eager to show you what they’ve already done.

RC: What insights have you found doing this work?

DP: Until about two years ago I was focusing on interdisciplinary curriculum and information-referenced assessment models as ways to extend what I could offer students. But basically these were just ways of remixing the standard curriculum and providing more formative feedback to learners. Even my use of social media was essentially limited to conserving paper, helping absentees, and trying to make the same old lessons seem more engaging or entertaining.

You see that sort of thing all over the Web. Blended learning, virtual schooling, online lessons, LMS, SIS—some of the ideas and applications are really cool, but it’s all essentially Skinner’s Box 2.0. It’s what happens when anything good gets sucked into the school policy meat grinder. Apple in the world = Think different. Apple in school = Electronic textbooks. Peter Drucker said the worst thing management can do is the wrong thing more efficiently. Standardizing and streamlining is great if you’re starting with something of quality, but otherwise incremental change makes the problem worse because it reinforces the idea that change is impossible. You can’t lose twenty pounds by eating one less Twinkie a day. You have to radically, fearlessly redesign from purposeful scratch. That’s how evolutionary adaptation works: one day there’s no fin, then the water rises and—Whoa!—everybody who’s still alive and reproducing has fins. So I gave up trying to tweak the finless and started thinking more about where we are trying to swim. This took the form of a simple question: What does it take to be an educated global citizen in the 21st century?

The real opportunity of the Internet is creating a network that takes on its own momentum, grows, and exponentially increases its value. In fact, I think at this point network theory has a greater payoff in learning than learning theory does. The really cool part is that as the network grows and gains experiences, it also changes purpose and direction. School isn’t built to tolerate that, which I think is a big issue, considering the need for innovation in this country.

It’s exciting to be a part of something so dynamic. In too many places learners are forced to wait for an institution, or a government, or an economic sector to get its act together and do right by them. Learners don’t have to wait for Superman. They are Superman.

RC: Well, one of the things I wonder is where the funding comes from. That still seems to be a major problem with education reform, and I’m not just talking about funding for technology and other resources, but funding for teachers: One of the main reasons interesting and innovative people avoid teaching in high school is because there’s so much more money to be made elsewhere. How do we fund this revolution?

DP: Learning needs to become the economic driver. We need a learning environment in which learners and mentors select each other, co-create interdisciplinary curricula and demonstrate mastery in ways that translate to the broader economy and life in our culture. Such an open market would allow learning innovators to create revenue streams that feed communities and align compensation with perceived value and performance: if you suck you starve, if you rock you make bank. This is happening already. In Korea, teacher Rose Lee is known as the “Queen of English.” She makes over $7 million a year. If clients are willing to invest that much in university prep, imagine what they’ll do for top-shelf professionals who can prepare the next generation for economic success without needing the university at all. Creating a new economic sector around learning makes mentoring a much more dynamic and potentially lucrative endeavor than teaching ever was.

Until that exists, though, it’s still possible to integrate coursework and network once learners get the basics of the Internet and online privacy/security. It doesn’t take much money for an individual teacher to offer online learning opportunities. I started off guerrilla style. Everything I’m currently using with students is available for free to anyone who has access to the Internet—and every student has access to the Internet. It drives me crazy when I hear well-meaning adults suggest that we not work online with students because not everyone has a computer at home. We read books with students, and some of my students don’t have those at home either. This is Problem Solving 101. If you don’t have a computer at home you have an access problem. That would be a cruel proposition if the problem wasn’t super easy, but we are surrounded by solutions. Go to a friend’s; go to the computer center or library; spend $3 at the copy store. If an entire community is impacted to the point that an individual really can’t access the Internet, document the case that supports getting the community connected. Agitate. Citing lack of Internet access in 2012 is an admission of defeat that suggests a lack of determination and imagination.

RC: What are you up to off-campus?

DP: For the last six months I have been neck-deep in the work I’m doing with students. Writing curriculum, reading blogs, and replying to messages around the clock seven days a week. It’s insane. I’ve never worked harder as a teacher or had more fun. Now I’m documenting the process and starting to promote it. I’m writing a white paper, starting a blog, designing the system architecture for the learning exchange, consulting, and speaking about the proof of concept. Next event is the CUE conference in Palm Springs on March 15.

It’s hard to overstate the importance of liberating learning from school. Our present is competitive and our future is uncertain. My old mentor used to say that in chaos there is profit, but success in 2012 is not for the passive, weak, or risk-averse. Intellectual and financial freedom isn’t something that can be given to you. You have to take it.

Return to Cinder: Supergods and the Apocalypse

Grant Morrison describes his growing up through comic books as a Manichean affair: “It was an all-or-nothing choice between the A-Bomb and the Spaceship. I had already picked sides, but the Cold War tension between Apocalypse and Utopia was becoming almost unbearable” (p. xiv). Morrison’s first non-comic book, Supergods (Spiegel & Grau, 2011), is one-half personal statement, one-half art history. It’s an autobiography told through comic books and a history of superheroes disguised as a memoir. His early history of superhero comics is quite good, but it gets really, really good when Morrison enters the story full-bore — first as a struggling but successful freelancer and later as a chaos magician of the highest order, conjuring coincidence with superhero sigils.

As if to follow Kenneth Burke’s dictum that literature represents “equipment for living,” Morrison puts a lot of weight on the shoulders of the supergods. “We live in the stories we tell,” he writes, and he’s not just saying that. Morrison wrote himself into his hypersigil comic The Invisibles and watched as the story came to life and nearly killed him.

In Supergods Morrison tells the story in high relief and stresses the transubstantiation between words and images on a page and thoughts and actions in the real world. His works are largely made up of “reality-bending metafictional freakouts dressed up in action-adventure drag,” as Douglas Wolk (2007) describes them, “metaphors that make visible the process by which language creates an image that in turn becomes narrative” (p. 258). If you’re not one for the magical bent, think of it as a strong interpretation of the Sapir-Whorf hypothesis with a Rortian addendum: If we assume that language creates reality, then we should use language to create the reality we want to live in. Morrison writes, “Superhero comics may yet find a purpose all along as the social realist fiction of tomorrow” (p. 116). He insists that whether we realize it or not, we are the superheroes of this world.

The mini-apocalypse of September 11th, 2001 presented an odd dilemma not only for us, but also for our masked and caped heroes and our relationships to them. On one side, the event questioned the effectiveness of our superheroes: how could something like that happen without their intervention? Our faith in them crumbled like so much steel and concrete. On the other, after witnessing that day, we were more ready to escape into their fantasy world than ever. The years after that event exemplified what Steve Aylett described as a time “when people would do almost anything to avoid thinking clearly about what is actually going on.”

9/11 is conspicuously missing from Peter Y. Paik’s From Utopia to Apocalypse: Science Fiction and the Politics of Catastrophe (University of Minnesota Press, 2010), as is Morrison, but the book, blurbed by our friends Steven Shaviro and Bruce Sterling, provides another look at the link between the printed page and the world stage. A contemporary companion to Barry Brummett’s Contemporary Apocalyptic Rhetoric (1991), it offers a peek at the larger picture beyond the page that Morrison alludes to. I do find it odd that there’s no discussion of 9/11, a date that also roughly marks an epochal shift between things that were once considered nerdy and now are not. Morrison rails against the word “geek” as applied to comic book fans, saying, “They’re no different from most people who consume things and put them in the corner or put them in a drawer… Anyone who’s into anything could be called a geek, but they don’t call them a geek.”

As much of a nerd as I’ll admit I am, I’ve never really been much for comic books. With that said, I found Supergods enthralling, much in the same way I found the screen stories of Tom Bissell’s Extra Lives. Intergalactic narrative notwithstanding, Morrison’s prose seems both carefully constructed and completely natural. As my colleague Katie Arens would say, he writes to be read. My lack of comic-book knowledge sometimes made following the historical cycles of superheroes difficult, but Morrison’s presence in these pages and personal touch kept me reading hyper-attentively. Here’s hoping he writes at least half of the other books hinted at herein.

————-

My own introduction to Grant Morrison came via Disinformation’s DisinfoCon in 2000, where he explains the basics of chaos magic in an excitedly drunken Scottish accent [runtime: 45:28]:

[YouTube video: HrybcY1Pzlg]

References:

Brummett, Barry. (1991). Contemporary Apocalyptic Rhetoric. Westport, CT: Praeger.

Burke, Kenneth. (1974). The Philosophy of Literary Form. Berkeley, CA: University of California Press.

Hiatt, Brian. (2011, August 22). Grant Morrison on the Death of Comics. Rolling Stone.

Morrison, Grant. (2011). Supergods: What Masked Vigilantes, Miraculous Mutants, and a Sun God from Smallville Can Teach Us About Being Human. New York: Spiegel & Grau.

Wolk, Douglas. (2007). Reading Comics: How Graphic Novels Work and What They Mean. Cambridge, MA: Da Capo.

2011: Are You Going to Eat That?

It’s December and time to reassess the year, and 2011 is a joy to revisit. It was easily my best year ever personally. I signed a book deal, spoke at several conferences with some of my best friends, got engaged to a wonderful woman, built some new bikes, redesigned my website (finally), and finished coursework and comprehensive exams on my way to a Ph.D., among other things.

This year was crazy, from the death of Steve Jobs and Occupy Wall Street to the ramping up of some sort of political happening. I also saw, listened to, and read a lot of good stuff. Here is the best of the media I consumed this year:

Album of the Year: Hail Mary Mallon Are You Going to Eat That? (Rhymesayers): Hail Mary Mallon is the melding of word-murdering minds Aesop Rock and Rob Sonic and the laser-precise cuts of DJ Big Wiz, all three Def Jux alumni and no strangers to raps and beats in their own right. In the interest of full disclosure, these dudes are my friends. To be perfectly honest, if they were wack they wouldn’t be.

These three have been touring and clowning together for years in different guises, and it’s obvious when you hear how well they play together. Are You Going to Eat That? is the dopest record out this year.

Production-wise, “Mailbox Baseball” sounds like an Iron Galaxy outtake, while “Grubstake” evokes the stripped down reduction—all 808s and sparse scratches—of a salad-day-era Rick Rubin. Aes and Rob pass the mic like the Treacherous Three. “Table Talk” is a 21st-century “High-Plains Drifter.” But don’t get any of this twisted: this is not a throwback, it’s a leap forward.

It’s all good (“Breakdance Beach” is dope, though it does get grating upon repeated listens), and the skills are barn-razing and bar-raising. Whether it’s Hannibal Lecter or Cannibal Ox, Hail Mary Mallon prove that rap will eat itself.

Here’s their video for “Meter Feeder” [runtime: 3:47] directed by Alexander Tarrant and Justin Metros:

[YouTube video: G-QxnfpTG6c]
Close Second: Radiohead The King of Limbs (Waste): “I’m such a tease and you’re such a flirt…” The most important band in the world has returned with another cure for the malaise of the age. Pick one: They’ve saved rock and roll, killed rock and roll, and still emerged from the muck of the music industry well ahead of the curve. Everyone in media keeps them under the microscope to see how they will win. Again. Lean in, here’s the secret:

Radiohead makes great records.

And they do it consistently. They’re also quite adept at parsing the patterns on the horizon of the mediascape, but that wouldn’t matter if their records weren’t good. Damn good.

The King of Limbs is no exception. It’s more mellow than the sparsest parts of Amnesiac, but not nearly as insular. It might be their most even record. Thom Yorke’s voice, which I have to admit used to grate on me as often as it moved me, has gotten mature enough to carry the toughest of tunes. He is the voice of Radiohead, literally and figuratively (no small task either way), and he handles it with confidence and control.

Radiohead was never as joyfully abrasive as Sonic Youth or The Flaming Lips, but The King of Limbs reminds me of the releases of the former’s A Thousand Leaves and the latter’s The Soft Bulletin. All three records are still weird in their ways, but they’re also far more subtle than the previous work of their creators. Radiohead have always been masters of subtlety, and with The King of Limbs, they’ve earned their Ph.D. It’s such a tease and such a flirt.

Even Closer Third: Ume Phantoms (Modern Outsider): If ever a band were poised for the next level, Ume has been teetering on its edge for the better part of the past few years. Phantoms is the kind of record that neuters naysayers and emboldens enthusiasts. Lauren, Eric, and Rachel are some of the friendliest folks you’re likely to meet, but on stage they are ferocious. While Eric (bass) and Rachel (drums) are the stable and able drivetrain, Lauren (guitar and vocals) is the high-octane internal combustion engine, careening ahead on the edge of control. Theirs is pop music in the sense that it’s explosive. Their live shows are where the real, volatile magic happens, but Phantoms captures their energy serviceably. For further evidence, here’s the video for “Captive” from Phantoms, directed by Matt Bizer [runtime: 4:01], the most shared video on MTV.com:

[YouTube video: kzPwXefCR1w]

Runners Up: Wolves in the Throne Room Celestial Lineage (Southern Lord), Seidr For Winter Fire (Flenser), Cloaks Versions Grain (3by3), Jesu Ascension (Caldo Verde), Big Sean Finally Famous (GOOD Music), Knives From Heaven s/t (Thirsty 3ar), Pusha T Fear of God/Fear of God II: Let Us Pray (GOOD/Decon/Re-Up), Random Axe s/t (Duck Down), IconAclass For the Ones (deadverse), Crack Epidemic American Splendor (self-released), Deafheaven Roads to Judah (Deathwish), Panopticon Social Disservices (Flenser), Graveyard Hisingen Blues (Nuclear Blast).
Most Overrated: Opeth Heritage (Roadrunner), Kanye West & Jay-Z Watch the Throne.

Live Show of the Year: Deftones, June 4, 2011, Austin Music Hall, Austin, TX: Say what you will, but it’s absolutely unfair to lump Deftones in with bands they have next-to-nothing to do with (e.g., Limp Bizkit, Korn, Tool, et al.). Deftones are as sophisticated as they are heavy and as beautiful as they are aggressive, as much like the Cure as they are Clutch. Their live show confirms all of this and more.
Runners Up: Mogwai, May 16, Stubbs, Austin, TX; Wolves in the Throne Room, September 27, Red 7, Austin, TX.

Comedian of the Year: Louis CK: No one else comes close.

Event of the Year: South by Southwest: SXSW is always a blurry blast, but this year was especially good. I got the opportunity to speak at Interactive and run around with friends seeing great music the rest of the time. You know who you are. Here’s to next year.
Runners Up: SF MusicTech Summit, Geekend Roadshow Boston.
Most Overrated: TEDxAustin.

Book of the Year: James Gleick The Information (Pantheon Books): James Gleick always brings the goods, and The Information is no exception. This is a definitive history of the info-saturated now. From Babbage, Shannon, and Turing to Gödel, Dawkins, and Hofstadter, Gleick traces the evolution of information theory from the antediluvian alphabet and the incalculable incomplete to the memes and machines of the post-flood. I’m admittedly biased (Gleick’s Chaos quite literally changed my life’s path), but this is Pulitzer-level research and writing. The Information is easily the best book of the year.
Runners Up: Insect Media by Jussi Parikka (University of Minnesota Press), The Secret War Between Downloading and Uploading by Peter Lunenfeld (The MIT Press), The Beach Beneath the Street by McKenzie Wark (Verso), remixthebook by Mark Amerika (University of Minnesota Press), Marshall McLuhan: You Know Nothing of My Work! by Douglas Coupland (Atlas & Co.).
Most Overrated: Ready Player One by Ernest Cline (Crown).

Educator of the Year: Howard Rheingold: Howard’s homegrown Rheingold University started this year and quickly established an impressive online curriculum. I took the first class and joined the very active alumni in continuing our co-learning with Howard’s help. It was through this group that I got the opportunity to speak to David Preston’s Literature and Composition class — one of the best experiences I’ve had in education.

Site of the Year: Shut Your Fucking Face and Listen: My man Tim Baker and his band of ne’er-do-wells have put together a site that’s as hysterical as it is historical. Mostly focused on music, they veer off on pop culture tangents and mad rants that are always more entertaining than their subject matter. Get up on that.

TV Show of the Year: Breaking Bad (AMC): I have Tim Baker from SYFFAL to thank for this one. This show doesn’t just rearrange the furniture in the standard TV drama’s living room, it tosses it on the lawn and sets it on fire. I’ve only made it through the first three seasons, but my guess is that by the end of the recently inked fifth and final, this will be hailed as one of the greatest shows ever to creatively corrupt the television medium.
Runners Up: Party Down (Starz); Lie to Me (Fox).

Movie of the Year: The Muppets (Disney): I haven’t laughed so consistently through a movie since maybe first seeing Doug Liman’s Go in the theater. It’s not flawless (maybe one too many metacomments and one too many eighties references), but it is downright entertaining from titles to credits. So good to see a chunk of your childhood revived so well.
Runner Up: Tree of Life (Plan B).

Video of the Year: “Yonkers” by Tyler, The Creator: Written, directed, produced, rapped, and eaten by Tyler himself. I’ve already spouted my feelings about OFWGKTA elsewhere.
Runners Up: Pusha-T featuring Tyler, The Creator “Trouble on My Mind,” Big Sean featuring Chiddy Bang “Too Fake,” Hail Mary Mallon “Meter Feeder” (embedded above).

So those are a few of the things that caught and held my attention this year. What were yours?

Sharing Music: Kick Out the Spam…

I spent my undergraduate years working at record stores. Not surprisingly, the lulls behind the counter were largely spent talking about and sharing music. We’d all bring in our small CD cases, each stocked with a dozen or so discs for the shift. There was a lot of judging and clowning, but even more sharing and putting each other on to new sounds.

When I first got an iPod in 2003, I thought the practice would continue. Around the time that I procured my refurbished player, my friend Chang came out to San Diego on tour with dälek. Before a show one day, he was hanging out with some of his old college friends, one of whom had a new boyfriend. Chang snagged the dude’s iPod from her and was judging her new beau on the merits of his mp3s. Maybe this happens more often than I’m aware, but in my experience it’s a rarity. Ironically, our listening experiences tend to be as insular as the devices that facilitate them.

When the Walkman first came out, it was intended for sharing: the first models released had two headphone jacks. I distinctly remember the first one I listened to having dual jacks. When the initial numbers came back and Sony found that no one was sharing the devices, the company changed tack. Weheliye (2005) writes that in the ads, “couples riding tandem bicycles and sharing one Walkman were replaced by images of isolated figures ensnared in their private world of sound” (p. 135). And so it has gone, each of us to his or her own.

There is research on the matter, though. Studies of “playlistism” aim to highlight the links between music and identity through the practice of sharing playlists. Assuming that we compile playlists to represent our identities, sharing them should show how we present ourselves through music. Citing Brown, Sellen, & Geelhoed (2001), Valcheva (2009) found that sharing via peer-to-peer networks “confounded the traditional way of possessing and sharing music, and thus instigating a shift, on one hand, towards a citizen/leech styled community where music sharing interaction tends to be anonymized.” We don’t use P2P spaces to share in a traditional sense. In contrast, “[P]laylistism is underpinned by the practice of capturing and contributing one’s ‘music personality’ in the form of playlists that are either published online or shared through portable devices.” As one article put it, “We are what we like.”

Now that we listen more from the cloud and less as a crowd, the streaming services have adopted a stance of “social integration.” Similar to what Foursquare does with your location when you check in to a place (automatically sending it to your social networks), Spotify broadcasts the song you’re listening to. While Spotify doesn’t require that you share your listening, it does require you to have a Facebook account. Some online publications have adopted the practice as well, letting all of your friends know what you’ve been reading online. The trend is troubling. Social integration is the opposite of sharing. Sharing implies intention, and if your playlists are being broadcast without your curation, well, then they’re just spam in the streams of those who follow or friend you. It’s analogous to signing your friends up for newsletters they might not want or adding their numbers to telemarketers’ call lists. There is nothing social about it.

I believe sharing music is a powerful practice. I wouldn’t know about most of the bands I listen to or have ever listened to if it weren’t for the friends who shared them with me. Sharing via automation does not make things social. Real sharing requires attention and intention. No algorithm can replicate that.

References:

Brown, B., Sellen, A., & Geelhoed, E. (2001). Music sharing as a computer supported collaborative application. Proceedings of ECSCW 2001. Bonn, Germany: Kluwer Academic Publishers.

Gelitz, Christiane. (2011, March/April). You Are What You Like. Scientific American Mind.

Valcheva, Mariya (2009). Playlistism: a means of identity expression and self‐representation. A report on a conducted scientific research within “The Mediatized Stories” project at the University of Oslo.

Weheliye, Alexander G. (2005). Phonographies: Grooves in Sonic Afro-Modernity. Durham, NC: Duke University Press.

Bring the Noise: Systems, Sound, and Silence

In our most tranquil dreams, “peace” is almost always accompanied by “quiet.” Noise annoys. From the slightest rattle or infinitesimal buzz to window-wracking roars and earth-shaking rumbles, we block it, muffle it, or drown it out whenever possible. It is ubiquitous. Try as we might, cacophony is everywhere, and we’re the cause in most cases. Keizer (2010) points out that, besides sleeping (for some of us), reading is ironically the quietest thing we do. “Written words were meant to evoke heard speech,” he writes, “and were considered inadequate until they did so, like tea leaves before the addition of hot water” (p. 21). Reading silently was subversive.

We often speak of noise as the opposite of information. In the canonical model of communication conceived in 1949 by Claude Shannon and Warren Weaver, which I’ve been trying to break away from, noise is anything in the system that disrupts the signal or the message being sent.

If you’ve ever tried to talk on a cellphone in a parking garage, find a non-country station on the radio in a fly-over state, or follow up on a trending topic on Twitter, then you know what this kind of noise looks like. Thanks to Shannon and Weaver (and their followers; e.g., Friedrich Kittler, among many others), it’s remained a mainstay of communication theory since, privileging machines over humans (see Parikka, 2011). Well before it was a theoretical metonymy, noise was characterized as “destruction, distortion, dirt, pollution, an aggression against the code-structuring messages” (Attali, 1985, p. 27). More literally, Attali conceives noise as pain, power, error, murder, trauma, and youth (among other things) untempered by language. Noise is wild beyond words.

The two definitions of noise discussed above — one referring to unwanted sounds and the other to the opposite of information — are mixed and mangled in Hillel Schwartz’s Making Noise: From Babel to the Big Bang and Beyond (Zone Books, 2011), a book that rebelliously claims to have been written to be read aloud. Yet, he writes, “No mere artefacts of an outmoded oral culture, such oratorical, jurisprudence, pedagogical, managerial, and liturgical acts reflect how people live today, at heart, environed by talk shows, books on tape, televised preaching, cell phones, public address systems, elevator music, and traveling albums on CD, MP3, and iPod” (p. 43). We live not immersed in noise, but saturated by it. As Aden Evens put it, “To hear is to hear difference,” and noise is indecipherable sameness. But one person’s music is another’s noise — and vice versa (Voegelin, 2010) — and age and nostalgia can eventually turn one into the other. In spite of its considerable heft (over 900 pages), Making Noise does not see noise as music’s opposite, nor does it set out to be a history of sound, stating that “‘unwanted sound’ resonates across fields, subject everywhere and everywhen to debate, contest, reversal, repetition: to history” (p. 23).

Wherever we are, what we hear is mostly noise. When we ignore it, it disturbs us. When we listen to it, we find it fascinating.
— John Cage

The digital file might be infinitely repeatable, but that doesn’t make it infinite. Chirps in the channel, the remainders of incomplete communiqués, surround our signals like so much decimal dust, data exhaust. In Noise Channels: Glitch and Error in Digital Culture (University of Minnesota Press, 2011), Peter Krapp finds in these anomalies sites of inspiration and innovation. My friend Dave Allen is fond of saying, “There’s nothing new in digital.” To that end, Krapp traces the etymology of the error in machine languages from analog anomalies in general, and the extremes of Lou Reed’s Metal Machine Music (RCA, 1975) and Brian Eno’s Discreet Music (EG, 1975) in particular, up through our current binary blips and bleeps, clicks and clacks — including Christian Marclay’s multiple artistic forays and Cory Arcangel’s digital synesthesia. This book is about both forms of noise as well, paying due attention to the distortion of digital communication.

There is a place between voice and presence where information flows. — Rumi

Another one of my all-time favorite books on sound is David Toop’s Ocean of Sound (Serpent’s Tail, 2001). In his latest, Sinister Resonance: The Mediumship of the Listener (Continuum Books, 2010), he reinstates the human as an inhabitant on the planet of sound. He does this by analyzing the act of listening more than studying sound itself. His history of listening is largely composed of fictional accounts, of myths and make-believe. Sound is a spectre. Our hearing is a haunting. From sounds of nature to psyops (though Metallica’s “Enter Sandman” is “torture-lite” in any context), the medium is the mortal. File Sinister Resonance next to Dave Tompkins’ How to Wreck a Nice Beach (Melville House, 2010) and Steve Goodman’s Sonic Warfare (MIT Press, 2010).

And how can we expect anyone to listen if we are using the same old voice? — Refused, “New Noise”

Life is loud, death is silent. Raise hell to heaven. Make a joyous noise unto all of the above.

———-

My thinking on this topic has greatly benefited from discussions with, and lectures and writings by my friend and colleague Josh Gunn.

References and Further Resonance:

Attali, J. (1985). Noise: The Political Economy of Music. Minneapolis, MN: University of Minnesota Press.

Evens, A. (2005). Sound Ideas: Music, Machines, and Experience. Minneapolis, MN: University of Minnesota Press.

Goodman, S. (2010). Sonic Warfare. Cambridge, MA: MIT Press.

Hegarty, P. (2008). Noise/Music: A History. New York: Continuum Books.

Keizer, G. (2010). The Unwanted Sound of Everything We Want: A Book About Noise. Philadelphia, PA: Public Affairs.

Krapp, P. (2011). Noise Channels: Glitch and Error in Digital Culture. Minneapolis, MN: University of Minnesota Press.

Parikka, J. (2011). Mapping Noise: Techniques and Tactics of Irregularities, Interception, and Disturbance. In E. Huhtamo & J. Parikka (Eds.), Media Archaeology: Approaches, Applications, and Implications. Berkeley, CA: University of California Press.

Refused. (1998). “New Noise” [performed by Refused]. On The Shape of Punk to Come: A Chimerical Bombination in 12 Bursts (Sound recording). Örebro, Sweden: Burning Heart Records.

Schwartz, H. (2011). Making Noise: From Babel to the Big Bang and Beyond. New York: Zone Books.

Shannon, C.E., & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.

Sterne, J. (2003). The Audible Past: Cultural Origins of Sound Reproduction. Durham, NC: Duke University Press.

Tompkins, D. (2010). How to Wreck a Nice Beach. Brooklyn, NY: Melville House.

Toop, D. (2010). Sinister Resonance: The Mediumship of the Listener. New York: Continuum Books.

Voegelin, S. (2010). Listening to Noise and Silence: Towards a Philosophy of Sound Art. New York: Continuum Books.

remixthebook: Guest Post and Tweeting

In 1997, I wrote a piece about turntablism for Born Magazine called “Band of the Hand.” Years later, I wrote a related piece for Milemarker’s now defunct Media Reader magazine, called “war@33.3: The Postmodern Turn in the Commodification of Music.” I’ve been revisiting, remixing, and revising these previous thesis pieces ever since. I eventually combined the two and posted them here, but I’ve also written other things that spin off from their shared trajectories.

This week, I am proud to be guest-tweeting for Mark Amerika’s remixthebook (University of Minnesota Press, 2011). In addition, I posted a piece on the remixthebook site. remixthebook and its attendant activities situate the mash-up as a defining cultural activity in the digital age. With that in mind, I tried to go back to the writings above and update them using pieces of relevant things I’ve written since. If you will, my post is a metamix of thoughts and things I’ve written about remix in the past decade and a half or so, pieces which also represent material from my other book-in-progress, Hip-hop Theory: The Blueprint to 21st Century Culture. It’s a sample-heavy essay that aims to illustrate the point.

Here are a few excerpts:

Culture as meaning-making requires participation. In addition to the communication processes of encoding and decoding, we now participate in recoding culture. Using allusions in our conversation, writing, and other practices engages us in culture creation as well as consumption. The sampling and remixing practices of Hip-hop exemplify this idea more explicitly than any other activity. Chambers wrote, “In readily accessed electronic archives, in the magnetic memory banks of records, films, tapes and videos, different cultures can be revisited, re-vived, re-cycled, re-presented” (p. 193). Current culture is a mix of media and speech, alluded to, appropriated from, and mixed with archival artifacts and acts.

We use numerous allusions to pop culture texts in everyday discourse, what Roth-Gordon calls “conversational sampling.” Allusions, even as direct samples or quotations, create new meanings. Each form is a variation of the one that came before. Lidchi wrote, “Viewing objects as palimpsests of meaning allows one to incorporate a rich and complex social history into the contemporary analysis of the object.” It is through use that we come to know them. Technology is not likely to slow its expansion into every aspect of our lives and culture, and with it, the reconfiguration of cultural artifacts is not likely to ebb. Allusions – in the many forms discussed above and many more yet to come – are going to become a larger and larger part of our cultural vocabulary. Seeing them as such is the first step in understanding where we are headed.

Rasmussen wrote, “there is no ‘correct’ way to categorise [sic] the increasing diversity of communication modes inscribed by the media technologies. Categories depend on the nature of the cultural phenomena one wants to investigate.” Quotation, appropriation, reference, and remix comprise twenty-first-century culture. From our technology and media to our clothes and conversations, ours is now a culture of allusion. As Schwartz so poetically put it: “Whatever artists do, they are held in the loose but loving embrace of artists past.” Would that it were so.

The whole post is here.

Many thanks to Mark Amerika and Kerry Doran for the opportunity and to everyone else for joining in on the fun. Here’s the trailer for the project [runtime: 1:21]:

[Video: youtube.com/watch?v=iXnBVn_OS90]

Follow for Now is Now Available at BookPeople

Yep, nearly five years after its release, Follow for Now is now available at BookPeople in Austin, Texas. As you can see in the photo below, it’s in the General Science section, and I am quite proud.

It’s also in Cyberculture & History, and right now, in the New Arrivals.

So, if you’re in Austin and don’t have a copy, stop by and get yours.

Many thanks to Michael McCarthy and everyone at BookPeople for their support. And to you for yours.

Touching Screens: Digital Natives and Their Digits

Since I attempted to brand and explicate the Advent Horizon idea, the following clip has been circulating online. “The new generation is growing up with more digital than print media,” deigns The Huffington Post. “They play with their parents’ smartphones, tablets, laptops. We guess it’s only natural that they examine items that don’t respond to touch — and then move on to the things that do.” Danny Hillis once said that technology is the name we give to things that don’t work yet. I think this baby would disagree with that statement wholesale [runtime: 1:26]:

[Video: youtube.com/watch?v=aXV-yaFmQNk]

Though I find the sentiment that Steve Jobs “coded a part of her OS” a bit much, this clip reminds me of a story by Jaron Lanier from the January 1998 issue of Wired about children being smarter and expecting more from technology. Lanier wrote, “My favorite anecdote concerns a three-year-old girl who complained that the TV was broken because all she could do was change channels.” Clay Shirky tells a similar story in Cognitive Surplus (Penguin, 2010). His version involves a four-year-old girl digging in the cables behind a TV, “looking for the mouse.”

Without mutual engagement and accountability across generations, new identities can be both erratically inventive and historically ineffective. — Etienne Wenger

These are all early examples of a new Advent Horizon being crossed. The touchscreen, the latest ubiquitous haptic device, is here to stay. To those who are growing up with it, everything else seems “broken” — much like a TV “that only changes channels” to a native computer user. We become what we behold.

Why am I always looking at life through a window?
— Charlie Gordon in Flowers for Algernon by Daniel Keyes

The screen is already the most seductive of technologies. Think about how much time you spend staring at one screen or another. Iain Chambers (1994) writes, “In the uncanny property of the computer to present a ‘world picture’ we confront the boundary set by the screen, the tinted glass that lies between the apparently concrete world and the simulated one of ethereal lights” (p. 64). We want to get in there so bad. Think of the persistent dream of entering the screen and the machine: Neuromancer, TRON, Snow Crash, Lawnmower Man, Videodrome, and even Inception, among many, many others. It has a mythology all its own.

To that end, we’ve gone from wearing the goggles and gloves of most virtual reality systems to using our bodies as input devices via the sensors of Wii and Kinect, bringing the machine into the room. Where our machines’ portability used to be determined by the size of the technology available, the size of our devices is now dictated by the size of our appendages. We can make cellphones and laptops smaller, but then we wouldn’t be able to hold them or press their buttons individually, a limitation that the touchscreen is admittedly working around gracefully. Still, we have to design at human scale. These are the thresholds of our being with our technology.

The Machine is not the environment for the person; the person is the environment for the machine. – Aviv Bergman

The long-range question is not so much what sort of environment we want, but what sort of people we want. – Robert Sommer

We have to think carefully and cumulatively about what we design. Technology curates culture. Technology is a part of our nature. How will we control it? The same way we do our lawns or our weight: Sometimes we will; sometimes we won’t, but we have to remember that we’re not designing machines. We’re designing ourselves.

References:

Chambers. I. (1994). Migrancy, Culture, Identity. New York: Routledge.

Christopher, R. (2007). Brenda Laurel: Utopian Entrepreneur. In R. Christopher (Ed.), Follow for Now: Interviews with Friends and Heroes. Seattle, WA: Well-Red Bear.

Keyes, D. (1966). Flowers for Algernon. New York: Harcourt.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.

Shirky, C. (2010). Cognitive Surplus: How Technology Makes Consumers into Collaborators. New York: Penguin.

Sommer, R. (2007). Personal Space: The Behavioral Basis of Design. Bristol, England, UK: Bosko Books.

Wenger, E. (1998). Communities of Practice: Learning, Meaning, and Identity. New York: Cambridge University Press.

————-

And I say peace to Friedrich Kittler (1943-2011).

Not Great Men: The Human Microphone Effect

The passing of Steve Jobs has sent millions of people into reflection and reverie, and raises questions about the possibility of repeating his vision and success. “Will there ever be another Steve Jobs?” asks one publication, while another contrarily claims that he “was not god”; still others iconize him, call him a tech-messiah, and lament his passing with something just short of worship. As agnostic as I’ve been computer-wise, I’ve always been a fan of the man, but does the death of Steve Jobs mark the end of a human era, the end of the singular genius, the lone visionary, the thought leader? In some ways, I am compelled to answer affirmatively, but to give Jobs all the credit is to do him and others like him a disservice. As Bonnie Stewart put it, “I fully agree that Steve Jobs left us a legacy. But it is not to be him.” We are the reason he was the last of his kind.

The connectivity of the web has all but killed the archetype of the singular visionary leader. Online, we connect to share with each other, not to listen to a single voice. It’s not necessarily the death of the grand narrative and the birth of postmodernism, it’s more the onset of postMODEMism. Ever since we started modulating and demodulating our ideas, information, and identities, our heroes have been in harm’s way. The web is more about processes and projects than products. The web is inherently a collaborative space. Authorship does not equal ownership. We’re in this together.

In spite of recent reports, the creative class is very real, and, as Scott Smith pointed out, is the larger part of the masses currently occupying Wall Street. The creative class is still here, but like the creative genius, no one owes us a living. We have to make our own way, and we will.

Unlike others, I don’t think the Big Idea is dead either. I think our collaborative, networked thinking makes it more difficult to see the collaborative origins of the singular innovation. If ideas are networks, then big ideas are big networks. Even Jobs brought to market what were previously existing, networked ideas: “He saw what technologies were on the verge of being possible — and what technologies consumers were ready to accept,” Josh Bernoff wrote when Jobs stepped down as Apple CEO in August. “There could have been no iPhone without the habits created by iPods and Blackberry, no Mac without Apple and IBM PCs embraced by those who came before… Apple doesn’t make flash memory, microprocessors, touchscreens, or, for the most part, websites. It just puts them all together.” Toward the end of this 1996 interview with Steve Jobs on Wall Street Week with Louis Rukeyser [runtime: 4:32], Jobs talks about the sheer openness of the internet and how no single company can ever contain it [the internet bit starts around 3:15]. “We’re going to see innovation contain it,” he says.

[Video: youtube.com/watch?v=SaJp66ArJVI]

No weak men in the books at home
The strong men who have made the world
History lives on the books at home
The books at home

It’s not made by great men

The past lives on in your front room
The poor still weak the rich still rule
History lives in the books at home
The books at home

It’s not made by great men
— Gang of Four, “Not Great Men”

It’s downright eerie watching these ideas collide in realtime on the choppy live-feed of Slavoj Žižek addressing the protestors of Occupy Wall Street today, as they respond in unison: “You don’t need a genius to be your leader.” This call-and-response is called “The Human Microphone” and is used due to restrictions on amplified sound in the public space of New York City. In an ironic mix of collaborative leadership, collective allegiance, communication technology, and the lack thereof, The Human Microphone is the perfect metaphor for the death of the hero. There is no “one for all” anymore. History’s not made by great men. As Bonnie Stewart concludes, “So maybe in this new world order, we should stop touting those who are ‘crazy enough to be geniuses’ — which is a romantic notion, even if it is sometimes true, like with Jobs — and reward those who are best able to share and innovate in teams.”

The good news for all is that collaboration makes each of us bigger. Find the folks that empower you to do more, to be more, and avoid the ones who don’t. As the Hopi once put it, “We are the ones we’ve been waiting for.”

————

Here’s a clip of an odd yet amazing cover of Gang of Four’s “Not Great Men” by an appropriately all-female Japanese percussion group [runtime: 4:09]:

[Video: youtube.com/watch?v=K19jPwpP5XY]

————

Many thanks to my friend Dave Allen for sharing links and the Japanese Gang of Four cover clip, to Mike Schandorf for sharing the Žižek live-feed, and to my friend and collaborating champion Heather Gold for sharing the Steve Jobs clip. Onward together.

SF MusicTech Summit 2011: Discovery is Disruptive

In 1986, Tony James’ post-Generation X outfit Sigue Sigue Sputnik released a record that included advertisements between its songs (If you haven’t heard it, you probably should. It’s called Flaunt It). James explained the move saying, “Commercialism is rampant in society. Maybe we’re a little more honest than some groups I could mention… Our records sound like adverts anyway.” Though it was taken with the appropriate amount of irony twenty-five years ago, the idea was disruptive. Well, my good friend Dave Allen invited me to join him on a panel at SF MusicTech Summit this year, where I heard someone propose — nay, they had a business based on — the same idea as the Sigue Sigue Sputnik farce, designed for streaming online… The topic of our panel? The Lack of Disruption in Music Technology.

The "Lack of Disruption" Panel (l to r): Dave Allen, Roy Christopher, Corey Denis, David Ewald, Alex Ljung, and Jesse von Doom.

Audio streaming sites and services seem to be all the rage this year, and whenever he starts a new project with a client as Digital Strategist at NORTH, Dave always asks “What does it solve?” In our panel meetings we added “Who does it serve?” to that. Streaming services have become what Dave calls “the mechanics of consensus.” That is, they all use the same outmoded model (i.e., draw up business plan, acquire venture capital, launch service, place advertising on the free part, charge for premium service without advertising, etc.) as if it’s the only way to do things. This model follows and barely updates the broadcast radio model of the 1920s. As Dave says, “There’s nothing new in digital!” In his pre-talk post, “What happened to the Big Idea in music technology?” he points out that

…when FM radio became homogenized and the US radio stations formed into conglomerates such as Clear Channel, they neutered the DJ. When Wolfman Jack was programming his own rock shows in the USA, and across the Atlantic in London John Peel was exposing young people’s ears to music they’d never heard, they were just two examples of the extraordinary power DJs had on the music business. They were tastemakers, influencers, and filters of music culture. When the conglomerates did away with the role of the DJ in favor of automated playlists they ruined everything. The DJ was the voice of the station and he or she was considered dangerous to the bottom line if they were to offend their advertisers – they had to play nice, or go. The music streaming companies didn’t see the problem that needed solving – the lack of authentic DJs who programmed their own shows – because they thought “interactivity” was the answer.

The streams on these services are controlled by algorithms, and they’re similar on every service. If you like one Norwegian Black Metal band, you’re soon to be recommended every Norwegian Black Metal band. Discovery comes from difference, and these algorithms are based on similarities. They all serve up sameness. How about some Swedish Black Metal for a change? The DJs at KEXP (or wherever), like Wolfman Jack or John Peel before them, might keep you in a stable groove, but they also know when to yank you out of a rut. Dave says that getting up from his desk to flip over a record on the turntable is about as interactive an experience as he can imagine while at home listening to music. Either way: The human element cannot be replaced with playlists.
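The similarity trap those streams fall into is easy to sketch. What follows is a hypothetical toy recommender, not any actual service’s algorithm; the bands, tags, and function names are invented for illustration:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two tag-count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical catalog: bands described by genre tags.
catalog = {
    "Band A": Counter({"black metal": 3, "norwegian": 2}),
    "Band B": Counter({"black metal": 3, "norwegian": 2}),
    "Band C": Counter({"black metal": 2, "swedish": 2}),
    "Band D": Counter({"dub": 3, "jamaican": 2}),
}

def recommend(liked, n=2):
    """Rank every other band by its similarity to the one you liked."""
    scores = {name: cosine(catalog[liked], vec)
              for name, vec in catalog.items() if name != liked}
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("Band A"))
```

Because the ranking rewards overlap, the near-duplicate band always surfaces first and anything genuinely different sinks to the bottom: the sameness problem in miniature. A human DJ is, in effect, someone willing to occasionally hand you the lowest-scoring item on the list.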

Dave wondering why he invited me.

RT @rebeccagates: read a comment from #sfmusictech about “need to make music more participatory”. uhhh…how about going to a live show?

It’s not all about interactivity though. There is also a mounting wave of social-media fatigue — on both sides. TAG Strategic’s Corey Denis pointed out that some artists don’t want or like to engage with their fans. We often say that 21st-century art inherently involves multimedia, and while that might be true more often than not, it doesn’t mean every artist wants or needs to tweet. There are as many kinds of artists, performers, and entertainers as there are arts, performances, and entertainment. Some of them don’t require status updates. Social media killed the video star. Where companies and consultants are still pursuing interactivity and engagement, Dave often pushes for more passivity. People are tired of engaging with you, and sometimes there’s just no reason for you to “be social.” From the other side of the fourth wall, my man Tim Baker just posted this piece at SYFFAL about how social media kills fandom. He writes,

As for artists, I can’t tell you how many have destroyed their legacies and turned me off to their works completely based soley on their Twitter accounts. Artists and Twitter should be a match made in heaven but time and time again it is used as a sounding off board for the most idiotic, self absorbed and generally dickish thoughts, or recaps of the minutiae that only someone on the autism spectrum would need to share. Additionally most artists are not smart in the sort of way that translates into short form quick bursts. It comes off much more as indulgent at best, and idiotic at worst. Gone are the days of artists being interesting because they were mysterious and unobtainable and here are the days where modern artists are overexposed and not even remotely interesting. It is sad really that the tool that when used sparringly is so effective, is abused to such a level.

David Ewald calls this phenomenon the “erosion of trust,” and it happens at every intersection: artists to labels, labels to radio, labels to technology, everyone to “social media experts,” fans to everyone, artists to everyone, etc. Why should they trust you with something they can do themselves? But also, why should they trust you with something they don’t want to do and don’t necessarily care about in the first place? Artists should concentrate on their art. As fans, we’ve bought and replaced every format out there just trying to hear the artists we love. If the music is good, we will find it and support it. We don’t need your help. As a lifelong music fan and someone who doesn’t use any of the online services, I can honestly say that my experience with music is better right now than it ever has been. Anyway, by design our panel asked more questions than it answered — and definitely more than we could answer sufficiently in an hour. Here are my thoughts from SF MusicTech Summit, collected in web-ready, low-bandwidth blurbs:

  • Solve real problems and serve real people. Artists and fans are real people. We don’t care where your money comes from.
  • Discovery is disruptive. Discovery comes from difference. Stop seeking and serving sameness.
  • The human element cannot be replaced with playlists. Just because technology can curate doesn’t mean that it should or that it does it well.
  • Social media killed the video star. Be social when it makes sense. Shut up when it doesn’t.
  • Music will take care of itself. Stop acting like music needs you to save it. It doesn’t.

—————-

Many thanks to Dave for inviting me, Lily for going with me, my fellow panelists for the great talk, and to Brian and Shoshana Zisk, Cass Philipps, and all at SF MusicTech Summit for putting this thing together. Also, props to Luke Williams for getting us stoked on this idea in the first place. Onward.

[photos by Lily Brewer]