Geekend Notes by Raise Small Business Marketing

Hilton Head, South Carolina’s own Raise Small Business Marketing did a brief summary and write-up of my “How to Do Stuff and Be Happy” talk from Geekend 2010. Here’s the run-down:

I was excited for this session, mainly because doing stuff and being happy are two major challenges! Roy Christopher gave a laid-back presentation that went through some ideas on how to keep your focus and try to stay happy while actually getting things done.

Roy covers a lot of the information that was in his presentation on his own blog right here, so we won’t go over all of that; however, here are some of the things we really took away from the session:

  1. Roy was a competitive Rubik’s Cube Player (established geek cred for sure!)
  2. Find people who have done what you want to do and emulate them
  3. Feed and water your mentors: let people know you respect them and why
  4. Save your own story
  5. Keep a journal
  6. Keep a promise file
  7. Get organized
  8. Trust your curiosity

You can read the post here.

Many thanks to the folks at Raise and Geekend 2010.

21C Magazine: This is Your Brain Online

I compiled my thoughts on a bunch of recent books about the internet, social concerns, and brain matters (many of which have been hashed out right here on this site) into a piece called “This is Your Brain Online: Recent Books on Cognition and Connection” for 21C Magazine.

Here’s an excerpt:

Regarding public cell phone use, comedian Bill Maher once quipped that if he wanted to be so privy to one’s most intimate thoughts, he’d read his or her blog. Nancy Baym addresses this technologically enabled collision of public and private, as well as the more traditionally debated clashes, in Personal Connections in the Digital Age (Polity, 2010). Baym’s is by turns all-encompassing, in that she covers nearly every epistemological viewpoint on so-called social media expressed thus far, and all-purpose, in that anyone can read this book and see how these structures of knowledge apply to their own use of technology. In her study of technology’s influence on social connections, she breaks it down into seven key concepts: interactivity, temporal structure, social cues, storage, replicability, reach, and mobility. Sure, any time one attempts to slice up such a malleable and ever-changing landscape into discrete pieces, one runs the risk of missing something. This process, one of closing off an open system in order to study it, is necessary if we are ever to learn anything about that system. Nancy Baym has collected, synthesized, and added to the legacy of digital media and indeed technology studies at large. This book is a great leap forward.

Many thanks to Ashley Crawford for the opportunity to contribute to 21C. Read the full piece here.

Get ’Em to the Geek: Geekend 2010

I scarcely know where to start. Geekend is the beautifully geeky brainchild of Sloane Kelly, Jacob Hodesh, and Miriam Hodesh. 2010 marks the second annual meeting of what everyone familiar with it hopes will be many years of the interactive conference. It has just the right balance of size and intensity.

I didn’t get to Savannah until late on Day 2, so I roamed around downtown by myself Friday night. I stepped into a raucous karaoke session and had the biggest beer I’ve ever seen. Not finishing it there was no problem, because in Savannah you can drink in the streets. To-go cups are a normal courtesy, so I took one and finished my beer while strolling languidly back to my hotel.

Immediately upon arriving at the Coastal Georgia Center on Saturday morning, I was rushed into the geek melee. Swag bag and badge in hand, I sneaked off to the speakers’ green room to finish the final tweaks on my presentation. People always say of SXSW Interactive that the best stuff happens in the margins, that the sideline conversations are always better than the panels and talks. Well, as much as it resembles SXSWi, Geekend is not quite like that. I’m not saying this because I was one of the speakers this year; I’m saying it because Geekend’s organization and size lend themselves to round-the-clock stimulation. Sure, the chats in the hallways and at dinners are productive, enlightening, and awesome, but they do not outshine the scheduled talks.

My talk was called “How to Do Stuff and Be Happy” and was loosely based on my previous post of the same name. It seems to have gone over well, and I had numerous inspired chats with attendees and other speakers over the rest of the time I was in Savannah: so many amazing people all in one beautiful city for a very limited time. From futurists (e.g., Frank Spencer and Scott Smith) and future-of-music geeks (e.g., Aaron Ford and Jack DeYoung), to indie entrepreneurs (e.g., Noah Everett and Scott Stratten) and big-media programmers (e.g., Oscar Gerardo and Craig Johnston), as well as just plain badasses (e.g., Maria Anderson, Zachary Dominitz, Pete Hottelet, et al.): It’s a pressure cooker of inspiration.

The closing after-party at SEED Eco-Lounge was the perfect, weekend-ending, chaotic spectacle: fire juggling, ribbon/curtain dancing lady (check the photos), loud, mashed-up hits, and literal dancing in the streets. Geek bedlam!

Geekend 2010 was one of those events where saying “thank you” to the organizers, the speakers, and all of the attendees just sounds ridiculous, but I’ll say it anyway: Thank you! See you next year!

—————

Here are a bunch of pictures I’ve gathered from the event. Many thanks to the camera-wielding folks I borrowed these from (e.g., Sloane Kelly, Jennifer Parsons, Josh Branstetter, and Rhiannon Modzelewski). And a special thanks goes out to Alex Sandoval and Rhiannon Modzelewski for hauling me around, taking me to the fair, and letting me sleep on their couch that last night. You folks are saints!

The Essential Tension of Ideas

One of the key insights in Richard Florida’s latest book, The Great Reset (Harper, 2010), is that rapid transit increases the exchange of ideas and thereby spurs innovation. Where the car used to provide this mass connection, it now hinders it. Increasingly, our cognitive surplus is sitting in traffic.

Ideas are networks, Steven Johnson argues in his new book, Where Good Ideas Come From (Riverhead, 2010). The book takes Florida’s tack, comparing cities to coral reefs in that their structure fosters innovation. Good ideas come from connected collectives, so connectivity is paramount.

Human history in essence is the history of ideas. — H. G. Wells

On the other end of the spectrum, in a recent post about Twitter, David Weinberger writes,

…despite the “Who cares what you had for breakfast?” crowd, it’s important that we’ve been filling the new social spaces — blogs, social networking sites, Twitter, messaging in all forms, shared creativity in every format — with the everyday and quotidian. When we don’t have to attract others by behaving outlandishly, we behave in the boring ways that make life livable. In so doing, we make the Net a better reflection of who we are.

And since we are taking the Net as the image of who we are, and since who we think we are is broadly determinative of who we become, this matters.

His description sounds like we’re evening out our representations of our online selves, reconciling them with our IRL selves, initiating a corrective of sorts. Coincidentally, in its sad version of “The SEED Salon,” a recent issue of WIRED had Kevin Kelly and Steven Johnson discuss the roots of innovation (by way of plugging their respective new books; here they are discussing same at the New York Public Library). Kelly states,

Ten years ago, I was arguing that the problem with TV was that there wasn’t enough bad TV. Making TV was so expensive that accountants prevented it from becoming really crappy—or really great. It was all mediocre. But that was before YouTube. Now there is great TV!

It sounds as though Weinberger and Kelly are calling for or defending a sort of “infodiversity,” which one would think would be a core tenet of media ecology. As Kelly puts it in What Technology Wants (Viking, 2010), “Both life and technology seem to be based on immaterial flows of information” (p. 10). He continues in WIRED,

To create something great, you need the means to make a lot of really bad crap. Another example is spectrum. One reason we have this great explosion of innovation in wireless right now is that the US deregulated spectrum. Before that, spectrum was something too precious to be wasted on silliness. But when you deregulate—and say, OK, now waste it—then you get Wi-Fi.

In science, Thomas Kuhn called this idea “the essential tension.” In his book of the same name (University of Chicago Press, 1977), he described it as a tug-of-war between tradition and innovation. Kuhn wrote that this tension is essential, “…because the old must be revalued and reordered when assimilating the new” (p. 227). This is one of those ideas that infects one’s thinking in toto. As soon as I read about the essential tension, I began to see it everywhere — in music, in movies, in art, and indeed, in science. In all of the above, Weinberger, Johnson, and Kelly are all talking about and around this idea, in some instances the innovation side, and in others, the tradition side. We need both.

One cannot learn anything that is more than one step away from what one already knows. Learning progresses one step or level at a time. Johnson explores this idea in Where Good Ideas Come From by evoking Stuart Kauffman’s “adjacent possible” (a term Johnson uses hundreds of times, to great annoyance). The adjacent possible is that next step away. It is why innovation must be rooted in tradition. Go too far out and no one understands you; you are “ahead of your time.” Take the next step into the adjacent possible that no one else saw, and you have innovated. Taken another way, H. G. Wells once said that to write great science fiction, one must adopt a perspective that is two steps away from the current time. Going only one away is too familiar, and three is too far out. As Kelly puts it in the WIRED piece, “Innovating is about more than just having the idea yourself; you also have to bring everyone else to where your idea is. And that becomes really difficult if you’re too many steps ahead.” A new technology, literally “the knowledge of a skill,” is, in its very essence, the same thing as a new idea. For instance, Apple’s Newton was too many steps ahead of or away from what was happening at the time of its release. I’m sure you can think of several other examples.

Johnson, who has a knack for having at least one (usually more) infectious idea per book, further addresses the process of innovation with what he calls the “slow hunch.” This is the required incubation period of an innovative idea. The slow hunch often needs to find another hunch in order to come to fruition. That is, one person with an idea often needs to be coupled with another who has an idea so that the two can spur each other into action, beyond the power of either by itself (see the video below for a better explanation). It’s an argument for our increasing connectivity, and a damn good one.

That is not to say that there aren’t and won’t be problems. I think Kevin Kelly lays it out perfectly here:

…[T]here will be problems tomorrow because progress is not utopia. It is easy to mistake progressivism as utopianism because where else does increasing and everlasting improvement point to except utopia? Sadly, that confuses a direction with a destination. The future as unsoiled technological perfection is unattainable; the future as a territory of continuously expanding possibilities is not only attainable but also exactly the road we are on now (p. 101).

———————–

Here’s the book trailer for Steven Johnson’s Where Good Ideas Come From [runtime: 4:07]:

[Embedded YouTube video: NugRZGDbPFU]

————————

References:

Florida, R. (2010). The great reset. New York: Harper.

Johnson, S. (2010). Where good ideas come from. New York: Riverhead.

Kelly, K. (2010). What technology wants. New York: Viking.

Kuhn, T. (1977). The essential tension. Chicago: University of Chicago Press.

Weinberger, D. (2010). “Why it’s good to be boring on the web.” JoHo The Blog.

WIRED. (2010, October) “Kevin Kelly and Steven Johnson on where ideas come from.” Wired.com.

Douglas Rushkoff: The User’s Dilemma

For over two decades, Douglas Rushkoff has been dragging us all out near the horizon, trying to show us glimpses of our own future. Though he’s written books on everything from counterculture and video games to advertising and Judaism, he’s always maintained a media theorist’s bent: one part Marshall McLuhan, one part Neil Postman, and one part a mix of many significant others. Program or Be Programmed: Ten Commands for a Digital Age (OR Books, 2010) finds him back at the core of what he does. Simply put, this little book (it runs just shy of 150 pages) is the missing manual for our wild, wired world.

“Whoever controls the metaphor governs the mind.” — Hakim Bey

Rushkoff agrees with many media thinkers that we are going through a major shift in the way we conceive, connect, and communicate with each other. His concern is that we’re conceding control of this shift to forces that may not have our best interests in mind. “We teach kids how to use software to write,” he writes, “but not how to write software. This means they have access to the capabilities given to them by others, but not the power to determine the value-creating capabilities of these technologies for themselves” (p. 13). We’re conceiving our worlds using metaphors invented by others. This is an important insight and one that helps make up the core of his critique. This book is more Innis’ biases of media than it is McLuhan’s laws of media, and it left me astounded — especially after reading several books on the subject that were the textual equivalent of fly-over states. Program or Be Programmed is a welcome stop along the way.

I first interviewed Doug Rushkoff in 1999. We’ve stayed in touch since and discussed many ideas over the intervening decade, but we haven’t recorded any of these exchanges. I used this book as an opportunity to ask him a few questions.

Roy Christopher: Program or Be Programmed seems to distill quite a lot of your thinking about our online world from the past twenty-odd years. What prompted you to directly address these issues now?

Douglas Rushkoff: I guess it’s because the first generation of true “screenagers” or digital natives have finally come of age and, to my surprise, seem less digitally literate than their digital immigrant counterparts. I’ve written a number of books applying the insights of digital culture — of its do-it-yourself, hacker ethos — to other areas, such as government, religion, and the economy. But I realize that we don’t even relate to digital culture from the perspective of cultural programmers. We tend to accept the programs we use as given circumstances, rather than as the creations of people with intentions.

So I wanted to go back and write something of a “poetics” of digital media, sharing the main biases of digital technologies so that people can approach them as real users, makers, and programmers, rather than just as passive consumers.

If anything in particular prompted me, it was watching the way smart writers and thinkers were arguing back and forth in books and documentaries about whether digital technology is good for us or bad for us. I think it’s less a question of what the technology is doing to us than what we are choosing to do to one another with these technologies. If we’re even choosing anything at all.

RC: You mention in the book that anyone who seems a bit too critical of digital media is labeled a Luddite and a party-pooper, yet you were able to be critical, serious, and hopeful all at the same time. What’s the difference between your approach and that of other critics of all-things-digital?

DR: I think the main difference is that I’m more concerned with human intention and how it is either supported or repressed in the digital realm. Empathy is repressed; the ability to connect over long distances is enhanced. I go down to the very structure and functioning of these tools and interfaces to reveal how they are intrinsically biased toward certain kinds of outcomes.

So I’m less concerned with how a technology affects us than with how our application or misapplication of a technology works for or against our intentions. And, perhaps more importantly, how the intentions of our programmers remain embedded in the technologies we use. I’m not judging a technology one way or the other; rather, I am calling for people to make some effort to understand what the technologies they are using were made for, and whether that makes it the right tool for the job they’re using it for.

RC: You evoke Harold Innis throughout this book. Do you think there’s something that he covers more thoroughly or usefully than other media theorists since?

DR: I think he was better at looking at media shaping the nature and tenor of the social activity occurring on it, or around it. He’s the guy who would have seen how cell phones change the nature of our social contract on the street, turning a once-public space into lots of separate little private spaces. As far as media ecology goes, he was probably the purest theorist.

RC: The last programming class I took was a Visual Basic class in which even the programming was obscured by a graphical interface: there was little in the way of real code. For those of us interested, what’s the first step in becoming a programmer now?

DR: I guess it depends on your interests. There are many different places to start. You could go back and learn Basic, one of the simplest computer languages, in order to see the way lines of code in a program flow. Or you could even just get a program like Director and sequence some events. HyperCard was a great little tool that gave people a sense of running a script.

If I were starting, I’d just grab a big fat book that starts from the beginning, like Dan Shiffman’s book Learning Processing (Morgan Kaufmann, 2008). You can sit down with a book like that and, with no knowledge at all, end up with a fairly good sense of programming in a couple of weeks.

I’m not asking everyone to be a programmer at this point. Not this generation, anyway. That’s a bit like asking illiterate adults to learn how to read when they can just listen to the radio or to books on tape. I get that. But for those who will be living in increasingly digital spaces, programming will amount to the new literacy.
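[To give the above a concrete shape: here is the kind of first program Rushkoff describes, one whose flow you can read line by line as sequence, decision, and repetition. The example itself is my own sketch, written in Python rather than the Basic or Processing he mentions.]

```python
# A first program: its flow reads top to bottom, with one loop
# (repetition) and one branch (decision) redirecting that flow.

def classify(numbers):
    """Split a list of numbers into evens and odds."""
    evens, odds = [], []
    for n in numbers:        # repetition: visit each item in order
        if n % 2 == 0:       # decision: the flow branches here
            evens.append(n)
        else:
            odds.append(n)
    return evens, odds

print(classify([1, 2, 3, 4, 5]))  # ([2, 4], [1, 3, 5])
```

Ten-odd lines like these are enough to see what Rushkoff means by the flow of a program: the shape of the code is the shape of the process.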

RC: Though you never stray too far, you seem to have come back to your core work in this book. What’s next?

DR: I have no idea, really. Having come “home” to a book on pure media theory applied to our real experience, I feel like I’ve returned to my core competence. I feel like I should stick here a while and talk about these issues for a year or so until they really sink in.

I’ve got a graphic novel coming out next year, finally, called ADD. It’s about kids who are raised from birth (actually, earlier) to be video game testers. I’d love to see that story get developed for other media, and then get to play around in television or film. There are also rumblings about doing another Frontline documentary. Something following up on “Digital Nation,” which I’d like to do in order to get more of my own ideas out there to the non-reading public.

I guess we’ll see.

——————–

Astra Taylor and Laura Hanna (the filmmakers behind Zizek!) put this short video together to help illustrate the ideas in Rushkoff’s Program or Be Programmed [runtime: 2:18]:

[Embedded YouTube video: kgicuytCkoY]

Browser Don’t Surf: The Web’s Not Dead… Yet.

Remember when people used to “surf the web”? Now it is said that typical daily browsing behavior consists of five websites. William Gibson’s age-old summary of web experience, “I went a lot of places, and I never went back” has become, “I go a few places, and I stay there all the time.” We don’t surf as much as we sit back and watch the waves. I started this post several months ago when I noticed that the lively conversations that used to happen on my website had all but ceased (and eventually ceased altogether). Though the number of visitors continued to increase, the comments had moved elsewhere. A link to a post here on Facebook garners comments galore on Facebook, but none on the actual post. I doubt that I’m alone in experiencing this phenomenon.

I Tweeted (that still sounds silly, doesn’t it?) sometime last year, “Facebook 2009 = AOL 1999.” I was being snarky at the time, but there are good reasons the analogy holds. As Dave Allen of North pointed out recently, search engine optimization (SEO) and search engine marketing (SEM) are shams for users. For those who don’t know, SEO and SEM are strategies for gaming Google’s search algorithms, thereby attaining higher page rank in search results. That’s great if the optimized site actually has what you’re looking for, but unfortunately this is becoming less and less the case. (Dave was looking for some bamboo poles from a local source for his backyard in Portland. I challenge you to find one using Google.)
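To make the gaming concrete: Google’s foundational ranking idea, PageRank, scores a page largely by the links pointing to it, and manufacturing inbound links is exactly what SEO link schemes exploit. Here is a toy sketch of that link-counting idea over a made-up three-page web; it is my own illustration of the published concept, and the production algorithm is vastly more elaborate.

```python
# Toy PageRank by power iteration over a tiny hypothetical link graph.
# links[page] = the pages that `page` links out to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start everyone equal
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)   # each page splits its score
            for target in outgoing:              # among the pages it links to
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
```

Page “c” ends up ranked highest simply because two pages link to it. Point enough synthetic links at a page and its score rises whether or not it actually has the bamboo poles you were looking for.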

Enter closed communities like AOL and Facebook: These social networks help filter the glut by bringing the human element back into the process. So-called “social search” or “social filtering” helps when Google fails. So, even as Facebook has become the new “training wheels” of the Web (as AOL was before it), it also serves as a new organizing principle for all of the stuff out there.

Once I read the Wired cover story on the death of the web, I knew this idea had to be revisited. The claim that the web is dead is more than a ploy to sell magazines and less than a statement of truth. Yes, we’ve used the terms “web” and “internet” interchangeably (even jokingly combining them in the portmanteau “interwebs”) when they’re not the same thing, but don’t get it twisted: The web is not dead. It’s changing, growing, reorganizing, yes. But it’s far from dead.

Organizing principles are just filters; they include, they exclude, they make sense of would-be chaos. Good examples include books, solar systems, and city grids. As an organizing principle, the web is lacking at best, but it’s not lacking enough to wither and die just yet. Sure, the “app-led” (i.e., Appled) future, with its smart phones, iPhones, iPads, and other gadgets is forming closed silos using the internet’s backbone, but you aren’t likely to be sitting at your desk using anything other than the web for a while to come yet.

That brings us back to the shift from outlying sites (like this one) to filtering sites (like Facebook). As long as web search is run by algorithms that can be gamed (thereby rendering them all but useless), the closed silos will keep stacking up — on and off the web proper. Where will that leave sites like mine? I don’t know, but no one is interested in The Roy Christopher App just yet.

SXSW 2011: My Panel/Talks

Voting has begun for South by Southwest 2011. I have proposed two talks and one panel, and I am hereby requesting your support. Click on the links below and vote for these:

INTERACTIVE: Disconnecting the Dots: How Our Devices are Divisive:
We drive cars to the gym to run miles on a treadmill. Inclement weather notwithstanding, why don’t we just run down the street? The activities are disconnected. We sit in close physical proximity with each other and text others far away. The activities are disconnected. Technological mediation creates a disconnection between physical goals and technology’s “help” in easing our workload. There are at least two types of disconnection enveloping our days: one between ourselves and our environment (e.g., pumping water vs. pumping iron) and one between ourselves and each other (e.g., individual distraction vs. global connection) with technology wedged in between in both cases. If our culture is essentially technology-driven, then what kind of culture emerges from such disconnections between our physical goals and our technologically enabled activities?

FILM: Building a Mystery: Taxonomies for Creativity:
There is a limit — a rule of the grammar, if you will — on the number of elements that the average story can carry. There’s a point at which too many elements cause one story to fall apart, a line across which something else (e.g., a sequel) is needed. This limit is qualitative to be sure, but it’s not hard to tell when it’s been exceeded. While building a theory and weaving a narrative are very different enterprises, one can see parallels in the number of elements each will carry. It’s less like the chronological restrictions we place on certain activities (e.g., you must be 18 to vote, 21 to drink, etc.) and more like having enough cream and sugar in your coffee. It’s a difference like the one between hair and fur. So, how many elements make a good story?

MUSIC: Finding Success and Thriving on Chaos:
If you need help finding your way into the current music milieu or your way from a rut to a groove, this is the talk for you. Helmed by musicians with lengthy and successful yet unconventional careers and unconventional takes on the upended music industry (e.g., Paul D. Miller a.k.a. DJ Spooky, Dave Allen of Gang of Four/Shriekback, Aesop Rock, Rebecca Gates of The Spinanes, et al.), this panel will be stoked and stocked with helpful information, insight, and inspiration for the aspiring as well as the veteran artist. From punk rock to Hip-hop, all genres are welcome. The unserious need not apply.

Okay, so there are a million other awesome-looking panels and talks, but I must implore you all to vote for these. Voting closes on August 27th, so vote early and every day until then. Please and thank you.

Obscured by Crowds: Clay Shirky’s Cognitive Surplus

In The Young & The Digital (Beacon, 2009), Craig Watkins points out an overlooked irony in our switch from television screens to computer screens: We gather together around the former to watch passively, while we individually engage with the latter to actively connect with each other. This insight forms the core of Clay Shirky’s Cognitive Surplus: Creativity and Generosity in a Connected Age (Penguin, 2010). Shirky argues that the web has finally joined us in a prodigious version of McLuhan’s “global village” or Teilhard de Chardin’s “Noosphere,” wherein everyone online merges into one productive, creative, cooperative, collective consciousness. If that seems a little extreme, so are many of Shirky’s claims. The “cognitive surplus” marks the end of the individual literary mind and the emergence of the Borg-like clouds and crowds of Web 2.0.

Okay, not exactly, but he does argue for the potential of the cognitive collective. So, Wot’s… Uh, the deal?

Is Clay Shirky the new Seth Godin? I’d yet to read anything written by him that didn’t echo things I’d read from David Weinberger or Howard Rheingold (or Marshall McLuhan, of course), and I hoped Cognitive Surplus would finally break the streak. Well, it does, and it doesn’t. As Shirky put it in his previous book, Here Comes Everybody (Penguin, 2008), “society doesn’t change when people adopt new tools; it changes when people adopt new behaviors.” This time around he argues that we adopt new behaviors when provided with new opportunities, which, by my estimate, are provided by new tools — especially online.

Steve Jobs once said that the computer and the television would never converge because we choose one when we want to engage and the other when we want to turn off. The problem with Shirky’s claims is that he never mentions this disparity of desire. A large percentage of people, given the opportunity or not, do not want to post things online, create a Facebook profile, or partake in any of a number of other web-enabled sharing activities. For example, I do not like baseball. I don’t like watching it, much less playing it. If all of a sudden baseballs, gloves, and bats were free, and every home were equipped with a baseball diamond, my desire to play baseball would not increase. Most people do not want to comment on blog posts, video clips, or news stories, much less create their own, regardless of the tools or opportunities made available to them. Cognitive surplus or not, its potential is just that without the collective desire to put it into action.

Shirky’s incessant lolcat bashing and his insistence that we care more about “public and civic value” come off as “net” elitism at its worst. The wisdom of crowds, in James Surowiecki’s phrase, doesn’t necessarily lead to the greater good, whatever that is. You can’t argue for bringing brains together and then expect them to “do right.” Are lolcats stupid? Probably, but they’re certainly not ushering in the end of Western civilization. It’s still less popular to be smart than it is to be a smartass, but that’s not the end of the world, online or off. The crowd is as wise as the crowd does. Glorifying it as such, as Jaron Lanier points out in You Are Not a Gadget (Knopf, 2010), is just plain wrong-headed.

The last chapter, “Looking for the Mouse,” is where Shirky shines, though. [Although its namesake echoes a story by Jaron Lanier from a 1998 Wired article about children being smarter and expecting more from technology. Lanier wrote, “My favorite anecdote concerns a three-year-old girl who complained that the TV was broken because all she could do was change channels.” Shirky’s version involves a four-year-old girl digging in the cables behind a TV, “looking for the mouse.”] His ability to condense vast swaths of knowledge into a set of tactics for new media development in this last chapter is stunning compared to the previous 180 pages. Perhaps he is the new Seth Godin after all.

References:

Lanier, J. (1998, January). “Taking stock.” Wired, 6.01.

Lanier, J. (2010). You are not a gadget: A manifesto. New York: Knopf.

Shirky, C. (2010). Cognitive surplus: Creativity and generosity in a connected age. New York: Penguin.

Surowiecki, J. (2005). The wisdom of crowds. New York: Anchor.

Watkins, S. C. (2009). The young & the digital. Boston: Beacon.