Hip-hop Then & Now: Chuck D, Common, and Joan Morgan Come to The University of Texas at Austin

On February 10th, 2011, Chuck D, Common, and Joan Morgan assembled in the brand-new Student Activity Center on The University of Texas campus in Austin. It was an evening of in-depth discussion, astute analysis, and the usual gripes.

If you know me, you know that Public Enemy is one of my all-time favorite groups regardless of genre. Their It Takes a Nation of Millions to Hold Us Back (Def Jam, 1988) is not only what I consider the best record ever recorded, but it was also crucial to my lifelong fandom of Hip-hop (my own first book is named after a Chuck D lyric from the record). Chuck and P.E. were essential to my getting through high school and undergraduate studies.

Common has been one of my favorite emcees since I first heard Resurrection (Relativity, 1994) in the mid-1990s. Not only was he the first rapper out of Chicago that I heard (peace to E. C. Illa), but he seemed to be keeping the Native Tongues torch burning bright at a time when they were fumbling (no disrespect; they got their grip back). He has taken risks, pushed boundaries, and remained successful where others follow trends or fall off.

Joan Morgan is a badass. She’s been doing Hip-hop journalism since before it had a name. Her presence and insights in this talk were invaluable, and I wish we’d had more time to hear from her (I’m hoping to interview her for the site at a later date; fingers crossed). Her angle is vehemently feminist, nuanced with knowledge, and tempered with truth. When Nicki Minaj became the topic of discussion, she was one of the few people I’ve heard speak on the Regis Philbin incident. That story should’ve been in everyone’s face, but it was largely buried.

If nostalgia is the longing for a past that never existed, then the SAC Ballroom was full of just that. Joan asked if the crowd thought that Hip-hop was better “then” than it is “now,” and most of the hands in the room went up. I find this very troubling. I was one of the few, along with our three honored guests, who were actually there “then” (I heard students around me say that they didn’t know who Chuck D was until they looked him up after hearing about this event). I continue to argue that Hip-hop is better now. Sure, everything that came out then was that next new shit. The genre was young and finding its way (I would also argue that it still is), so there was plenty that hadn’t been done or heard yet, whereas now those styles have been done and heard. But for every Public Enemy and Common, there was an MC Hammer and a Vanilla Ice. Go back and listen to the average record from 1984, 1986, 1988, 1990, 1994 — pick a year: Most of them sound dated and not nearly as complex and interesting as the worst thing out today. Sure, there are exceptions, but as a whole, Hip-hop is better now. It just is. Thinking that you missed the best of it is problematic on many levels.

Chuck mentioned the fact that fans now have access to the past in a way that fans of then never did. This is a key insight. Technology curates culture. You cannot assume that the next generation doesn’t know about something from the past. They might not grasp the historical context of, say, “Fight the Power” by Public Enemy, “Wicked” by Ice Cube, or even “Fuck tha Police” by N.W.A., which were uncompromising responses to volatile times in our nation’s history, or grasp what it was like to hear The Low End Theory, Straight Outta Compton, Enter the Wu-Tang (36 Chambers) — or It Takes a Nation of Millions… for that matter — when they dropped. But you can’t assume they haven’t heard them or seen the videos. It’s all out there.

On the other hand, Common blamed technology for the lack of creativity and “feeling” in current Hip-hop. This argument troubles me as well. It’s a non-argument that leads to an infinite regress. Hip-hop’s detractors claim that sampling — whether with turntables or sequencers — isn’t really making music. They claim that at best it’s lazy and at worst it’s theft. No one at this talk would agree with that, but it’s the same argument. Saying that technology takes away the human element and therefore the feeling of music or that it makes it too easy thereby giving someone an unfair advantage is the same thing as claiming that sampling isn’t a viable way to make music in the first place. It’s all about what you do with it. Heads know better.

These are not new issues, and I was hoping we’d moved past them. Hip-hop — then and now — is still the most interesting thing happening in music. I will always love H. E. R.

————————-

Here’s a handheld video (no cameras were allowed) of the Q&A session with Common, Joan Morgan, and Chuck D in the SAC [runtime: 8:13]:

9aSqrBrEwxE

Yoxi Live Twitter Interview

The good folks over at Yoxi decided to interview me live on Twitter today mostly about my upcoming SXSW Interactive talk. Below is a transcription of the chat. I’ve edited it for chronology, continuity, and obvious text limitations, but overall it’s just as it appeared live.

Yoxi: Excited to have you as our 4th guest for #yoxichat. We’re stoked about your #SXSW panel!

Roy Christopher: Thank you! Glad to be here. I’m stoked on the #SXSW talk, too. Should be a hoot.

Y: Definitely! Speaking of #SXSW, what inspired your panel, Disconnecting the Dots: How Our Devices Are Divisive?

RC: In the midst of a book about technological mediation, I proposed this talk to #SXSW to work through some of those issues.

Y: Technological mediation, interesting! Tell us more about your research in that area!

RC: The book and talk are about all the ways we mediate our relationships with each other, our world, etc. through technology. Trying to develop a theory of technology that can account for new and old mediation alike.

Y: That’s great and truly relevant. Tech continues to consume our everyday lives.

RC: I’ve been exploring the land between these lines for years, trying to assess it from the broadest possible perspective.

Y: If our “devices are divisive,” how do we find a balance in our digital lives?

RC: The same way we keep up with our lawns: sometimes we do, sometimes we don’t. Technology is a part of our nature.

Y: Very true but if tech is part of our nature, how do we find a balance? Do you think people should just disconnect?

RC: Meta-attention is key: Assessing what one pays attention to and adjusting accordingly. Disconnection is not the answer.

Y: In terms of solutions, what advice do you have for teams in Yoxi’s Competition #2: Balance Your Digital Diet?

RC: I think “balance” is the wrong word. It’s more of a “tension,” and I think backing up and assessing it is the first step. Meta-attention and metacognition are not as widespread phenomena as they need to be. This is not elitism; it’s literacy.

Y: Thanks so much for joining us. Great thoughts. Hope everyone will check out your #SXSW panel. See you in Austin!

RC: Thank you for the time and attention. I appreciate it. See you in Austin in March!

————-

Many thanks to Arielle and Randy at Yoxi for their interest and for setting this up.

Distant Early Warning: Coupland on McLuhan

If I had to pick a patron saint, a hero, or a single intellectual influence for my adult self, it would undoubtedly be Marshall McLuhan. If you’ve spent any time at all reading my work, you’ve seen his name and his ideas. Marshall McLuhan: You Know Nothing of My Work! (Atlas & Co., 2010) is the latest biography of the man and differs from previous versions in many ways, not the least of which is the author. Having struggled through several of Douglas Coupland’s novels, I had my reservations about his writing this book. I am glad to say he eloquently quelled most of my concerns.

The world weighs on my shoulders
But what am I to do?
You sometimes drive me crazy
But I worry about you — Rush, “Distant Early Warning”

There are several things that people often overlook or misunderstand about McLuhan that Coupland nailed in this book. One was his devout Catholic faith, which rooted his thinking in many ways once he found it, and another was his deep disdain of the media and its attendant technology. In spite of his insight, foresight, and prescience, he hated this stuff. Coupland points out many times that McLuhan wouldn’t have liked our current reliance on technology and connectivity one bit, but he would’ve found it interesting. Another of Coupland’s key insights is that, above all else, McLuhan was an artist, “one who happened to use ideas and words as others might use paint” (p. 16). Seen in this way, a lot of his work might make a hell of a lot more sense to newbies, critics, and haters alike. Like the best artists, he was a pattern perceiver of the highest order.

There’s really no considering this book, its author, or its subject without considering Canada. Yes, Canada, The Great White Wasteland that brought us Rush, hockey, Bob and Doug McKenzie, Justin Bieber, Coupland and McLuhan, as well as the latter’s most obvious forebear, Harold Innis. It’s cold up there, folks — cold and spread out. It makes one appreciate the human element.

“Call it religion or call it optimism,” Coupland writes, “but hope, for Marshall, lay in the fact that humans are social creatures first, and that our ability to express intelligence and build civilizations stems from our inherent social needs as individuals” (p. 165). Or, as McLuhan himself put it, “The user is the content” (Take that, so-called “social media experts”). McLuhan’s consistent focus on the individual is what has kept his ideas fresh in the face of new contrivances.

I know it makes no difference
To what you’re going through
But I see the tip of the iceberg
And I worry about you — Rush, “Distant Early Warning”

My problem with Coupland’s past work has had less to do with his writing ability (he’s an excellent writer) and more to do with his appropriation of Salingerisms, and not even a biography could escape them. Coupland alludes to The Catcher in the Rye by comparing McLuhan to Holden Caulfield on page 111. It’s an apt comparison, and it characterizes The Mechanical Bride-era McLuhan accurately, but I have to admit being irked at the reference.

With all of that said, You Know Nothing of My Work made me proud (I fancy myself something of a McLuhan scholar, so this is meant as a heartfelt compliment), and it made me cry (Though I already knew the story of McLuhan’s last days, a word-man unable to use words is still one of the saddest things I can imagine). I’d like to think Marshall McLuhan would’ve liked this book. It treats him with respect, humility, and humor, and I think it “gets” him. What else could he want from a biography?

————

Here is a scene that illustrates the heights of McLuhan’s fame, what Coupland calls “every geek’s dream,” and this book’s namesake: Marshall McLuhan in Woody Allen’s Annie Hall (1977) [runtime: 2:43]:

OpIYz8tfGjY

Geekend Notes by Raise Small Business Marketing

Hilton Head, South Carolina’s own Raise Small Business Marketing did a brief summary and write-up of my “How to Do Stuff and Be Happy” talk from Geekend 2010. Here’s the run-down:

I was excited for this session, mainly because doing stuff and being happy are two major challenges! Roy Christopher gave a laid-back presentation that basically went through some ideas on how to keep your focus and try to stay happy while actually getting things done.

Roy covers a lot of the information that was in his presentation on his own blog right here, so we won’t go over all of that. However, here are some of the things we really took away from the session:

  1. Roy was a competitive Rubik’s Cube Player (established geek cred for sure!)
  2. Find people who have done what you want to do and emulate them
  3. Feed and water your mentors: let people know you respect them and why
  4. Save your own story
  5. Keep a journal
  6. Keep a promise file
  7. Get organized
  8. Trust your curiosity

You can read the post here.

Many thanks to the folks at Raise and Geekend 2010.

21C Magazine: This is Your Brain Online

I compiled my thoughts on a bunch of recent books about the internet, social concerns, and brain matters (many of which have been hashed out right here on this site) into a piece called “This is Your Brain Online: Recent Books on Cognition and Connection” for 21C Magazine.

Here’s an excerpt:

Regarding public cell phone use, comedian Bill Maher once quipped that if he wanted to be so privy to one’s most intimate thoughts, he’d read his or her blog. Nancy Baym addresses this technologically enabled collision of public and private, as well as the more traditionally debated clashes, in Personal Connections in the Digital Age (Polity, 2010). Baym’s is by turns all-encompassing, in that she covers nearly every epistemological viewpoint on so-called social media expressed thus far, and all-purpose, in that anyone can read this book and see how these structures of knowledge apply to their own use of technology. In her study of technology’s influence on social connections, she breaks it down into seven key concepts: interactivity, temporal structure, social cues, storage, replicability, reach, and mobility. Sure, any time one attempts to slice up such a malleable and ever-changing landscape into discrete pieces one runs the risk of missing something. This process, one of closing off an open system in order to study it, is necessary if we are ever to learn anything about that system. Nancy Baym has collected, synthesized, and added to the legacy of digital media and indeed technology studies at large. This book is a great leap forward.

Many thanks to Ashley Crawford for the opportunity to contribute to 21C. Read the full piece here.

The Essential Tension of Ideas

One of the key insights in Richard Florida’s latest book, The Great Reset (Harper, 2010), is that rapid transit increases the exchange of ideas and thereby spurs innovation. Where the car used to provide this mass connection, now it hinders it. Increasingly, our cognitive surplus is sitting in traffic.

Ideas are networks, Steven Johnson argues in his new book, Where Good Ideas Come From (Riverhead, 2010). The book takes Florida’s tack, comparing cities to coral reefs in that their structure fosters innovation. Good ideas come from connected collectives, so connectivity is paramount.

Human history in essence is the history of ideas. — H. G. Wells

On the other end of the spectrum, in a recent post about Twitter, David Weinberger writes,

…despite the “Who cares what you had for breakfast?” crowd, it’s important that we’ve been filling the new social spaces — blogs, social networking sites, Twitter, messaging in all forms, shared creativity in every format — with the everyday and quotidian. When we don’t have to attract others by behaving outlandishly, we behave in the boring ways that make life livable. In so doing, we make the Net a better reflection of who we are.

And since we are taking the Net as the image of who we are, and since who we think we are is broadly determinative of who we become, this matters.

His description sounds like we’re evening out our representations of our online selves, reconciling them with our IRL selves, initiating a corrective of sorts. Coincidentally, in their sad version of “The SEED Salon,” a recent issue of WIRED had Kevin Kelly and Steven Johnson discuss the roots of innovation (by way of plugging their respective new books; here they are discussing same at the New York Public Library). Kelly states,

Ten years ago, I was arguing that the problem with TV was that there wasn’t enough bad TV. Making TV was so expensive that accountants prevented it from becoming really crappy—or really great. It was all mediocre. But that was before YouTube. Now there is great TV!

It sounds as though Weinberger and Kelly are calling for or defending a sort of “infodiversity,” which one would think would be a core tenet of media ecology. As Kelly puts it in What Technology Wants (Viking, 2010), “Both life and technology seem to be based on immaterial flows of information” (p. 10). He continues in WIRED,

To create something great, you need the means to make a lot of really bad crap. Another example is spectrum. One reason we have this great explosion of innovation in wireless right now is that the US deregulated spectrum. Before that, spectrum was something too precious to be wasted on silliness. But when you deregulate—and say, OK, now waste it—then you get Wi-Fi.

In science, Thomas Kuhn called this idea “the essential tension.” In his book of the same name (University of Chicago Press, 1977), he described it as a tug-of-war between tradition and innovation. Kuhn wrote that this tension is essential, “…because the old must be revalued and reordered when assimilating the new” (p. 227). This is one of those ideas that infects one’s thinking in toto. As soon as I read about the essential tension, I began to see it everywhere — in music, in movies, in art, and indeed, in science. In all of the above, Weinberger, Johnson, and Kelly are all talking about and around this idea, in some instances the innovation side, and in others, the tradition side. We need both.

One cannot learn anything that is more than one step away from what one already knows. Learning progresses one step or level at a time. Johnson explores this idea in Where Good Ideas Come From by evoking Stuart Kauffman’s “adjacent possible” (a term Johnson uses hundreds of times to great annoyance). The adjacent possible is that next step away. It is why innovation must be rooted in tradition. Go too far out and no one understands you; you are “ahead of your time.” Take the next step into the adjacent possible that no one else saw, and you have innovated. Taken another way, H. G. Wells once said that to write great science fiction, one must adopt a perspective that is two steps away from the current time. Going only one away is too familiar, and three is too far out. As Kelly puts it in the WIRED piece, “Innovating is about more than just having the idea yourself; you also have to bring everyone else to where your idea is. And that becomes really difficult if you’re too many steps ahead.” A new technology, literally “the knowledge of a skill,” is, in its very essence, the same thing as a new idea. For instance, Apple’s Newton was too many steps ahead of or away from what was happening at the time of its release. I’m sure you can think of several other examples.

Johnson, who has a knack for having at least one (usually more) infectious idea per book, further addresses the process of innovation with what he calls the “slow hunch.” This is the required incubation period of an innovative idea. The slow hunch often needs to find another hunch in order to come to fruition. That is, one person with an idea often needs to be coupled with another who has an idea so that the two can spur each other into action, beyond the power of either by itself (see the video below for a better explanation). It’s an argument for our increasing connectivity, and a damn good one.

That is not to say that there aren’t and won’t be problems. I think Kevin Kelly lays it out perfectly here:

…[T]here will be problems tomorrow because progress is not utopia. It is easy to mistake progressivism as utopianism because where else does increasing and everlasting improvement point to except utopia? Sadly, that confuses a direction with a destination. The future as unsoiled technological perfection is unattainable; the future as a territory of continuously expanding possibilities is not only attainable but also exactly the road we are on now (p. 101).

———————–

Here’s the book trailer for Steven Johnson’s Where Good Ideas Come From [runtime: 4:07]:

NugRZGDbPFU

————————

References:

Florida, R. (2010). The great reset. New York: Harper.

Johnson, S. (2010). Where good ideas come from. New York: Riverhead.

Kelly, K. (2010). What technology wants. New York: Viking.

Kuhn, T. (1977). The essential tension. Chicago: University of Chicago Press.

Weinberger, D. (2010). “Why it’s good to be boring on the web.” JoHo The Blog.

WIRED. (2010, October) “Kevin Kelly and Steven Johnson on where ideas come from.” Wired.com.

Douglas Rushkoff: The User’s Dilemma

For over two decades, Douglas Rushkoff has been dragging us all out near the horizon, trying to show us glimpses of our own future. Though he’s written books on everything from counterculture and video games to advertising and Judaism, he’s always maintained a media theorist’s bent: one part Marshall McLuhan, one part Neil Postman, and one part a mix of many significant others. Program or Be Programmed: Ten Commands for a Digital Age (OR Books, 2010) finds him back at the core of what he does. Simply put, this little book (it runs just shy of 150 pages) is the missing manual for our wild, wired world.

“Whoever controls the metaphor governs the mind.” — Hakim Bey

Rushkoff agrees with many media thinkers that we are going through a major shift in the way we conceive, connect, and communicate with each other. His concern is that we’re conceding control of this shift to forces that may not have our best interests in mind. “We teach kids how to use software to write,” he writes, “but not how to write software. This means they have access to the capabilities given to them by others, but not the power to determine the value-creating capabilities of these technologies for themselves” (p. 13). We’re conceiving our worlds using metaphors invented by others. This is an important insight and one that helps make up the core of his critique. This book is more Innis’ biases of media than it is McLuhan’s laws of media, and it left me astounded — especially after reading several books on the subject that were the textual equivalent of fly-over states. Program or Be Programmed is a welcome stop along the way.

I first interviewed Doug Rushkoff in 1999. We’ve stayed in touch since and discussed many ideas over the intervening decade, but we haven’t recorded any of these exchanges. I used this book as an opportunity to ask him a few questions.

Roy Christopher: Program or Be Programmed seems to distill quite a lot of your thinking about our online world from the past twenty-odd years. What prompted you to directly address these issues now?

Douglas Rushkoff: I guess it’s because the first generation of true “screenagers” or digital natives have finally come of age and, to my surprise, seem less digitally literate than their digital immigrant counterparts. I’ve written a number of books applying the insights of digital culture — of its do-it-yourself, hacker ethos — to other areas, such as government, religion, and the economy. But I realize that we don’t even relate to digital culture from the perspective of cultural programmers. We tend to accept the programs we use as given circumstances, rather than as the creations of people with intentions.

So I wanted to go back and write something of a “poetics” of digital media, sharing the main biases of digital technologies so that people can approach them as real users, makers, and programmers, rather than just as passive consumers.

If anything in particular prompted me, it was watching the way smart writers and thinkers were arguing back and forth in books and documentaries about whether digital technology is good for us or bad for us. I think it’s less a question of what the technology is doing to us than what we are choosing to do to one another with these technologies. If we’re even choosing anything at all.

RC: You mention in the book that anyone who seems a bit too critical of digital media is labeled a Luddite and a party-pooper, yet you were able to be critical, serious, and hopeful all at the same time. What’s the difference between your approach and that of other critics of all-things-digital?

DR: I think the main difference is that I’m more concerned with human intention and how it is either supported or repressed in the digital realm. Empathy is repressed; the ability to connect over long distances is enhanced. I go down to the very structure and functioning of these tools and interfaces to reveal how they are intrinsically biased toward certain kinds of outcomes.

So I’m less concerned with how a technology affects us than with how our application or misapplication of a technology works for or against our intentions. And, perhaps more importantly, how the intentions of our programmers remain embedded in the technologies we use. I’m not judging a technology one way or the other; rather, I am calling for people to make some effort to understand what the technologies they are using were made for, and whether that makes it the right tool for the job they’re using it for.

RC: You evoke Harold Innis throughout this book. Do you think there’s something that he covers more thoroughly or usefully than other media theorists since?

DR: I think he was better at looking at media shaping the nature and tenor of the social activity occurring on it, or around it. He’s the guy who would have seen how cell phones change the nature of our social contract on the street, turning a once-public space into lots of separate little private spaces. As far as media ecology goes, he was probably the purest theorist.

RC: The last programming class I took was a Visual Basic class in which even the programming was obscured by a graphical interface: there was little in the way of real code. For those of us interested, what’s the first step in becoming a programmer now?

DR: I guess it depends on your interests. There are many different places to start. You could go back and learn Basic, one of the simplest computer languages, in order to see the way lines of code in a program flow. Or you could even just get a program like Director and sequence some events. HyperCard was a great little tool that gave people a sense of running a script.

If I were starting, I’d just grab a big fat book that starts from the beginning, like Dan Shiffman’s book Learning Processing (Morgan Kaufmann, 2008). You can sit down with a book like that and, with no knowledge at all, end up with a fairly good sense of programming in a couple of weeks.

I’m not asking everyone to be a programmer at this point. Not this generation, anyway. That’s a bit like asking illiterate adults to learn how to read when they can just listen to the radio or books on tape. I get that. But for those who will be living in increasingly digital spaces, programming will amount to the new literacy.
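Ed. note: For the curious, here is the kind of first program Rushkoff is describing: a few lines whose flow you can trace from top to bottom. This sketch is mine, not from the interview, and Python stands in here for Basic or Processing.

```python
# A beginner's first program: a loop, a condition, and some output.
# The point isn't the task; it's seeing how lines of code flow.

def count_multiples_of_three(n):
    """Count the numbers from 1 to n that are divisible by 3."""
    count = 0
    for i in range(1, n + 1):  # step through 1, 2, ..., n
        if i % 3 == 0:         # branch: only multiples of 3 count
            count += 1
    return count

print(count_multiples_of_three(10))  # prints 3 (for 3, 6, and 9)
```

Reading a dozen lines like these, as a user-turned-maker, is exactly the shift from accepting a program as a given circumstance to seeing it as something a person wrote with intentions.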

RC: Though you never stray too far, you seem to have come back to your core work in this book. What’s next?

DR: I have no idea, really. Having come “home” to a book on pure media theory applied to our real experience, I feel like I’ve returned to my core competence. I feel like I should stick here a while and talk about these issues for a year or so until they really sink in.

I’ve got a graphic novel coming out next year, finally, called ADD. It’s about kids who are raised from birth (actually, earlier) to be video game testers. I’d love to see that story get developed for other media, and then get to play around in television or film. There are also rumblings about doing another Frontline documentary. Something following up on “Digital Nation,” which I’d like to do in order to get more of my own ideas out there to the non-reading public.

I guess we’ll see.

——————–

Astra Taylor and Laura Hanna (the filmmakers behind the film Zizek!) put this short video together to help illustrate the ideas in Rushkoff’s Program or Be Programmed [runtime: 2:18]:

kgicuytCkoY

danah boyd: Privacy = Context + Control

danah boyd is one of the very few people worthy of the oft-bandied title “social media expert” and the only one who studies social technology use with as much combined academic rigor and popular appeal. She holds a Ph.D. from UC-Berkeley’s iSchool and is currently a Senior Social Media Researcher at Microsoft Research New England and a Fellow at Harvard University’s Berkman Center for Internet and Society. As the debates over sharing, privacy, and the online control of both smolder in posts and articles web-wide, boyd remains one of a handful of trustworthy, sober voices.

boyd’s thoughts on technology and society are widely available online, as well as in the extensive essay collection, Hanging Out, Messing Around, and Geeking Out (MIT Press, 2009). In what follows, we discuss several emerging issues in social media studies, mostly online privacy, which has always been a concern as youth and digital media become ever more intertwined.

Roy Christopher: Facebook is catching a lot of flack lately regarding their wishy-washy Terms of Service and their treatment of their members’ privacy. Is there something happening that’s specific to Facebook, or is it a coincidental critical mass of awareness of online privacy issues?

danah boyd: Facebook plays a central role in the lives of many people. People care about privacy in that they care about understanding a social situation and wisely determining what to share in that context and how much control they have over what they share. This is not to say that they don’t also want to be public; they do. It’s just that they also want control. Many flocked to Facebook because it allowed them to gather with friends and family and have a semi-private social space. Over time, things changed. Facebook’s recent changes have left people confused and frustrated, lacking trust in the company and wanting a space where they can really connect with the people they care about without risking social exposure. Meanwhile, many have been declaring privacy dead. Yet, that’s not the reality for everyday folks.

RC: Coincidentally, I just saw your and Samantha Biegler’s report on risky online behavior and young people. The news loves a juicy online scandal, but the worries always seem so overblown to those in the know. What should we do about it?

db: Find a different business model for news so that journalists don’t resort to sensationalism? More seriously, I don’t know how to combat a lot of fear mongering. It’s not just journalists. It’s parents and policy makers and educators. People are afraid and they fear what they don’t know. It’s really hard to grapple with that. But what really bothers me about the fear mongering is that it obscures the real risks that youth face while also failing to actually help the youth who are most at-risk.

RC: NYU’s Jay Rosen maintains that his online presence is “always personal, never private.” Is that just fancy semantics or is there something more to that?

db: The word “private” means many things. There are things that Jay keeps private. For example, I’ve never seen a sex tape produced by Jay. I’ve never read all of his emails. I’m not saying that I want to, but just that living in public is not a binary. Intimacy with others is about protecting a space for privacy between you and that other person. And I don’t just mean sexual intimacy. My best friend and I have conversations to which no one else is privy, not because they’re highly secretive, but because we expose raw emotional issues to one another that we’re not comfortable sharing with everyone. Hell, we’re often not sure that we’re comfortable admitting our own feelings to ourselves. That’s privacy. And when I post something online that’s an in-joke to some people but perfectly visible to anyone, that’s privacy. And when I write something behind a technical lock like email or a friends-only account because I want to minimize how far it spreads, that’s privacy. But in that case, I’m relying more on the individuals with whom I’m sharing than the technology itself. Privacy isn’t a binary that can be turned on or off. It’s about context, social situations, and control.

RC: Hannah Arendt defines the private and public realms respectively as “the distinction between things that should be hidden and things that should be shown.” How do you define the distinction?

db: I would say the public is where we go to see and be seen while minimizing our vulnerabilities, whereas the private is where we expose ourselves in a trusted space with trusted individuals.

———————–

Ed. Note: It has come to my attention that what Jay Rosen actually said was, “In my Twitter feed I try to be 100 percent personal and zero percent private.” Apologies to everyone, especially Jay, for the misquote.