Best of Technology Writing 2009 (and 2010!)


You might have heard the news already, but I am pleased to confirm it: The Best of Technology Writing 2009 collection is out, and it includes the article I wrote for Wired last year on griefer culture (“Mutilated Furries, Flying Phalluses”). This makes the third year in a row I’ve had an article in the series, and I’m more dazzled than ever by the illustrious company I’ve been put in. There are terrific pieces here by Clay Shirky, Nicholas Carr, Dana Goodyear, danah boyd, and yadda yadda, not to mention a meaty, thoughtful introduction by this year’s presiding editor, Mr. Illustrious himself, Steven Johnson.

Now, here is where traditionally I announce that I’ll be giving away one free copy of the collection to each of the first three readers who email me in response. But first, I have a bigger announcement I want to make: I am officially out of the running for inclusion in next year’s Best of Technology Writing because — ta-da! — I’m going to be its editor. This entails, on the one hand, the honor and challenge of filling the god-size shoes of my predecessors (Clive Thompson, Stevens Levy and Johnson), and on the other, the pain in the ass of winnowing this year’s metric crap-tonne of technology writing down to its very finest crap-milliliter. Care to help?

I’m serious. The nominations page is now open for submissions. And to kickstart the crowdsourcing, I hereby announce a slight modification to the rules of my traditional free-book offer: This year the freebies go to the first four of you who send a nomination straight to me. It has to be a nomination no other first responder has already sent in (so including multiple nominations in your message will improve your chances), but otherwise the only restrictions are the guidelines given on the submissions page:

Both readers and writers are welcome to nominate pieces, and self-nominations are encouraged. Profiles, policy, and Big Think pieces; blog posts, features, and investigative reporting; human interest, humor, business and gadgetry are all welcome. But the ideal submissions will:

* be engagingly written for a mass audience;

* be no longer than 5,000 words;

* have been published between January 1 and December 31, 2009.

Got it? Right then: Start your browsers aaaaaaand go!

UPDATE (11/10/2009): We have a winner: Anne Trubek of Cleveland, Ohio, has submitted a nomination and claimed her very own copy of The Best of Technology Writing 2009. But hold on, I’ve still got two more copies to give away. No, tell you what, make that three more copies. Yes: I’m going to give away a grand total of FOUR FREE COPIES of this year’s Best of Tech Writing collection. And hell, I’ll make it EVEN EASIER for you to win one: Yes, I am hereby loosening the contest rules and taking any valid nomination as a winner, even if it’s already been submitted. So go ahead: Give me one nomination, give me twenty, your chances of WINNING are just as SPECTACULAR either way. Are the implications sinking in here? Yes? THEN WHAT IN THE NAME OF SWEET BABY JESUS ARE YOU WAITING FOR? Send. Me. Your nomination(s). NOW.


The Saddest Banknote in the World



My wife and I had dinner the other night with our friends John and Jean Comaroff, South Africans by birth, jetsetting anthropologists by trade, just back from Cape Town and bearing — as a gift for us, it turned out — the Zimbabwean billion-dollar banknote depicted above. I had heard of these, as maybe you have too. I knew that hyperinflation in Zimbabwe has led in recent months to the printing of even higher denominations: the trillion-dollar bill, the hundred trillion. But knowing that these bills existed, I discovered, was not at all the same thing as encountering the billion-dollar bill itself, laid out delicate and dignified on the dinner table before me, its head-spinning pathos catching me so utterly off guard that, days later, I am still sorting it out.

There was, of course, the vast ironic gap between the boldly lettered phrase ONE BILLION DOLLARS, in all its astronomic magnitude, and the actual value of the bill it was printed on. Currently the Zimbabwe dollar lists on foreign-exchange trading sites at a price of roughly 3 millionths of a U.S. cent (a rate so outlandish that its like nowadays can only be found among the currencies of massively multiplayer online fantasy games, such as Ultima Online’s Britannian gold piece, lately selling for about a million to the dollar). But even this pitiful valuation appears to be a polite fiction, papering over the reality that the Zimbabwe dollar scarcely trades at all, the U.S. dollar having, for all intents and purposes, become Zimbabwe’s legal tender, and the Zimbabwean finance minister having declared the national currency, as of late last week, “essentially dead.” Amid the heartbreaking vertigo of all those zeroes, then, the billion-dollar note now carries a hint as well of the memento mori’s sad uncanniness.
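For the numerically inclined, the gap can be put in figures. A back-of-the-envelope sketch, taking the rates quoted above at face value (the actual rates shifted daily, and the Zimbabwe dollar had by then all but stopped trading):

```python
# Figures as quoted in the post, taken at face value.
ZWD_PER_NOTE = 1_000_000_000    # face value of the banknote
USD_CENTS_PER_ZWD = 3e-6        # "roughly 3 millionths of a U.S. cent"
UO_GOLD_PER_USD = 1_000_000     # Britannian gold, "a million to the dollar"

# What the billion-dollar note would fetch at the listed rate.
note_value_usd = ZWD_PER_NOTE * USD_CENTS_PER_ZWD / 100

# How the Zimbabwe dollar compares to a fantasy-game currency.
usd_per_zwd = USD_CENTS_PER_ZWD / 100
usd_per_gold = 1 / UO_GOLD_PER_USD

print(f"Billion-dollar note at the listed rate: ${note_value_usd:.2f}")
print(f"One UO gold piece is worth about {usd_per_gold / usd_per_zwd:.0f} Zimbabwe dollars")
```

In other words, even at its politely fictional listed rate, a single gold piece in Ultima Online outvalues the Zimbabwe dollar by more than an order of magnitude.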

But saddest and most vertiginous of all, in the end, is what is written in the bill’s fine print: Zimbabwe Reserve Bank governor Dr. G. Gono’s signed promise, at once personal, official, and majestically absurd, to “pay the bearer” the billion dollars represented by the note should the bearer appear before him and demand that payment.

Sure, the chief cashier of the Bank of England makes a similar promise on every British pound sterling note in circulation, and no doubt the Reserve Bank of Zimbabwe borrowed the exact phrasing at least partly in hopes of borrowing a little of the pound’s credibility. Sadly, though, the rhetorical effect is quite the opposite. Printed on the pound, the promise seems a quaint yet venerable archaism, like wigs on judges — an assurance of essential continuity between today’s postmodern banking practices and the days when real money was made of gold and silver and a banknote simply guaranteed that you could get your hands on some. But on the Zimbabwe dollar, it has the flavor of a desperate stunt — as if Dr. Gono, like some “craaaaazy” discount-electronics guy on late-night television promising to eat his shorts if you can find a better deal, were literally daring you to walk into his office and trade banknotes with him if you doubt the value of his product — and only drives home the dizzying ungroundedness of all contemporary money.


Chris Cox Lets His Critique Flag Fly


Daniel Roth’s article on the future of financial-data analysis in the latest Wired (Road Map for Financial Recovery: Radical Transparency Now!) is a must-read if only for the last few paragraphs, in which Christopher Cox, outgoing SEC chairman and a card-carrying Orange County Reagan Republican, starts getting all post-Marx/postmodern widdit like some kind of white-shoed Baudrillard of the Beltway:

“The SEC was founded on the legal concept of disclosure and transparency,” [Cox] says. “It was not a technological concept.” He flashes a politician’s smile, a quick display of blindingly white teeth—cover while he thinks about what comes next. “Today, we have technology that was unimaginable in the early part of the 20th century, that can reify this idea in ways that are far more expansive and consequential.”

As Cox sees it, that massive computational power has primarily been used by financial engineers, who create abstract models of how the market should operate and make bets based on those models. “You know Borges, the writer?” Cox asks. “He wrote those fantastical short stories. He has one called On Exactitude in Science.” The parable tells of a kingdom obsessed with creating a perfect map of itself—an essentially useless quest that leads them to draw a map that is the same size as the territory it is supposed to represent. Cox sees the story as a metaphor for the modern financial industry, which is so obsessed with modeling the market that it has lost sight of the data beneath those models. But make more data available and you don’t need the perfect map. “To the extent that we can atomize what now are these hopelessly complex forms, dense with legalese, and let people have ready means to pull from actual reality what it is that they need, it’s no longer a model. It’s real.”

I’m sorry, did he just say reify? And did he, or did he not, then recapitulate almost move for move the opening of Jean Baudrillard’s key essay “The Precession of Simulacra” — from the tell-tale deployment of that very same Borges allegory to the annunciation of a higher order of coding in which distinctions between the map and the territory, the real and the represented, collapse entirely?

More to the point, what does it tell us that the man who ushered the U.S. financial system to its rendez-vous with doom seems to have viewed his world in much the same way the high postmodernists viewed theirs? Perhaps that their critique of late capitalism has all along been more correct than its detractors claim. Maybe also that it’s every bit as useless.


Kittens, Kittens, and the Online-Cultural Crisis of Propriety


This week’s memetic-research news begins with an experimental human-subject trial I conducted the other day — the human subject being my seven-year-old daughter, whom I deliberately exposed to the viral video “Kittens Inspired By Kittens”. As you probably know, the video features an adorable little girl doing adorably inventive voice-over kitten impressions while paging through her copy of the children’s photo book Kittens, all of which you might think would have been too much for the young test subject to resist. Interestingly, however, subject displayed high levels of immunity to the video’s virulent cuteness, responding only when the little girl’s uncanny rendition of the sound of a tiny, fluffy, impossibly cuddlicious white kitten going pee-pee managed to elicit a chuckle — and even then perhaps only because the manifest amusement of subject’s father suggested it might be polite to play along.

What can we conclude from these results? Given the small sample size — not to mention the conflicting results in at least one report from the field — some might say not much. But as the hour is late and I have a point to make, I say we agree the experiment proves exactly what I suspected all along: That “Kittens Inspired By Kittens” is, for all the childish charm of its content, a peculiarly adult entertainment.



One True Random Thing


me in a tutu, c. 1972, originally uploaded by Julian.

And now, finally, lest it be said that I am too caught up in wisecrack and critique to respond to the essentially open-hearted social gesture that is a Facebook 25 Random Things About Me tagging, I give you this one true thing and the photographic evidence that confirms it:

1. I wore a tutu to school one day when I was in the third grade.

Which besides being true has the additional virtue of being truly random, as I cannot for the life of me remember why I did this or think how to fit this moment into the narrative of my life. Judging from the other boys’ attire, it wasn’t any kind of dress-up day, nor did I make a habit of cross-dressing or grow up anything but straight.

On the other hand, that’s not to say that my “progressive” Southern California public elementary school wasn’t the sort of place that might encourage this sort of thing. My third-grade teacher, in particular, a wonderful woman named Billie Vincent, pretty much let us each design our own curriculums, and I’d always assumed there was some high-flying educational theory behind that, which I suppose there was. But I’m guessing now that one of the most valuable things I took away from that third-grade year was a taste for randomness.


Random Things, Revisited


I was kind of hoping last week’s 25 Random Things About Me spoof would be my last expenditure of mental energy on that life-sucking Facebook phenomenon. But then along came Slate with their epidemiological analysis of the 25 Random Things outbreak and I, being the sucker for semi-rigorous sub-Gladwellian pop-social-science-of-the-everyday that I am, found myself obliged to contemplate the damn thing some more. In particular, I was reminded that it’s de rigueur nowadays to think of things like 25 Random Things not as glorified fads and chain letters but as memes, with all the bioscientific conceptual baggage that entails. Whence I was led, in turn, to realize that my substitution of 25 random pop-song lyrics for 25 actual self-revelations was not, in fact, a passive-aggressive attempt to play nice with my tag-happy Facebook friends while at the same time still sort of being a smart-assed dick about it but, rather, an act of memetic engineering — an antiviral intervention aimed at slowing the original meme’s contagion by splicing in junk DNA.

Alas, the delusion was short-lived. What I’d found most interesting about Slate’s analysis was its tracking of the meme’s mutation through various numerical configurations — from 16 Random Things to 15, 17, 35, 100, and finally 25, at which point the meme, presumably having reached its evolutionarily optimal size, went viral. But as I watched my own re-engineered virus drift from host to host, I realized how little that particular variable really explained about the meme’s trajectory. My variant was a list of 25 things, too, but it was immediately obvious it would never spread fast enough to crowd out the original meme. In a way, of course, the problem remains one of simple math: While just about everybody has 25 little personal facts that maybe 25 personal acquaintances will have sufficient personal interest in to click through to and read, there are only so many Random-Things-ready pop-song lyrics to go around. More to the point, though: The key variables here aren’t essentially quantitative or even, strictly speaking, objective, but woven into the web of relationships and meanings that constitutes the life of cultures. And that’s not something you’re ultimately going to make sense of with tools built for understanding the origin of species.

In saying so I am echoing much of what the media scholar Henry Jenkins had to say on his own blog last week, in a long and lucid post that blasted “the idea of the meme and the media virus, of self-replicating ideas hidden in attractive, catchy content we are helpless to resist” as “a problematic way to understand cultural practices,” proposing instead to found the study of memes on the distinctly non-genetic principle “that these materials travel through the web because they are meaningful to the people who spread them.” I’m not sure I’ll be heeding Jenkins’s call to replace the term “viral media” with “spreadable media” (I’ll take something that sounds less like it goes on a bagel, thanks), but I’m otherwise down with the program. And whatever life-sucking Web phenomenon next enters my Facebook feed or my Twitter stream or my email inbox, I will try not to let my inevitable ambivalence about it tempt me to believe it’s anything but the most human sort of artifact there is: A token of our need to fashion meaning from a world of random things.


25 Random Things About Me

  1. I shot a man in Reno.
  2. I’m too sexy for Milan.
  3. I’ve got lots of friends in San Jose.
  4. I am everyday people.
  5. I’m every woman.
  6. I’m the tax man.
  7. I am the walrus.
  8. I want to know what love is.
  9. I want to run naked in a rainstorm.
  10. I’d like to teach the world to sing.
  11. My hips don’t lie.
  12. I remember when rock was young.
  13. I’m special. So special.
  14. I can’t go for that.
  15. There is always something there to remind me.
  16. I’ve been alive forever, and I wrote the very first song.
  17. I’m Rob Base, and I came to get down.
  18. Regrets? I’ve had a few.
  19. I believe the children are our future.
  20. I believe I can fly.
  21. I know what boys like.
  22. I’ve been working in a coal mine.
  23. I was working as a waitress in a cocktail bar.
  24. I’m on the top of the world, looking down on creation.
  25. I am a lineman for the county.

Mind in the Cloud


New Scientist the other week reviewed a book called Supersizing the Mind: Embodiment, Action, and Cognitive Extension by Andy Clark. I want it.

Not that I have the time or even, frankly, the money to spend on $30, 320-page slabs of hardcore cognitive philosophy. I just think it would be nice to have on my shelves a book that legitimizes once and for all my natural inclination to think of search engines, wikis, blogs, and other online repositories of knowledge as literal extensions of the human minds that engage with and create them. I admit I’m a little embarrassed by this inclination. Bolder supporters of the notion that mind is bigger than brain — that it’s “immanent,” as Gregory Bateson wrote, “not only in the body… [but] also in pathways and messages outside the body” — always seem to end up teetering on the brink of New Age pantheistic cheese (“there is a larger Mind of which the individual mind is only a subsystem,” added Bateson) if not falling straight in. And while I’m also aware that the perfectly respectable fields of social and cultural psychology seem to make regular use of theories of distributed cognition without going up in a metaphysical haze, I can’t help guessing their claims are a little more provisional than the cyborg fantasies that haunt my thinking about the online data-cloud.

Clark’s claims, however, appear to be both strong and grounded, if this choice quote selected by reviewer Owen Flanagan is anything to go by:

To unravel the workings of these embodied, embedded, sometimes extended minds, requires an unusual mix of neuroscience, computational, dynamical, and informational-theoretic understandings, ‘brute’ physiology, ecological sensitivity, and attention to the stacked designer cocoons in which we grow, work, think, and act.

There is, on the one hand, nothing airily rhetorical about the research project here proposed nor, on the other, any sense that Clark shies away from insisting his “extended” cognition is indeed just what we talk about when we talk about mind. Flanagan doesn’t say whether Clark makes the obvious metaphorical leap from “cocoons” to the Web, but he says enough to make it clear that for mainstream cognitive philosophy these days, the leap is hardly out of bounds — or even merely metaphorical. Hell, even Sergey Brin’s famous claim that Google, perfected, would be “like the mind of God” seems wimpy in comparison. In Clark’s formulation, it appears, Google already is the mind of humanity — a part of that mind, at least, and in a manner that is no less complicated than humanity itself, but without the need to wait for perfection or rely on simile. And that is more than enough to satisfy my cyborg fantasies.

I want this book.


Raelians, Drummers, Star Wars Troopers


Two weeks ago, while visiting Southern California to report on the activities of the local chapter of Anonymous anti-Scientology protesters (about which more, perhaps, later), I attended Pasadena’s approximately annual Doo Dah Parade and happened upon a harmonic convergence of Raelians, Star Wars troopers, high-school marching-band drummers, and flash-blinding California sunshine, which I captured and have reproduced here in hopes that someone among you can decipher its almost certainly prophetic message for me.


“A Taxpayer May Wonder”-land


In honor of the U.S. National Taxpayer Advocate’s delightfully thorough remarks on the through-the-looking-glass income-tax implications of virtual worlds, I have reproduced those remarks in their entirety right here in full living HTML. I have also made some remarks of my own over on Terra Nova, if you’re interested.
