David Byrne

March 7th, 2005

At least 300 people showed up tonight to see David Byrne give a Powerpoint presentation about Powerpoint. [Free video of the talk may be available soon.]

What luck it was to happen upon one of the best seats in the front row. As Byrne was introduced he sat on the stage, about 5 feet away. The only camera I had was the crappy one on my mobile phone but I couldn’t resist a few shots.

Along with David Bowie and a few others, Byrne got me through my childhood during the 1980s. As a kid I loved his otherworldly tunes and that deeper, darker, subtler vibe that set him apart from the shrill, candy-colored MTV culture that overtook pop music in those days.

I can’t agree with all his Powerpoint points, but it’s fascinating to consider how he views this tool. The user interface geek in me is dying to watch him work with it firsthand in his natural habitat.

I wanted to step up and hug him when he said that a Powerpoint presentation is just part of a larger “performance” which includes not just the person speaking but also the audience, the room and the surroundings. Software designers, even self-professed user interface and needs analysis experts, can learn a lot from Byrne. The point seems obvious, but we’re still stuck in “user-centered” tunnel vision: we design for a prototypical single person staring at a single computer, as if that person and computer operate in a vacuum. An approach that ignores the surrounding context can be worse than meaningless, and it becomes downright dangerous as we design software that moves beyond the desktop and into public spaces.


Byrne presented another intriguing argument: that Powerpoint’s constraints, particularly its “low resolution,” can be a benefit. (He meant “resolution” in the way the Powerpoint-loathing Edward Tufte uses the word: in terms of graphics quality, but also in the more general sense of how much information standard Powerpoint templates allow you to convey to an audience at a time.) Simpler, lower-resolution images force the audience to become more involved in the presentation because they have to actively connect the dots.

This brought to mind a couple of analogies. Think of how books and radio can seem richer than television — the lack of visuals forces the audience to actively imagine the action, to envision many details that aren’t explicitly described.

Scott McCloud pointed out in his book Understanding Comics that many protagonists in popular comics are drawn in a simpler, less detailed style than other characters and their surroundings. Think of Tintin or Orphan Annie. McCloud theorizes that readers can more easily sympathize with minimally-drawn heroes because they can more easily project themselves into those characters. The more details you give a character, the less that character shares in common with a given reader. On the other hand, the story can be more compelling if faraway lands that the character visits, and other characters that the character encounters, especially bad guys, are drawn in a detailed manner — because intricate detail in itself can make those things seem more foreign, interesting or even frightening.

Does this apply to Powerpoint? I don’t think so… I still hate Powerpoint and the agonizingly dull, ubiquitous, unimaginative corporate communication style that its use has embodied and encouraged since Microsoft purchased the software and took over its development and marketing. The world needs more elegant and customizable presentation tools that are just as easy for non-techies to use as Powerpoint. Constraints can be a blessing, but the wrong sorts of constraints can be a curse.

Anyway, it’s fun to watch Byrne turn the Powerpoint tradition on its head.

Phonecams: Beyond the Hype

November 19th, 2003

“Do I really need a camera attached to my mobile phone? Honestly, isn’t this just a gimmick?”

Lately I’ve fielded those questions many times over from friends and family, and even from other tech people.

Even the phonecam manufacturers don’t seem to have a clue what people will really use these things for, judging from the foolish scenarios they portray in TV commercials. But that’s typical; new technologies are never born fully-formed. Nobody knows how networked cameras will evolve, and nobody knows just how we’ll grow to use them. But special properties of networked cameras have convinced me that these tools won’t be abandoned any time soon. Some of these capabilities haven’t emerged yet but I think they’re all on the way.

Here are five important capabilities that seem unique to networked digital cameras:

1) A photographer can use such a camera to send all her photos to a single, central storage place as she takes them. This eliminates the handling of film, smart cards and other intermediary media. It means that cameras can be smaller and cheaper because they don’t need massive amounts of storage space. It dramatically simplifies problems involving backups, sorting, and after-the-fact annotation. No more rooting through PCs, CDs, servers, drawers and albums to find that great family portrait from last Thanksgiving.

No single firm or agency can or should store and control everybody’s photos. Nobody’s photos should physically be stored in just one facility. The media should be backed up and mirrored at multiple sites in case fires, floods or whatnot destroy the data at one site. But as far as the user is concerned, the photos should “live” in one secure spot in cyberspace. You should have just one virtual “place” to search through when seeking your photos, so that you don’t have to worry about inadvertently losing important photos, and so that you don’t have to constantly copy collected photos from one device or place to another.
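The upload-as-you-shoot idea above can be sketched in a few lines. This is a toy illustration, not any real phonecam API: the class names, the in-memory “central store” and the annotation scheme are all hypothetical, and real server-side mirroring is simply assumed to happen behind the store’s `put` call.

```python
import queue
import threading
import time
import uuid

class PhotoUploader:
    """Hypothetical camera-side client: queue each shot and push it
    to one central (internally mirrored) store as it is taken."""

    def __init__(self, store):
        self.store = store            # anything with a put(photo) method
        self.pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def capture(self, pixels, annotations=None):
        photo = {
            "id": str(uuid.uuid4()),
            "taken_at": time.time(),
            "annotations": annotations or {},
            "pixels": pixels,
        }
        self.pending.put(photo)       # camera itself needs little storage
        return photo["id"]

    def _drain(self):
        while True:
            photo = self.pending.get()
            self.store.put(photo)     # one virtual "place" for every photo
            self.pending.task_done()

class InMemoryStore:
    """Stand-in for the central service; mirroring across facilities
    would happen server-side, invisibly to the user."""

    def __init__(self):
        self.photos = {}

    def put(self, photo):
        self.photos[photo["id"]] = photo

    def search(self, **criteria):
        """Find photos whose annotations match all given criteria."""
        return [p for p in self.photos.values()
                if all(p["annotations"].get(k) == v
                       for k, v in criteria.items())]
```

With this shape, finding “that great family portrait from last Thanksgiving” is one query against one place, e.g. `store.search(event="thanksgiving")`, instead of a hunt through PCs, CDs and drawers.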

Experimental Interaction Unit

November 18th, 2003

The dark side of interaction design:

Experimental Interaction Unit.

Publish or Perish

June 16th, 2003

The good news: A version of my column “How to Fix an Election” appeared this month in the Association for Computing Machinery’s SIGCHI Bulletin. (I wrote the original version for the general public; I rewrote this newer version to target readers in the human-computer interaction field).

This was not an academic paper. But still it’s my first publication in an HCI periodical. Hooray.

The bad news: I submitted the essay (and posted the original on cheesebikini) nine months ago. That was just after the second Florida election fiasco, which the essay addresses. Now the Florida elections are old news. But the essay’s points still hold true.

Language, Dolphins and Garage Cinema

May 11th, 2003

What if dolphins communicate by sending and receiving images? What if humans can learn to do the same, on the fly, via computer mediation?

I know what you’re thinking: this guy’s been in California far too long. You’re probably right.

But bear with me on this.

As a kid growing up by the sea in Florida I was obsessed with bottlenose dolphins. I read everything I could find about them. When I was 13 I borrowed a fancy underwater microphone from an oceanographer and used it to record dolphin sounds at Ocean World, the local marine theme park. I played back the recordings into another dolphin tank. But I didn’t get much of a reaction at all. After gathering around this strange noise-making machine for a few minutes, the dolphins quickly grew bored and ignored the tape recorder. They were far more interested in my cheap watch. And dead fish.

Plenty of more serious research (and writing and movie-making) has been devoted to the prospect that dolphins’ clicks and whistles might constitute a language. Scientists have shown that dolphins convey instructions to one another, but nobody has yet proven that a high-level dolphin language exists.

This week I read a paper by Berkeley’s Professor Marc Davis that dramatically changed my thinking about this by pointing out that a dolphin language might not be based on words. Most linguistic dolphin research I’ve seen seeks dolphin sounds strung together as words, and I always unconsciously assumed that any high-level language must be based on words.

But Davis points out in his paper that some written human languages don’t use words at all but instead directly represent meaning visually. The image to the right is a message written in such a language, by a member of the Yukaghir tribe in Siberia. (See Davis’ paper for more details and a translation of the letter.)

Like bats, dolphins use echolocation. They emit waves of sound and use the resulting echoes to pinpoint locations, sizes, shapes, densities, and even internal states and structures of animals and objects, with astounding precision and accuracy. If dolphins use their own sounds so skillfully to probe their environments and to “see” what’s around them, can they also use sound to create artificial imagery that’s “visible” to other dolphins?

Dolphins exhibit a superhuman ability to convey spatial instructions to one another. Nobody’s sure how exactly they work this out, but if you watch a group of dolphins carrying out tasks in which they have to quickly synchronize very complicated sets of movements — during a theme-park performance, for example, or during hunts in which they round up thousands of fish into dense schools — you’ll be amazed at their powers of spatial coordination.

Can you imagine dolphins sending each other visual cues mapped to real-world environments — or even sending entire artificial “video” scenes showing planned activities — on the fly?

This may be a stretch; it’s probably fiction and so far it’s not backed by much science. But it’s a very powerful idea that we can use. Even if dolphins cannot communicate this way, perhaps we will be able to, with the help of computers.

Davis and his Garage Cinema Research group at Berkeley are working on it. They’re designing systems that they hope will allow regular people to easily and quickly build video compositions, without the tremendous time, expense and technical knowledge that today’s film production demands. Thanks to smart systems that can recognize media assets and automate much of the video capture, editing and production process, Davis hopes to allow us all to “write” video as often and as easily as we “read” video today. The promise lies not just in replacing the current wasteful and corporate-dominated system of creating polished high-end feature films, but in providing humanity with a new, more powerful form of everyday communication.

One Conference, Two Worlds

April 27th, 2003

This week’s O’Reilly Emerging Technology Conference in Santa Clara, California was great fun. I enjoyed the presentations and the ideas, but it was the behavior of the attendees that really fascinated me.

The conference wasn’t all “there.”

Much of it took place elsewhere, and everywhere — in cyberspace. My attention was always torn between the physical conference and the virtual conference.

I’ve never seen so many networked gadgets in use simultaneously in one place. During any given session, much of the audience had their laptops open and online thanks to power outlets and wireless Internet service throughout the conference rooms, lounges and hallways. I was immersed in bandwidth; I was surrounded by a chorus of whirring laptops and clicking keys.

For me, this was a totally new sort of event — but soon, experiences like this will become commonplace.

The typical scene: up front the speaker presents her talk, projecting a slide show or a demo onto the wall-sized screens. A glance around the darkened room reveals dozens of ghostly blue-white faces gazing into laptop screens.

Many of them are engaged in online chat rooms. ConFab, a Web-based chat tool, was built just for the conference. It allows a person to mouse-over a map of the conference rooms, to specify which physical-world room he’s sitting in, to engage in text chats with other attendees, and to see how many people are logged into each conference room. He can even pay a virtual visit to another conference room to find out what people are chatting about over there.

(Network problems made staying connected to ConFab very difficult. But people conferred in more traditional Internet Relay Chat rooms too.)
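The presence feature described above — who is sitting in which physical room, and how full each room is — boils down to a small piece of state. Here is a toy sketch of that idea; it is not ConFab’s actual code or API, and all names are invented for illustration.

```python
class ConferencePresence:
    """Track which attendees have checked in to which physical room."""

    def __init__(self, rooms):
        self.rooms = {room: set() for room in rooms}

    def check_in(self, user, room):
        # A person sits in one physical room at a time, so leave any
        # previous room before joining the new one.
        for members in self.rooms.values():
            members.discard(user)
        self.rooms[room].add(user)

    def head_counts(self):
        """How many people are logged in to each room's chat."""
        return {room: len(members) for room, members in self.rooms.items()}

    def peek(self, room):
        """'Visit' another room virtually: see who is chatting there."""
        return sorted(self.rooms[room])
```

The interesting design choice is that the map is the interface: the same structure answers both “how crowded is the session next door?” and “should I step out and join it?”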

In the chat rooms people crack jokes and trade opinions about what the speaker is saying, and they write brief summaries of what’s going on for people who are tuned in to the conference from other parts of the planet.

People read other folks’ comments. They examine the speaker’s Web site. They tune in to chats going on simultaneously in the other conference sessions, judging whether to step out and join the session going on next door.

And they blog. I watched at least three people pull out digital cameras during presentations, take snapshots and upload the images to their blogs right there.

People collaborate to take notes on the presentations and discussions using wikis. Groups of people use Hydra, a collaborative editing tool that allows multiple users to elegantly write, edit and add to a single document simultaneously.

That pattern was repeated endlessly throughout the conference. Everyone’s energies were divided between cyberspace and the physical world. This is a fascinating phenomenon, but when the novelty wears off will such connectedness make for better or worse conferences?

Did the average attendee go home with more or less knowledge, with more or fewer useful acquaintances, with more or less encouragement than they would have acquired without the digital networking? What do you think?

The conference left me more confused about these questions than ever. For one thing, I wasted a lot of my attention and energy dealing with a couple of basic technical problems that the organizers can easily iron out in time for next year’s conference. But next year, won’t my attention be devoted to a new set of problems to wrestle with or configurations to fine-tune, as more real-world subtleties slip by unnoticed?

I want to experiment more with this, and I know I won’t have to wait long.

(A freakish footnote: I’m writing this entry on my laptop in a Berkeley WiFi cafe, days after the conference ended. Three other geeks bend over three other laptops by the window. They’re talking about their experiences at the same conference, as they post entries to their own blogs about it. Should I laugh or cry?)

(Photos in this entry by Derrick Story of the O’Reilly Network.)

Head Games

April 20th, 2003

Strange things are afoot in the world of human-computer interaction. This month at CHI 2003, the biggest annual HCI-related conference on the planet, bathroom talk was all the buzz.

  • Massachusetts Institute of Technology students Dan Maynes-Aminzade and Hayes Solos Raffle presented what I assume is the world’s first urine-based computer interface: “You’re In Control.” They built a special urinal fitted with sixteen pressure sensors that detect the location of the user’s urine stream.

    At eye-level above the urinal a video game appears, complete with jumping hamsters and a simulated urine stream that’s mapped to the location and movement of the user’s real urine stream. Hit a hamster and it turns yellow, screams and spins out of control as your score increases by ten points. The MIT students even built a penis simulator that allows women to spray water into the urinal. A urinal like this might persuade the neighborhood pub’s patrons to refuel by purchasing more beer.


  • Swedish design students Par Stenberg and Johan Thoresson presented the Toilet Entertainment System, which collects a user’s interests while he or she sits on the toilet, and then prints out customized news content on the toilet paper. It “keeps you discreetly entertained while visiting the toilet,” according to the inventors. My advice for the future: be afraid. And prepare yourself for longer bathroom lines.
  • Human Factors International distributed free calendars to conference attendees. For the month of April the HFI calendar features a cartoon about potential toilet interfaces of the future.



Einstein’s Advice

February 27th, 2003

“Concern for man himself and his fate must form the chief interest for all technical endeavors. Never forget this in the midst of your diagrams and equations.”

– Albert Einstein

Location-Aware Thumb Ratings

February 20th, 2003

People have predicted very complex “augmented-reality” systems that might arise in the near future, when many folks will carry around location-aware devices. But how about a simple thumbs-up/thumbs-down rating system?

Here’s how it might work: your device includes a green thumbs-up button and a red thumbs-down button, TiVo-remote style. As you move through the city, when you enter a favorite restaurant or club or cafe you click the “thumbs up” button. When you pass that restaurant where you got food poisoning or that stuffy overpriced bar, you click “thumbs down.” And if you enter an especially wonderful place, you click “thumbs up” twice to give it two thumbs up.

The key: you don’t have to interrupt your daily activities; just reach into your pocket and click one of two buttons whenever you think of it.


Each time you press the button, the device records your geographical location and the thumb rating. Soon you have a little database, a map that shows the spots around town that you love and the spots that you hate.
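The personal “little database” described above is tiny: each click is just a position, a +1 or −1, and a timestamp. Here is a minimal sketch, with all names hypothetical; a real device would fill in latitude and longitude from GPS or cell-tower positioning.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ThumbClick:
    """One button press: where, which thumb, and when."""
    lat: float
    lon: float
    rating: int                         # +1 thumbs up, -1 thumbs down
    stamp: float = field(default_factory=time.time)

class PreferenceMap:
    """One person's accumulated thumb-clicks around town."""

    def __init__(self, owner):
        self.owner = owner
        self.clicks = []

    def thumbs_up(self, lat, lon, times=1):
        # Click twice at an especially wonderful place: "two thumbs up."
        for _ in range(times):
            self.clicks.append(ThumbClick(lat, lon, +1))

    def thumbs_down(self, lat, lon):
        self.clicks.append(ThumbClick(lat, lon, -1))

    def score_near(self, lat, lon, radius=0.001):
        """Net rating of all clicks within a small lat/lon box."""
        return sum(c.rating for c in self.clicks
                   if abs(c.lat - lat) <= radius
                   and abs(c.lon - lon) <= radius)
```

So `m.thumbs_up(37.87, -122.27, times=2)` records a two-thumbs-up spot, and `m.score_near(37.87, -122.27)` later reads it back off the map.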

So what? So nothing, until people share their preference maps. Now you have a powerful concept.

Thanks to this network, you can share your preference map with anyone who wants to use it, and you can freely use other people’s preference maps. You decide which of your friends have tastes closest to your own, and you subscribe to those friends’ preference maps.

Software on your device notifies you when you’re near a spot that friends have rated positively; if a dozen of your friends rated a place highly, the device tells you via sounds or spoken words that the spot got a lot of thumbs up from the people you trust. Software can also map this for you, visually overlaying the green and red thumb-clicks on a map of a city or a region or a building.
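The notification step described above is a simple aggregation across the maps you subscribe to. A sketch of that logic, with the data shape, names and distance threshold all invented for illustration:

```python
def nearby_score(position, friend_maps, radius=0.001):
    """Sum thumb ratings from all subscribed maps near `position`.

    `friend_maps` maps each friend's name to a list of
    (lat, lon, rating) tuples, where rating is +1 or -1.
    Returns the net score and the set of friends who rated it up.
    """
    lat, lon = position
    total, fans = 0, set()
    for friend, clicks in friend_maps.items():
        for clat, clon, rating in clicks:
            if abs(clat - lat) <= radius and abs(clon - lon) <= radius:
                total += rating
                if rating > 0:
                    fans.add(friend)
    return total, fans

def maybe_notify(position, friend_maps):
    """Produce a spoken-word-style alert only when the net score is positive."""
    total, fans = nearby_score(position, friend_maps)
    if total > 0:
        return f"{total} thumbs up nearby, from {len(fans)} people you trust"
    return None
```

The same `nearby_score` call, run over every point on screen instead of just your current position, would drive the green-and-red overlay map.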


You can also form preference groups, just as you form e-mail discussion lists. Everyone who shares an interest adds their account to a particular list, and that list compiles all members’ preference maps into a master map for that group. Then anyone in the group can subscribe to the group map and use it or turn it off as desired. (Of course, if you no longer trust a person or a group’s tastes, you can filter out their thumb-clicks on your map by removing that person or group from your list).

For certain events you use time-sensitive preference maps with thumb-clicks that fade over time. This could be great at an art fair or an outdoor festival — you form a preference group with a bunch of friends who will attend the same event, and as you all explore the place, you each tag the coolest things and the most worthless things that you see. You might glance at a map and notice a dozen bright green blips at bandstand 3, which suggests that something amazing is going down there right now. Those green blips by the coat-check, on the other hand, have faded, so you probably missed whatever happened there. So you head straight to the action at bandstand 3.
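The fading behavior above is naturally modeled as exponential decay: each click’s weight halves every so often, so fresh clicks glow bright and old ones dim away. A sketch under assumed values (the 30-minute half-life and the brightness threshold are arbitrary choices, not anything from a real system):

```python
HALF_LIFE = 30 * 60   # seconds: a click loses half its weight every 30 minutes

def faded_weight(click_time, now, half_life=HALF_LIFE):
    """Exponentially decayed weight of a single thumb-click."""
    age = max(0.0, now - click_time)
    return 0.5 ** (age / half_life)

def hot_spots(clicks, now, threshold=1.5):
    """Group decayed positive clicks by spot; return spots still 'bright.'

    `clicks` is a list of (spot_name, rating, timestamp) tuples.
    """
    heat = {}
    for spot, rating, stamp in clicks:
        if rating > 0:
            heat[spot] = heat.get(spot, 0.0) + faded_weight(stamp, now)
    return {spot: h for spot, h in heat.items() if h >= threshold}
```

Two minute-old clicks at bandstand 3 sum to nearly 2.0 and clear the threshold; a four-hour-old click at the coat-check has decayed to almost nothing, so it drops off the map.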


(I’m reading the book “The Orchid Thief,” and I just finished a scene that takes place at an orchid convention. Word spreads through the convention center that “you’ve got to check out the orchid that smells just like grape Kool-Aid.” Frustrated flower freaks are milling around, blindly trying to find that particular orchid among hundreds of flowers on display. The orchid freaks would immediately know just where to find the most talked-about flowers in the show if they used such a preference system. That’s what sparked this idea.)


October 12th, 2002

“We are increasingly entrusting to software the various gathering, sorting and linking operations that we used to perform for ourselves and that were part of the process of thinking about a subject… The shift from book to screen may in its eventual impact on what knowledge is be as transformative as the shift from Newtonian to Einsteinian physics.”

– Sven Birkerts in Sense and Semblance

“Just as calculators can diminish our mathematical capacities, computers can rob us of the ability to synthesize the threads of data into the whole cloth of knowledge.”

– Neurologist Richard Restak, M.D.
in Mozart’s Brain and the Fighter Pilot

