In terms of numbers, this collaboration was a success. 206 unique people from 131 cities all over the world viewed the site 358 times. (See map on right for distribution.) There were 54 comments made by 17 people. It was blogged by three sites, including Beth Kanter of the highly regarded non-profit social media site Beth’s Blog.
Logistically, it was simple. It took Eric and me about 2 hours each to get the site up and running (content plus distribution plan). We each spent another 2 hours over the course of the week checking in on the VoiceThread and responding to comments. There were no financial costs, and no problems with spam or inappropriate comments. This was an unmoderated experiment, though I did add slides halfway through to create more venues for contribution. But impact is what really counts.
Here are some observations from this experiment, gleaned from my impressions and yours:
A lot of you like this technology. Several people were impressed by the sound quality, the personal nature of voice, and the ease of use, and a few indicated that they would use VoiceThread in their own institutions. Some of you were more fascinated by the technology’s demonstration than the specific content (which is fine!).
Participation was high. On this blog, about 0.5% of people who read a given post comment on it. On the VoiceThread, 8.5% of those who viewed it made comments, and many came back a second time to see how it had evolved. The participants were diverse, ranging from museum exhibit developers to NPR accessibility engineers to content experts to e-learning professionals. There was some emergent behavior in which content experts previously unknown to Eric or me offered their support to the exhibition.
There was an inverse relationship between time of first view and participation. Participation dropped significantly after the first four days. The conversation reached a critical mass of participants quickly. After that point, many people emailed me to say that it felt unwieldy, or that they perceived it as something already completed. It's hard to browse through lots of audio. As one person said, “it felt like watching a disjointed play.” It seems that there’s a sweet spot where just a few people have contributed to the conversation and you feel like it’s open to you. Too many, and it feels overwhelming, or your contribution seems unneeded. It’s easier later in the process to look at the VoiceThread and feel that enough has already been said—thus promoting lurking over participating.
The content was interesting, but not always what was asked for. Some (including the creators of the technology) found it varied and fascinating. But there was no easy way to spin off individual “threads” of conversation on a single slide, so a divergent (interesting) point brought up by a couple of people became hard to follow. The content stayed fairly surface-level, though many interesting comments, both personal and professional, were contributed.

The purpose wasn’t totally clear. While Eric and I actively responded to other contributors, I think we could have done better at giving people explicit challenges or goals so they could apply themselves concretely to solving a problem. The problem given, related to collaboration, was somewhat open-ended and proved less appealing than the Human + controversies themselves.
There was no clear way to identify the people speaking, except via their name, image, and voice. A few people commented that it would have been nice to see some basic information about speakers’ expertise and professional interest in the topic. I also would have liked an update function so that people (myself included) could be notified when a new comment was added to the stream.
I left the experiment with a few core questions:
- How can we encourage sustained participation throughout the life of a project, rather than just at its outset? How do we encourage new users to join partway through?
- How can we guide collaboration towards a goal? What’s the balance between inviting people to talk about what they want versus what you want?
- What platforms or technologies humanize rather than dehumanize the process?
What are your questions or comments? I look forward to doing more experiments with other technologies in the future. If you or your institution wants to get involved, let me know.