Update 2010-08-09: added second video to show simulated tagging.
Monika, Adam and Jim were having an educational conversation when I happened in. Two minutes of getting settled on the Junto Alpha platform.
Same clip edited to show only the part of this convo with relevant tags.
Imagine this process automated: you tag while you watch, and your computer filters, transparently and in near real time. Or the audience tags. Exciting?
So much time is spent (wasted, even) debating semantics. If we want our time together to matter, or our time in review to matter, we should make key verbiage very clear.
This is why I support Venessa Miemis' original Junto vision with time-tags and sorting by meaning. Who has time to watch half an hour of online video for the one or two minutes that are really worth it? What a waste of educational time. If we tag for meaning (better than in this rough demo) and support jumps to a specified time, we can leave the context in the background as a matter of record. We effectively make it context-on-demand.
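To make "context-on-demand" concrete, here is a minimal sketch of how a player could filter a time-tagged recording down to just the moments worth jumping to. The data structure, tag names, and function are hypothetical placeholders of mine, not part of the actual Junto design.

```python
# Sketch: jump straight to tagged moments instead of watching the whole clip.
# Timeline entries and tag names below are invented for illustration.

def relevant_moments(timeline, wanted_tags):
    """Return (seconds, tag) pairs whose tag is in wanted_tags, in time order."""
    return sorted((t, tag) for t, tag in timeline if tag in wanted_tags)

# Each entry: (seconds into the recording, tag a viewer assigned there)
timeline = [
    (95, "question"),
    (310, "insight"),
    (640, "tangent"),
    (1210, "insight"),
]

jumps = relevant_moments(timeline, {"insight"})
# A video player could now seek directly to 310s and 1210s,
# leaving the surrounding context on record for whoever wants it.
```

The full recording stays untouched in the background; the tags only decide where you land first.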
The problem with meaning
Computers can find and sort data, like keywords from a chat timeline or a transcript, but it takes people to assign meaning. And my meaning may be different from your meaning; how are we ever going to find out, except by further conversation? That is why some meetings drag on so long.
This mock-up of a user interface visualizes colored buttons by which participants and viewers of a digitally connected conversation can add tags to the timeline of their copy, or to a shared resource. Here is an earlier version that drew a lot of comments.
People who talk to each other may generate valuable insights they wish to share with others. Others observing the conversation, e.g. as a teleconference or a recording, may find the knowledge and wisdom they seek obscured by a volume of communication not useful to their purposes or the topic at hand.
Traditional ways to provide the focus needed to share insights involve note-taking, transcription, hiring subject experts to present, and traditional teaching/lecturing models. All of these require significant time and manpower, an expense we are used to.
The assumption is we can realize untapped value of a conversation for future audiences and improve speed and outcomes of collaboration if we
- support open collaborative models, usually run as circles. Examples: open space technology, unconference, or world cafe. The benefits of such frameworks for self-organization are well supported in the literature.
- improve focus and enable findability of relevant conversation passages in context.
We may agree that focus and relevance differ for each participant or listener in a conversation. This means we can each tag the convo for ourselves, on our own notes or copy of the video stream, and it all makes sense to us. Yet, if we want to get more out of it by comparing notes, we need to
a) agree on a common representation or vocabulary, which is the scope of this post.
b) aggregate the data. This helps find hot spots of agreement. We leave this Indra's Net for later.
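Even before tackling that larger aggregation question, step (b) can be sketched simply: bucket each viewer's tags into time windows and count how many viewers used the same tag in the same window. The window size, tag names, and threshold below are my assumptions for illustration, not a Junto specification.

```python
# Sketch: find "hot spots of agreement" by counting how many viewers
# applied the same tag within the same time window. All parameters
# and sample data are illustrative assumptions.
from collections import Counter

def hot_spots(tag_streams, window=60, min_votes=2):
    """tag_streams: one [(seconds, tag), ...] list per viewer.
    Returns {(window_index, tag): votes} where votes >= min_votes."""
    votes = Counter()
    for stream in tag_streams:
        seen = set()  # count each viewer at most once per (window, tag)
        for seconds, tag in stream:
            key = (seconds // window, tag)
            if key not in seen:
                seen.add(key)
                votes[key] += 1
    return {key: n for key, n in votes.items() if n >= min_votes}

alice = [(100, "insight"), (620, "question")]
bob = [(115, "insight"), (900, "tangent")]
spots = hot_spots([alice, bob])
# Both viewers tagged "insight" between 60s and 119s, so that
# window surfaces as a moment of agreement worth jumping to.
```

Each person tags privately, in their own vocabulary of the moment; agreement only emerges when the streams are compared.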
What we have
We start with four basic meanings to keep the first level simple, which lends itself to tagging in real time, as the convo happens. We then divide further into eight. These could serve as categories for metacodes.
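Such a two-level vocabulary is easy to represent: four first-level meanings as the quick buttons for live tagging, each splitting into two second-level metacode categories for review. The labels below are placeholder guesses of mine, not the actual four meanings being discussed.

```python
# Sketch of a two-level tag vocabulary: 4 realtime buttons, 8 metacode
# categories underneath. All labels are hypothetical placeholders.
VOCABULARY = {
    "agree": ["support", "build-on"],
    "disagree": ["challenge", "correct"],
    "question": ["clarify", "explore"],
    "action": ["decide", "assign"],
}

first_level = list(VOCABULARY)  # the four one-tap buttons for live tagging
second_level = [m for subs in VOCABULARY.values() for m in subs]  # eight total
```

The point of the structure is the split, not the labels: a tiny set you can hit mid-conversation, refined into finer categories when there is time to review.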
What we need
More answers. Do you like this? How could it help you? Are we on a path leading to the useful, or merely the convenient? If so, do you want to join a motley bunch of volunteers gathering around Venessa Miemis and her Junto vision?