Mojo #4: Direct participation, mock-ups on video

In my last post I underlined the distance between the document, the comment (the participation), and the tools websites propose.

Why did I talk about that? Because I think a good participation tool is one of the keys to improving journalism.
During his talk last week, Mohamed Nanabhay presented the big challenge for news channels: facing a huge flow of data to process in real time, and the problem of filtering once you open up the input process. Filtering is important, as I said in my first and second MOJO posts with the idea of a repetition rate, but it is impossible to filter raw data if you don't have good metadata about it.
And the best way to get that metadata is to open up the process and to crystallise it through a good participation process.

To present this paradigm of direct participation (contribution) with examples, I propose a few case studies and mock-ups:
- the first example is about the relation between a video and its comments.
- the second is about the relation between a webpage and its comments.

Video and comments on YouTube

The picture below shows the split between document and participation: the place to read, the place to write, and the fold (the “ligne de flottaison”, the bottom edge of the visible browser window). The video is an intervention by Jeremie Zimmermann during the EG8 about Hadopi, a French law to censor the web. We can see at the bottom of the picture that there are 4 pages of comments.
Unfortunately, all these comments are neither timestamped nor addressable by a URI, so we can't use them to point to a specific offset in the video. But if you have a lot of rushes (raw footage) uploaded by people, as in the nice Aljazeera Creative Commons Repository project, you have to find a way to index videos deeply to find what you want. That is why linking comments to video offsets will probably enrich your ability to filter and find the relevant information.
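For the record, the web already offers one simple way to make a comment point at a specific offset: the W3C Media Fragments URI syntax (#t=start,end), which HTML5 video elements understand (at least for the start time). A minimal sketch, where the TimedComment shape, the URL and the makeDeepLink helper are hypothetical:

```typescript
// Sketch: addressing a video offset with a W3C Media Fragments URI.
// TimedComment and makeDeepLink are hypothetical, just to show what a
// timestamped, URI-addressable comment could look like.

interface TimedComment {
  videoUrl: string; // placeholder URL of the commented video
  start: number;    // offset in seconds where the comment starts to apply
  end: number;      // end of the commented segment, in seconds
  text: string;
}

// Build a deep link using the #t=start,end media fragment.
function makeDeepLink(c: TimedComment): string {
  return `${c.videoUrl}#t=${c.start},${c.end}`;
}

const comment: TimedComment = {
  videoUrl: "https://example.org/eg8-talk.webm",
  start: 125,
  end: 160,
  text: "The key argument about filtering starts here",
};

console.log(makeDeepLink(comment));
// -> https://example.org/eg8-talk.webm#t=125,160
```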

 

What we propose is to bring the comment participation tool closer to the document. Below is a draft mock-up to reduce the distance between document and comment and to enhance metadata production. It could be a Popcorn.js module acting as a light client to produce hypervideo, but for that it is urgent to have a standard format, as we have tried to think through with Julien Dora, Mark Boras, Nick Doiron:

1/ When the user rolls over the timeline, the slider shows an extension to select a time segment by dragging and dropping. After this action the user adds a comment or a tag … the purpose here is to make a deep link between the video and the participation.


2/ After that, the segment can be shown (with a lot of segments we can imagine coupling this with a peer-reviewing and editorialisation system). And this is what makes deep search possible.


3/ Inspired by the Sputnick website, we can enhance the relations between video segments by making a UI that allows the user to link a segment of one video to another segment, to another video, or to a URL (through a form).

4/ With a visualisation like a polemic timeline of the comments, the user can see the “hotspots” of social activity, and this information enhances their browsing experience. Below you can see three peaks of comments during the video that can be used as indicators.
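To make steps 1 and 4 a bit more concrete, here is a minimal sketch using the plain HTML5 video API rather than a real Popcorn.js module; the element id, the in-memory comment store and the bucket size are all assumptions, not an existing implementation:

```typescript
// Minimal sketch of steps 1 and 4: attach a comment to a selected time
// segment, then bucket the comments to expose "hotspots" on the timeline.
// The element id, the in-memory store and the bucket size are assumptions.

interface SegmentComment {
  start: number; // seconds
  end: number;   // seconds
  text: string;
}

const video = document.querySelector<HTMLVideoElement>("#video")!;
const comments: SegmentComment[] = [];

// Step 1: the user has dragged a selection on the timeline; we receive it
// as a [start, end] pair and attach their comment or tag to it.
function addSegmentComment(start: number, end: number, text: string): void {
  comments.push({ start, end, text });
}

// Step 4: count comments per 10-second bucket to draw a "polemic timeline"
// and let the user spot the peaks of activity (assumes the metadata is
// loaded, so video.duration is known).
function commentHistogram(bucketSeconds = 10): number[] {
  const buckets = new Array(Math.ceil(video.duration / bucketSeconds)).fill(0);
  for (const c of comments) {
    const first = Math.floor(c.start / bucketSeconds);
    const last = Math.min(Math.floor(c.end / bucketSeconds), buckets.length - 1);
    for (let i = first; i <= last; i++) buckets[i]++;
  }
  return buckets;
}

// Clicking a comment in the list seeks the video to its segment.
function jumpTo(c: SegmentComment): void {
  video.currentTime = c.start;
  video.play();
}
```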

SoundCloud understands this and has made a really interesting comment interface by reducing the distance between the user and the object of participation, but it is not perfect because it is not really readable when there are too many comments:
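Coming back to the standard format mentioned above, here is one purely hypothetical shape a hypervideo annotation could take; none of these field names are an agreed standard, they are only guesses covering steps 1 and 3:

```typescript
// Purely hypothetical annotation record for the "standard format" idea:
// one record per commented or linked segment. Field names are guesses only.

interface HypervideoAnnotation {
  id: string;      // stable id, so the annotation itself is URI-addressable
  video: string;   // URL of the annotated video
  start: number;   // segment start, in seconds
  end: number;     // segment end, in seconds
  author: string;
  body: string;    // the comment or tag text (step 1)
  links: string[]; // step 3: other segments (media-fragment URIs), videos or URLs
  created: string; // ISO 8601 timestamp
}

const example: HypervideoAnnotation = {
  id: "urn:example:annotation:42",
  video: "https://example.org/eg8-talk.webm",
  start: 125,
  end: 160,
  author: "someone",
  body: "Compare with the rushes on the same topic",
  links: ["https://example.org/other-video.webm#t=30,55"],
  created: "2011-06-01T10:00:00Z",
};
```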

Cover image: CC BY-NC-ND, some rights reserved by Judy **

6 Comments

  1. Julien wrote:

    Timestamped commenting is a tough UX issue to address! Viddler, for example, has had this for years now, but it’s not really a success in terms of usage.

    Mixing many short texts with several minutes of video is not trivial.

    Twitter + live conference stream seems a great use case, as tweets respond to a particular moment in the video – which is not always the case on YouTube. And in this case the tweets are of course in sync with the conference, so we get “free accurate timestamps” :-)

    Maybe we should reverse the relation between the video and the comments: what happens if we create a view where comments *are* the main content, pointing us to the video as the secondary content?

    This is somehow the absolute opposite of the Popcorn.js idea where the video is the puppet master – but if we are serious about comments, we should try to use them/recognize them as the entry point, not some little piece of data lost in space :-)

    • admin wrote:

      I totally agree with this idea of “reversing the relation between the video and the comments”!

      This idea of the link economy (from Jeff Jarvis's talk yesterday) makes me think a lot about the problem of the crystallisation of value in a conversation model.

      For instance, on Wikipedia you have three spaces for an item (the article / the discussion / the history), so the conversation has a common goal: its product, the article. And some shared values about how you play the writing game.

      I would be interested to know whether there are profiles who participate in the discussion but don't participate in writing the article…

      If the conversation is distributed, and it is (spread across all the social networks), how do we manage an extract of those social networks, with common rules, to give a more synthetic access to this conversation…

      good game

    • Rick Waldron wrote:

      To note, Popcorn.js actually doesn’t lock video interaction development into a “video is the puppet master” idea – in fact, it would be trivial to control the video via interaction with arbitrary data or elements.

  2. Julien wrote:

    Another idea about tweets + video, for pre-recorded videos: if the video player allows me to tweet while I’m watching the video, on the spot via an embedded Twitter box – or if it at least knows my Twitter ID – then the player can grab my tweets and sync them with what I was watching when tweeting.

    That would allow us to totally bypass the commenting system and use Twitter as a timestamped, outsourced commenting system for video.
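    A rough sketch of that sync (the Tweet shape and the function are made up; the point is only the timestamp arithmetic):

```typescript
// Made-up sketch: turn tweets posted during playback into timestamped
// video comments. offset = tweet time minus the wall-clock time at which
// playback (or the live stream) started.

interface Tweet {
  text: string;
  createdAt: Date; // when the tweet was posted
}

function tweetsToTimedComments(
  tweets: Tweet[],
  playbackStartedAt: Date,
  videoDuration: number
): { offset: number; text: string }[] {
  return tweets
    .map((t) => ({
      offset: (t.createdAt.getTime() - playbackStartedAt.getTime()) / 1000,
      text: t.text,
    }))
    .filter((c) => c.offset >= 0 && c.offset <= videoDuration);
}
```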

  3. admin wrote:

    encyclopedia + wisdom of the crowd = wiki
    newspaper + wisdom of the crowd = ?

    If the wiki is the tool to crystallise the wisdom of the crowd into an encyclopedia, what is the tool for the newspaper?

    • Julien wrote:

      «encyclopedia + wisdom of the crowd = wiki
      newspaper + wisdom of the crowd = ?»

      One issue here is that an encyclopedia is for, well, forever, but a newspaper article or video is for now, sometimes for a couple of minutes if it’s something from AFP, Reuters, AP.

      The incentive to add and perfect is then maybe lower than on an encyclopedia article that is supposed to stay and be read online for the next 20 years, 50 years, 100 years.