[FoRK] announce: y! + xspf

Ken Meltsner meltsner at gmail.com
Thu May 12 14:25:23 PDT 2005


> On 12-May-05, at 1:33 PM, Lucas Gonze wrote:
>> I dunno about realtime streaming in general. There isn't much internet
>> audio/video that needs to be real time,

[Disclaimer: this stuff is new to me, so I'm probably going to say
stupid things.  That's how I learn....]

Video has its own real-time metadata as well -- "closed" (text)
captioning comes to mind; descriptive audio might be another.  I have
a decent handle on how this is included in traditional analog
broadcasts, and it's got to be straightforward to piggyback additional
info onto a recorded piece using time codes (external annotation), but
how does metadata get carried in streaming formats?  Streaming
video/audio, I assume, works by breaking up a feed into a series of
frames, with special frames reserved for metadata.  Or does it wedge
the metadata into each frame, sort of like using the unused portion of
a TV field for other purposes (e.g. captioning)?
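(To make my first guess concrete, here's a toy Python sketch: timed
metadata cues multiplexed into a frame stream by timestamp.  The frame
layout is pure invention on my part; real containers like Ogg or MPEG
surely differ in the details.)

import heapq

# Toy model of the "special frames for metadata" guess: an audio feed
# chopped into timestamped frames, plus a separate track of timed cues
# (captions, lyrics, ...).  Merging by timestamp is roughly how
# in-band timed metadata would ride alongside the media packets.
audio = [(t, "audio", f"frame @ {t} ms") for t in range(0, 200, 40)]
cues = [
    (0,   "meta", "TITLE: Raga Yaman"),
    (80,  "meta", "CAPTION: sitar enters"),
    (160, "meta", "CAPTION: the motif returns"),
]

# each input track is already sorted by timestamp
for ts, kind, payload in heapq.merge(audio, cues):
    print(f"{ts:4d} ms  {kind:5s}  {payload}")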

Which reminds me, in the senile manner that I've adopted since hitting
forty: A long time ago, the MIT Media Lab had a project that used
caption text to index news programs.  This was analog TV + captions,
not any new-fangled metadata stream, of course.  Could your playlist
format include the equivalent -- lyrics as well as band info, for
example, or an educational text track describing the techniques used
by the sitar player, or the imagery in a recurring motif?  Music
teachers and budding pop culture specialists around the world would
love it....
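(As a straw man: XSPF tracks already carry <link> and <meta> elements
that pair a rel URI with a value, so a lyrics or commentary track
could hang off those.  In this sketch the rel URIs and the
lyric/commentary files are made up; only the playlist skeleton is
real XSPF.)

import xml.etree.ElementTree as ET

ET.register_namespace("", "http://xspf.org/ns/0/")
NS = "{http://xspf.org/ns/0/}"

playlist = ET.Element(NS + "playlist", version="1")
tracks = ET.SubElement(playlist, NS + "trackList")
track = ET.SubElement(tracks, NS + "track")
ET.SubElement(track, NS + "location").text = "http://example.org/raga.mp3"
ET.SubElement(track, NS + "title").text = "Raga Yaman"
ET.SubElement(track, NS + "creator").text = "Some Sitar Player"

# hypothetical rel URIs -- not part of the XSPF spec
lyr = ET.SubElement(track, NS + "link",
                    rel="http://example.org/ns/timed-lyrics")
lyr.text = "http://example.org/raga.lrc"
com = ET.SubElement(track, NS + "link",
                    rel="http://example.org/ns/commentary")
com.text = "http://example.org/raga-notes.txt"

print(ET.tostring(playlist, encoding="unicode"))

A player that didn't recognize those rel URIs would just ignore the
links, which seems like the right failure mode.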

And overextending things, could a playlist format like this be used to
combine visuals (e.g. PPT slides or handwriting) in sync with the
audio track?  Or is this all covered by SMIL, which I sorta
understand since it's just XML after all, and not relevant to the
current topic?
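
(For the slides case, a back-of-the-envelope SMIL fragment, printed
from Python to match the sketches above: <par> plays its children in
parallel while the inner <seq> steps through the slides on a
schedule.  File names and durations are made up, and the namespace
may not be exactly right.)

# Schematic SMIL: audio and slide sequence run in parallel; each
# <img> is shown for its dur.  Details are illustrative only.
smil = """\
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <body>
    <par>
      <audio src="lecture.mp3"/>
      <seq>
        <img src="slide1.png" dur="45s"/>
        <img src="slide2.png" dur="90s"/>
        <img src="slide3.png" dur="60s"/>
      </seq>
    </par>
  </body>
</smil>
"""
print(smil)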

Ken Meltsner

