This page is under discussion.
The content model for video is primarily maintained by the Eviada project. This model is under development in conjunction with the Audio Content Model.
Important Note: The projects that we're currently dealing with consist of many "collections", where a collection is one or more audio/video items that come from a single donor (and usually contain related content). This content model reflects that structure. In the more general case, where there is no "collection", the collection-level objects and item-level objects should be combined.
Objects at the collection level will store all bibliographic and structural metadata. References from collection-level metadata to item-level content will be stored as PURLs. For some collections, it may be convenient to break the logical structure and/or bibliographic metadata into smaller objects. However, these objects will exist in a hierarchy separate from the physical structure and item-level objects, so that logical sections can span multiple physical files.
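As a minimal sketch of this arrangement, the following models a collection-level object whose structural metadata points at item-level content by PURL rather than by direct file location. All PIDs, URLs, and field names here are hypothetical illustrations, not part of the actual model.

```python
# Hypothetical sketch: a collection-level object that references
# item-level content via PURLs, so files can move without breaking
# collection-level metadata.
collection = {
    "pid": "demo:collection1",  # hypothetical Fedora PID
    "metadata": {"title": "Example field recordings"},
    "items": [
        # Each entry resolves through a PURL rather than a file path.
        "http://purl.example.org/demo/item1",
        "http://purl.example.org/demo/item2",
    ],
}

def item_purls(coll):
    """Return the PURLs of all item-level objects in a collection."""
    return list(coll["items"])
```

Because only the PURL resolver knows the current location of each item, reorganizing storage never requires touching collection-level objects.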
An item-level object represents a single physical item (tape, disc, etc.). All files related to the item are stored in a single object. At a minimum, redirect datastreams to these files will be stored in the object. The files themselves may be stored outside of Fedora, on a filesystem optimized for use by a streaming server. The item-level object contains metadata describing any editing that was applied to create deliverable files from the masters.
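A rough sketch of such an item-level object follows, using redirect datastreams (the "R" control group, in Fedora terms) whose content lives on an external streaming filesystem. The PIDs, URLs, and the `derived_from` field are hypothetical, chosen only to illustrate the master/deliverable relationship.

```python
# Hypothetical sketch: an item-level object whose datastreams are
# redirects to files served by a streaming server outside Fedora.
item = {
    "pid": "demo:item1",
    "datastreams": {
        "MASTER": {
            "control_group": "R",  # redirect: content stored externally
            "location": "rtsp://stream.example.org/masters/item1.mov",
        },
        "DELIVERABLE": {
            "control_group": "R",
            "location": "rtsp://stream.example.org/deliv/item1.mp4",
            # Editing metadata describing how this file was derived
            # from the master would also live in the object.
            "derived_from": "MASTER",
        },
    },
}

def redirect_locations(obj):
    """List the external locations of all redirect datastreams."""
    return [ds["location"]
            for ds in obj["datastreams"].values()
            if ds["control_group"] == "R"]
```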
Big open question: Will we store relationships between master and deliverable files in METS, or in an ADL?
Behaviors may include:
The structure of the Eviada content model is partially driven by the fact that ethnomusicological videos are primarily searched by scene.
It would be useful for a generic video content model to include a getSegment behavior that would return a SMIL capable of playing a specific segment from a video.
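Such a getSegment behavior could be sketched as follows: given a stream URL and start/end offsets, return a minimal SMIL document whose clipBegin/clipEnd attributes restrict playback to that segment. This is an illustrative sketch, not a defined behavior of the model; the function name and URL are assumptions.

```python
def get_segment_smil(stream_url, begin_sec, end_sec):
    """Hypothetical getSegment behavior: build a minimal SMIL document
    that plays only the [begin_sec, end_sec] portion of a video stream.
    clipBegin/clipEnd with npt (normal play time) values are standard
    SMIL timing attributes."""
    return (
        '<smil xmlns="http://www.w3.org/2001/SMIL20/Language">\n'
        '  <body>\n'
        f'    <video src="{stream_url}" '
        f'clipBegin="npt={begin_sec}s" clipEnd="npt={end_sec}s"/>\n'
        '  </body>\n'
        '</smil>\n'
    )
```

A scene-level search result could then hand the viewer's player this SMIL document directly, rather than a link to the whole video.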