The role of metadata in media workflows has developed far beyond driving basic search and retrieval. Today, run-time decisions are made during automated workflows based on either existing metadata or metadata generated and updated during the workflow itself.
These decisions can be quite simple. For example, if a media analysis shows that an ingested media asset is not in a “house format”, a transcode step can be dynamically added to the ingest workflow. A more advanced example: processing can be “fast tracked” if a context analysis of a transcript (perhaps produced by a separate speech-to-text step) matches trending keywords.
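Both kinds of decision can be sketched as a rule that inspects an asset's metadata and builds the workflow step list at run time. The following Python sketch is illustrative only: the asset fields, house formats, trending keywords, and step names are assumptions, not part of any specific product.

```python
from dataclasses import dataclass

# Assumed values for illustration only.
HOUSE_FORMATS = {"prores_422", "xdcam_hd422"}
TRENDING_KEYWORDS = {"election", "wildfire"}

@dataclass
class MediaAsset:
    asset_id: str
    codec: str                 # result of media analysis
    transcript: str = ""       # result of speech-to-text analysis
    priority: str = "normal"

def plan_ingest_steps(asset: MediaAsset) -> list[str]:
    """Build the ingest workflow dynamically from the asset's metadata."""
    steps = ["register", "media_analysis"]
    # Simple decision: add a transcode step for non-house formats.
    if asset.codec not in HOUSE_FORMATS:
        steps.append("transcode_to_house_format")
    # More advanced decision: fast-track if the transcript matches
    # a trending keyword.
    text = asset.transcript.lower()
    if any(keyword in text for keyword in TRENDING_KEYWORDS):
        asset.priority = "fast_track"
        steps.append("notify_newsroom")
    steps.append("archive")
    return steps
```

In practice the rules themselves would usually live in configuration rather than code, so that workflow behaviour can be changed without redeploying the system.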
However, each of the advancements above has brought its own challenges.
From a standardization perspective this has, of course, been largely positive, increasing interoperability and minimizing data transforms. However, there will never be a “one size fits all” metadata schema that works for every media organization. Indeed, in many cases there is no single schema that suits the different departments within one organization: the needs of an editor and an archivist, for example, are quite different. To address this challenge, we have to consider the data model used to store metadata, so that all the metadata can be stored together yet easily accessed and presented to meet the needs of different users.
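One common way to reconcile a single store with differing departmental needs is to keep the full metadata record in one place and project role-specific views from it. The sketch below illustrates the idea; the roles, field names, and sample record are assumptions made for this example, not a prescribed schema.

```python
# One shared metadata record; each role sees only the fields it needs.
# Roles and field names are illustrative assumptions.
ROLE_VIEWS = {
    "editor": ["title", "duration", "in_point", "out_point"],
    "archivist": ["title", "rights", "retention_policy", "checksum"],
}

def view_for_role(record: dict, role: str) -> dict:
    """Project the full metadata record down to one role's fields."""
    fields = ROLE_VIEWS.get(role, [])
    return {key: record[key] for key in fields if key in record}

# Example record combining editorial and archival metadata.
record = {
    "title": "Evening News",
    "duration": "00:28:30",
    "in_point": "00:00:10",
    "out_point": "00:27:55",
    "rights": "in-house",
    "retention_policy": "7 years",
    "checksum": "sha256:0f3a",
}
```

The editor's view then contains only timeline-related fields, while the archivist's view contains rights and retention fields, yet both are projections of the same underlying record, so nothing has to be duplicated or transformed between systems.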