We Need Smarter MAM

By Guest - September 13, 2017

The famous Notorious B.I.G. line “mo money, mo problems” is translated into the media asset management landscape by Alex Buchanan of NMR: more media means more problems, and the remedy is a smarter MAM.

The Notorious B.I.G. once lamented “mo money, mo problems”, and the same could be said of the Media Asset Management (MAM) landscape. The ability to produce media content has been democratised to such an extent that it’s everywhere. Really, everywhere: from multinational broadcasters to mobile phones in the playground. Anyone who’s been tasked with managing media and its associated storage, transformation and distribution is likely to have trodden a path from the chaos of files scattered everywhere, through the basic organisation of files and folders, to investing in a MAM system.

“mo media, mo problems”

In general, most companies want similar things from a MAM, and most MAMs meet those requirements with varying degrees of functionality, flexibility and price. But when things begin to scale beyond a few thousand assets and some workflows, it’s likely we’ll be making demands of the assets that may not be possible with the data that already exists in the system, such as: “find all the assets featuring my customer’s logo so they can evaluate the effectiveness of a sponsorship deal”, or “find all the assets featuring a rock star so I can compile a tribute package”. To achieve this we need to know more about our assets, but we rarely have enough people, time or money to find out, and we find ourselves thinking like B.I.G.: “mo media, mo problems”.

In this scenario, we need the MAM to be smarter; and to be smarter it needs to store more information about the assets. But what are the intrinsic values of a piece of media, and how do you extract them?

What?

We think this can be expressed as three main types:

  1. Information about the content (e.g. file name, series, advertiser, brand, campaign, production personnel, etc.);
  2. Description of what’s in the content (e.g. speech, faces, music, products, graphics, on-screen text, logos, etc.);
  3. Technical qualities of the content (e.g. technical quality control characteristics: flash patterns, picture quality, audio levels, etc.).

The “about” information is standard MAM metadata and should be populated automatically (if your workflow isn’t doing this then please speak to us).
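
As a simple illustration of what “populated automatically” might look like, here is a minimal sketch that derives “about” metadata from a house file-naming convention. The convention, field names and example file are hypothetical, not NMR’s actual scheme.

```python
# Hypothetical sketch: deriving "about" metadata from a file-naming
# convention such as SERIES_EPISODE_VERSION_LANG.mxf (illustrative only).
from dataclasses import dataclass

@dataclass
class AboutMetadata:
    series: str
    episode: str
    version: str
    language: str

def parse_filename(name: str) -> AboutMetadata:
    stem = name.rsplit(".", 1)[0]                     # drop the extension
    series, episode, version, language = stem.split("_")
    return AboutMetadata(series, episode, version, language)

print(parse_filename("PLANETX_EP03_CLEAN_ENG.mxf"))
# AboutMetadata(series='PLANETX', episode='EP03', version='CLEAN', language='ENG')
```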

The “in” and “of” values require analysis of the content itself and, to be useful in a large-scale operation, must be time-based to allow accurate and fast identification. To recognise the value of a piece of content and increase efficiencies, the company needs the ability to identify this information quickly. Logically, it should be stored as metadata alongside the asset in the MAM, and the business can then really begin to realise the value of the media content it’s managing.
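
To make that concrete, here is a minimal sketch of how time-based “in” and “of” metadata might sit alongside an asset record. The field names and structures are assumptions for illustration, not any particular MAM’s data model.

```python
# Illustrative data model: an asset with its "about" metadata plus
# time-coded detections that can be queried quickly.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Detection:
    kind: str          # "face", "logo", "speech", "qc_flag", ...
    label: str         # e.g. "ACME logo", "flash pattern"
    start_tc: str      # timecode in, e.g. "00:01:12:05"
    end_tc: str        # timecode out
    confidence: float

@dataclass
class Asset:
    asset_id: str
    about: dict                                   # standard MAM metadata
    detections: List[Detection] = field(default_factory=list)

    def find(self, label: str) -> List[Detection]:
        """Return every time-coded occurrence of a label in this asset."""
        return [d for d in self.detections if d.label == label]

asset = Asset("A0001", {"advertiser": "ACME", "campaign": "Q4"})
asset.detections.append(Detection("logo", "ACME logo", "00:01:12:05", "00:01:15:00", 0.92))
print(asset.find("ACME logo"))
```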

How?

There are a growing number of solutions that utilise machine learning and computer vision techniques (debatably assuming the “AI” moniker) to automatically extract this type of data from assets. Generally, they should be able to ingest files and streams, perform some form of analysis to detect items of interest (e.g. faces, objects, logos, speech, quality markers, etc.) and give a value to the items detected (e.g. recognise an object as an oak tree), and then present the data in some way for further consumption.
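
Sketched in outline, such a pipeline might look like the following. The ingest(), analyse_frame() and add_timed_metadata() calls are placeholders standing in for whichever analysis service and MAM API are actually in use; they are not real vendor SDKs.

```python
# Illustrative pipeline only: ingest -> analyse -> label -> store.
from typing import Iterable, List

def ingest(source: str) -> Iterable[bytes]:
    """Yield frames or segments from a file or incoming stream (placeholder)."""
    return []

def analyse_frame(frame: bytes) -> List[dict]:
    """Detect items of interest and return labelled, scored candidates (placeholder)."""
    return []

def enrich_asset(asset_id: str, source: str, mam_client) -> None:
    """Run analysis over a source and store each detection as time-based metadata."""
    for index, frame in enumerate(ingest(source)):
        for item in analyse_frame(frame):
            # add_timed_metadata() is a hypothetical MAM client call
            mam_client.add_timed_metadata(
                asset_id,
                frame_index=index,
                kind=item["kind"],            # e.g. "face", "logo", "speech"
                label=item["label"],          # e.g. "oak tree"
                confidence=item["confidence"],
            )
```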

A few exhibitors at IBC 2017 are showing “AI” products for media content. Some use commercially available analysis services from Amazon, Google, Microsoft, IBM, etc. A few are deployed on premises but most are in the cloud.

The challenge for users is to find a solution which is applicable to their business, rather than a service so generic that it can’t easily be tuned to specific business requirements. For example, we recently heard of a well-known public cloud provider’s analysis algorithms identifying a “clock” (the countdown timer which typically appears before an item of broadcast content) as an automotive speedometer. Which is not altogether surprising, but poses a challenge if the requirement is to find all the clocks in an archive and perform optical character recognition (OCR) tasks.
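
One possible way to tune a generic service to a broadcast vocabulary is to post-process its output: remap labels it gets nearly right (such as the speedometer/clock confusion above) and flag those segments for a follow-up OCR pass. The mapping and confidence threshold below are purely illustrative assumptions, not recommended values.

```python
# Hedged sketch: remap generic labels to domain terms and mark segments
# that need a further OCR step.
DOMAIN_LABEL_MAP = {
    "speedometer": "countdown clock",   # known confusion for broadcast clocks
    "gauge": "countdown clock",
}

def refine(detections):
    refined = []
    for d in detections:
        if d["confidence"] < 0.6:       # drop weak generic guesses (illustrative threshold)
            continue
        label = DOMAIN_LABEL_MAP.get(d["label"], d["label"])
        refined.append({**d, "label": label, "needs_ocr": label == "countdown clock"})
    return refined

print(refine([{"label": "speedometer", "confidence": 0.81, "tc": "00:00:05:00"}]))
# [{'label': 'countdown clock', 'confidence': 0.81, 'tc': '00:00:05:00', 'needs_ocr': True}]
```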

Similarly, the requirement may be to find images of a particular employee, but they’re not a celebrity and the chances of a public analysis service being able to recognise them are very low. Furthermore, they are a private individual who doesn’t want their identity to be stored on the internet.
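
For that kind of requirement, one possible approach is to run face matching entirely on premises against a small local gallery of the employee’s photos, for example with the open-source face_recognition library, so that no imagery or identity data leaves the building. The paths and tolerance below are illustrative.

```python
# On-premises face matching sketch using the open-source face_recognition
# library; reference photos and frames stay on local storage.
import face_recognition

# Encode a reference photo of the employee (kept locally)
reference = face_recognition.load_image_file("gallery/employee_01.jpg")
known_encodings = face_recognition.face_encodings(reference)

def frame_contains_employee(frame_path: str) -> bool:
    frame = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
        if any(matches):
            return True
    return False

print(frame_contains_employee("frames/asset_A0001_000123.png"))
```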

There are, of course, similar challenges associated with the location of the AI services in relation to the content to be analysed. If the archive is in the cloud but the incoming live stream is delivered down an SDI cable, then where should the solution be deployed?

Summary

Perhaps unsurprisingly for such a young market, there is some jostling amongst vendors and solutions providers to find the right approach and develop the best software to address the myriad media content management requirements of an enterprise, against a landscape of data protection and privacy. For a business, this is the time to help the vendors shape the future of this AI, because one thing is for sure: the amount of media content is only going to increase and, consequently, so will the problems associated with managing it and maximising its potential.


This guest post was written by Alex Buchanan, Chief Operating Officer of NMR and Project Manager of the ReCAP project. NMR build and implement tools to help companies manage and deliver media content. ReCAP (Realtime Content Analysis and Processing) is a project led by NMR and co-funded by a grant from the European Union’s Horizon 2020 programme. NMR will be demonstrating a beta version on the Vidispine stand (3.A23).
