Can AI Deliver Serious Value to the Media Business?

By Guest - September 10, 2017

As I write this, our planet apparently faces a dilemma: either we are heading into a new golden age of machine-assisted prosperity, or we are mere months away from a machine-enabled apocalypse, as super-intelligent machines carelessly wipe their unworthy human creators from the face of the Earth.

I refer of course to the technology subject currently on everyone’s lips – not to mention countless breathless press releases – Artificial Intelligence (AI). I should probably say something right up front. I’m an old AI hacker going right back to the 80s (pretty much the last time AI was popular); designing and building “intelligent” systems was my first ever job, and it has been my first and enduring technology love affair – and yes, I wish there were a way to write that which sounds slightly less disturbing.

On a slightly more serious note, a long association with AI does provide a sense of perspective – I think we’re finally arriving at a point where AI can deliver some serious value to media businesses, amongst others. I want to talk about where some of this value is going to come from, and how we get there. But first, there are a ton of assumptions about what people understand by “AI”, so I’d like to go back to basics a little and provide a view on what Artificial Intelligence is and how it works – and, as part of that, think briefly about some of the organisational and technical challenges that need to be thought through to make exploiting AI a success.

What is AI? Despite the hype surrounding it as the new hotness, AI is actually one of the oldest disciplines in computing – scientists and researchers have been fascinated by the potential of “machines that can think” right back to the days of Alan Turing. Slightly more formally, Artificial Intelligence is a collective name for “computer systems that simulate human-level cognitive processes” – i.e. they mimic the human brain by reasoning, learning, judging, predicting, inferring and initiating action.

More usually, we talk about AI by giving examples of systems that seem “intelligent” in some way. These can be very complex tasks such as recognising speech or driving cars. They can also be simpler-seeming or more mundane tasks, such as automatically clipping up the important bits of a football match, or recommending video clips that you might be interested in watching. This list starts to suggest the breadth of possible areas in which AI can be called upon to support businesses, whilst at the same time giving us some food for thought: one of the reasons we reach for examples so readily is that we find it so difficult to capture and define what “intelligence” actually is. If we find it so hard to define, how can we build technology to emulate it?

In common with many areas of study and technology, AI comes in lots of shapes and sizes. You’ll hear about “strong” and “weak” AI, symbolic and connectionist approaches, many different varieties of “deep learning” – there are thousands of books, Medium articles and blogs covering AI, and we can only think about what is effectively the view from orbit in this piece. Nonetheless, I’d like to plant the seed of a couple of important approaches.

Firstly, let’s consider the traditional approach to AI, where we build systems (knowledge-based systems) that try to emulate the way humans think and reason. This usually involves a quite complicated process in which human analysts (once upon a time referred to as Knowledge Engineers) work with experts to understand how they go about solving particular problems in quite constrained domains, and then codify that knowledge into a general set of rules that can be applied to reach a solution when you start from a given set of facts. A simple example:

IF UK_Broadcast_Customer THEN RequiredDeliveryPackage = DPP
IF US_Studio_Customer THEN RequiredDeliveryPackage = IMF

So, given we assert that UKTV is a UK_Broadcast_Customer, if we then ask the system what RequiredDeliveryPackage we require for UKTV, the answer comes back as “DPP”, and we can start to follow the rules that deal with actions for DPP packages. It’s a ridiculously trivial example, but it shows simply that we can build systems that can reason, generate new knowledge and take specific actions, based on general rules. That’s a powerful concept, especially when you think about the amount of time spent across the industry coding up workflows to do repetitive tasks with similar outcomes. The obvious problems with this approach are that it can be very time-consuming to build and maintain knowledge bases, and that they can only infer and reason about things they know about – the intuition and leaps of deduction that humans make all the time are completely out of reach. But it’s a proven approach and still a valid one.
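
To make the rule-and-fact pattern a little more concrete, here is a minimal sketch of the same idea in Python. It is purely illustrative – the rules and values are just the ones from the example above, and real knowledge-based systems use far richer rule languages and inference engines.

# Toy rules: (condition fact, derived fact) pairs mirroring the example above.
RULES = [
    ("UK_Broadcast_Customer", ("RequiredDeliveryPackage", "DPP")),
    ("US_Studio_Customer", ("RequiredDeliveryPackage", "IMF")),
]

def infer(facts):
    """Forward-chain over the rules: keep applying them until nothing new is derived."""
    derived = {}
    changed = True
    while changed:
        changed = False
        for condition, (key, value) in RULES:
            if condition in facts and key not in derived:
                derived[key] = value
                changed = True
    return derived

# Assert that our customer is a UK broadcaster, then ask what package is required.
print(infer({"UK_Broadcast_Customer"}))   # {'RequiredDeliveryPackage': 'DPP'}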

Our second glance is at a different class of techniques, involving constructs called neural networks, which again have been around in theory since the 1940s and 50s. These are often referred to as machine learning techniques (beware – there are lots more ML techniques than just neural nets), or commonly these days as “deep learning” techniques. This style of AI is essentially based on trying to emulate some of the physical processes in the human brain. A neural network consists of a set of inputs, a collection of internal (hidden layer) nodes connected together (with each connection having a different weighting), and a set of outputs, as shown in the example diagram below.
[Diagram: a simple neural network – a set of inputs, a layer of connected hidden nodes, and a set of outputs]
Depending on the weighting of the connections in the hidden layer, each combination of inputs will result in a defined output. Sounds very simple – but the beauty of the neural network is that you don’t need to set the weights either manually or through some kind of programming; instead the net can be set up to learn the correct weightings by example. Putting the net into learning mode and providing a collection of examples of which inputs produce which outputs allows it to adjust the connection weights automatically. Once trained, the net will provide a “best fit” response to a given set of inputs – even if it has never seen them before.
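
To give a flavour of what “learning the weights by example” means in practice, here is a deliberately tiny sketch in Python: a single artificial neuron taught the logical OR function by being repeatedly nudged towards the examples. It is a toy illustration of the principle, not how real deep learning systems are built or trained.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training examples: inputs and the outputs they should produce (logical OR).
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
weights = rng.normal(size=2)   # start with random connection weights
bias = 0.0

for _ in range(5000):          # "learning mode": repeatedly show the examples
    outputs = sigmoid(inputs @ weights + bias)
    error = outputs - targets
    grad = error * outputs * (1 - outputs)
    weights -= 0.5 * (inputs.T @ grad)   # nudge the weights towards the examples
    bias -= 0.5 * grad.sum()

print(np.round(sigmoid(inputs @ weights + bias), 2))   # close to [0, 1, 1, 1]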

This approach is fantastic for dealing with complex problems such as image recognition. In the very simplest example possible, the set of pixels that make up an image are used as inputs to the neural network, with the outputs being one of ten different digits. The network is then fed many pictures of different digits (in different styles, fonts etc.), being told in each case what the output should be – this trains the network to discriminate between them. Once trained, when a new image is fed into the network, it will tell us, with an associated level of certainty, what digit it thinks the image most looks like – a handwritten “2” might be assessed as looking 70% like a “2”, 40% like a “3” and 0.05% like a “1”, for example.

[Diagram: recognising a handwritten “2”]
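
For anyone who wants to play with the idea, the digit example is easy to reproduce with off-the-shelf tools. The sketch below uses scikit-learn’s small built-in digits dataset (8x8 pixel images, so 64 inputs and 10 possible outputs); the hidden layer size and other settings are illustrative choices rather than recommendations, and this particular classifier’s confidences sum to 100% across the ten digits.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load labelled example images of the digits 0-9 and hold some back for testing.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, random_state=0)

# Train a small neural network on the labelled examples.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# For an unseen image, the network gives a confidence for each possible digit.
probabilities = net.predict_proba(X_test[:1])[0]
for digit, p in enumerate(probabilities):
    print(f"{digit}: {p:.0%}")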

It’s practically impossible to build knowledge-based systems to do work like image and speech recognition – for a start, it’s extremely difficult to build and maintain rule sets that can reason about such topics – but the machine learning approach can carry out complex tasks like this at speed and scale. Of course the example above is completely trivial – real applications often involve nets of nets, one feeding another, to break down a very complex input set into something understandable.

So – magic bullet? Not quite. As with other approaches, there are some pitfalls. One immediate (if non-obvious) contrast with the knowledge-based approach we looked at earlier is that the neural network is a black box. We can train it, but we don’t know why it gives the outputs that it does. With a set of rules, you can follow the pattern of reasoning. With a neural net, you simply have to accept the outcome, and for domains where why is as important as what, that may not be enough.

The biggie, though, is that machine learning applications tend to be data intensive. You need to work out what data you need for the inputs to the net, where you get it from and how it needs to be cleaned up – and once you have that, you need to generate enough examples to be able to properly train the net, and then you need to update the datasets and retrain as the problem area evolves. Many organisations simply don’t yet have either the amount of data required, or the capability to collect and move it around the organisation, to make machine learning useful. Organisations like Facebook and Google do – and companies like Google and Amazon are already looking to provide AI-as-a-service for features like speech-to-text processing.
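
To give a sense of what “AI-as-a-service” looks like from the consuming side, here is a hedged sketch using Amazon Transcribe via the boto3 library to turn an audio file into text. The bucket, file and job names are made up for illustration, and real use needs AWS credentials and permissions – the point is simply that the models, the training data and the infrastructure all belong to the provider.

import time
import boto3

# Create a client for the managed speech-to-text service.
transcribe = boto3.client("transcribe", region_name="eu-west-1")

# Start a transcription job on an audio file already sitting in S3
# (the bucket and file names here are placeholders).
transcribe.start_transcription_job(
    TranscriptionJobName="example-interview-job",
    LanguageCode="en-GB",
    MediaFormat="mp3",
    Media={"MediaFileUri": "s3://example-bucket/interview.mp3"},
)

# Poll until the service has finished – no models or training data of our own involved.
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="example-interview-job")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if status == "COMPLETED":
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])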

“Imagine a business where AI systems put together and optimise your linear and VoD schedules for you”

So there are several different ways to do “AI”, and there are advantages and pitfalls to each – and there is actually a growing and quite exciting field of combining different techniques in a classic best-of-breed approach to solve very complex problems. Is it all worth it, though?

Well, imagine a business where AI systems can carry out 80% of your editorial compliance processing, detecting and perhaps removing swearing and nudity automatically, and flagging up those areas where the compliance teams need to make human judgement. Team capacity goes up dramatically, with most of the boring bits being removed.

Imagine a business where AI systems put together and optimise your linear and VoD schedules for you, based on your rights warehouse, audience demographics and habits, social media reactions and viewing numbers. Again, the computer handles the boring bits, allowing skilled schedulers to set the shape of the schedule and tune it.

Imagine a business where you don’t need an army of programmers to cope with the workflows necessary to deliver your content to different VoD platforms, but where your knowledge base holds all the rules that define where to get content and metadata, and how to transcode, package and deliver material – you just need to tell it where you want the content to go, and the system builds, and continuously optimises and automates, your workflows. The technology teams can focus on product and service innovation rather than continually re-writing bits of workflow glue.

Imagine a business where an AI is continually keeping track of your production projects for you, and is able to re-plan work for you as real-time events such as weather and team illness happen – and simultaneously recommend ways of saving costs and getting a better creative output.

Imagine an environment where you introduce your colleagues to a computer assistant by email, and it manages the process of finding the right slots in diaries for that all important customer product development kick-off…

“the harder part is making sure that our goals and expectations are meaningful, precise and realistic”

All of this is either already available as a commercial service, or buildable today, tomorrow, or the day after (relatively speaking). This is the promise of AI, if we understand the outcomes we’re looking for, and are able to make the right choices on how we get to those outcomes. What’s difficult is defining what “good” looks like in such a way that it can be explained to the computer – for example, who is to say whether a given schedule is good or bad? In many ways the technology is the easy bit, especially with the ready availability of resources in the public cloud – the harder part is making sure that our goals and expectations are meaningful, precise and realistic.

Final point. As we all know, there are no such things as pure technology projects – every project or new technology introduction is really about organisational change, and how people react to it. This is profoundly true when considering AI solutions. Again, the current hype concentrates on the extremes – AI will make everyone redundant, or it will free everyone for a life of leisure – and both extremes are deeply unhelpful.

What we have with AI is a real opportunity to provide tools that genuinely support people and free up their time and creativity for the higher-order problem solving and collaboration that machines simply can’t do. At the moment, we use people for lots of repetitive, data-oriented tasks (think of legal document discovery, or content quality control, or planning and scheduling), because traditional techniques can’t handle them. AI can. The obvious issues are then about making sure that AI systems are designed around people; that people are given new and interesting challenges as parts of their jobs are made easier and simpler by the technology; and that we take into account the aspirations, concerns and development needs of people as we make the change. There can be a very emotional reaction to machines replacing parts of our jobs, especially those bits that need even low-level “human” analysis and judgement – and we need to be very mindful and respectful of that as we implement change. AI will be great – but it’s unlikely ever to match the creative, improvisational and intuitive skills of successful people. In a business like ours, we would forget that at our peril.


This guest post was written by Steve Sharman, Director at Hackthorn Innovation Ltd. Find Steve on Twitter and in the Vidispine stand (3.A23) during IBC.
