Transforming Captioning with the Public Cloud
Is the cloud ready for the media industry, and is the media industry ready for the cloud? Paul Markham from Ericsson gives his view in the second of our pre-IBC guest posts.
Public cloud services have revolutionised the internet start-up business over the past few years. It’s now unthinkable for a start-up to adopt a traditional IT infrastructure model, but this transformation is only just beginning to have an impact on TV broadcast infrastructure.
Captioning services straddle countries and languages, but also technical worlds. We deliver into the traditional broadcast chain but aren’t constrained by the large bandwidth and intensive I/O requirements of playout and media management. Our technical solutions require broadcast quality resilience, but also highly variable capacity across the day, week and year. Our competition in many regions originates from outside the broadcast world, unencumbered by legacy infrastructure and historical expectations of how things might be done.
“Captioning services straddle countries and languages, but also technical worlds.”
There’s a need to transform relatively isolated regional operations, many of which started life as individual businesses, into a coherent multi-lingual whole where work can be shared, handed over and collaborated on regardless of global location. The drive is towards more flexible working, including homeworking, piece-rate, freelance and follow-the-sun. This improves business agility; with a cloud-based platform and a flexible workforce we can scale up to accept new work in hours rather than months, but also scale down to save money whenever that capacity is not needed.
Employee tolerance for outdated corporate software with poor interfaces is diminishing; the rise of online SaaS tools such as Dropbox and Google Docs is significant. Employees use these tools because they’re more convenient and easier than the corporate equivalents, but they’re a form of shadow IT that presents a number of business risks. It’s important therefore not only to develop a secure platform that makes it difficult for staff to break security rules, but also to build a user experience good enough that they won’t want to.
Running on public cloud infrastructure has allowed us to spend our time, capital and management focus building and evolving an asset management interface that encapsulates our specific processes and enhances our productivity, rather than buying and installing hardware. Using automated testing and deployment, software changes to aid productivity or meet new customer requirements can be delivered in days, and we’re ready to incorporate new developments in automatic speech recognition and artificial intelligence as they happen.
Is it secure?
Security is an ever-escalating area of concern in the enterprise, and the security of cloud deployments is a common discussion point. Security professionals are starting to agree that public cloud infrastructure offers opportunities to improve security when used in the right way; it’s notable that most of the high-profile hacks and data leaks of recent memory have been from private infrastructure.
Cloud deployments aren’t constrained by capital budgets or frozen in time at the point of installation. They can be totally isolated from office IT and from each other in ways that would be cost-prohibitive on dedicated infrastructure, reducing the exposure to social engineering and the attack vector of the employee desktop. Providers offer built-in encryption such that there’s no scope for low-level leakage and every authorised decryption can be tracked with certainty. It also eliminates the myriad of technical staff who would otherwise have unrestricted access to sensitive data, such as pre-broadcast video material.
Since the EU-US Safe Harbour programme was ruled invalid, major cloud providers such as Amazon have formally agreed with the EU that data protection compliance can be achieved on their infrastructure, including for the upcoming General Data Protection Regulation (GDPR).
How do costs compare?
Cost is a significant driver for moving to public cloud, but a reliable Total Cost of Ownership (TCO) comparison is very hard to do. Traditional on-premises solutions are costed on installed capacity over a contract period of years. Cloud solutions are charged on actual capacity used, by the hour. When buying hardware we’re in the habit of over-specifying what we need because we account for a lifespan of years; for the cloud we need to break that habit and try to specify what we need hour by hour. This can mean growing a deployment week by week rather than trying to build five years’ capacity up front, but it can also mean looking at capacity usage across the day, week, month and year.
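To make the fixed-versus-hourly distinction concrete, here is a toy cost comparison. All prices and capacity figures are illustrative assumptions, not real Ericsson or cloud-provider numbers; the point is only the shape of the calculation.

```python
# Toy comparison of fixed provisioning vs hourly cloud billing.
# All unit counts and prices below are illustrative assumptions.

def fixed_cost(peak_units, unit_cost_per_year):
    """On-premises model: pay for peak capacity all year round."""
    return peak_units * unit_cost_per_year

def hourly_cost(usage_profile, unit_cost_per_hour):
    """Cloud model: pay only for the units actually running each hour.
    usage_profile is a list of (hours_per_year, units_in_those_hours)."""
    return sum(hours * units * unit_cost_per_hour
               for hours, units in usage_profile)

# Hypothetical captioning workload: 10 units at peak for 1 hour/day,
# 2 units for the remaining 23 hours/day.
profile = [(365, 10), (365 * 23, 2)]

print(fixed_cost(10, 2000))        # on-premises: billed for peak all year
print(hourly_cost(profile, 0.25))  # cloud: billed per hour actually used
```

With these made-up numbers the hourly model costs a fraction of the fixed one, precisely because the peak lasts only one hour a day; the real comparison depends entirely on the workload’s shape, which is why the TCO exercise is hard.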
With live captioning there are specific peaks of activity during the day when networked stations run local programmes. These peaks have defined our technical capacity, to the extent that for 23 hours of the day most of it lies idle. We know exactly in advance when these peaks are, so provisioning capacity for them doesn’t require any sophisticated auto-scaling capability; it just requires integration with the broadcast schedules.
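A minimal sketch of what that schedule integration might look like, assuming a simple schedule format of (start, end) times for live programmes; the baseline capacity, per-programme units and lead/cool-down margins are all illustrative assumptions rather than a real system.

```python
from datetime import datetime, timedelta

# Schedule-driven provisioning sketch: because live captioning peaks
# are known in advance from the broadcast schedule, capacity can be
# reserved per programme instead of reacting with auto-scaling.
# All constants here are assumptions for illustration.
BASELINE = 2                     # units kept running at all times
UNITS_PER_LIVE = 1               # extra units per simultaneous live programme
LEAD = timedelta(minutes=15)     # spin up ahead of air time
COOLDOWN = timedelta(minutes=5)  # keep running briefly after the programme

def scaling_actions(schedule):
    """Turn broadcast schedule entries into (time, desired_capacity)
    scaling actions. `schedule` is a list of (start, end) datetimes
    for live programmes that need captioning."""
    events = []
    for start, end in schedule:
        events.append((start - LEAD, +UNITS_PER_LIVE))
        events.append((end + COOLDOWN, -UNITS_PER_LIVE))
    actions, level = [], BASELINE
    for when, delta in sorted(events):
        level += delta
        actions.append((when, level))
    return actions
```

For example, two overlapping evening programmes yield a small sequence of pre-computed capacity changes that could be handed to whatever scheduled-scaling mechanism the cloud provider offers, with no reactive auto-scaling involved.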
For Disaster Recovery (DR), a bit of engineering can deliver a resilient setup without paying for double the capacity. By adopting a “pilot light” approach to DR, we can have a continual data backup with the full duplicate infrastructure ready to be instantiated within a few minutes and collapsed again as soon as it’s no longer needed.
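The pilot-light pattern described above can be sketched as a small state machine: only the data replication runs continuously, while the duplicate stack exists merely as a template until a failover. The class and method names here are illustrative, not a real provider API.

```python
# Sketch of the "pilot light" DR pattern: continuous data replication
# is the only always-on cost; the full duplicate stack is built from
# templates on failover and torn down again afterwards.
# Names and the stack contents are illustrative assumptions.

class PilotLightDR:
    def __init__(self):
        self.replicated_data = []  # kept warm at all times
        self.standby_stack = None  # full stack exists only during failover

    def replicate(self, record):
        """Continuous backup: the only always-running DR component."""
        self.replicated_data.append(record)

    def failover(self):
        """Instantiate the full duplicate infrastructure from templates,
        attaching the data that has already been replicated."""
        self.standby_stack = {"servers": 3,
                              "data": list(self.replicated_data)}
        return self.standby_stack

    def collapse(self):
        """Tear the duplicate stack down as soon as it's no longer
        needed, so there is no ongoing double-capacity cost."""
        self.standby_stack = None
```

The design choice is the trade-off the article describes: recovery takes a few minutes rather than being instantaneous, but the steady-state cost is the backup alone rather than a full duplicate of production.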
Transforming for the future
Much is said about the transformational value of the cloud, but it’s important to recognise that this is a business transformation more than a technical one. We’re used to locking ourselves into a long hardware investment cycle which defines and limits what we can achieve for a fixed period of time. Public cloud removes the need for that commitment. We pay by the hour for what we need, when we need it. It means there’s much more scope for experimentation, innovation and faster evolution in the services we provide. It means capacity management is no longer a factor when winning or losing business, and with complete automation of the build process, it means deploying an identical service for a new customer can take minutes rather than months, even when significant additional infrastructure is required.
“We pay by the hour for what we need, when we need it. It means there’s much more scope for experimentation, innovation and faster evolution in the services we provide.”
Fundamentally, the use of public cloud means we can focus business capacity on the software and people that directly allow us to deliver new and better services to our customers, rather than having to continually manage and finance the lower layers of technical provision. Public cloud is increasingly ready for the broadcast chain, and its time is coming.
Paul Markham is the Head of Access Services Platform Architecture at Ericsson. Ericsson Access Services will demo their solution for transforming captioning services in stand 1.D61 at IBC 2017.