The State of AI in Media & Entertainment: Pedal to the Metal

As the world turns its sights on the “new normal,” the future of the media and entertainment industry is starting to come into focus. Cloud-based everything. Internet of Production. Digital distribution. Automation. Truth is, most of these changes were already in progress. It’s the timeline that has been dramatically shortened: 5-year plans now have to be implemented in 5 months. And among the handful of technologies being fast-tracked, artificial intelligence holds a special place because it can address two of the industry’s most pressing post-COVID challenges: (1) how to better manage its inherent product risk, and (2) how to protect and optimize its precious financial, human, and technological resources.

There are at least 3 ways that AI will play an even bigger role in media and entertainment moving forward:

1. Powering Holistic Knowledge for the New Normal

What we see happening all around us is first and foremost a failure of knowledge. For decades, the free flow of goods, people, and information turned the world into a single, highly connected “system” transcending boundaries, markets, languages, and cultures. By becoming so tightly interconnected, human civilization became an entirely new mathematical object: a single, integrated, and nonlinear “complex adaptive system,” where even the smallest of inputs (someone gets infected at the Wuhan wet market) can produce the largest of outputs (the world shuts down), and where the behavior of the whole cannot be predicted just by looking at the behavior of the parts.

Meanwhile, we forgot to update our ways of knowing the world. Our traditional, linear forecasting methods (the very math we used to model the world) met this new reality like a hammer striking a beehive: a tragicomic failure. Think about how the human body is organized as a highly interdependent “system,” yet we’ve organized knowledge about it into functional areas (eyes, gut, feet, brain, etc.), like parts in a car.

This is what makes our current challenge so profound: what we need is no less than a revolution in how we develop knowledge about ourselves and the world around us. And getting there will take a Manhattan-project-sized intellectual and technological effort.

The good news is that the combination of an abundance of data (expect an explosion in the next few months, as smart cities and environmental/health sensor ecosystems get prioritized), cheap compute, and the coming of age of sophisticated computational intelligence methods carries immense potential to create this “Knowledge Renaissance” that will change our very understanding of everything.

This is particularly true in the media and entertainment industry, where tremendous amounts of largely unstructured product and audience data (a considerable challenge in and of itself) will now need to be unified and integrated with more contextual datasets (weather, epidemiology data, etc.), which have quickly moved from “nice-to-haves” to “must-haves” for the New Normal.

What is the risk of releasing a title theatrically during flu or COVID-19 season? And when do those start and end, by the way? And if tentpole releases are crammed in a short, pandemic-free theatrical season, how do you make sure a title’s target audience or fanbase doesn’t overlap with another release in the same window? Is your target audience at a higher risk of developing serious complications from COVID-20, 21, 22?

Sure, AI can provide some serious answers to most of these existential questions and help develop brand new knowledge about how to succeed in the “new normal” media market. But not easily or cheaply. And, most importantly, not without a dramatic shift in the methods and tools we use. For example, answering any of these windowing and scheduling questions will require the unification and holistic analysis of dozens of datasets, each in its own format, ontology, and location. This is a part of AI (knowledge representation) that isn’t machine learning and is poorly understood overall, yet it is going to become even hotter in the New Normal.

Expect to talk a lot more about ontologies and RDF formats. Expect a lot more graph computing (which allows data points to be connected faster and more semantically). Expect a lot more “AutoML” (one-click, out-of-the-box data and machine learning pipelines) to be deployed quickly on low-hanging-fruit problems like windowing. Expect to see “data exchanges” pop up within and across industries, as contextual data like environmental conditions, weather, and epidemics increasingly drive marketing and scheduling decisions. None of this will be easy, or cheap, or fast. But it’s the price to pay to develop “Knowledge 3.0” and approach risk more holistically in the New Normal.
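
To make the graph idea concrete, here is a minimal sketch using Python's rdflib: it loads a toy release schedule and a toy epidemiological dataset into one RDF graph and answers a simple windowing question with a single SPARQL query. The “ex:” namespace, the predicates, and the data are all invented for illustration; a real pipeline would map each source's own ontology into a shared one.

```python
# A minimal sketch, not a production pipeline: the "ex:" namespace and
# predicates (releaseMonth, fluRiskLevel, ...) are invented for illustration.
# It shows how RDF lets a release schedule and a contextual epidemiological
# dataset live in one graph, and how a single SPARQL query joins them.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/mne/")
g = Graph()
g.bind("ex", EX)

# Release-schedule dataset (normally ingested from the studio's own systems).
g.add((EX.TitleA, RDF.type, EX.Title))
g.add((EX.TitleA, EX.releaseMonth, Literal("2020-12")))
g.add((EX.TitleB, RDF.type, EX.Title))
g.add((EX.TitleB, EX.releaseMonth, Literal("2021-06")))

# Contextual dataset (e.g. public-health data mapped into the same ontology).
g.add((EX.Month_2020_12, EX.monthLabel, Literal("2020-12")))
g.add((EX.Month_2020_12, EX.fluRiskLevel, Literal("high")))
g.add((EX.Month_2021_06, EX.monthLabel, Literal("2021-06")))
g.add((EX.Month_2021_06, EX.fluRiskLevel, Literal("low")))

# One query now answers a cross-dataset windowing question:
# which titles are currently scheduled into a high-risk month?
query = """
SELECT ?title ?month WHERE {
    ?title a ex:Title ;
           ex:releaseMonth ?month .
    ?ctx   ex:monthLabel   ?month ;
           ex:fluRiskLevel "high" .
}
"""
for title, month in g.query(query, initNs={"ex": EX}):
    print(f"{title} is scheduled for {month}, a high flu-risk window")
```

The point of the graph approach is that the join happens at the semantic level: once both datasets share an ontology, new contextual sources can be added without redesigning a schema.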

2. Powering the “Internet of Production”

Running all of its operations from its employees’ living rooms carries a very special price for the media and entertainment industry. The compute, security, and network requirements of managing even a single distributed post-production pipeline amount to a Black Swan in and of themselves. When all is said and done, the industry will need to celebrate the tireless CTOs and SVPs and VPs of production who, in a matter of weeks, stood up entire cloud-based, distributed post-production workflows. Like it or not, this hastily cobbled-together “Internet of Production” is here to stay. And because of its layout, and the costs associated with its operation, it very much needs artificial intelligence.

Consider this: in the past few weeks the media and entertainment industry has not only decentralized its entire organizational model, but dramatically sped up its transition (already under way) to cloud-based workflows, archives, and computing resources. Without a doubt, this was the right thing to do. And without a doubt, this was also the most expensive thing to do. Storage, compute, and data egress costs are likely skyrocketing throughout the industry, so this won’t be sustainable for long. Even simple AI tools can offer a better, smarter way of managing data and compute across a cloud architecture that will evolve toward a hybrid mix of providers and capabilities. The media industry’s next technological challenge will be one of cost and flexibility.

One simplified vision of how AI could power the “Internet of Production”: an “AI QB” (for “quarterback”) would intelligently manage cloud assets based on their attributes and needs (nature, origin, version, size, accessibility and security requirements, static/dynamic, low/high latency, etc.) relative to costs, capabilities, and levels of security.

For example, the “AI QB” could autonomously detect a rendering job’s urgency and compute requirements, calculate its probability of success or failure, and use this data to route it to the instance that maximizes the price/performance mix. This “super-agent” could also centrally manage data exchanges (where industry players and federal and local governments can share critical contextual information), access controls, and admin privileges, and ensure security by monitoring the entire network for unusual usage patterns. And because of this centrality, it should also have the simplest of user experiences: natural language. There is no reason the user experience of enterprise AI should be any different from Alexa or Google Home: a simple Q&A interface.
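
As a rough illustration of the instance-selection piece, the sketch below scores candidate compute targets against a rendering job’s deadline, estimated cost, and predicted failure risk. The job and instance attributes, the candidate list, and the weighting are assumptions made up for this example; a production “AI QB” would learn them from historical telemetry rather than hard-code them.

```python
# A minimal sketch of the instance-selection piece of a hypothetical "AI QB".
# The RenderJob/Instance fields, candidate list, and scoring weights are all
# invented for illustration.
from dataclasses import dataclass

@dataclass
class RenderJob:
    gpu_hours: float       # estimated compute requirement
    deadline_hours: float  # how soon the frames are needed
    failure_risk: float    # predicted probability of failure on preemptible capacity (0-1)

@dataclass
class Instance:
    name: str
    gpu_hourly_cost: float
    relative_speed: float  # 1.0 = baseline render node
    preemptible: bool

CANDIDATES = [
    Instance("on_prem_farm", gpu_hourly_cost=0.0, relative_speed=0.6, preemptible=False),
    Instance("cloud_standard", gpu_hourly_cost=2.5, relative_speed=1.0, preemptible=False),
    Instance("cloud_preemptible", gpu_hourly_cost=0.8, relative_speed=1.0, preemptible=True),
]

def pick_instance(job: RenderJob) -> Instance:
    """Choose the candidate with the best expected price/performance for this job."""
    best, best_score = None, float("-inf")
    for inst in CANDIDATES:
        runtime = job.gpu_hours / inst.relative_speed
        if runtime > job.deadline_hours:
            continue  # this candidate cannot meet the deadline
        expected_cost = runtime * inst.gpu_hourly_cost
        # Penalize preemptible capacity when the predicted failure risk is high.
        risk_penalty = job.failure_risk * 100 if inst.preemptible else 0.0
        score = -(expected_cost + risk_penalty)
        if score > best_score:
            best, best_score = inst, score
    return best or CANDIDATES[0]  # default to the on-prem farm if nothing meets the deadline

urgent_job = RenderJob(gpu_hours=40, deadline_hours=48, failure_risk=0.3)
print(pick_instance(urgent_job).name)  # -> "cloud_preemptible" with these toy numbers
```

Even this crude scoring rule captures the trade-off the paragraph describes: cheap preemptible capacity wins only when the predicted failure risk, and therefore the cost of re-running the job, stays low.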

Beyond cost savings, this “AI super-agent” would allow the entire ecosystem to be managed safely and centrally, giving the senior technical executives overseeing the New Normal’s “Internet of Production” near-complete and near-real-time control of their costs, security architecture, and use cases. Even better: this expansive vision can be broken down into bite-sized steps, each of which would create substantial value. Start with visualizations and centralized management: simple visualization and reporting tools could give admins and senior executives the ability to centrally monitor assets and compute resources in near-real time. Media organizations can then gradually add intelligence through quick builds and quick wins, with AI coming into play once the architecture has proven its value. But always keep AI at the forefront of architecture and systems design, as it is the inevitable endgame.
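
To give a sense of how small that first step can be, the sketch below rolls a toy asset inventory up into a per-provider summary of the kind an admin dashboard would display. The inventory rows and field names are invented; in practice they would come from each provider’s storage and billing APIs.

```python
# A sketch of the "start with visualizations" step: roll a flat asset inventory
# up into a per-provider summary an admin could scan in near-real time.
# The inventory rows and field names are invented for illustration.
from collections import defaultdict

inventory = [
    {"asset": "ep101_conform.mxf", "provider": "cloud_a", "size_gb": 820, "monthly_cost": 19.0},
    {"asset": "ep101_archive.tar", "provider": "cloud_b", "size_gb": 2400, "monthly_cost": 9.6},
    {"asset": "ep102_dailies.mov", "provider": "cloud_a", "size_gb": 1300, "monthly_cost": 30.2},
]

summary = defaultdict(lambda: {"assets": 0, "size_gb": 0, "monthly_cost": 0.0})
for row in inventory:
    s = summary[row["provider"]]
    s["assets"] += 1
    s["size_gb"] += row["size_gb"]
    s["monthly_cost"] += row["monthly_cost"]

for provider, s in sorted(summary.items()):
    print(f'{provider}: {s["assets"]} assets, {s["size_gb"]} GB, ${s["monthly_cost"]:.2f}/month')
```

Nothing here is “intelligent” yet, and that is the point: the same unified inventory that feeds a simple report is the substrate the AI QB would later reason over.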

3. Powering the Golden Age of Distribution

As the pandemic widens the gap between direct-to-consumer “haves” and “have-nots,” it’s worth remembering that distribution is the area of media where AI and machine learning hold the most promise. This is a truly unique time for OTT companies to not just acquire new audiences, but lay the groundwork for a real “relationship” with them. After all, the intensified competition between OTT providers dictates that time spent in app dominates all other metrics. Not to mention the potential of OTT to increase monetization of content catalogs.

But like any other relationship, the one between audiences and distributors is based on trust. Trust that OTT providers will develop as granular an understanding as possible of their audiences’ tastes, i.e., that recommender systems will navigate as closely as possible to that “cognitive relationship” between audiences and media content. This means going above and beyond traditional content recommendation models and digging far deeper into which attributes of content resonate in what way with which audience segments. This is a job for AI, not just machine learning.

We’re still in the early days of even being able to extract metadata attributes of content to use as variables in recommender system modeling, and this is where a big machine learning effort needs to be directed. Until we can successfully use machine learning to extract and index attributes like shot types, emotional tonalities, character arcs, narrative domains, and edit cuts, and until we can use AI methods (knowledge representation) to semantically connect all of these data points in context, content recommendation will still look like taking potshots in low light.
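
To illustrate where that effort leads, here is a minimal content-based recommendation sketch that assumes the extraction problem is already solved: each title is described by a handful of attribute scores, and unseen titles are ranked by similarity to a taste profile built from watch history. The attribute names, titles, and numbers are invented for illustration only.

```python
# A minimal content-based recommendation sketch, assuming the attribute
# extraction problem described above is already solved. The attribute names,
# titles, and scores are invented for illustration only.
import numpy as np

# What each dimension of a title vector encodes in this toy example.
ATTRIBUTES = ["close_up_ratio", "dark_tonality", "fast_cutting", "ensemble_cast"]

catalog = {
    "thriller_a": np.array([0.7, 0.9, 0.8, 0.2]),
    "drama_b": np.array([0.8, 0.4, 0.2, 0.6]),
    "comedy_c": np.array([0.3, 0.1, 0.5, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(watch_history: list[str], k: int = 2) -> list[str]:
    """Average the watched titles into a taste profile, then rank unseen titles by similarity."""
    profile = np.mean([catalog[t] for t in watch_history], axis=0)
    unseen = [t for t in catalog if t not in watch_history]
    return sorted(unseen, key=lambda t: cosine(profile, catalog[t]), reverse=True)[:k]

print(recommend(["thriller_a"]))  # -> ['drama_b', 'comedy_c'] with these toy vectors
```

The modeling step is trivial; the hard, and valuable, work is producing attribute scores like these at catalog scale, which is exactly the extraction and knowledge-representation effort described above.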

But this era is fast approaching. And the current pandemic should even accelerate these efforts. In AI as in other domains, there are only two possible speeds: “idle” and “pedal to the metal.” And this is definitely the time when the idle gets devoured.

To dive deeper into this issue, and more generally how technology in media is being upended by the pandemic, join the field’s leading experts and practitioners for an exclusive ETC-Equinix webcast on April 30.
