How AI, ML Transform Media

The media and entertainment industry has long sought new ways to improve workflow efficiency through technology. For just as long, NAB Show has been the venue where the technologies fueling the drive for greater efficiencies have been presented and explored.

This year’s NAB Show promises to be no different, as it will serve as the focal point for the latest, and perhaps most significant, tech buzz in years when it comes to workflow efficiency for broadcasters, filmmakers and other content creators.

That trend — the rise of artificial intelligence, machine learning and machine intelligence for media applications — promises to reduce or eliminate burdensome tasks humans must currently undertake, such as creating closed captions and subtitles with powerful speech-to-text algorithms; reshape how common post-production tasks, such as editing, visual effects and animation, are done; and even change how the brands that support the media industry with their ads can optimize spending and increase revenue by relying on AI for better ad targeting.

Amazon Web Services will explore the latest cloud-based media workflows combined with advanced machine learning to deliver next-level, immersive viewing experiences during a three-part keynote on Wednesday at 11 a.m. on the NAB Show Main Stage.

Sessions within the Next-Generation Media Technologies and the Broadcast Engineering and Information Technology Conferences will tackle these game-changing technologies from their own unique perspectives. “AI will be featured in the [Broadcast Engineering and IT Conference] this year to a large extent,” said NAB’s Vice President of Technology Education and Outreach Skip Pizzi. “Three separate AI sessions appear on the program. One covers AI’s use in the content creation and media production environments. Another considers how AI can be used to assist media delivery systems, and a third looks specifically at AI’s use in closed captioning.”

The BEITC’s AI-related sessions devoted to content creation and media production will be held Wednesday. They include: “How Advances in AI, Machine Learning & Neural Networks Will Change Content Creation,” a look at automated content creation without human intervention; “How AI Will Take Productivity in the Broadcast Industry to the Next Level,” an examination of how AI can overcome automation limitations, enhance the creative process and improve live programming; “AI-Driven Smart Production,” a look at using AI to extract useful information from big data to be presented to producers and to convert broadcast data automatically into forms that are more accessible, such as sign language CG animation for the hearing-impaired; and “Looking Ahead: How AI Is Powering the Intelligent Future of Video Editing.”

Two BEITC sessions tackle different components of media distribution. Enhancing the value of content to the advertising community has the potential to drive greater revenues. How to do that — whether through gathering granular knowledge about video topics to give advertisers insight into strategic placement or by better targeting of ads — will be the focus of the “AI-ding Advertising and Increasing Your ROI” session.
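As a rough, hypothetical illustration of the kind of analysis such a session might touch on (not a description of any presenter's system), the Python sketch below scores candidate topic tags for made-up program transcripts using simple TF-IDF weighting, the sort of metadata an ad-decisioning platform could match against advertiser categories.

```python
# Hypothetical sketch: derive rough topic tags from program transcripts so an
# ad-decisioning system can match segments to advertiser categories.
# The transcripts below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = {
    "seg_001": "The chef sears the salmon and plates it with a citrus glaze.",
    "seg_002": "Analysts debate electric vehicle range and charging networks.",
    "seg_003": "The team reviews playoff standings ahead of tonight's game.",
}

vectorizer = TfidfVectorizer(stop_words="english")
weights = vectorizer.fit_transform(transcripts.values()).toarray()
terms = vectorizer.get_feature_names_out()

# Keep the three highest-weighted terms per segment as candidate topic tags.
for seg_id, scores in zip(transcripts, weights):
    top = scores.argsort()[::-1][:3]
    print(seg_id, [terms[i] for i in top if scores[i] > 0])
```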

Following that session will be “Machine Learning for OTT: Improving Quality of Experience With Data,” which will examine how machine learning and centrally managed overlay networks can detect and predict quality of experience issues and improve streaming performance.
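The article does not detail the presenters' models, but a minimal sketch of the general idea, trained here on synthetic per-session metrics, shows how a classifier might flag streams headed for quality-of-experience trouble.

```python
# Hypothetical sketch: predict quality-of-experience problems from per-session
# streaming metrics. All features and labels below are synthetic; the actual
# models and data pipelines discussed in the session are not described here.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Columns: average bitrate (kbps), startup delay (s), dropped-frame ratio.
X = np.column_stack([
    rng.normal(4500, 1200, n),
    rng.exponential(1.5, n),
    rng.beta(2, 50, n),
])
# Label a session as "poor QoE" when bitrate is low or startup is slow.
y = ((X[:, 0] < 3000) | (X[:, 1] > 4.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```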

The BEITC takes on AI’s potential role in improving closed captioning in the Thursday session “How Can AI Elevate Your Closed-Captioning Solutions?” Among the areas to be investigated will be how AI can meet the challenge of creating accurate, synchronous and complete closed captions and its part in identifying background audio for description.
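To make the “synchronous” requirement concrete, here is a generic, hypothetical sketch (not any vendor's product) that groups word-level timestamps from a speech-to-text engine into SRT caption blocks; the input format is assumed for illustration.

```python
# Hypothetical sketch: turn word-level speech-to-text output into SRT captions.
# The (word, start, end) input format is assumed; real engines vary.

def to_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:00:03,250."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def words_to_srt(words, max_words=7):
    """Group (word, start, end) tuples into numbered SRT caption blocks."""
    blocks = []
    for i in range(0, len(words), max_words):
        chunk = words[i:i + max_words]
        start, end = chunk[0][1], chunk[-1][2]
        text = " ".join(w for w, _, _ in chunk)
        blocks.append(f"{len(blocks) + 1}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Example with made-up recognizer output.
words = [("welcome", 0.0, 0.4), ("to", 0.4, 0.5), ("the", 0.5, 0.6),
         ("evening", 0.6, 1.0), ("news", 1.0, 1.4)]
print(words_to_srt(words))
```

Production systems also have to handle punctuation, speaker changes and reading-speed limits, which is where the session's questions about accuracy and completeness come in.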

The Next-Generation Media Technologies Conference addresses machine intelligence on Tuesday, with six separate sessions. “We will look at machine intelligence technologies and their application to crafts, tasks and workflows, plus the tech’s potential to alter careers and company business models,” said Rochelle Winters, conference curator and producer.

For example, in his session “How Machine Intelligence Is Transforming Editorial,” film editor and USC School of Cinematic Arts professor Norman Hollyn will discuss how machine intelligence will affect today’s edit bay processes, as well as the type of edit bay staffing that will be required in the future.

Other sessions will explore the role of machine intelligence in animation and computer graphics, production planning and the evolution of content production.

The “From Dailies to Master: Machine Intelligence Comes to Video Workflows” session will examine how information that traditionally has been recorded only on paper during production will be captured and carried from dailies through post with the help of machine intelligence.

Jeff Kember of Google and Usman Shakeel of Amazon Web Services will address machine intelligence in separate sessions. Kember’s session will discuss the future of content production with AI and machine learning. Shakeel will focus on the application of AI in content creation from preproduction to post. In both instances, the cloud is a key enabler providing the processing to power AI for media applications.

“Most M&E machine intelligence processes are made possible by moving large portions of content creation into the cloud,” Winters said. “The two worlds fit together hand-in-glove and are likely to propel significant change in production and post.”

Beyond the conference sessions, attendees will find more opportunities to explore AI on the show floor.

The new A.I. Experiential Zone, hosted by NAB and produced by AWS Elemental, will enable 2018 NAB Show attendees to see firsthand how machine learning is transforming the media and entertainment industry. Located in the Central Lobby adjacent to the M.E.T. 360 Studio featuring “NAB Show LIVE,” this educational showcase will feature real-world applications and content workflows for automatic speech recognition, natural language processing, text-to-speech applications, deep learning-based image and video analysis, and more, powered by AWS machine learning services.
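As one hedged example of what such a workflow can look like, the sketch below calls two AWS machine learning services through boto3: Amazon Rekognition for image label detection and Amazon Transcribe for automatic speech recognition. The bucket, object and job names are placeholders, valid AWS credentials are required, and the specific demos in the zone may look quite different.

```python
# Hypothetical sketch of the kind of workflow showcased in the zone:
# label detection on a frame grab and speech-to-text on program audio
# using AWS machine learning services via boto3. Bucket, object and job
# names are placeholders.
import boto3

rekognition = boto3.client("rekognition")
transcribe = boto3.client("transcribe")

# Image analysis: detect objects and scenes in a frame stored in S3.
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-media-bucket", "Name": "frames/promo.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Automatic speech recognition: start an asynchronous transcription job;
# the finished transcript URI is returned by get_transcription_job once
# the job completes.
transcribe.start_transcription_job(
    TranscriptionJobName="example-captioning-job",
    Media={"MediaFileUri": "s3://example-media-bucket/audio/program.mp4"},
    MediaFormat="mp4",
    LanguageCode="en-US",
)
```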

At the Connected Media|IP Presentation Theater, Helge Høibraaten of Vimond Media Solutions will present the Tuesday session “AI — From Buzz to Bucks,” in which he will discuss ways AI can lower costs, reduce publishing time and create better experiences for editors and consumers. During a Wednesday session in the same theater, Renato Bonomini of ContentWise will present “The AI-Powered TV User Experience,” examining how new technology will improve the experience of media consumers.

While AI, machine learning and machine intelligence could change the workflows the industry relies upon daily, the full weight of the transformation won’t be felt immediately, said Winters.

“These are early days. At NAB Show, there will be the chance to look at what is available today and discuss where we’ll be in a year and five years out,” she said.