Imaging Experts Offer Glimpse of What’s to Come

From left: Michael Chambliss, Gavin Miller, Steve Sullivan, Andrew Shulkind and Jon Karafin

Tuesday’s Super Session “Next Generation Image Making — Taking Content Creation to New Places” brought together an eclectic group of technologists to preview image-creation technologies now in development and to discuss how those capabilities might change visual storytelling as we know it.

Moderator Michael Chambliss, tech expert and business representative for the International Cinematographers Guild, guided the discussion with Dr. Gavin Miller, who leads research at Adobe; Dr. Steve Sullivan, head of Microsoft’s Holographic Video initiative; cinematographer Andrew Shulkind (whose credits include the feature film “The Vault” and the virtual reality thriller “Gone — VR 360”); and Jon Karafin, CEO and founder of Light Field Lab.

“We have lived in the age of machines,” Miller declared early on, “but now is the beginning of the age of machine thought.”

Miller, whose team at Adobe explores and develops a wide range of technologies, screened a presentation on efforts only just beginning to appear in Adobe’s Creative Suite: algorithms that use semantic segmentation to search vast image libraries for images matching very specific criteria. Trained on massive numbers of test images with human input (from Adobe interns and some crowd-sourced effort), the algorithms his group creates “learn” from patterns and become increasingly effective at identifying objects, so that an end user can search a store of images and, for example, bring up every frame with a person on the left side and a dog on the right. The capability, Miller said, could be extremely useful in preproduction for creating storyboards and previsualizing scenes, and in the future it could support production itself. Realistic-looking sky replacement, and even far more elaborate image manipulation, could be accomplished with a keystroke, Miller predicted.
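To make the idea concrete, here is a minimal, hypothetical sketch of that kind of spatially aware search: given per-image object labels and normalized bounding boxes (the sort of output a semantic-segmentation model produces), a library can be filtered for frames with a person on the left and a dog on the right. The data structures and function names below are illustrative assumptions, not Adobe’s actual tools.

    # Illustrative only: a toy spatial query over pre-computed segmentation results.
    # Each detection has a "label" and a normalized "box" (x0, y0, x1, y1).

    def centroid_x(box):
        x0, _, x1, _ = box
        return (x0 + x1) / 2.0

    def matches_query(detections):
        # True if any person sits in the left half and any dog in the right half.
        person_left = any(d["label"] == "person" and centroid_x(d["box"]) < 0.5
                          for d in detections)
        dog_right = any(d["label"] == "dog" and centroid_x(d["box"]) >= 0.5
                        for d in detections)
        return person_left and dog_right

    library = {
        "beach.jpg": [{"label": "person", "box": (0.05, 0.2, 0.30, 0.9)},
                      {"label": "dog",    "box": (0.60, 0.5, 0.85, 0.9)}],
        "park.jpg":  [{"label": "dog",    "box": (0.10, 0.4, 0.30, 0.8)}],
    }

    hits = [name for name, dets in library.items() if matches_query(dets)]
    print(hits)  # ['beach.jpg']

The real systems Miller described learn the labels and regions themselves from training data; the sketch only shows how such metadata, once produced, supports the kind of query he demonstrated.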

“These [technologies] are game-changers,” Shulkind said. “So much of our time [as filmmakers] is spent on greenscreen photography and sky replacement.” The ability to automate that type of work, he noted, could allow people to use more energy for creative storytelling.

Sullivan, who came to Microsoft following a successful tenure as director of R&D and then chief technical officer at ILM/Lucasfilm, ran a demo reel of the work being done in capturing and displaying holographic imagery at his company’s large studio in Redmond, Wash., where performer likenesses and movements are captured by multicamera rigs, transformed into data and placed into backgrounds shot with 360-degree capture. “We’re out of the science project stage,” Sullivan said. “This is a turnkey operation.”

The discussion then pivoted to the important question of how this kind of technology will actually be used. What will the content be? “We thought telling stories this way was easy,” Sullivan recalled, “but that’s really been a tough nut to crack.”

“Audiences and creators want more interactive engagement,” Shulkind added, relaying his own experience with interactive shows. “I think 15 years from now, passive experiences” — movies and TV shows as we know them — “will be a quaint thing we used to do.”

But, Shulkind mused, what does this mean in terms of production? “How will we shoot all the multiple versions [of the story that will be required]?”

Returning to the technology itself: Karafin’s Light Field Lab goes beyond other photographic methods by capturing light rays from multiple angles and then computing and reproducing that illumination and spatial data in virtual space. The approach has been eagerly anticipated by multiple industries. Karafin laid out what holographic imagery is and isn’t. It needs to be “truly 3D, so you can see accurate perspective and move around” the objects it’s showing. It isn’t something that only appears 3D from one particular point of view. It isn’t the projection systems used to resurrect artists like Tupac Shakur and Michael Jackson. “VR and AR today are not holographic.”

He discussed tools his company is developing to create imagery that meets all the criteria and requires no glasses. This involves displays with more than a gigapixel of resolution and formats that can fit inside commercially available bandwidth — something he expects to be a reality in a matter of years, not decades.
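A rough back-of-the-envelope calculation (these are illustrative assumptions, not Karafin’s figures) shows why the delivery format matters as much as the display itself: an uncompressed gigapixel stream is far larger than any commercial link, so the content has to ship in a heavily compressed or computed form.

    # Back-of-the-envelope: raw data rate of a 1-gigapixel display (illustrative assumptions).
    pixels = 1e9            # one gigapixel
    bits_per_pixel = 24     # 8-bit RGB, ignoring any per-ray/angular data
    frames_per_second = 60

    raw_bps = pixels * bits_per_pixel * frames_per_second
    print(f"Raw stream: {raw_bps / 1e12:.1f} Tbps")            # ~1.4 Tbps

    link_bps = 1e9          # a gigabit consumer connection
    print(f"Compression needed: ~{raw_bps / link_bps:,.0f}:1")  # ~1,440:1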

When will we achieve the ability to capture, manipulate and deliver enough visual information so that an experience feels completely genuine to viewers? “We’ll have a visual Turing Test,” Miller speculated. “Is that a display or is it real life?”