This is a three-part monthly series about the role software will play in Virtual Reality storytelling, seen through the lens of Adobe Research and creators.
Part 1: Inside Adobe’s Head(set)
You stand near a white picket fence, feet away from a cliff near Pigeon Point Light Station, on the Pacific coast south of San Francisco. Sightseers mill and read signs in bright blue daylight, gazing at the 145-year-old matte-white lighthouse rising over clapboard buildings. You turn around to see waves lapping on the rocky shore, gulls swooping overhead, streaky horizon clouds hovering beneath the noon sun on this pleasant California day. Wait. Nope. Now you stand on a small airport tarmac, watching four people examine a bright-yellow vintage propeller biplane.
That jarring transition needs to be fixed. You pull off your Oculus headset and now you are, actually always were, in an Adobe SF conference room.
“What have you learned about transition?” you ask Brian Williams, senior computer scientist for Premiere Pro at Adobe.
“Uhhh,” he hesitates, as his colleagues in the conference room chuckle. Transitioning a half-globe of visual information is, it turns out, pretty tricky to do well. “Okay, so, 80 percent of most effects are dissolves, color and titles. A horizontal wipe will work; a vertical wipe is going to look funky.”
“Star wipes,” jokes Laura Williams Argilla, Adobe director of Services and Workflows for Creative Cloud Video.
“Oh God,” Williams says.
Bronwyn Lewis, Adobe product manager for Video Editing, brings up Corridor Digital’s “Where’s Waldo” VR video. “They had this wipe, it was like a diagonal wipe, from the sky.”
You could not have had this conversation with the Creative Cloud Video Team, or very many people at all, even two years ago.
“Making VR content actually requires a huge amount of technical skill,” Williams Argilla says. “That often conflicts with creative intent, because you have to have both skill sets. So Brian is making sure that the ability to create content doesn’t exclude people who are more creative than technical.”
For filmmakers, the blending of creative and technical aptitude has been beneficial. But there’s a limit, one with more elements than the kind of rigs you use, or the headsets that footage will eventually populate. In the center of all of that, there is software. Until recently, the pioneers of Virtual Reality storytelling, especially live action, were using the digital equivalent of baling wire and duct tape to tell their stories. For the Video Team, it was hearing multiple times that video creators were using Premiere to edit VR that spurred them into action. Turns out it was not the easiest sell.
“I think there’s a lot of hesitance to invest in new platforms,” Williams Argilla says. “People remember everyone running toward 3D TV and that never took off.”
That meant the team, especially Brian Williams, solved problems in the crevices of the workday and well beyond.
Last April, Adobe announced it would release VR editing capabilities into its Premiere Pro software. The new capabilities include auto-detection of VR, and affordances for the editor to assign properties to the sequences, track the head-mounted display and seamlessly publish to specific platforms, such as YouTube and Facebook. You can’t blame any company for wondering about the future of VR. So what changed?
“I think it’s when Brian started making stuff,” she says. “What did it for me was the enthusiasm around VR and spherical content from people who aren’t held to other companies, like big media companies, it’s that kind of groundswell.”
From YouTubers to Hollywood start-ups, a broad and independent coalition believes you will put on a headset, or perhaps Augmented Reality lenses, to consume a new kind of story. This can scale quickly, especially because a smartphone and an affordable, decent headset are all you need to jump into that world.
But to even get this far, the team had to be comfortable with a whole list of ifs.
If audiences are going to be interested in Virtual and Augmented Reality stories, beyond the initial novelty, really good narratives must draw them in like any other media.
If filmmakers are going to create those great immersive stories, they need to put their energies into inventing new possibilities for the headset.
If that is going to succeed, an even wider range of creators, both professionals and enthusiasts, must experiment with, and ultimately deliver, content that audiences need to consume and want to discuss.
If creators are going to do that, they need intuitive software that will enable experimentation and iteration close to real-time and at high capacities.
If all of that happens, software will become, as it so often is, a quiet center of the Virtual and Augmented Reality revolution. Adobe and its partners would want to be there for that, of course. As one of the leaders of a powerful crossover market (enthusiast to professional editors), the company sees easing users from the flat screen to a spherical one as a way to keep pace with a rapidly changing creative need.
But you know, that’s only a small slice of the “ifs.”
You have been reading a lot about VR. If you’re a longtime tech nerd, you’ve been reading about it, on and off, for nearly three decades now. If you think in terms of panoramic imagery, you’re looking back centuries.
“The problem was that few people could experience it,” Jaron Lanier, often dubbed the “founding father” of VR, told New Scientist about the early days of the 80s and 90s. “The decent set-ups were insanely expensive.”
But Lanier, speaking four years ago, could also foresee this current set of hurdles: “There are two problems: hardware and software. The hardware problem is solving itself as the costs of components come down naturally. The software problem is of a different order. Virtual worlds have to respond quickly enough for human users, yet need to be shared by multiple people connecting over imperfect networks. It will take a while to sort that out.”
Gavin Miller, head of Adobe Research, was trying to sort some of that out in the early 90s. He was, in fact, working on some of the very issues Lanier is referring to in the quote above. But there’s more than one hill to climb when stepping into a new visual reality.
“For these media to really catch on, there needs to be an authoring process that can scale up to the long tail of content,” Miller says. “There has to be a distribution medium and a business model. And there have to be devices for consuming it that are an order of magnitude better than traditional media, because traditional media is already great. And maybe even be more optimal for a small screen. This is another shot at it. And doing this for video is a natural follow-on. So, 20 years later, it’s coming back for another shot.”
Gavin Miller, head of Research. Courtesy of Adobe.
Miller understands that dreaming of future worlds is fine, and has been a cottage industry for writers for generations. It’s something that the robotics hobbyist (as in snake robots, so only click if you want to merge two fears), creative writer and longtime researcher has spent plenty of time imagining. But when you work for a corporation, at some point the living, breathing customers of today must matter.
“One thing that’s hopeful is there are lots of players, rather than one or two people making an investment,” he says. “I think the twin evolution of 360 photography and 360 viewers together may mean that one survives in its current form. Ultimately, in the long run, in AR in particular, there will be these alternate realities that are geo-tied to the real world. How you experience it will be multi-modal based on what devices you have on you. … If we cross the chasm to the point where this is really going to be huge, then there will be this ‘meta-verse,’ or however you want to name it, which is published and populated by a large number of companies and experienced by regular devices. The question is, ‘Is that time now?’”
The future of technology, so say the sages, is invisibility. You might think of the “space” as an “infosphere,” as philosopher Luciano Floridi termed it, in which we live as a “transmediated self,” in the words of J. Sage Elwell. The point being that digital reality and physical reality are merging. Technology has cozied up to us, surrounds us and, increasingly, might even enter us. Virtual reality feels like the motion goes the other way: We enter technology.
Like storytelling in any era, the effect rests on a mental trick that tells your brain, “You are here.” For game developers and other Computer-Generated Imagery storytellers, the trick rests on verisimilitude: Quasi-worlds must feel real even as their possibilities expand beyond our normal powers. For live action VR to take off, the trick is a little different. It’s about replicating real spaces, from the spectacular to the everyday, and rendering them real again. And then potentially doing something more than storytelling: story-worlding.
You might be surprised how much software engineers contemplate these things.
Take “presence” as an example. It is among the most common terms you hear when talking about VR. Presence, the feeling of “being” in a mediated space, is the wow factor of virtual reality. It leads to another key term, empathy, which many storytellers believe will make audiences stay. But there are challenges with presence that cameras and headsets cannot solve alone.
Brian Williams shows you a feature in Premiere that has been there for a long time, called the Offset effect, which pans the image within a clip, changing its center. It’s a somewhat obscure effect, usually used for a cool transition when mixed with the same clip and some level of transparency. But it turns out to work very well for 360-degree images, allowing editors to change the “True North” of the image, the place a viewer looks first.
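Under the hood, that re-centering is just a horizontal pixel shift with wraparound: an equirectangular frame’s width spans a full 360 degrees of yaw, so rolling the columns rotates the whole sphere about its vertical axis. A minimal sketch in Python (the function name and the NumPy approach are illustrative assumptions, not Adobe’s implementation):

```python
import numpy as np

def offset_true_north(equirect: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate an equirectangular 360 frame about its vertical axis.

    `equirect` is an H x W x C image whose width spans 360 degrees of yaw.
    Shifting columns horizontally, with wraparound, re-centers the image,
    which changes the "True North" a viewer faces when playback starts.
    """
    width = equirect.shape[1]
    shift = int(round(degrees / 360.0 * width))
    # np.roll wraps pixels that fall off one edge back onto the other,
    # so nothing is cropped; the sphere simply spins.
    return np.roll(equirect, shift, axis=1)
```

Because the shift wraps, offsetting by a full 360 degrees returns the original frame, which is why the effect feels like spinning the viewer rather than cropping the image.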
You are playing with the offset control on Premiere, in the Adobe SF conference room, while another person in the room wears the headset to see what you’re doing. When you rapidly change the offset, she yells, “Oh geez!” It’s like you’ve pulled the rug out from under her feet. Everybody laughs.
“I’ve never felt such power in my life,” you say.
“This is why, most of the time, the camera has to stay stationary,” Williams says.
We all have to try it now, putting on the headset and whipping the offset. It’s like being spun. This is an intense sense of presence, and it reminds you, as the editor, how much everything has changed. In traditional video, a shaky picture and a rapid series of frame cuts create a sense of orderly disorientation, but in VR it’s like falling, or worse, freaking out. So VR storytelling might need to slow things down.
An even bigger issue is “framing,” something that was once in the storytellers’ hands. True North is a way to begin the journey for the viewer, but the rest is up to you. Presence, as a novelty especially, comes with the agency to look in any direction. Where that agency ends, and storytelling begins, is an interesting tension for the editor.
“If there are critical portions of the plot that are visual,” Miller says, “and you haven’t looked that way yet to experience them, we’re going to need linger or idle modes. The scene still seems live, the water still flows, the trees still sway, but it will have a cognitive model of you to say, ‘You’ve experienced that plot point, so now we can move on.’”
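That “cognitive model of you” could start as something much simpler than it sounds: a dwell-time check on the viewer’s head yaw, advancing the scene only once each visual plot point has sat inside the field of view long enough. A hypothetical Python sketch (all names, the 90-degree field of view, and the dwell threshold are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class PlotPoint:
    yaw: float           # direction of the plot point, in degrees
    dwell_needed: float  # seconds the viewer must look at it
    seen: float = 0.0    # accumulated viewing time so far

def angular_distance(a: float, b: float) -> float:
    """Smallest difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def update(point: PlotPoint, viewer_yaw: float, dt: float,
           fov: float = 90.0) -> bool:
    """Accumulate dwell time while the point sits inside the viewer's
    field of view; return True once the scene may advance."""
    if angular_distance(viewer_yaw, point.yaw) <= fov / 2.0:
        point.seen += dt
    return point.seen >= point.dwell_needed
```

A real player would feed `update()` the headset’s yaw every frame; the linger mode Miller describes is then just the scene looping, water flowing and trees swaying, until every gating plot point returns True.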
This leads to another tension. Presence, in the VR sense, happens with an augmentation, the headset. But story, in itself, has transportive powers too. You don’t even need visuals. A good book, blended with the powerful storyworld builder that is your brain, can take you to places you’ve never been and may not even exist. Do these two kinds of transportation compete?
“It’s going to require a retooling of some sort,” Miller answers. “Maybe the right way to think of it is that when you’re thinking of a story, it doesn’t come out linearly, right? You think of forces, and things you want to happen, then internally we translate that into this linear string. We’re going to have to come up with new representations for those original thoughts and the arc that you want the recipient to go through, in terms of seeing the characters evolve.”
But there’s that troublesome question of your own movement, as the audience member. If a character who interests you walks out of view, or if there’s action you’d like to be closer to, you realize then that you can’t move. So, at least for now, VR storytellers must either create a sense of forward motion or divert our attention away from the need.
Editing matters in all of these challenges. Leaving that sphere for a flat-earth editor each time you need to edit limits the flow of thought. For now, some companies have at least allowed editors to keep the headset on, so the toggle between flat and global is less onerous. But one researcher at Adobe has been thinking about another idea.
Adobe MAX 2016 is a spectacle in general: the giant exhibition floor, the concert by Alabama Shakes, keynote sessions with creative giants such as Quentin Tarantino, photojournalist Lynsey Addario, sculptor Janet Echelman and designer Zac Posen. The Sneaks are another order of spectacle. Yes, there was celebrity on hand, with Jordan Peele as the MC, but the glitz and glamor generally give way to the geeks, who awe and inspire with experimental new tools and digital tricks. When one wows — such as putting words into people’s digitized mouths simply by typing those words — the crowd goes wild.
Stephen DeVerdi, a senior research scientist, walked before that crowd of several thousand and, in some ways, promised them an experience many never knew they might need someday. He presented #CloverVR, which gives you the power to edit inside the 360-degree environment.
Source: Forbes