Jonah Tobias

 

Collaborating With Jonah

Whitney is a producer who specializes in science, education and natural history. With a strong scientific and medical academic background, Whitney excels at communicating dense technical information to audiences at every level. At Pixeldust, Whitney manages a team of editors, writers, animators and production coordinators. She writes, produces, develops, and directs visual effects for a wide range of non-profit, corporate and scholarly projects. Her work for PBS, NOVA and National Geographic has taken her from the tribal villages of Papua New Guinea to the bow of an Inuit whaling boat off the north coast of Alaska.

 

 
 

Future of VR

"Already in YouTube, we have hotspots. They’re clickable. So, it’s not a big leap to say that I can now make what is seemingly linear media recognize what I’m looking at. Think about it: it’s television that’s watching you back. It sounds creepy if you put it that way. "

 

Skillset

Marketing Strategy, Concept, Storyboards, Script, Production, Editing, Motion Design, Color Treatment, Audio Mix, Mastering

What are the differences between working in the medium of photography and in VR?

The composition of a good photograph uses the frame, whether it’s a composition in positive or negative space. You can’t just take a picture with really loose framing where the action is actually off in the corner; that’s the part you eventually crop out.

The same is true in VR. When you compose something for VR or you do a scene in VR, you have to say, what are the rules of this medium? “I have to actually use the 360 degrees of the space; otherwise, I’m better off telling the story with a traditional camera that has a frame.” There are certain things that VR is terrible at. It has no optics, essentially. It has no telephoto. So, if you think about those amazing things you see on National Geographic, like a lion taking down a zebra shot with a 400-millimeter lens – a crop – that will never work in VR, because VR depends on the rules of human perception. We don’t have telephoto lenses on our eyes. 

How can we interact with these virtual environments?

Now we have all this head-tracking data. We have 4K worth of media, but the viewer only sees a tiny crop of it, wherever they happen to be looking. That means that, at any given point, the app actually knows where the user is looking. The app knows that you’re looking at this spot over here in the 360 degrees, and it’s showing you those images.
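To make that idea concrete, here is a minimal sketch, in TypeScript, of how head-tracking yaw and pitch might be mapped to the point in a 4K equirectangular frame that the viewer is centered on. This is not any particular player’s API; the HeadPose structure, frame dimensions, and viewportCenter function are assumptions for illustration only.

```typescript
// A minimal sketch (not any specific player's API): mapping head-tracking
// yaw/pitch to the spot in a 4K equirectangular frame the viewer sees.
// HeadPose, the frame dimensions, and viewportCenter are illustrative assumptions.

interface HeadPose {
  yaw: number;   // degrees, 0 = straight ahead, positive = turning right
  pitch: number; // degrees, 0 = level, positive = looking up
}

const FRAME_WIDTH = 3840;  // assumed 4K equirectangular source
const FRAME_HEIGHT = 1920;

// Convert a head pose into the pixel the viewer's gaze is centered on.
function viewportCenter(pose: HeadPose): { x: number; y: number } {
  // Yaw wraps around the full 360 degrees of the panorama.
  const normalizedYaw = ((pose.yaw % 360) + 360) % 360;
  const x = (normalizedYaw / 360) * FRAME_WIDTH;

  // Pitch spans -90 (straight down) to +90 (straight up).
  const clampedPitch = Math.max(-90, Math.min(90, pose.pitch));
  const y = ((90 - clampedPitch) / 180) * FRAME_HEIGHT;

  return { x, y };
}

// At any given moment the app can react to where the user is looking.
const gaze = viewportCenter({ yaw: 45, pitch: 10 });
console.log(`Viewer is looking near pixel (${gaze.x}, ${gaze.y})`);
```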

Already in YouTube, we have hotspots. They’re clickable. So, it’s not a big leap to say that I can now make what is seemingly linear media recognize what I’m looking at. Think about it: it’s television that’s watching you back. It sounds creepy if you put it that way.
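Building on that, a small sketch of what gaze-aware hotspots could look like: testing whether the viewer’s current gaze point falls inside a defined region of the 360 frame. The rectangular hotspot model and hotspotUnderGaze function here are assumptions for the example, not YouTube’s actual hotspot system.

```typescript
// A minimal sketch, assuming a simple rectangular hotspot model (not
// YouTube's actual hotspot API): checking whether the viewer's current
// gaze point falls inside a clickable region of the 360 frame.

interface Hotspot {
  id: string;
  x: number;      // left edge in frame pixels
  y: number;      // top edge in frame pixels
  width: number;
  height: number;
}

function hotspotUnderGaze(
  gaze: { x: number; y: number },
  hotspots: Hotspot[]
): Hotspot | undefined {
  return hotspots.find(
    (h) =>
      gaze.x >= h.x &&
      gaze.x <= h.x + h.width &&
      gaze.y >= h.y &&
      gaze.y <= h.y + h.height
  );
}

// "Seemingly linear" media can react the moment the gaze lands on a region,
// with no click required.
const hit = hotspotUnderGaze({ x: 500, y: 960 }, [
  { id: "person-in-chair", x: 400, y: 800, width: 300, height: 400 },
]);
if (hit) console.log(`Viewer is looking at hotspot: ${hit.id}`);
```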

That’s a 360-degree story. I’ll look over here, and because I’m looking at the person in this chair, the software can trigger a different set of frames to play, so that person turns to me and speaks to me because I looked at them, the way it happens in the real world. Now we’re talking about actual interactivity.
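One way that branching behavior might be wired up, sketched under the same assumptions as above: if the viewer’s gaze dwells on a character’s hotspot long enough, the player swaps in an alternate set of frames. The dwell threshold, updateGaze loop, and playClip helper are hypothetical, not part of any real player.

```typescript
// A minimal sketch of the branching idea described above: if the viewer
// keeps their gaze on a character's region for a short dwell time, swap
// in an alternate clip where that character turns and speaks.
// The dwell threshold and playClip() helper are illustrative assumptions.

const DWELL_THRESHOLD_MS = 800; // how long a look must last to count
let dwellStart: number | null = null;

function playClip(clipId: string): void {
  // Stand-in for whatever the player actually does to switch footage.
  console.log(`Switching to clip: ${clipId}`);
}

// Called every frame with the hotspot (if any) currently under the gaze.
function updateGaze(hit: { id: string } | undefined, now: number): void {
  if (!hit) {
    dwellStart = null; // gaze moved away, reset the timer
    return;
  }
  if (dwellStart === null) {
    dwellStart = now; // gaze just arrived on the hotspot
  } else if (now - dwellStart >= DWELL_THRESHOLD_MS) {
    playClip(`${hit.id}-turns-and-speaks`);
    dwellStart = null; // avoid re-triggering on the same look
  }
}
```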