I decided to buy a Daydream virtual reality headset, as I already have a phone that supports it. It’s basically a face mask with a slot for your phone: when you put it on, an app uses the phone’s display to generate a virtual world around you. The Daydream hardware is very well-built, especially for something that costs $100. A good phone display actually has a higher resolution than the displays in the Oculus or Vive headsets, although the phone can’t power super-detailed game graphics the way those headsets can, since they’re hooked up to beefy desktop GPUs.

The YouTube VR app is well-done, and unlike some skeptics, I think immersive VR videos have a lot of potential to be better than normal videos. However, I was disappointed with the lack of good content so far. Given the hundreds of millions recently spent on VR hardware, I’d have naively expected more attention to be given to VR movies. Most of the content that does exist is of terrible quality, which is a shame, since it’s vastly easier to improve content than to improve screens or lenses or compression algorithms. In case anyone wants to fill this market hole, I figured I’d write up some tips on how to do it well, at least from my perspective:

1) Don’t think like a conventional film or video editor – minimize the number of cuts, especially abrupt cuts. The best Hollywood movies look like this clip from Lord of the Rings:

This seven-minute video is really made up of a hundred and sixty independent shots, none longer than fifteen seconds or so, stitched together in editing to create a fun overall experience. In theaters this works great, but in VR it would be very jarring because of the constant perspective switches. In real life, unless you’re trapped in some kind of cosmic horror story, people don’t walk around being uncontrollably teleported from place to place to place every ten seconds. When cuts are needed, they’re much less disruptive if people, objects, and the background at least roughly stay in place before and after the cut. When scene changes are needed, they’re easier to handle if there are fades or transition periods to smooth them out.

2) Likewise, don’t mix clips of regular video into the VR/immersive video. This includes intro sequences, commercials, screenshots, and so on. With some software magic they can be stretched to cover a full sphere, but the experience probably won’t be good, because of the rapid cuts such videos use. Many of these shots also aren’t physically realistic, e.g., there is no real way to make a 2D cartoon drawing “feel” like a virtual world. Luckily, there is a trick one can use when it’s really needed: project the conventional video or image onto a flat screen embedded within a larger 3D virtual environment. This is what Netflix does for their VR app, and it works much better than just stretching out the video file.
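For anyone building the playback side rather than just shooting footage, here’s a minimal sketch of that virtual-screen trick using three.js. The clip name, screen size, and placement are invented for illustration, and the WebXR session setup (the “Enter VR” button, controllers, and so on) is omitted:

```typescript
import * as THREE from 'three';

// A dim 3D "room" that will surround the flat virtual screen.
const scene = new THREE.Scene();
scene.background = new THREE.Color(0x101018);

// An ordinary (non-360) video used as a texture source.
const video = document.createElement('video');
video.src = 'intro-clip.mp4'; // hypothetical conventional 2D clip
video.loop = true;
video.muted = true;
video.play();

// A 16:9 plane a few meters in front of the viewer acts as the screen.
const screen = new THREE.Mesh(
  new THREE.PlaneGeometry(3.2, 1.8),
  new THREE.MeshBasicMaterial({ map: new THREE.VideoTexture(video) })
);
screen.position.set(0, 1.6, -3); // roughly eye height, 3 m in front
scene.add(screen);

const camera = new THREE.PerspectiveCamera(
  70, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true; // WebXR session handling not shown here
document.body.appendChild(renderer.domElement);
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

The point is simply that the 2D content stays flat and bounded, while the viewer keeps a stable 3D space around it to look at.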

3) For similar reasons, be careful not to abruptly reset the viewing angle during a cut. Part of the experience of VR is that the viewer can turn their head to look at something; don’t suddenly snap the video’s perspective back to the “front”, as this kills immersion.

4) Always film everything in at least 4K resolution. For normal videos, 4K has a reputation as a frippery, both because 4K screens are expensive and because the extra resolution doesn’t buy you much quality. However, in a VR video, each pixel has to cover a much wider slice of your view at any given resolution. Right now, my laptop screen is about 50 cm from my eyes, and it has an area of about 540 cm^2. The sphere around my head at that distance has an area of 4π × 50^2 ≈ 31,000 cm^2, or about sixty times the screen size. Therefore, a 720p video looks fine on YouTube, but in VR each pixel gets stretched out to cover roughly sixty times the area, making the image extremely blurry despite being “high definition”. 1080p (“Full HD”) still looks blurry, and even 4K looks more like standard-definition video than “HD crisp”. Fortunately, 4K 360-degree cameras have gotten much cheaper, with many models now available for a few hundred dollars.
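If you want to check that arithmetic, here’s a quick back-of-the-envelope calculation. The laptop dimensions are approximations for a 16:9 panel of about 540 cm^2, and “pixels per degree” is just a rough proxy for perceived sharpness:

```typescript
// Rough sanity check of the numbers above; all values are approximate.
const viewingDistanceCm = 50;
const screenAreaCm2 = 540;                                  // ~31 cm x 17 cm, 16:9 panel
const sphereAreaCm2 = 4 * Math.PI * viewingDistanceCm ** 2; // ≈ 31,400 cm^2
const stretchFactor = sphereAreaCm2 / screenAreaCm2;        // ≈ 58x

// Horizontal pixels per degree of viewing angle.
const screenWidthCm = 31;
const screenWidthDeg =
  (2 * Math.atan(screenWidthCm / 2 / viewingDistanceCm) * 180) / Math.PI; // ≈ 34°

const flatHd = 1280 / screenWidthDeg; // 720p filling the laptop screen: ≈ 37 px/deg
const vr720 = 1280 / 360;             // 720p spread over a full 360°:   ≈ 3.6 px/deg
const vr4k = 3840 / 360;              // 4K spread over a full 360°:     ≈ 10.7 px/deg

console.log({ stretchFactor, flatHd, vr720, vr4k });
```

Even before worrying about lens distortion or the headset’s own panel, a 4K 360° video delivers only about a third of the pixels per degree that a plain 720p video does on a laptop, which is why it reads as “standard definition” inside the headset.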

5) When practical, make videos longer than standard YouTube clips. A typical YouTube viewer is surrounded by distraction, from the other browser tabs on the screen, to any other people in the room, to any noises or alerts that might pop up on their monitor or phone. Therefore, viewers start dropping out from distraction if videos get longer than a few minutes or so. (And this goes double for Facebook, Lord have mercy.) In VR, of course, all distractions are blocked out, and the only thing you see and hear is the video itself. Hence, it’s good to give the viewer time to get immersed in the scene, rather than yanking them out after a minute or two.

6) For similar reasons, it’s usually good to err on the side of a smaller number of longer scenes, rather than lots of short scenes interspersed with each other.

7) Viewers will experience the scene as if their virtual “body” were attached to the camera, with their head near where the camera lenses are. It’s good to make sure that the position of this “body” would make sense for someone actually in that situation. For example, people don’t go on amusement park rides strapped to the front of a roller coaster. They don’t go on cruises on top of a pole twenty feet above the boat deck. They don’t hang off the underbelly when they take a helicopter ride, and so on.

The good news is that because current standards are so low, it’s super easy to do better. Honestly, in my opinion, putting a good VR camera in Ohlone Park, pressing On, and recording the dogs playing for ten minutes would be better than 90% of existing content. If anyone reading this wants to make a VR video, I will volunteer to watch it, regardless of why or what it’s about.