ATD Blog

Best Practices for Creating 360 Video

Wednesday, February 26, 2020

With the ever-increasing number of digital photos and videos being recorded these days, many of us have gained experience in framing and shooting media. However, the jump to immersive 360-degree media brings a new set of possibilities, challenges, and considerations.

Everything Is in the Shot

There is no "behind the camera." When shooting any scene, you need to either leave the room or make yourself part of the shot. Most of the cameras available today work with a companion smartphone app that lets you remotely take a photo or video, giving you the opportunity to hide close by, out of sight. Recently, I shot the front of our office and used a pole in front of the building to keep myself out of the scene.
[Image: Blackmon1.png — 360 shot of the front of the office]

Limit Movement of the Camera

When planning your shoot, try to let the person viewing the experience choose where they go. Use hotspots to allow for "teleporting" around an area or between scenes in a first-person perspective. Any disconnect between the camera's movements and the viewer's body movements can be disorienting at best and nauseating at worst. We've seen enough YouTube videos of grandma riding a roller coaster in VR to know that camera movement can be problematic in an immersive situation.

Camera Positioning

The camera should be four to five feet away from the subject. The uncomfortable feeling you get when someone is "in your personal space" is easy to re-create in VR and generally not what you want. Conversely, objects too far from the camera quickly fade out of view, so if detail is important, it needs to be close.
[Image: Blackmon2.png]
Equally important, the camera should sit slightly below the eye level of the subject being filmed.

Be aware of where stitch lines will fall in the final shot. If your camera has two fisheye lenses, objects that lie along the "prime meridian" between the lenses can be subject to distortion from the stitching process, so limit the movement that is visible to the camera in these areas.

Finally, even though the camera captures a full 360 degrees, its front still matters: it determines the initial view a person sees.

Lighting

Often, for standard photography, you will use additional lighting or a flash to illuminate an area that doesn't have good natural light.

The best solution is to have natural lighting in the scene, but that's not always possible. If there is an area that's important to light, try hiding extra lights behind other objects in the scene, or use small directional spotlights pointed toward the subject and away from the camera.

Resolution

Always capture your source material in 4K resolution or higher. Remember that 4K resolution is split between multiple lenses and spread over a full 360 degrees, so 4K images and video only give you roughly HD quality in each hemisphere. If the resulting video is too large or requires too much bandwidth and the subject matter doesn't need the detail, you can always downscale to HD later, but you can't go the other way.
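
If you do end up downscaling, a free tool like FFmpeg can do it from the command line. Here's a minimal sketch driving FFmpeg from Python, assuming FFmpeg is installed, the source is already a stitched 3840x1920 equirectangular MP4, and the file names are just placeholders:

    import subprocess

    # Downscale a stitched 4K (3840x1920) equirectangular video to HD (1920x960).
    # Assumes FFmpeg is on the PATH; the file names are hypothetical.
    subprocess.run([
        "ffmpeg",
        "-i", "office_tour_4k.mp4",   # source: stitched equirectangular 4K video
        "-vf", "scale=1920:960",      # keep the 2:1 equirectangular aspect ratio
        "-c:v", "libx264",            # widely supported H.264 encoder
        "-crf", "23",                 # reasonable quality/size trade-off
        "-c:a", "copy",               # leave the audio track untouched
        "office_tour_hd.mp4",
    ], check=True)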

After the Shot

One thing that most new 360 videographers don't realize is that, unlike 360 images, the raw video that comes off the camera isn't ready to use. When you first pull the video from the camera, it looks like this:
[Image: Blackmon3.png — raw dual-fisheye frame straight from the camera]
Each frame of the video consists of two fisheye views, one from in front of the camera and one from behind it. The video then needs to be run through the dedicated stitching software that comes with the camera, which matches the edges of the fisheye views together to form a sphere. When that process is done, you get an equirectangular image that looks like this:
[Image: Blackmon4.png — the stitched equirectangular frame]
Equirectangular is the actual format used for 360 video and images. It's a big word that simply means a full sphere mapped onto a rectangular surface, which is what gives the stretched appearance at the top and bottom of the image. Still images taken by the camera are only a single frame, so they're not as processor-intensive to convert to equirectangular format and can be stitched on the device itself.
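
If you're curious how that mapping works, here's a small sketch in Python (assuming a 3840-by-1920 frame): each column of pixels corresponds to a longitude around the sphere and each row to a latitude, and because every row holds the same number of pixels, rows near the poles end up stretched.

    import math

    # Map a pixel (x, y) in an equirectangular frame to a direction on the sphere.
    # Assumes a 3840x1920 frame; x runs left to right, y runs top to bottom.
    def pixel_to_direction(x, y, width=3840, height=1920):
        lon = (x / width) * 2 * math.pi - math.pi    # longitude: -pi .. +pi
        lat = math.pi / 2 - (y / height) * math.pi   # latitude: +pi/2 (top) .. -pi/2 (bottom)
        # Convert the angles to a unit vector pointing out from the camera.
        return (math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat))

    # The center of the frame is the "front" of the camera (longitude 0, latitude 0).
    print(pixel_to_direction(1920, 960))   # approximately (1.0, 0.0, 0.0)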

Video Compression

4K video generates incredibly large files that are hard to work with on your local machine, much less stream over the Internet or transfer to a phone.

Let's examine the case of uncompressed 4K video. To be considered 4K, each frame should be at least 3840 by 1920 pixels. Each pixel requires 24 bits to represent its color, and at 30 frames per second, that works out to:

3840 pixels × 1920 pixels × 24 bits per pixel × 30 frames per second ≈ 5.3 Gbit per second, or roughly 660 MB per second

At roughly 660 MB per second, a one-minute video would come to almost 40 GB. That would be an unusable amount of data coming off the camera for most people and certainly not something that could be used on the Internet.
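
As a quick sanity check, here is the same arithmetic in a few lines of Python:

    # Uncompressed 4K equirectangular video: 3840x1920 pixels, 24 bits per pixel, 30 fps.
    width, height = 3840, 1920
    bits_per_pixel = 24
    fps = 30

    bytes_per_second = width * height * bits_per_pixel * fps / 8
    print(f"{bytes_per_second / 1e6:.1f} MB per second")        # ~663.6 MB per second
    print(f"{bytes_per_second * 60 / 1e9:.1f} GB per minute")   # ~39.8 GB per minute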

Fortunately, the cameras do some compression before storing the footage. For example, a Ricoh Theta V produces approximately 430MB of 4K video for each minute recorded, only a small fraction of the raw figure above.

That on-camera compression helps when you're manipulating the files themselves, but it's still a large file. Consider the bandwidth that would be required to view it over the Internet, keeping in mind:

430 MB per minute × 8 bits per byte ÷ 60 seconds ≈ 57 Mbit per second
So, to stream that video smoothly would require a dedicated connection of around 70 Mbit (to leave headroom above that average bit rate) or a whole lot of time to download it. That isn't practical in most situations.
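
The same back-of-the-envelope math in Python, using the roughly 430MB-per-minute figure from the Theta V:

    # Average bit rate needed to stream camera-compressed 4K 360 video in real time.
    megabytes_per_minute = 430   # approximate Ricoh Theta V output for 4K video
    bits_per_second = megabytes_per_minute * 1e6 * 8 / 60
    print(f"{bits_per_second / 1e6:.0f} Mbit per second")   # ~57 Mbit per second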

The answer to the problem is further video compression. Experiment a bit with high, medium, and adaptive bit rates to see whether you can achieve a good balance between file size and quality. There are also free tools that handle video compression very well; give HandBrake a try.
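
If you'd rather script that experiment than click through a GUI, here's one way to sketch it, again driving FFmpeg from Python; the file names and target bit rates are just examples, and HandBrake's presets accomplish much the same thing interactively:

    import subprocess

    # Re-encode the same stitched 360 clip at a few target bit rates so you can
    # compare file size against visual quality. Assumes FFmpeg is installed;
    # file names and bit rates are hypothetical.
    source = "office_tour_4k.mp4"
    for bitrate in ["30M", "15M", "8M"]:   # high, medium, and low targets
        subprocess.run([
            "ffmpeg",
            "-i", source,
            "-c:v", "libx264",
            "-b:v", bitrate,               # target average video bit rate
            "-c:a", "copy",
            f"office_tour_{bitrate}.mp4",
        ], check=True)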

Want to learn more? Join me at the ATD 2020 International Conference & EXPO for the session, Virtual Reality in Real Life: A Case Study in Ultrasound Training.

About the Author

As chief technology officer for Trivantis, John Blackmon is responsible for all aspects of development and strategy for the company’s e-learning products. John is the original developer of the Lectora product line, which is the flagship product of the company.

Prior to joining Trivantis, John was co-founder and lead engineer at BocaSoft, a company that created various software utilities for the OS/2 operating system. John began his career at Electronic Data Systems, where he designed a number of automatic identification systems using various technologies for plant floor applications. John then spent time at IBM, working on plant floor data collection, and then design work on the OS/2 operating system. At IBM, John was awarded a patent for his work on seamlessly running Windows applications under the OS/2 operating system.

John regularly speaks at various eLearning industry events on topics ranging from xAPI to VR, and has a patent pending on his work for responsive course design.
