Facebook’s F8 Goes 360 Degrees With New Camera Designs, Tech Advancements

The x24 and x6 joined Facebook's Surround 360 family of open-sourced specifications for 3-D 360-degree camera systems

There was enough news about 360-degree videos at Facebook’s F8 developers conference in San Jose, Calif., Wednesday to leave attendees spinning.

Facebook chief technology officer Mike Schroepfer announced two additions to the social network’s Surround 360 family of open-sourced specifications for 3-D 360-degree camera systems, which was introduced at F8 2016.

The two new camera designs, the x24 and the x6, are named for their 24 cameras and six cameras, respectively.

Both cameras shoot in 6DoF (six degrees of freedom), and they are smaller and more portable than the design introduced at last year’s conference, which Facebook has renamed Surround 360 Open Edition.

Schroepfer also announced that Facebook teamed up with post-production and visual-effects (VFX) companies Adobe, Otoy, Foundry, Mettle, DXO, Here Be Dragons, Framestore, Magnopus and The Mill on workflows and toolchains for the two new camera designs.

He added that the social network does not plan to sell the new cameras directly, but it will license the designs to “a select group of commercial partners” with an eye toward a release later in 2017.

Facebook provided more details about the two new camera designs in an email to Social Pro Daily:

  • x24 captures full RGB and depth at every pixel in each of the 24 cameras. By oversampling four times at every point in full 360, x24’s depth-estimation algorithms produce best-in-class image quality and full-resolution 6DoF point clouds that unlock new dimensions in storytelling. (x6 uses six cameras to oversample by three times.)
  • x24’s and x6’s depth-estimation technology gives us depth information for every frame in the video. The cameras’ output enables live-action video to be directly supported in existing VFX software tools. Because objects are captured in 3-D, computer-generated imagery can be seamlessly integrated into live action sequences captured by x24.
  • Creating the x24 and x6 cameras is just the start: All parts of the end-to-end 6DoF workflow are still in development across the industry. We’re working with some of the most important players in the video ecosystem to develop the 6DoF workflow faster, bringing the full creative capacity of the cameras to creators as soon as possible.
  • x24 and x6 help filmmakers realize the dream of 6DoF/volumetric video, the future of immersive media.
  • Both x24 and x6 were prototyped in Facebook’s on-site hardware lab, Area 404. x6 was built completely by Facebook, machining parts on site in the lab and using off-the-shelf component cameras. For x24, Facebook partnered with an external company, FLIR, to incorporate its camera system integration into our x24 architecture. The shell for the x24 prototype was created completely in Area 404.
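The "full RGB and depth at every pixel" claim above is what makes a full-resolution 6DoF point cloud possible: once every pixel has a depth value, each one can be back-projected into a 3-D point. Below is a minimal, hypothetical sketch of that back-projection step using a simple pinhole camera model; the function name and intrinsics parameters are illustrative assumptions, not Facebook's actual pipeline.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a colored 3-D point cloud.

    depth: (H, W) array of depth values in meters
    rgb:   (H, W, 3) array of per-pixel colors
    fx, fy, cx, cy: pinhole-camera intrinsics (focal lengths, principal point)
    Returns an (H*W, 6) array of [x, y, z, r, g, b] points.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs across columns, v down rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    return np.hstack([points, colors])
```

The same idea, applied per frame across all 24 oversampled viewpoints, is what lets live-action footage behave like CG geometry inside VFX tools.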

On the software side, Facebook announced its 360 Capture SDK (software-development kit), which enables developers to allow users to capture their virtual reality experiences in the form of 360-degree photos and videos, and then upload that content to News Feed or VR headsets.

Product managers Homin Lee and Chetan Gupta revealed in a blog post that Facebook used cube mapping rather than stitching, the traditional method for creating 360-degree images, and they cited the following benefits of that decision:

  • Accessibility: People no longer need a supercomputer to capture their VR experience.
  • Quality: Facebook maintains a high-quality viewing experience for people viewing captured 360 content in VR or on News Feed.
  • Speed: Facebook maintains 90 frames-per-second performance on virtual reality systems like Oculus Rift, while capturing VR-quality 360 photos in a single second and 360 video at 30 frames per second.
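The speed advantage of cube mapping comes from skipping stitching entirely: a game engine can render six 90-degree faces directly, and a viewer just picks which face a given viewing ray lands on. A minimal sketch of that face-selection geometry, with hypothetical function names and a common cube-face coordinate convention assumed:

```python
import numpy as np

def face_ray(face, u, v):
    """3-D ray direction for normalized face coordinates u, v in [-1, 1]."""
    if face == "+x": return np.array([1.0, -v, -u])
    if face == "-x": return np.array([-1.0, -v, u])
    if face == "+y": return np.array([u, 1.0, v])
    if face == "-y": return np.array([u, -1.0, -v])
    if face == "+z": return np.array([u, -v, 1.0])
    if face == "-z": return np.array([-u, -v, -1.0])
    raise ValueError(f"unknown face: {face}")

def ray_to_face(d):
    """Pick the cube face a viewing-ray direction d lands on:
    the axis with the largest absolute component wins."""
    axis = int(np.argmax(np.abs(d)))
    sign = "+" if d[axis] >= 0 else "-"
    return sign + "xyz"[axis]
```

Because each output pixel maps to exactly one face, there are no overlapping images to blend, which is why the approach avoids the heavy compute that stitching normally requires.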

Lee and Gupta wrote:

VR is an immersive technology that lets you experience anything, anywhere. However, it’s been difficult to share these experiences with people who aren’t physically with you—until now. We’ve created an easy way for you to capture and share your PC VR experiences through 360 photos and videos.

We solved the problem by rethinking the way 360 content is created. Typically, the process starts by capturing various photos, stitching them together and then finally encoding them. Previously, we needed to capture the content within a game engine, while ensuring that we could produce a high-quality image quickly and on baseline hardware for VR. Now, all of that is possible with the 360 Capture SDK. With the new SDK, VR experiences can be captured in the form of 360 photos and videos instantly and then uploaded to be viewed in News Feed or a VR headset.

Finally, Facebook described its new view-prediction technology for streaming high-resolution 360-degree video.

“VR hacker” Evgeny Kuzyakov, research scientist Shannon Chen and tech lead Renbin Peng detailed the new technology in a blog post:

  • Gravitational view prediction: In this technology, our engineers use physics and heatmaps to predict the most-likely view location in a 360 video. This allows them to deliver the highest concentration of pixels where they are needed most. Then the resolution in the periphery does not need to be as high, meaning that they can send less through the pipes but keep the effective resolution higher. When applying that model to our VR video streaming technology (dynamic streaming), our engineers improved the resolution by up to 39 percent.

  • Content-dependent streaming technology for non-VR devices: In an effort to improve the resolution of 360 videos on non-VR devices, even in low-bandwidth conditions, the team developed content-dependent streaming. It’s an encoding technology that improves quality while still allowing 360 videos to be buffered, cached and played offline. It uses artificial intelligence to prioritize delivering resolution to the areas likely to be of most interest in the video stream. In testing, it improved effective video resolution for streaming to non-VR devices by up to 51 percent.
  • AI view prediction: The team developed an AI model that can predict what’s interesting in a 360 video. It does this by creating a saliency map, which looks like a heat map but is based on what is most likely to draw people’s interest in a 360 video. It’s used by both the gravitational model and content-dependent streaming to decrease bitrate and increase effective resolution.
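The common thread in all three techniques above is spending a fixed bitrate budget unevenly: tiles where the saliency map predicts viewers will look get more bits, while the periphery gets just enough to stay watchable. A hypothetical sketch of that allocation step (the function name, per-tile floor, and proportional split are illustrative assumptions, not Facebook's actual encoder logic):

```python
import numpy as np

def allocate_bitrate(saliency, total_kbps, floor_kbps=100.0):
    """Split a bitrate budget across video tiles in proportion to saliency.

    saliency:   per-tile interest scores (e.g. from a saliency map), >= 0
    total_kbps: overall bitrate budget for the frame
    floor_kbps: minimum per tile, so peripheral tiles stay decodable
    Returns an array of per-tile bitrates summing to total_kbps.
    """
    saliency = np.asarray(saliency, dtype=float)
    n = saliency.size
    # Reserve the floor for every tile, then split the remainder by weight.
    spare = total_kbps - n * floor_kbps
    if spare < 0:
        raise ValueError("budget too small for the per-tile floor")
    if saliency.sum() > 0:
        weights = saliency / saliency.sum()
    else:
        weights = np.full(n, 1.0 / n)  # no prediction: spread evenly
    return floor_kbps + spare * weights
```

For example, with four tiles scored [3, 1, 0, 0] and a 1,000 kbps budget, the two predicted-interesting tiles receive 550 and 250 kbps while the peripheral tiles keep the 100 kbps floor, which is how the effective resolution at the viewer's likely gaze point rises without raising total bandwidth.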

More F8 coverage from Adweek.com:

Facebook Is Working on Technology That Lets You Type and Control VR Devices With Your Mind

Spotify, Apple Music and Other Branded Bots Are Coming to Facebook Messenger 2.0

Facebook Debuts the Future of Augmented Reality, and It’s on Mobile

david.cohen@adweek.com David Cohen is editor of Adweek's Social Pro Daily.