
An efficient approach to playback of stereoscopic videos using a wide field-of-view

Formal Metadata

Title
An efficient approach to playback of stereoscopic videos using a wide field-of-view
Title of Series
Part Number
21
Number of Parts
31
Author
License
CC Attribution - NoDerivatives 2.0 UK: England & Wales:
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
The affordability of head-mounted displays and high-resolution cameras has prompted the need for efficient playback of stereoscopic videos using a wide field-of-view (FOV). The MARquette Visualization Lab (MARVL) focuses on the display of stereoscopic content, either filmed or computer-generated, using a large-scale immersive visualization system as well as head-mounted and augmented reality devices. Traditional approaches to video playback on a plane fall short with larger immersive FOVs. We developed an approach to playback of stereoscopic videos in a 3D world where depth is determined by the video content. Objects in the 3D world receive the same video texture, and computational efficiency is gained by using UV texture offsets to select the opposing halves of a frame-packed 3D video. Left and right cameras are configured in Unity via culling masks so that each eye sees only the texture intended for it. The camera configuration is constructed through code at runtime using MiddleVR for Unity 4, and natively in Unity 5. This approach becomes more difficult when multiple cameras must maintain stereo alignment across the full FOV, but it has been used successfully in MARVL for applications including employee wellness initiatives, interactivity with high-performance computing results, and navigation within the physical world. © 2016, Society for Imaging Science and Technology (IS&T).
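
The abstract describes selecting each eye's half of a frame-packed video through UV offsets and restricting each eye's camera with culling masks. The following Unity C# sketch illustrates that idea under stated assumptions: a top-bottom frame packing, two screen meshes placed on layers named "LeftEye" and "RightEye", and public fields assigned in the Inspector. The layer names, field names, and packing orientation are illustrative assumptions, not taken from MARVL's implementation.

using UnityEngine;

// Sketch: two meshes share one frame-packed video texture.
// Each mesh samples only its half via UV tiling/offset, and each
// eye's camera culls the layer belonging to the opposite eye.
public class StereoFramePackedSetup : MonoBehaviour
{
    public Renderer leftEyeScreen;   // mesh on the (assumed) "LeftEye" layer
    public Renderer rightEyeScreen;  // mesh on the (assumed) "RightEye" layer
    public Camera leftCamera;
    public Camera rightCamera;

    void Start()
    {
        // Top-bottom frame packing: each eye sees half the texture height.
        leftEyeScreen.material.mainTextureScale   = new Vector2(1f, 0.5f);
        leftEyeScreen.material.mainTextureOffset  = new Vector2(0f, 0.5f); // top half
        rightEyeScreen.material.mainTextureScale  = new Vector2(1f, 0.5f);
        rightEyeScreen.material.mainTextureOffset = new Vector2(0f, 0f);   // bottom half

        // Each camera renders everything except the other eye's layer.
        leftCamera.cullingMask  = ~(1 << LayerMask.NameToLayer("RightEye"));
        rightCamera.cullingMask = ~(1 << LayerMask.NameToLayer("LeftEye"));
    }
}

Because both meshes reference the same video texture, the video is decoded only once per frame; only the UV mapping and camera culling differ per eye, which is the source of the computational savings the abstract attributes to UV texture offsets.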