
Towards perceptually coherent depth maps in 2D-to-3D conversion

Formal Metadata

Title
Towards perceptually coherent depth maps in 2D-to-3D conversion
Part Number
12
Number of Parts
31
License
CC Attribution - NoDerivatives 2.0 UK: England & Wales
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
We propose a semi-automatic 2D-to-3D conversion algorithm that is embedded in an efficient optimization framework, i.e., cost volume filtering, which assigns pixels to depth values initialized by user-given scribbles. The proposed algorithm is capable of capturing depth changes of objects that move towards or away from the camera. We achieve this by determining a rough depth order between the objects in each frame, according to the motion observed in the video, and by incorporating this depth order into the depth interpolation process. In contrast to previous publications, our algorithm focuses on avoiding conflicts between the generated depth maps and the monocular depth cues that are present in the video, in particular motion-caused occlusions, and thus takes a step towards the generation of perceptually coherent depth maps. We demonstrate the capabilities of the proposed algorithm on synthetic and recorded video data and by comparison with depth ground truth. Experimental evaluations show that we obtain temporally and perceptually coherent 2D-to-3D conversions in which temporal and spatial edges coincide with edges in the corresponding input video. Compared to the naïve depth interpolation techniques that are commonly used, our proposed depth interpolation can clearly improve the conversion results for videos that contain objects exhibiting motion in depth. © 2016, Society for Imaging Science and Technology (IS&T).
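The optimization framework named in the abstract, cost volume filtering, is a standard technique from the stereo and labeling literature: build one cost slice per candidate label, smooth each slice with an edge-aware filter guided by the image, and take the cheapest label per pixel. The Python sketch below illustrates this idea for scribble-based depth assignment on a single frame. It is a minimal illustration under assumed conventions (a float grayscale frame, a scribble map using -1 for unlabeled pixels, a fixed penalty cost), not the authors' implementation; in particular, it omits the motion-derived depth ordering and the temporal extension that the abstract describes.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(guide, src, radius, eps):
    """Edge-aware smoothing of `src` steered by `guide` (He et al.)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g ** 2
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)


def scribble_depth(frame, scribbles, num_labels, radius=9, eps=1e-3):
    """Assign each pixel one of `num_labels` depth labels.

    frame     -- H x W grayscale image, float values in [0, 1]
    scribbles -- H x W int array, -1 where unlabeled, else a label index
    """
    h, w = frame.shape
    # Cost volume: zero cost at a pixel's scribbled label, a high cost
    # at every other label, and a neutral cost where no scribble exists.
    volume = np.full((num_labels, h, w), 0.5)
    labeled = scribbles >= 0
    for d in range(num_labels):
        volume[d][labeled] = np.where(scribbles[labeled] == d, 0.0, 1.0)
        # Filtering each slice spreads the scribble costs within regions
        # of similar intensity, so label boundaries snap to image edges.
        volume[d] = guided_filter(frame, volume[d], radius, eps)
    # Winner-take-all over the filtered volume, normalized to [0, 1].
    return np.argmin(volume, axis=0) / max(num_labels - 1, 1)


# Toy usage: one near and one far scribble stroke on a two-region frame.
frame = np.zeros((64, 64))
frame[:, 32:] = 1.0
scribbles = np.full((64, 64), -1, dtype=int)
scribbles[20:44, 8] = 0    # label 0 (near) on the dark left region
scribbles[20:44, 56] = 1   # label 1 (far) on the bright right region
depth = scribble_depth(frame, scribbles, num_labels=2)
```

In the full pipeline described in the abstract, this per-frame spatial labeling would additionally be constrained by the motion-based depth order between objects and carried across frames for temporal coherence; the sketch shows only the spatial labeling step.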