Duration: 20:21

Depth consistency and vertical disparities in stereoscopic panoramas

CONTEXT: In recent years, the problem of acquiring omnidirectional stereoscopic imagery of dynamic scenes has gained commercial interest and, consequently, new techniques have been proposed to address this problem [1]. The goal of many of these novel panoramic methods is to provide practical solutions for acquiring real-time omnidirectional stereoscopic imagery suitable for stimulating binocular human stereopsis in any gazing direction [2][3]. In particular, methods based on the acquisition of partially overlapped stereoscopic snapshots of the scene are the most attractive for real-time omnistereoscopic capture [1]. However, these acquisition techniques need to be rigorously modeled in order to provide useful design constraints for the corresponding omnidirectional stereoscopic systems.

OBJECTIVE: Our main goal in this work is to propose an omnidirectional camera model that is sufficiently flexible to describe a variety of omnistereoscopic camera configurations. We have developed a projective camera model suitable for describing a range of omnistereoscopic camera configurations and usable to determine constraints relevant to the design of omnistereoscopic acquisition systems. In addition, we applied our camera model to estimate the system constraints for the rendering approach based on mosaicking partially overlapped stereoscopic snapshots of the scene.

METHOD: First, we grouped the possible stereoscopic panoramic methods suitable for producing horizontal stereo for human viewing in every azimuthal direction into four camera configurations. Then, we proposed an omnistereoscopic camera model based on projective geometry that can describe each of the four camera configurations. Finally, we applied this model to obtain expressions for the horizontal and vertical disparity errors encountered when creating a stereoscopic panorama by mosaicking partial stereoscopic snapshots of the scene.

RESULTS: We simulated the parameters of interest using the proposed geometric model combined with a ray-tracing approach for each camera model. From these simulations, we extracted conclusions that can be used in the design of omnistereoscopic cameras for the acquisition of dynamic scenes. One important parameter used to contrast different camera configurations is the minimum distance to the scene that provides a continuous perception of depth in any gazing direction after mosaicking partial stereoscopic views. The other important contribution is the characterization of the vertical disparities that cause ghosting at the stitching boundaries between mosaics. In the simulation, we studied the effect of the field of view of the lenses and of the pixel size and dimensions of the sensor on the design of the system.

NOVELTY: The main contribution of this work is to provide a tractable method for analyzing multiple camera configurations intended for omnistereoscopic imaging. In addition, we estimated and compared the system constraints needed to attain continuous depth perception in all azimuth directions. Also important for the rendering process, we characterized mathematically the vertical disparities that would affect the mosaicking process in each omnistereoscopic configuration. This work complements and extends our previous work on stereoscopic panorama acquisition [1][2][3] by proposing a mathematical framework to contrast different omnistereoscopic image acquisition strategies.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
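
To illustrate the kind of projective analysis the abstract describes (the paper's actual model is not reproduced here), the following minimal pinhole sketch computes the horizontal and vertical disparities of a single scene point for one panned stereo snapshot; the focal length, baseline, pan angle, and scene point are illustrative values only.

```python
import numpy as np

def project(point_w, cam_pos, yaw, f):
    """Project a world point into a pinhole camera panned by `yaw`
    about the vertical axis (simplified: no tilt, no roll)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, -s],
                  [0.0, 1.0, 0.0],
                  [s, 0.0, c]])
    p = R @ (point_w - cam_pos)      # world -> camera coordinates
    return f * p[:2] / p[2]          # perspective division -> image (x, y)

# Left/right cameras of one stereoscopic snapshot, both panned by the
# same azimuth angle of the panoramic rig (all values illustrative).
f, b, yaw = 0.008, 0.065, np.deg2rad(15.0)   # focal length, baseline, pan
left  = np.array([-b / 2, 0.0, 0.0])
right = np.array([+b / 2, 0.0, 0.0])
point = np.array([0.4, 0.3, 2.0])            # off-axis scene point, 2 m away

xl, yl = project(point, left, yaw, f)
xr, yr = project(point, right, yaw, f)
print("horizontal disparity:", xl - xr)      # drives depth perception
print("vertical disparity:  ", yl - yr)      # causes ghosting at seams
```

Sweeping the pan angle and scene distance in such a model is one way to reproduce the kind of minimum-distance and vertical-disparity constraints the abstract reports.
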
Duration: 18:09

Comprehensive evaluation of latest 2D/3D monitors and comparison to a custom-built 3D mirror-based display in laparoscopic surgery

Though theoretically superior, 3D video systems have not yet achieved a breakthrough in laparoscopic surgery. Furthermore, visual alterations such as eye strain, diplopia and blur have been associated with the use of stereoscopic systems. Advancements in display and endoscope technology motivated a re-evaluation of such findings. A randomized study on 48 test subjects was conducted to investigate whether surgeons can benefit from using the most current 3D visualization systems. Three different 3D systems, a glasses-based 3D monitor, an autostereoscopic display and a mirror-based, theoretically ideal 3D display, were compared to a state-of-the-art 2D HD system. The test subjects were split into a novice group and an expert surgeon group, the latter with high experience in laparoscopic procedures. Each subject had to conduct a comparable laparoscopic suturing task. Multiple performance parameters, such as task completion time and stitching precision, were measured and compared. Electromagnetic tracking provided information on the instruments' path length, movement velocity and economy of motion. The NASA Task Load Index was used to assess mental workload. Subjective ratings were added to assess the usability, comfort and image quality of each display. Almost all performance parameters were superior for the glasses-based 3D display compared to the 2D and autostereoscopic displays, but were often significantly exceeded by the mirror-based 3D display. Subjects performed the task on average 20% faster and with higher precision. Workload parameters did not show significant differences. Experienced and non-experienced laparoscopists profited equally from 3D. The 3D mirror system gave clear evidence of the additional potential of 3D visualization systems with higher resolution and motion parallax presentation. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 16:45

Architecture for high performance stereoscopic game rendering on Android

Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low-power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high-profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set-top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real time. The architecture includes a method of analyzing 2D games and using rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing its performance in comparison to more traditional rendering techniques, including depth-image-based rendering, both in terms of frame rates and impact on battery consumption. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
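
The abstract does not spell out the driver's projection math. A common GPU-independent way to stereoify a monoscopic render pass, which a driver of this kind could apply, is the parallel-axis, asymmetric-frustum method: render the scene twice with horizontally offset cameras and sheared frusta. A sketch in NumPy, with all parameter values illustrative:

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """Standard OpenGL-style off-axis perspective projection matrix."""
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0]])

def stereo_projections(fov_y, aspect, near, far, eye_sep, convergence):
    """Left/right asymmetric frusta plus the per-eye camera offset."""
    top = near * np.tan(fov_y / 2)
    right = top * aspect
    shear = (eye_sep / 2) * near / convergence   # frustum shift at near plane
    left_eye  = frustum(-right + shear, right + shear, -top, top, near, far)
    right_eye = frustum(-right - shear, right - shear, -top, top, near, far)
    return left_eye, right_eye, eye_sep / 2

L, R, dx = stereo_projections(np.deg2rad(60), 16/9, 0.1, 100.0, 0.06, 2.0)
# A driver would render the frame twice: camera moved by -dx with
# projection L for the left eye, and by +dx with projection R for the right.
```
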
Duration: 21:46

Automatic detection of artifacts in converted S3D video

In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
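
As a toy illustration of the comparison the abstract outlines (estimate disparity, then compare edge sharpness across views), one could write something like the following; the band width, Canny thresholds, and the assumption of a precomputed dense disparity map are ours, not the paper's:

```python
import cv2
import numpy as np

def edge_sharpness(gray, edges, k=5):
    """Mean gradient magnitude in a band around detected edges:
    a crude proxy for edge sharpness."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    band = cv2.dilate(edges, np.ones((k, k), np.uint8)) > 0
    return mag[band].mean()

def sharpness_mismatch(left_bgr, right_bgr, disparity):
    """Ratio of edge sharpness between views after compensating the
    horizontal shift with a precomputed disparity map; values far
    from 1.0 would flag a potential edge-sharpness mismatch."""
    gl = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gl.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    warped = cv2.remap(gr, xs - disparity.astype(np.float32), ys,
                       cv2.INTER_LINEAR)              # right -> left grid
    edges = cv2.Canny(gl, 50, 150)
    return edge_sharpness(gl, edges) / (edge_sharpness(warped, edges) + 1e-6)
```
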
Duration: 17:59

Compression for full-parallax light field displays

Full-parallax light field displays utilize a large volume of data and demand efficient real-time compression algorithms to be viable. Many compression techniques have been proposed, but such solutions are impractical in bandwidth, processing or power requirements for a real-time implementation. Our method exploits the spatio-angular redundancy in a full-parallax light field to compress the light field image while reducing the total computational load with minimal perceptual degradation. Objective analysis shows that, depending on content, a bandwidth reduction of two to four orders of magnitude is possible. Subjective analysis shows that the compression technique produces images with acceptable quality, and that the system can successfully reproduce the 3D light field, providing natural binocular and full motion parallax. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 14:04

A hand-held immaterial volumetric display

We have created an ultralight, movable, "immaterial" fogscreen. It is based on the fogscreen mid-air imaging technology. The hand-held unit is roughly the size and weight of an ordinary toaster. If the screen is tracked, it can be swept through the air to create mid-air slices of volumetric objects, or to show augmented reality (AR) content on top of real objects. Interfacing devices and methodologies, such as hand and gesture trackers, camera-based trackers and object recognition, can make the screen interactive. The user can easily interact with any physical object or virtual information, as the screen is permeable. Real objects can be seen directly through the screen, rather than, e.g., through a video-based augmented reality view. This creates a mixed reality setup where both the real-world object and the augmented reality content can be viewed and interacted with simultaneously. The hand-held mid-air screen can be used, e.g., as a novel collaboration or classroom tool for individual students or small groups. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 14:42

Frameless multiview display modules employing flat-panel displays for a large-screen autostereoscopic display

A large-screen autostereoscopic display enables life-size realistic communication. In this study, we propose the tiling of frameless multi-view display modules employing flat-panel displays. A flat-panel multi-view display and an imaging system with a magnification greater than one are combined to construct a multi-view display module with a frameless screen. The module screen consists of a lens and a vertical diffuser to generate viewpoints in the observation space and to increase the vertical viewing zone. When the modules are tiled, the screen lens should be appropriately shifted to produce a common viewing area for all modules. We designed and constructed the multi-view display modules, which have a screen size of 27.3 in. and a resolution of 320 × 200. The module depth was 1.5 m and the number of viewpoints was 144. The viewpoints were generated with a horizontal interval of 16 mm at a distance of 5.1 m from the screen. Four modules were constructed and aligned in the vertical direction to demonstrate a middle-size screen system. The tiled screen had a screen size of 62.4 in. (589 mm × 1,472 mm). The prototype system can display almost human-size objects. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
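
A quick check of the stated geometry (our arithmetic, not the paper's): 144 viewpoints at a 16 mm horizontal pitch span a viewing zone of roughly

```latex
W = N \,\Delta x = 144 \times 16\,\text{mm} = 2304\,\text{mm} \approx 2.3\,\text{m}
```

at the 5.1 m observation distance, i.e., wide enough for several viewers standing side by side.
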
Duration: 15:14

Estimating impact of stereo 3D display technology on depth perception

This paper investigates the presentation of moving stereo images on different display devices. We address three important issues. First, we propose temporal compensation for the Pulfrich effect when using anaglyph glasses. Second, we describe how content-adaptive capture protocols can reduce the false motion-in-depth sensation on time-multiplexing-based displays. Third, we conclude with a recommendation on how to improve the rendering of synthetic stereo animations. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 17:12

Stereoscopic display system with integrated motion parallax and direct manipulation

We present a description of a time-sequential stereoscopic display which separates the images using a segmented polarization switch and passive eyewear. Additionally, integrated tracking cameras and an SDK on the host PC allow us to implement motion parallax in real time. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 20:11

Interpolating vertical parallax for an autostereoscopic 3D projector array

CONTEXT: We present a technique for achieving tracked vertical parallax for multiple users on a variety of autostereoscopic projector array setups, including front- and rear-projection and curved display surfaces. This "hybrid parallax" approach allows for immediate horizontal parallax as viewers move left and right, and tracked parallax as they move up and down, allowing cues such as 3D perspective and eye contact to be conveyed faithfully.

OBJECTIVE: Projector arrays are well suited for 3D displays because of their ability to generate dense and steerable arrangements of pixels. We have developed a new autostereoscopic display utilizing a single dense row of 69 pico projectors. The projectors are focused on a 30 x 30 cm vertically anisotropic screen that scatters the light from each lens into a vertical stripe while preserving horizontal angular variation. Each viewer's eye observes the combined effect of image stripes from multiple projectors, which together form a seamless 3D image. As every viewer sees a different 3D image, it is possible to customize each view with a different vertical perspective. Given a sparse set of tracked viewer positions, the challenge is to create a continuous estimate of viewer height and distance for all potential viewing angles, so as to provide consistent vertical perspective to both tracked and untracked viewers.

METHOD: Rendering to a dense projector display requires multiple-center-of-projection imagery, as adjacent projector pixels diverge to different viewer positions. If constant viewer height and distance are assumed for each projector, viewers may see significant crosstalk and geometric distortion, particularly when multiple viewers are in close proximity. We solve this problem with a custom GPU vertex shader projection that dynamically interpolates multiple viewer heights and distances within each projector frame. Thus, each projector's image is rendered in a distorted manner representing multiple centers of projection, and might show an object from above on the left and from below on the right.

RESULTS: We use a low-cost RGB-D sensor to simultaneously track multiple viewer head positions in 3D and interactively update the imagery sent to the array. Even though each user sees slices of multiple projectors, the perceived 3D image is consistent and smooth from any vantage point, with reduced crosstalk. This rendering framework also frees us to explore different projector configurations, including front- and rear-mounted projector arrays and non-flat screens. Our rendering algorithm does not add significant overhead, enabling realistic dynamic scenes. Our display produces full-color autostereoscopic 3D imagery with zero horizontal latency and a wide 110° field of view, which can accommodate numerous viewers.

NOVELTY: While user tracking has long been used for single-user glasses-based displays, and for a single-user autostereoscopic display [Perlin et al. 2000], in order to update both horizontal and vertical parallax, our system is the first autostereoscopic projector array to incorporate tracking for vertical parallax. Our method could be adapted to other projector arrays [Rodriguez et al. 2007; Kawakita et al. 2012; Kovacs and Zilly 2012; Yoshida et al. 2011]. Furthermore, our display is reproducible with off-the-shelf projectors, screen materials, graphics cards, and video splitters.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
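
The vertex-shader interpolation itself is not listed in the abstract; its core idea, turning sparse tracked head positions into a continuous viewer height/distance profile over viewing angle, can be sketched in a few lines (illustrative values; the real system evaluates this per vertex on the GPU):

```python
import numpy as np

def viewer_profile(tracked, angles):
    """Interpolate viewer height and distance over all viewing angles
    from a sparse set of tracked head positions.
    tracked: rows of (azimuth_deg, height_m, distance_m), sorted by azimuth.
    angles:  azimuths (deg) at which an eye-position estimate is needed."""
    az, h, d = tracked[:, 0], tracked[:, 1], tracked[:, 2]
    heights   = np.interp(angles, az, h)   # piecewise linear, clamped at ends
    distances = np.interp(angles, az, d)
    return heights, distances

# Two tracked viewers; one projector's pixel columns fan out over +-15 deg.
tracked = np.array([[-20.0, 1.60, 2.5],
                    [ 25.0, 1.20, 3.5]])
cols = np.linspace(-15.0, 15.0, 7)
for a, hh, dd in zip(cols, *viewer_profile(tracked, cols)):
    print(f"azimuth {a:+6.1f} deg -> eye height {hh:.2f} m, distance {dd:.2f} m")
```

Each vertex is then projected toward the eye position interpolated for its own viewing angle, which is what yields multiple centers of projection within a single projector frame.
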
Duration: 18:16

Stereoscopic cell visualization: From mesoscopic to molecular scale

CONTEXT: Stereoscopic vision is a substantial aspect of three-dimensional visualization approaches. Although most recent animation movies created for cinemas are shown in stereoscopic 3D (S3D), there are still many areas which do not take advantage of this technology. One of these areas is cell visualization. Although many protein crystallographers have preferred working with stereoscopic devices for over a decade, it is quite astonishing that cell visualization seems to have ignored S3D completely, even though stereoscopic visualization of the cellular cosmos, which is not accessible to the human eye, bears high potential. Furthermore, the scientific community often works with interactive visualization environments. These tools usually provide S3D for different hardware configurations, but the intensity of the stereoscopic effect can only be adjusted manually using slider buttons. This technique is sufficient for exploring a single instance of a molecule, but it is inconvenient when navigating through a large environment on multiple scales.

OBJECTIVE: In this work, approaches are discussed for applying S3D to 1) rendered cell animations and 2) interactive cell environments, using freely available open-source tools. A very important aspect of cell visualization is the bridging of scales. The mesoscopic level starts at a few thousand nanometers, the scale of the cell and its components, whereas the molecular level goes down to a few Angstrom, where single atoms are visible. The two scales may therefore differ by a factor of 100,000. This is especially a problem if the stereoscopic effect has to be adjusted during an interactive navigation process.

METHOD: For the rendered animations, we show how to use Blender in combination with Schneider's Stereoscopic Camera plug-in. An exemplary short movie was created, starting in the blood vessels, proceeding to the inner cell components, and finally showing the translation and transcription processes based on protein/PDB models. The interactive exploration environments are provided by the CELLmicrocosmos project. On the molecular level, the MembraneEditor is used to show a fixed-projection-plane S3D method. The mesoscopic level is represented by CellExplorer, which is equipped with a dynamic-projection-plane S3D method.

RESULTS: The stereoscopic cell animations rendered with Blender were successfully shown on notebook monitors and power walls as well as on large cinema projection screens. The CELLmicrocosmos projects were optimized to provide adequate interactive cell environments, which were successfully used during different university projects and presentations. Because the software developer cannot define the relative position of the user to the point of interest, the fixed-projection-plane S3D method was used in combination with smaller membrane structures, whereas the dynamic projection plane is also compatible with cell environments featuring large scale differences.

NOVELTY: Cell visualization is an emerging area in scientific communication. This work should encourage cytological researchers to take S3D technology into account for future projects. Moreover, the stereoscopic capabilities of the CELLmicrocosmos project are shown, which have been developed over several years and have never been discussed in our previous publications.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 14:30

Stereoscopic depth perception in video see-through augmented reality within action space

CONTEXT: Depth perception is an important component in many augmented reality (AR) applications. It is, however, affected by multiple error sources. Most studies on stereoscopic AR have focused on personal space, whereas we address action space (at distances beyond 2 m; in this study, 6-10 m) using a video see-through head-worn display (HWD). This is relevant, for example, in the navigation and architecture domains.

OBJECTIVE: For design-guideline purposes there is a considerable lack of quantitative knowledge of the visual capabilities facilitated by stereoscopic HWDs. To fill this gap, two interrelated experiments were conducted: Experiment 1 had the goal of finding the effect of viewing real objects through an HWD, while Experiment 2 varied the relative size of the augmentations under monoscopic and binocular conditions.

METHOD: In Experiment 1, the participants judged the depths of physical objects in a matching task using the Howard-Dolman test. The order of viewing conditions (naked eyes and HWD) and the initial positions of the rods were varied. In Experiment 2, the participants judged the depth of an augmented object of interest (AOI) by comparing its disparity and size to auxiliary augmentations (AA). The task was to match the distance of a physical pointer to that of the AOI. The approach of using AAs has recently been introduced (Kytö et al. 2013). The AAs were added to the scene following literature-based spatial recommendations.

RESULTS: The data from Experiment 1 indicated that the participants made more accurate depth judgments with the HWD when the test was performed first with naked eyes. A hysteresis effect was observed, with a bias of the judgments towards the starting position. As for Experiment 2, binocular viewing improved the depth judgments of the AOI over the distance range. Binocular disparity and relative size interacted additively; the most accurate results were obtained when the depth cues were combined. The results share characteristics with a previous study (Kytö et al. 2013), in which the effects of disparity and relative size were studied in an X-ray visualization case at shorter distances. Comparison of the two experiments showed that stereoscopic depth judgments were more accurate with physical objects (mean absolute error 1.13 arcmin) than with graphical objects (mean absolute error 3.77 arcmin).

NOVELTY: The study fills a knowledge gap on exocentric depth perception in AR with quantitative insight into the effects of binocular disparity and relative size. It found that additional depth cues facilitate stereoscopic perception significantly. Relative size between the main and auxiliary augmentations turned out to be a successful facilitator. This can be traced to the fact that binocular disparity is accurate at short distances, while the accuracy of relative size remains constant at long distances. Overall, these results act as guidelines for depth cueing in stereoscopic AR applications.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
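
For context on the arcminute figures quoted above, the standard small-angle relation (ours, not from the abstract) between a depth offset Δd at viewing distance d and the resulting binocular disparity δ, for interocular distance I, is

```latex
\delta \approx \frac{I\,\Delta d}{d^{2}} \ \text{rad},
\qquad
\delta_{\text{arcmin}} = \delta \cdot \frac{180}{\pi} \cdot 60 .
```

For example, I = 65 mm, d = 8 m and Δd = 0.5 m give δ ≈ 5.1 × 10⁻⁴ rad ≈ 1.7 arcmin, the same order as the reported errors.
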
Duration: 23:21

Multi-user autostereoscopic display based on direction-controlled illumination using a slanted cylindrical lens array

This research aims to develop an autostereoscopic display which satisfies the conditions required for practical use, such as high resolution and large image size comparable to ordinary television displays, arbitrary viewing position, availability to multiple viewers, suppression of nonuniform luminance distribution, and compact system configuration. In the proposed system, an image display unit is illuminated by a direction-controlled illumination unit, which consists of a spatially modulated parallel light source and a steering optical system. The steering optical system is constructed with a slanted cylindrical lens array and vertical diffusers. The direction-controlled illumination unit can control the output position and horizontal angle of the vertically diffused light. The light from the image display unit is controlled to form a narrow exit pupil. A viewer can watch the image only when an eye is located at the exit pupil. Autostereoscopic viewing is achieved by alternately switching the exit pupil position between the viewer's two eyes while alternately displaying parallax images. An experimental system was constructed to verify the proposed method. It consists of an LCD projector and Fresnel lenses for the direction-controlled illumination unit, and a 32-inch full-HD LCD for image display. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 17:03

Joint estimation of high resolution images and depth maps from light field cameras

Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure, in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off against each other under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of our method is implemented with standard image-processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image by a factor of three both horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
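
The paper's actual solver is not reproduced here; the alternation it describes can be sketched with global integer shifts standing in for the per-pixel depth map (a real implementation registers per pixel) and float-valued sub-aperture images:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, zoom

def estimate_shift(ref, img):
    """Phase correlation: returns (dy, dx) such that img ~ ref shifted by it.
    With light fields the shift is proportional to disparity, so this step
    doubles as (very coarse) depth estimation."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    signed = [((p + n // 2) % n) - n // 2 for p, n in zip(peak, corr.shape)]
    return -signed[0], -signed[1]

def shift_and_add(lowres, shifts, scale):
    """Naive super-resolution: upsample each view, undo its shift, average."""
    acc = np.zeros_like(zoom(lowres[0], scale))
    for img, s in zip(lowres, shifts):
        acc += nd_shift(zoom(img, scale), -scale * np.asarray(s, float))
    return acc / len(lowres)

def alternate(lowres, scale=3, iters=3):
    """Alternate registration (depth) refinement and SR reconstruction."""
    shifts = [(0, 0)] * len(lowres)
    hi = shift_and_add(lowres, shifts, scale)
    for _ in range(iters):
        ref = zoom(hi, 1 / scale)                 # back to the low-res grid
        shifts = [estimate_shift(ref, img) for img in lowres]
        hi = shift_and_add(lowres, shifts, scale)
    return hi
```
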
Duration: 14:47

Interlopers 3D: experiences designing a stereoscopic game

Background: In recent years, 3D-enabled televisions, VR headsets and computer displays have become more readily available in the home. This presents an opportunity for game designers to explore new stereoscopic game mechanics and techniques that have previously been unavailable in monocular gaming.

Aims: To investigate the visual cues that are present in binocular and monocular vision, identifying which are relevant when gaming using a stereoscopic display, and to implement a game whose mechanics are so reliant on binocular cues that the game becomes impossible, or at least very difficult, to play in non-stereoscopic mode.

Method: A stereoscopic 3D game was developed whose objective was to shoot down advancing enemies (the Interlopers) before they reached their destination. Scoring highly required players to make accurate depth judgments and target the closest enemies first. A group of twenty participants played both a basic and an advanced version of the game, in both monoscopic 2D and stereoscopic 3D.

Results: The results show that in both the basic and the advanced game, participants achieved higher scores when playing in stereoscopic 3D. The advanced game showed that disrupting the depth-from-motion cue made the game more difficult in monoscopic 2D. The results also show a certain amount of learning taking place: players scored higher and finished the game faster as the experiment progressed.

Conclusions: Although the game was not impossible to play in monoscopic 2D, participants' results show that doing so put them at a significant disadvantage compared to playing in stereoscopic 3D. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 14:26

Integration of multiple view plus depth data for free viewpoint 3D display

This paper proposes a method for constructing a reasonably sized end-to-end free-viewpoint video system that captures multiple view-plus-depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. The system consists of a desktop PC and four Kinect sensors. First, view-plus-depth data at four viewpoints are captured by the Kinect sensors simultaneously. Then, the captured data are integrated into point-cloud data using the camera parameters. The obtained point-cloud data are sampled into volume data consisting of voxels. Since volume data generated from point clouds are sparse, they are densified using a global optimization algorithm. The final step is to reconstruct surfaces from the dense volume data by the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon models, a simple inpainting method for improving the depth maps is also presented.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
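
A minimal sketch of the first stage, back-projecting a Kinect depth map into a world-space point cloud using pinhole intrinsics and per-sensor extrinsics; the intrinsic values below are nominal Kinect v1 numbers, used here for illustration only:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, R, t):
    """Back-project a depth map (meters) into a world-space point cloud
    via pinhole intrinsics (fx, fy, cx, cy) and extrinsics (R, t)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels
    return pts @ R.T + t              # camera -> world coordinates

fx = fy = 525.0                       # nominal Kinect v1 focal length (px)
cx, cy = 319.5, 239.5                 # principal point for 640 x 480
depth = np.full((480, 640), 2.0)      # fake depth map: flat wall at 2 m
cloud = depth_to_points(depth, fx, fy, cx, cy, np.eye(3), np.zeros(3))
# Clouds from the four Kinects, each with its own calibrated (R, t), are
# concatenated and then voxelized for the volumetric reconstruction stage.
```
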
Duration: 15:23

Stereo and motion cues effect on depth judgment of volumetric data

Displays supporting stereoscopic viewing and head-coupled motion parallax can enhance human perception of 3D surfaces and 3D networks, but less so for volumetric data. Volumetric data is characterized by a heavy presence of transparency, occlusion and highly ambiguous spatial structure. There are many rendering and visualization algorithms and interactive techniques that enhance perception of volume data, and the effectiveness of these techniques has been evaluated. However, how VR display technologies affect the perception of volume data is less well studied. Therefore, we conducted two formal experiments on how various display conditions affect a participant's depth-perception accuracy for a volumetric dataset. Our results show effects of VR displays on human depth-perception accuracy for volumetric data. We discuss the implications of these findings for designing volumetric data visualization tools that use VR displays. In addition, we compare our results to previous work on 3D networks and discuss possible reasons for, and implications of, the differing results. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 14:00

Fully automatic 2D to 3D conversion with aid of high-level image features

With recent advances in 3D display technology, there is an increasing need to convert existing 2D content into rendered 3D views. We propose a fully automatic 2D-to-3D conversion algorithm that assigns relative depth values to the various objects in a given 2D image/scene and generates two different views (a stereo pair) using a Depth Image Based Rendering (DIBR) algorithm for 3D displays. The algorithm described in this paper creates a scene model for each image based on low-level features such as texture, gradient and pixel location, and estimates a pseudo depth map. Since the capture environment is unknown, using low-level features alone creates inaccuracies in the depth map. Using such a flawed depth map for 3D rendering results in various artifacts, causing an unpleasant viewing experience. The proposed algorithm therefore also uses certain high-level image features to overcome these imperfections and generates an enhanced depth map for an improved viewing experience. Finally, we show several 3D results generated with our algorithm. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
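
The paper's DIBR details are not given in the abstract; the basic mechanism (shift pixels by a depth-derived disparity, paint far-to-near so near objects win, then fill disocclusion holes) can be sketched as follows, where the "larger value = nearer" depth convention and the crude left-neighbor hole fill are our simplifications:

```python
import numpy as np

def dibr_views(image, depth, max_disp=12):
    """Render a stereo pair from one image plus a pseudo depth map
    (depth > 0, larger = nearer) by shifting pixels horizontally."""
    h, w = image.shape[:2]
    disp = (depth / depth.max() * max_disp).astype(int)
    left, right = np.zeros_like(image), np.zeros_like(image)
    filled_l = np.zeros((h, w), bool)
    filled_r = np.zeros((h, w), bool)
    # Paint far-to-near so nearer pixels overwrite farther ones.
    for d in range(max_disp + 1):
        ys, xs = np.where(disp == d)
        xl = np.clip(xs + d // 2, 0, w - 1)
        xr = np.clip(xs - d // 2, 0, w - 1)
        left[ys, xl] = image[ys, xs];  filled_l[ys, xl] = True
        right[ys, xr] = image[ys, xs]; filled_r[ys, xr] = True
    # Crude disocclusion filling: copy the nearest painted pixel on the left.
    for img, filled in ((left, filled_l), (right, filled_r)):
        for y in range(h):
            for x in range(1, w):
                if not filled[y, x]:
                    img[y, x] = img[y, x - 1]
    return left, right
```
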
Duration: 57:35

Compressive displays: Combining optical fabrication, computational processing, and perceptual tricks to build the displays of the future

In this talk, we explore modern approaches to glasses-free 3D display using compressive light field displays. In contrast to conventional technology, compressive displays aim for a joint design of optics, electronics, and computational processing that together exploit the compressibility of the presented data. For instance, multiview images or light fields show the same 3D scene from different perspectives; all these images are very similar and therefore compressible. By combining multilayer architectures or directional backlighting with optimal light field factorizations, limitations of existing devices, for instance resolution, depth of field, and field of view, can be overcome. In addition to light field display, we discuss approaches to compressive super-resolution image display and compressive high-dynamic-range display. As with compressive light field displays, these technologies rely on multiplexing image content in time such that the visual system of a human observer combines the presented patterns into a consistent 3D, high-resolution, or high-contrast image. With the invention of integral imaging and parallax barriers at the beginning of the 20th century, glasses-free 3D displays became feasible. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays that explore the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. We review these techniques and also give an outlook on next-generation compressive light field camera technology.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
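
The factorization machinery behind such displays can be illustrated generically; the following is standard nonnegative matrix factorization (Lee-Seung multiplicative updates), not the speaker's specific algorithm. A light field stacked as a views-by-pixels matrix is approximated by a low-rank nonnegative product whose factors play the role of layer or backlight patterns:

```python
import numpy as np

def nmf(L, rank, iters=200, eps=1e-9):
    """Multiplicative-update NMF: L (nonnegative) ~ A @ B, with A, B >= 0."""
    rng = np.random.default_rng(0)
    A = rng.random((L.shape[0], rank)) + eps
    B = rng.random((rank, L.shape[1])) + eps
    for _ in range(iters):
        B *= (A.T @ L) / (A.T @ A @ B + eps)
        A *= (L @ B.T) / (A @ B @ B.T + eps)
    return A, B

# Toy light field: 9 views x 4096 pixels with true rank 3.
rng = np.random.default_rng(1)
views = np.abs(rng.standard_normal((9, 3))) @ np.abs(rng.standard_normal((3, 4096)))
A, B = nmf(views, rank=3)
err = np.linalg.norm(views - A @ B) / np.linalg.norm(views)
print(f"relative reconstruction error: {err:.4f}")   # should be small
```
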
Duration: 16:09

Time-division multiplexing parallax barrier based on primary colors

The 4-view parallax barrier is considered a practical way to solve the viewing-zone issue of the conventional 2-view parallax barrier. To realize a flicker-free 4-view system that provides full display resolution to each view, quadruple time-division multiplexing with a refresh rate of 240 Hz is necessary. Since 240 Hz displays are not yet easily available, extra effort is needed to reduce flicker when running at a lower refresh rate. In our previous work, we realized a prototype with reduced flicker at 120 Hz by introducing a 1-pixel aperture and incorporating anaglyph into the quadruple time-division multiplexing, although either stripe noise or crosstalk noise remained prominent. In this paper, we introduce a new type of time-division multiplexing parallax barrier based on primary colors, where the barrier pattern is laid out as "red-green-blue-black (RGBK)". Unlike in other existing methods, changing the order of the element pixels in the barrier pattern makes a difference in this system. Among the possible alignments, "RGBK" is expected to show less crosstalk while "RBGK" may show less stripe noise. We carried out a psychophysical experiment and found positive results as expected, showing that this new type of time-division multiplexing barrier yields more balanced images, with stripe noise and crosstalk simultaneously controlled at a relatively low level. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 14:08

Transparent stereoscopic display and application

Augmented reality has become important to our society as it can enrich the actual world with virtual information. Transparent screens offer one possibility to overlay rendered scenes onto the environment, acting both as display and as window. In this work, we review existing transparent back-projection screens for use with active and passive stereo. Advantages and limitations are described and, based on these insights, a passive stereoscopic system using an anisotropic back-projection foil is proposed. To increase realism, we adapt the rendered content to the viewer's position using a Kinect tracking system, which adds motion parallax to the binocular cues. A technique well known in control engineering is used to decrease the latency and increase the update rate of the tracker. Our transparent stereoscopic display prototype provides an immersive viewing experience and is suitable for many augmented reality applications. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
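
The abstract does not name the control-engineering technique; one plausible reading is a fixed-gain predictor such as an alpha-beta filter (a steady-state Kalman filter) that smooths the Kinect samples and extrapolates the head position one display latency ahead. A sketch under that assumption, with made-up gains and timings:

```python
import numpy as np

class AlphaBetaPredictor:
    """Alpha-beta tracker: smooths noisy head positions and
    extrapolates them to compensate display latency."""
    def __init__(self, alpha=0.5, beta=0.2):
        self.alpha, self.beta = alpha, beta
        self.x = None                      # position estimate (3-vector)
        self.v = np.zeros(3)               # velocity estimate

    def update(self, z, dt):
        z = np.asarray(z, float)
        if self.x is None:
            self.x = z
            return self.x
        pred = self.x + self.v * dt        # predict
        r = z - pred                       # innovation
        self.x = pred + self.alpha * r     # correct position
        self.v += (self.beta / dt) * r     # correct velocity
        return self.x

    def predict_ahead(self, latency):
        return self.x + self.v * latency   # render for the *future* head pose

tracker = AlphaBetaPredictor()
for z in ([0.00, 0.0, 2.0], [0.02, 0.0, 2.0], [0.05, 0.0, 2.0]):
    tracker.update(z, dt=1 / 30)           # 30 Hz Kinect samples
print("predicted head position:", tracker.predict_ahead(0.08))  # ~80 ms ahead
```
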
Duration: 13:32

Vertical parallax added tabletop-type 360-degree three-dimensional display

The generation of full-parallax, 360-degree three-dimensional (3D) images on a tabletop screen is proposed. The proposed system comprises a small array of high-speed projectors and a rotating screen. All projectors are located at different heights from the screen. The lens-shift technique is used to superimpose all images generated by the projectors onto the rotating screen. Because the rotating screen has an off-axis lens function, the image of each projection lens generates a viewpoint in space, and the screen rotation generates a number of viewpoints on a circle around the rotating screen. Because the projectors are located at different heights, they generate viewpoints at different heights. Therefore, multiple viewpoints are aligned in the vertical direction to provide vertical parallax. The proposed technique was experimentally verified. Three DMD projectors were used to generate three viewpoints in the vertical direction. The heights of the viewpoints were 720, 764, and 821 mm. Each projector generated 900 viewpoints on a circle. The diameter of the rotating screen was 300 mm. The frame rate was 24.7 Hz. The generation of 360-degree 3D images with horizontal and vertical parallax was verified. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
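
A consistency check on the quoted numbers (our arithmetic, not the paper's): each projector must supply one image per viewpoint per screen revolution, so its image rate is

```latex
f_{\text{proj}} = N_{\text{views}} \times f_{\text{rot}}
               = 900 \times 24.7\,\text{Hz}
               \approx 2.2 \times 10^{4}\ \text{images per second},
```

which is why high-speed (DMD-based, presumably binary-frame) projectors are required.
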
Duration: 15:50

The impact of stereo 3D sports TV broadcasts on user's depth perception and spatial presence experience

This work examines the impact of content and presentation parameters in 2D versus 3D on depth perception and spatial presence, and provides guidelines for stereoscopic content development for 3D sports TV broadcasts and cognate subjects. With respect to depth perception and spatial presence experience, a preliminary study with 8 participants (sports: soccer and boxing) and a main study with 31 participants (sports: soccer and BMX-Miniramp) were performed. Dimension (2D vs. 3D) and camera position (near vs. far) were manipulated for soccer and boxing. For soccer, the field of view (small vs. large) was additionally examined, and for BMX-Miniramp, the direction of motion (horizontal vs. in depth) was considered. Subjective assessments, behavioural tests and qualitative interviews were employed. The results confirm a strong effect of 3D on both depth perception and spatial presence experience, as well as selective influences of camera distance and field of view. The results can improve understanding of the perception and experience of 3D TV as a medium. Finally, recommendations are derived on how various 3D sports can best be used as content for TV broadcasts. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 46:12

Preservation and exhibition of historical 3D movies

3D movies have a long history, dating as far back as 1915. Jeff will provide an overview and the preservation status of the 1950s "Golden Age" 3D movies, plus several examples of "pre-Golden Age" 3D content. Through his keen interest in early 3D movies and all forms of early film content, Jeff has been instrumental in locating, restoring, preserving and exhibiting many early 3D film titles, and he has many interesting and unusual stories to tell of how he helped locate and recover several early 3D movies. The onward march of time and the ever-faster changes in technology now present many challenges for the preservation of early 3D film content, but also offer new opportunities; the rapid replacement of 35mm film projection with digital projection is a key part of this change. Jeff will reflect on the three 3D Movie Expos he has run in Hollywood, which have allowed the public to experience these historical 3D movies once again.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 16:34

A stereoscopic system for viewing the temporal evolution of brain activity clusters in response to linguistic stimuli

In this paper, we present a novel application, 3D+Time Brain View, for the stereoscopic visualization of functional Magnetic Resonance Imaging (fMRI) data gathered from participants exposed to unfamiliar spoken languages. An analysis technique based on Independent Component Analysis (ICA) is used to identify statistically significant clusters of brain activity and their changes over time during different testing sessions. That is, our system illustrates the temporal evolution of participants' brain activity as they are introduced to a foreign language by displaying these clusters as they change over time. The raw fMRI data is presented as a stereoscopic pair in an immersive environment utilizing passive stereo rendering. The clusters are presented using a ray-casting technique for volume rendering. Our system incorporates the temporal information and the results of the ICA into the stereoscopic 3D rendering, making it easier for domain experts to explore and analyze the data. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 09:57

A variable-collimation display system

Two important human depth cues are accommodation and vergence. Normally, the eyes accommodate and converge or diverge in tandem; changes in viewing distance cause the eyes to simultaneously adjust both focus and orientation. However, ambiguity between accommodation and vergence cues is a well-known limitation in many stereoscopic display technologies. This limitation also arises in state-of-the-art full-flight simulator displays. In current full-flight simulators, the out-the-window (OTW) display (i.e., the front cockpit window display) employs a fixed-collimation display technology which allows the pilot and copilot to perceive the OTW training scene without angular errors or distortions; however, accommodation and vergence cues are limited to a fixed range (e.g., ~20 m). While this approach works well for long-range viewing, the ambiguity of depth cues at shorter range hinders the pilot's ability to gauge distances in critical maneuvers such as vertical take-off and landing (VTOL). This is the first in a series of papers on a novel variable-collimation display (VCD) technology that is being developed under NAVY SBIR Topic N121-041 funding. The proposed VCD will integrate with rotary-wing and vertical take-off and landing simulators and provide accurate accommodation and vergence cues for distances ranging from approximately 3 m outside the chin window to ~20 m. A display that offers dynamic accommodation and vergence could improve pilot safety and training, and could benefit other applications presently limited by the lack of these depth cues. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
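
To make the quoted range concrete (our numbers, not the paper's): with interocular distance I, the vergence angle at viewing distance d is

```latex
\theta(d) = 2\arctan\!\left(\frac{I}{2d}\right),
\qquad
\theta(3\,\text{m}) \approx 1.24^{\circ},\quad
\theta(20\,\text{m}) \approx 0.19^{\circ}
\quad (I = 65\,\text{mm}),
```

so the VCD must sweep roughly a degree of vergence (and the matching accommodation) between the chin-window and long-range regimes.
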
Duration: 16:00

A multilayer display augmented by alternating layers of lenticular sheets

A multilayer display is an autostereoscopic display constructed by stacking multiple layers of LC (liquid crystal) panels on top of a light source. It is capable of delivering smooth, continuous, and position-dependent images to viewers within a prescribed viewing zone. However, the images thus delivered may contain artifacts that are inconsistent with real 3D scenes. For example, objects occluding one another may fuse together or become obscured in the delivered images. To reduce such artifacts, it is often necessary to narrow the viewing zone. Using a directional rather than a uniform light source is one way to mitigate this problem. In this work, we present another solution: an integrated architecture of multilayer and lenticular displays, in which multiple LC panels are sandwiched between pairs of lenticular sheets. By associating a pair of lenticular sheets with an LC panel, each pixel in the panel is transformed into a view-dependent pixel, which is visible only from a particular viewing direction. Since all pixels in the integrated architecture are view-dependent, the display is partitioned into several sub-displays, each of which corresponds to a narrow viewing zone. This partitioning reduces the possibility that the artifacts are noticeable in the delivered images. We show several simulation results confirming that the proposed extension of the multilayer display can deliver more plausible images than a conventional multilayer display. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 17:19

A novel stereoscopic display technique with improved spatial and temporal properties

Common stereoscopic 3D (S3D) displays utilize either spatial or temporal interlacing to send different images to each eye. Temporal interlacing sends content to the left and right eyes alternately in time, and is prone to artifacts such as flicker, unsmooth motion, and depth distortion. Spatial interlacing sends even pixel rows to one eye and odd rows to the other eye, and has a lower effective spatial resolution than temporal interlacing unless the viewing distance is large. We propose a spatiotemporal hybrid protocol that interlaces the left- and right-eye views spatially, but in which the rows corresponding to each eye alternate every frame. We performed psychophysical experiments to compare this novel stereoscopic display protocol to existing methods in terms of spatial and temporal properties. Using a haploscope to simulate the three protocols, we determined perceptual thresholds for flicker, motion artifacts, and depth distortion, and we measured the effective spatial resolution. With the hybrid protocol, spatial resolution is improved, flicker and motion artifacts are reduced, and depth distortion is eliminated. These results suggest that the hybrid protocol maintains the benefits of spatial and temporal interlacing while eliminating their artifacts, thus creating a more realistic viewing experience. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
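
A toy sketch of the hybrid interleaving described above (grayscale images and display-side composition only; the gist is that the row-to-eye assignment flips every frame, so each eye sees all rows over time):

```python
import numpy as np

def hybrid_masks(height, width, frame):
    """Row masks for the spatiotemporal hybrid protocol: rows are
    spatially interlaced between eyes, and the assignment swaps
    every frame."""
    rows = (np.arange(height) + frame) % 2       # 0/1 per row, flips per frame
    left_mask = np.repeat(rows[:, None], width, axis=1) == 0
    return left_mask, ~left_mask

def compose(left_img, right_img, frame):
    lm, _ = hybrid_masks(*left_img.shape, frame)
    return np.where(lm, left_img, right_img)

L_img = np.full((4, 6), 1.0)   # toy left-eye image (all ones)
R_img = np.full((4, 6), 0.0)   # toy right-eye image (all zeros)
print(compose(L_img, R_img, frame=0))   # left eye on even rows
print(compose(L_img, R_img, frame=1))   # left eye on odd rows
```
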
Duration: 15:52

Vision-based calibration of parallax barrier displays

Static and dynamic parallax barrier displays have become very popular over the past years. Especially for single-viewer applications such as tablets, phones and other hand-held devices, parallax barriers provide a convenient solution for rendering stereoscopic content. In our work, we present a computer-vision-based calibration approach that relates the image layer and the barrier layer of parallax barrier displays with unknown display geometry, for static or dynamic viewer positions, using homographies. We provide the math and methods to compose the required homographies on the fly and present a way to compute the barrier without the need for any iteration. Our GPU implementation is stable and general, and can be used to reduce the latency and increase the refresh rate of existing and upcoming barrier methods. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
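
A minimal sketch of the homography plumbing the abstract refers to, using OpenCV; the point correspondences below are made-up values, and in practice each layer displays a calibration pattern that the camera observes:

```python
import cv2
import numpy as np

# Correspondences between points shown on the image layer and the same
# features found in a camera photo (e.g., corners of a displayed grid).
pts_image_layer = np.float32([[0, 0], [1919, 0], [1919, 1079], [0, 1079],
                              [960, 540]])
pts_camera = np.float32([[102, 88], [1410, 75], [1440, 820], [95, 840],
                         [770, 452]])
H_cam_from_img, _ = cv2.findHomography(pts_image_layer, pts_camera)

# A second pattern shown on the barrier layer yields H_cam_from_bar the
# same way; composing the two relates the layers directly, without any
# model of the display geometry.
pts_barrier_layer = pts_image_layer.copy()       # illustrative values
H_cam_from_bar, _ = cv2.findHomography(pts_barrier_layer, pts_camera)
H_bar_from_img = np.linalg.inv(H_cam_from_bar) @ H_cam_from_img

# Map an image-layer pixel into barrier-layer coordinates:
p = np.array([500.0, 300.0, 1.0])
q = H_bar_from_img @ p
print("barrier coordinates:", q[:2] / q[2])
```
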