Showing 1-12 of 29 results
Duration: 20:21

Depth consistency and vertical disparities in stereoscopic panoramas

CONTEXT: In recent years, the problem of acquiring omnidirectional stereoscopic imagery of dynamic scenes has gained commercial interest and, consequently, new techniques have been proposed to address it [1]. The goal of many of these novel panoramic methods is to provide practical solutions for acquiring real-time omnidirectional stereoscopic imagery suitable for stimulating binocular human stereopsis in any gazing direction [2][3]. In particular, methods based on the acquisition of partially overlapped stereoscopic snapshots of the scene are the most attractive for real-time omnistereoscopic capture [1]. However, these acquisition techniques need to be modeled rigorously in order to provide useful design constraints for the corresponding omnidirectional stereoscopic systems.

OBJECTIVE: Our main goal in this work is to propose an omnidirectional camera model that is sufficiently flexible to describe a variety of omnistereoscopic camera configurations. We have developed a projective camera model suitable for describing a range of omnistereoscopic camera configurations and usable to determine constraints relevant to the design of omnistereoscopic acquisition systems. In addition, we applied our camera model to estimate the system constraints for the rendering approach based on mosaicking partially overlapped stereoscopic snapshots of the scene.

METHOD: First, we grouped the possible stereoscopic panoramic methods suitable for producing horizontal stereo for human viewing in every azimuthal direction into four camera configurations. Then we proposed an omnistereoscopic camera model, based on projective geometry, that is suitable for describing each of the four camera configurations. Finally, we applied this model to obtain expressions for the horizontal and vertical disparity errors encountered when creating a stereoscopic panorama by mosaicking partial stereoscopic snapshots of the scene.

RESULTS: We simulated the parameters of interest using the proposed geometric model combined with a ray-tracing approach for each camera model. From these simulations, we drew conclusions that can be used in the design of omnistereoscopic cameras for the acquisition of dynamic scenes. One important parameter used to contrast different camera configurations is the minimum distance to the scene that provides a continuous perception of depth in any gazing direction after mosaicking partial stereoscopic views. Another important contribution is the characterization of the vertical disparities that cause ghosting at the stitching boundaries between mosaics. In the simulation, we studied the effect of the lenses' field of view and of the sensor's pixel size and dimensions on the design of the system.

NOVELTY: The main contribution of this work is a tractable method for analyzing multiple camera configurations intended for omnistereoscopic imaging. In addition, we estimated and compared the system constraints for attaining continuous depth perception in all azimuthal directions. Also important for the rendering process, we characterized mathematically the vertical disparities that affect the mosaicking process in each omnistereoscopic configuration. This work complements and extends our previous work on stereoscopic panorama acquisition [1][2][3] by proposing a mathematical framework for contrasting different omnistereoscopic image acquisition strategies.
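The paper develops the full projective model; purely as a self-contained illustration of the kind of computation involved, the Python sketch below projects a scene point into two horizontally offset pinhole cameras and reports the horizontal and vertical disparities in pixels, showing how a vertical component appears once the rig is yawed between partial snapshots. All parameter values (baseline, intrinsics, snapshot angle) are hypothetical.

```python
import numpy as np

def project(point, cam_pos, yaw, f_px, cx, cy):
    """Project a 3D world point into a pinhole camera at cam_pos,
    yawed about the vertical axis. Returns pixel coordinates (u, v)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, -s],        # world-to-camera rotation for a
                  [0.0, 1.0, 0.0],     # camera yawed by `yaw`
                  [s, 0.0, c]])
    p = R @ (np.asarray(point, float) - cam_pos)
    return np.array([f_px * p[0] / p[2] + cx, f_px * p[1] / p[2] + cy])

f_px, cx, cy = 1200.0, 960.0, 540.0    # hypothetical intrinsics
b = 0.065                              # hypothetical 65 mm baseline
left, right = np.array([-b / 2, 0.0, 0.0]), np.array([b / 2, 0.0, 0.0])
point = np.array([0.4, 0.3, 3.0])      # scene point about 3 m away

for yaw in [0.0, np.radians(30.0)]:    # 30 deg: adjacent-snapshot pose
    uv_l = project(point, left, yaw, f_px, cx, cy)
    uv_r = project(point, right, yaw, f_px, cx, cy)
    print(f"yaw {np.degrees(yaw):4.0f} deg: "
          f"horizontal {uv_l[0] - uv_r[0]:7.2f} px, "
          f"vertical {uv_l[1] - uv_r[1]:6.2f} px")
```

With parallel cameras the vertical disparity is zero; once the pair is rotated between snapshots, off-axis points acquire a small vertical disparity of the kind that causes ghosting at stitching boundaries.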
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 18:09

Comprehensive evaluation of latest 2D/3D monitors and comparison to a custom-built 3D mirror-based display in laparoscopic surgery

Though theoretically superior, 3D video systems have not yet achieved a breakthrough in laparoscopic surgery. Furthermore, visual alterations such as eye strain, diplopia and blur have been associated with the use of stereoscopic systems. Advancements in display and endoscope technology motivated a re-evaluation of such findings. A randomized study on 48 test subjects was conducted to investigate whether surgeons can benefit from using the most current 3D visualization systems. Three different 3D systems, a glasses-based 3D monitor, an autostereoscopic display and a mirror-based, theoretically ideal 3D display, were compared to a state-of-the-art 2D HD system. The test subjects were split into a novice group and an expert group with high experience in laparoscopic procedures. Each of them had to conduct a comparable laparoscopic suturing task. Multiple performance parameters such as task completion time and the precision of stitching were measured and compared. Electromagnetic tracking provided information on the instruments' path length, movement velocity and economy. The NASA task load index was used to assess the mental workload. Subjective ratings were added to assess usability, comfort and image quality of each display. Almost all performance parameters were superior for the glasses-based 3D display compared to the 2D and autostereoscopic displays, but were often significantly exceeded by the mirror-based 3D display. Subjects performed the task on average 20% faster and with higher precision. Workload parameters did not show significant differences. Experienced and non-experienced laparoscopists profited equally from 3D. The 3D mirror system gave clear evidence of the additional potential of 3D visualization systems with higher resolution and motion-parallax presentation. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
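The abstract derives path length, movement velocity and economy from electromagnetic tracking data; the study's exact definitions are not given, but a minimal sketch of such measures, assuming uniformly sampled 3D positions and a common straightness-based notion of economy, could look like this:

```python
import numpy as np

def path_metrics(positions, timestamps):
    """Path length, mean velocity, and a simple 'economy' ratio
    (straight-line distance / path length) from sampled 3D positions."""
    positions = np.asarray(positions, float)
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    path_length = steps.sum()
    mean_velocity = path_length / (timestamps[-1] - timestamps[0])
    economy = np.linalg.norm(positions[-1] - positions[0]) / path_length
    return path_length, mean_velocity, economy

# Hypothetical 10 Hz electromagnetic-tracker samples (metres, seconds).
t = np.arange(0.0, 5.0, 0.1)
pos = np.c_[np.cos(t), np.sin(t), 0.01 * t]   # synthetic instrument path
print(path_metrics(pos, t))
```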
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 16:45

Architecture for high performance stereoscopic game rendering on Android

Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ operating system and driven by ARM™ and other low-power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high-profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set-top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real time. The architecture includes a method of analyzing 2D games and using rule-based artificial intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique that separates the views in the depth domain and renders directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing its performance in comparison to more traditional rendering techniques, including depth-image-based rendering, both in terms of frame rates and impact on battery consumption. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
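The driver internals are not given in the abstract; as a rough sketch of view separation in the depth domain, the following assumes each 2D layer has already been assigned a depth by the rule-based analysis and derives the two eye views by depth-proportional horizontal shifts before compositing into a display buffer (layer names and the parallax scale are hypothetical):

```python
import numpy as np

def stereo_shift(layer, depth, eye, max_px=12):
    """Horizontally shift a 2D layer (H x W x 4 RGBA) by a parallax
    proportional to its assigned depth in [0, 1]; eye is +1 or -1."""
    shift = int(round(eye * max_px * (depth - 0.5)))
    return np.roll(layer, shift, axis=1)

def compose(layers):
    """Back-to-front alpha compositing into a display buffer."""
    h, w = layers[0][0].shape[:2]
    out = np.zeros((h, w, 3))
    for img, depth in sorted(layers, key=lambda l: -l[1]):
        a = img[..., 3:4]
        out = out * (1 - a) + img[..., :3] * a
    return out

# Hypothetical layers: (RGBA image, depth assigned by rule-based AI).
bg = np.zeros((360, 640, 4)); bg[..., 3] = 1.0          # far background
sprite = np.zeros((360, 640, 4)); sprite[100:150, 300:340] = 1.0
layers = [(bg, 1.0), (sprite, 0.2)]                      # 1.0 = far

left  = compose([(stereo_shift(img, d, +1), d) for img, d in layers])
right = compose([(stereo_shift(img, d, -1), d) for img, d in layers])
```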
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 21:46

Automatic detection of artifacts in converted S3D video

In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
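The paper's detector also handles complex backgrounds and large occlusions; a stripped-down sketch of the basic cue alone, comparing horizontal gradient magnitudes at disparity-matched pixels and flagging strong sharpness ratios, might look like this (thresholds are hypothetical):

```python
import numpy as np

def edge_sharpness_mismatch(left, right, disparity,
                            grad_thresh=0.1, ratio_thresh=2.0):
    """Flag pixels where an edge is markedly sharper in one view.
    left/right: grayscale images in [0, 1]; disparity: left-to-right, px."""
    gx_l = np.abs(np.gradient(left, axis=1))     # horizontal gradients
    gx_r = np.abs(np.gradient(right, axis=1))
    h, w = left.shape
    # Sample the right-view gradient at disparity-matched columns.
    xs = np.clip(np.arange(w)[None, :] - np.round(disparity).astype(int),
                 0, w - 1)
    gx_r_warp = gx_r[np.arange(h)[:, None], xs]
    edges = gx_l > grad_thresh                   # candidate edge pixels
    ratio = (gx_l + 1e-6) / (gx_r_warp + 1e-6)
    return edges & ((ratio > ratio_thresh) | (ratio < 1 / ratio_thresh))

# Toy usage with random images and zero disparity.
h, w = 270, 480
mask = edge_sharpness_mismatch(np.random.rand(h, w),
                               np.random.rand(h, w), np.zeros((h, w)))
```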
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 17:59

Compression for full-parallax light field displays

Full-parallax light field displays utilize a large volume of data and demand efficient real-time compression algorithms to be viable. Many compression techniques have been proposed; however, such solutions are impractical in their bandwidth, processing or power requirements for a real-time implementation. Our method exploits the spatio-angular redundancy in a full-parallax light field to compress the light field image while reducing the total computational load with minimal perceptual degradation. Objective analysis shows that, depending on content, a bandwidth reduction of two to four orders of magnitude is possible. Subjective analysis shows that the compression technique produces images with acceptable quality, and the system can successfully reproduce the 3D light field, providing natural binocular and full motion parallax. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
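The codec itself is not described in the abstract; purely to illustrate what exploiting spatio-angular redundancy can mean, here is a toy scheme, not the paper's method, that keeps every k-th elemental image verbatim and stores only quantized residuals for the views in between:

```python
import numpy as np

def compress(views, k=4, q=0.05):
    """Toy spatio-angular scheme: keep every k-th view verbatim and
    quantize the residual of the rest against the nearest kept view."""
    refs = {i: v for i, v in enumerate(views) if i % k == 0}
    out, bits = [], 0
    for i, v in enumerate(views):
        if i in refs:
            out.append(v)
            bits += v.size * 8                     # 8 bits per sample
        else:
            ref = refs[k * (i // k)]
            resid = np.round((v - ref) / q)        # coarse quantization
            out.append(ref + resid * q)
            bits += np.count_nonzero(resid) * 8    # crude coefficient count
    return out, bits

# Hypothetical 16 neighbouring elemental images that differ slightly.
base = np.random.rand(64, 64)
views = [np.clip(base + 0.01 * i, 0, 1) for i in range(16)]
recon, bits = compress(views)
print("approx. bandwidth ratio:", bits / (16 * 64 * 64 * 8))
```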
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 14:04

A hand-held immaterial volumetric display

We have created an ultralight, movable, "immaterial" fogscreen based on the FogScreen mid-air imaging technology. The hand-held unit is roughly the size and weight of an ordinary toaster. If the screen is tracked, it can be swept through the air to create mid-air slices of volumetric objects, or to show augmented reality (AR) content on top of real objects. Interfacing devices and methodologies, such as hand and gesture trackers, camera-based trackers and object recognition, can make the screen interactive. The user can easily interact with any physical object or virtual information, as the screen is permeable. Real objects can be seen directly through the screen, instead of, e.g., through a video-based augmented reality display. This creates a mixed-reality setup where both the real-world object and the augmented reality content can be viewed and interacted with simultaneously. The hand-held mid-air screen can be used, e.g., as a novel collaboration or classroom tool for individual students or small groups. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
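Sweeping a tracked screen through a data volume amounts to resampling the volume on the screen plane each frame; a minimal sketch of that resampling step, with a hypothetical volume, pose and screen extent, could be:

```python
import numpy as np

def sample_slice(volume, origin, u_axis, v_axis, size_px=128):
    """Resample a planar slice of a voxel volume at the tracked screen
    pose: origin plus two in-plane axes scaled to the screen extent
    (all in volume coordinates)."""
    s = np.linspace(-0.5, 0.5, size_px)
    uu, vv = np.meshgrid(s, s)
    pts = origin + uu[..., None] * u_axis + vv[..., None] * v_axis
    idx = np.clip(np.round(pts).astype(int), 0,
                  np.array(volume.shape) - 1)       # nearest-neighbour lookup
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]

# Hypothetical 64^3 volume; the screen sweeps along the z axis.
vol = np.random.rand(64, 64, 64)
for z in range(0, 64, 8):   # each sweep position yields one mid-air slice
    img = sample_slice(vol, np.array([32.0, 32.0, float(z)]),
                       np.array([40.0, 0.0, 0.0]),
                       np.array([0.0, 40.0, 0.0]))
```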
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 14:42

Frameless multiview display modules employing flat-panel displays for a large-screen autostereoscopic display

A large-screen autostereoscopic display enables life-size realistic communication. In this study, we propose the tiling of frameless multi-view display modules employing flat-panel displays. A flat-panel multi-view display and an imaging system with a magnification greater than one are combined to construct a multi-view display module with a frameless screen. The module screen consists of a lens and a vertical diffuser to generate viewpoints in the observation space and to increase the vertical viewing zone. When the modules are tiled, the screen lens should be appropriately shifted to produce a common viewing area for all modules. We designed and constructed the multi-view display modules, which have a screen size of 27.3 in. and a resolution of 320 × 200. The module depth was 1.5 m and the number of viewpoints was 144. The viewpoints were generated with a horizontal interval of 16 mm at a distance of 5.1 m from the screen. Four modules were constructed and aligned in the vertical direction to demonstrate a mid-sized screen system. The tiled screen had a screen size of 62.4 in. (589 mm × 1,472 mm). The prototype system can display almost human-size objects. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
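The exact optics are developed in the paper; as a first-order thin-lens sketch of the required lens shift, assuming the multi-view panel sits a distance a behind the screen lens:

```python
# First-order thin-lens estimate: shifting the screen lens laterally by s
# moves the imaged viewing zone by s * (1 + L/a), where a is the
# panel-to-lens distance and L the design viewing distance.  To recentre
# a module offset by d from the array axis, solve s * (1 + L/a) = d.
a = 1.5                                  # module depth, m (from the paper)
L = 5.1                                  # viewing distance, m (from the paper)
for d in [0.22, 0.66, 1.10]:             # hypothetical module offsets, m
    s = d / (1 + L / a)
    print(f"module offset {d:.2f} m -> lens shift {s * 1000:.0f} mm")
```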
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 15:14

Estimating impact of stereo 3D display technology on depth perception

This paper investigates the presentation of moving stereo images on different display devices. We address three important issues. First, we propose temporal compensation for the Pulfrich effect when using anaglyph glasses. Second, we describe how content-adaptive capture protocols can reduce the false motion-in-depth sensation on time-multiplexing-based displays. Third, we conclude with a recommendation on how to improve the rendering of synthetic stereo animations. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
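The compensation scheme is not detailed in the abstract; one plausible reading, temporally advancing the darker eye's stream by its estimated perceptual lag via linear frame interpolation, is sketched below (the frame rate and delay are assumed values):

```python
import numpy as np

def compensate(frames, delay_ms, fps=24):
    """Temporally advance one eye's stream by `delay_ms` via linear
    frame interpolation, offsetting the perceptual lag that the darker
    anaglyph filter induces (the Pulfrich effect)."""
    shift = delay_ms * fps / 1000.0          # delay in fractional frames
    i0 = int(np.floor(shift))
    w = shift - i0
    out = []
    for t in range(len(frames)):
        a = frames[min(t + i0, len(frames) - 1)]
        b = frames[min(t + i0 + 1, len(frames) - 1)]
        out.append((1 - w) * a + w * b)
    return out

# Hypothetical: the filtered eye lags ~15 ms at this luminance.
frames = [np.full((4, 4), float(t)) for t in range(10)]
dark_eye = compensate(frames, delay_ms=15.0)
```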
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 17:12

Stereoscopic display system with integrated motion parallax and direct manipulation

We present a description of a time-sequential stereoscopic display that separates the images using a segmented polarization switch and passive eyewear. Additionally, integrated tracking cameras and an SDK on the host PC allow us to implement motion parallax in real time. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
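The SDK internals are not described; head-tracked motion parallax is conventionally implemented by recomputing an asymmetric (off-axis) view frustum from the tracked eye position each frame, roughly as in this sketch (the screen size and eye position are hypothetical):

```python
import numpy as np

def off_axis_frustum(eye, screen_w, screen_h, near):
    """Asymmetric frustum for a display centred at the origin in the
    x-y plane, given the tracked eye position in screen coordinates."""
    d = eye[2]                               # eye-to-screen distance
    left   = (-screen_w / 2 - eye[0]) * near / d
    right  = ( screen_w / 2 - eye[0]) * near / d
    bottom = (-screen_h / 2 - eye[1]) * near / d
    top    = ( screen_h / 2 - eye[1]) * near / d
    return left, right, bottom, top          # feed to a glFrustum-style API

# Tracked head 0.1 m right of centre, 0.6 m from a 0.53 m-wide screen.
print(off_axis_frustum(np.array([0.10, 0.0, 0.60]), 0.53, 0.30, 0.1))
```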
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 20:11

Interpolating vertical parallax for an autostereoscopic 3D projector array

CONTEXT: We present a technique for achieving tracked vertical parallax for multiple users on a variety of autostereoscopic projector array setups, including front- and rear-projection and curved display surfaces. This "hybrid parallax" approach allows for immediate horizontal parallax as viewers move left and right, and tracked parallax as they move up and down, allowing cues such as 3D perspective and eye contact to be conveyed faithfully.

OBJECTIVE: Projector arrays are well suited for 3D displays because of their ability to generate dense and steerable arrangements of pixels. We have developed a new autostereoscopic display utilizing a single dense row of 69 pico projectors. The projectors are focused on a 30 × 30 cm vertically anisotropic screen that scatters the light from each lens into a vertical stripe while preserving horizontal angular variation. Each viewer's eye observes the combined effect of image stripes from multiple projectors, which combine to form a seamless 3D image. As every viewer sees a different 3D image, it is possible to customize each view with a different vertical perspective. Given a sparse set of tracked viewer positions, the challenge is to create a continuous estimate of viewer height and distance for all potential viewing angles, to provide consistent vertical perspective to both tracked and untracked viewers.

METHOD: Rendering to a dense projector display requires multiple-center-of-projection imagery, as adjacent projector pixels diverge to different viewer positions. If constant viewer height and distance are assumed for each projector, viewers may see significant cross-talk and geometric distortion, particularly when multiple viewers are in close proximity. We solve this problem with a custom GPU vertex shader projection that dynamically interpolates multiple viewer heights and distances within each projector frame. Thus, each projector's image is rendered in a distorted manner representing multiple centers of projection, and might show an object from above on the left and from below on the right.

RESULTS: We use a low-cost RGB-depth sensor to simultaneously track multiple viewer head positions in 3D and interactively update the imagery sent to the array. Even though each user sees slices of multiple projectors, the perceived 3D image is consistent and smooth from any vantage point, with reduced cross-talk. This rendering framework also frees us to explore different projector configurations, including front- and rear-mounted projector arrays and non-flat screens. Our rendering algorithm does not add significant overhead, enabling realistic dynamic scenes. Our display produces full-color autostereoscopic 3D imagery with zero horizontal latency and a wide 110° field of view, which can accommodate numerous viewers.

NOVELTY: While user tracking has long been used for single-user glasses-based displays and single-user autostereoscopic displays [Perlin et al. 2000] in order to update both horizontal and vertical parallax, our system is the first autostereoscopic projector array to incorporate tracking for vertical parallax. Our method could be adapted to other projector arrays [Rodriguez et al. 2007, Kawakita et al. 2012, Kovacs and Zilly 2012, Yoshida et al. 2011]. Furthermore, our display is reproducible with off-the-shelf projectors, screen materials, graphics cards, and video splitters.
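The interpolation itself runs in a custom GPU vertex shader; a CPU-side sketch of the underlying idea, building a continuous (height, distance) profile over viewing angle from a sparse set of tracked viewers, could look like this (all positions are hypothetical):

```python
import numpy as np

def viewer_profile(tracked, angles, default=(1.65, 2.5)):
    """Continuous (height, distance) estimate over viewing angle from a
    sparse set of tracked viewers, given as (azimuth_deg, height_m,
    distance_m) tuples.  Falls back to a default observer when no one
    is tracked."""
    if not tracked:
        return (np.full_like(angles, default[0]),
                np.full_like(angles, default[1]))
    tracked = sorted(tracked)                 # np.interp needs sorted x
    az = [t[0] for t in tracked]
    h = np.interp(angles, az, [t[1] for t in tracked])
    d = np.interp(angles, az, [t[2] for t in tracked])
    return h, d

# Two tracked viewers; every projector column gets its own target eye
# over the display's 110-degree field of view.
angles = np.linspace(-55, 55, 69 * 16)        # hypothetical column angles
h, d = viewer_profile([(-20.0, 1.60, 2.0), (15.0, 1.85, 3.0)], angles)
```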
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 18:16

Stereoscopic cell visualization: From mesoscopic to molecular scale

CONTEXT: Stereoscopic vision is a substantial aspect of three-dimensional visualization approaches. Although most recent animated movies created for cinemas are shown in stereoscopic 3D (S3D), there are still many areas which do not take advantage of this technology. One of these areas is cell visualization. Despite the fact that many protein crystallographers have preferred working with stereoscopic devices for over a decade, it is quite astonishing that cell visualization seems to have ignored S3D completely, even though stereoscopic visualization of the cellular cosmos, which is not accessible to the human eye, bears high potential. Furthermore, the scientific community often works with interactive visualization environments. These tools usually provide S3D for different hardware configurations, but the intensity of the stereoscopic effect can only be adjusted manually using slider buttons. This technique is sufficient for exploring a single instance of a molecule, but it is inconvenient when navigating through a large environment on multiple scales.

OBJECTIVE: In this work, approaches are discussed for applying S3D to 1) rendered cell animations and 2) interactive cell environments, using freely available open-source tools. A very important aspect of cell visualization is the bridging of scales. The mesoscopic level starts at a few thousand nanometers, related to the cell and its components, whereas the molecular level goes down to a few angstroms, where single atoms are visible. The two scales may therefore differ by a factor of 100,000. This is a particular problem if the stereoscopic effect has to be adjusted during an interactive navigation process.

METHOD: For the rendered animations, we show how to use Blender in combination with Schneider's Stereoscopic Camera plug-in. An exemplary short movie was created, starting in the blood vessels, proceeding to the inner cell components, and finally showing the translation and transcription process based on protein/PDB models. The interactive exploration environments are provided by the CELLmicrocosmos project. On the molecular level, the MembraneEditor is used to show a fixed-projection-plane S3D method. The mesoscopic level is represented by CellExplorer, which is equipped with a dynamic-projection-plane S3D method.

RESULTS: The stereoscopic cell animations rendered with Blender were successfully shown on notebook monitors and power walls as well as on large cinema projection screens. The CELLmicrocosmos projects were optimized to provide adequate interactive cell environments, which were successfully used during different university projects and presentations. Because the software developer is not able to define the relative position of the user to the point of interest, the fixed-projection-plane S3D method was used in combination with smaller membrane structures. The dynamic projection plane, in contrast, is also compatible with cell environments featuring large scale differences.

NOVELTY: Cell visualization is an emerging area in scientific communication. This work should encourage cytological researchers to take S3D technology into account for future projects. Moreover, the stereoscopic capabilities of the CELLmicrocosmos project are shown, which have been developed over several years and have never been discussed in our previous publications.
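The dynamic-projection-plane method is not specified in the abstract; one standard way to keep stereo usable across a 100,000-fold scale range is to re-derive the convergence plane and interaxial separation from the current distance to the point of interest, e.g. with the common 1/30 rule of thumb (a sketch, with hypothetical values):

```python
def stereo_params(poi_distance, ratio=1 / 30.0):
    """Scale stereo to the point of interest: converge the projection
    plane at the POI and pick an interaxial separation as a fixed
    fraction of the convergence distance, keeping screen parallax
    roughly constant across scale jumps."""
    convergence = poi_distance          # dynamic projection plane
    interaxial = convergence * ratio    # common stereography rule of thumb
    return interaxial, convergence

for dist in [5e-9, 2e-7, 1e-5]:         # molecular to mesoscopic scales
    ia, cv = stereo_params(dist)
    print(f"POI at {dist:g} m -> interaxial {ia:g} m, plane at {cv:g} m")
```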
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
Duration: 14:30

Stereoscopic depth perception in video see-through augmented reality within action space

CONTEXT: Depth perception is an important component of many augmented reality (AR) applications. It is, however, affected by multiple error sources. Most studies on stereoscopic AR have focused on personal space, whereas we address action space (distances beyond 2 m; in this study, 6-10 m) using a video see-through head-worn display (HWD). This is relevant, for example, in the navigation and architecture domains.

OBJECTIVE: For design-guideline purposes, there is a considerable lack of quantitative knowledge about the visual capabilities facilitated by stereoscopic HWDs. To fill this gap, two interrelated experiments were conducted: Experiment 1 had the goal of finding the effect of viewing real objects through a HWD, while Experiment 2 dealt with variation of the relative size of the augmentations in the monoscopic and binocular conditions.

METHOD: In Experiment 1, the participants judged the depths of physical objects in a matching task using the Howard-Dolman test. The order of viewing conditions (naked eyes and HWD) and the initial positions of the rods were varied. In Experiment 2, the participants judged the depth of an augmented object of interest (AOI) by comparing its disparity and size to auxiliary augmentations (AAs). The task was to match the distance of a physical pointer to the same distance as the AOI. The approach of using AAs was introduced recently (Kytö et al. 2013). The AAs were added to the scene following literature-based spatial recommendations.

RESULTS: The data from Experiment 1 indicated that the participants made more accurate depth judgments with the HWD when the test was performed first with naked eyes. A hysteresis effect was observed, with a bias of the judgments towards the starting position. As for Experiment 2, binocular viewing improved the depth judgments of the AOI over the distance range. Binocular disparity and relative size interacted additively; the most accurate results were obtained when the depth cues were combined. The results share characteristics with a previous study (Kytö et al. 2013), in which the effects of disparity and relative size were studied in an X-ray visualization case at shorter distances. Comparison of the two experiments showed that stereoscopic depth judgments were more accurate with physical objects (mean absolute error 1.13 arcmin) than with graphical objects (mean absolute error 3.77 arcmin).

NOVELTY: The study fills the knowledge gap on exocentric depth perception in AR with quantitative insight into the effects of binocular disparity and relative size. It found that additional depth cues facilitate stereoscopic perception significantly. Relative size between the main and auxiliary augmentations turned out to be a successful facilitator. This can be traced to the fact that binocular disparity is accurate at short distances while the accuracy of relative size remains constant at long distances. Overall, these results act as guidelines for depth cueing in stereoscopic AR applications.
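The reported arcminute errors can be put in context with the standard small-angle approximation for relative binocular disparity, δ ≈ IPD · Δd / d²; the sketch below evaluates it over the study's action-space range (the IPD and depth step are assumed values):

```python
import numpy as np

def relative_disparity_arcmin(d, delta_d, ipd=0.064):
    """Small-angle relative binocular disparity between objects at
    distances d and d + delta_d (metres), in arcminutes."""
    rad = ipd * delta_d / d**2
    return np.degrees(rad) * 60.0

for d in [2.0, 6.0, 8.0, 10.0]:    # personal space vs. action space
    print(f"{d:>4.0f} m: {relative_disparity_arcmin(d, 0.5):.2f} arcmin "
          f"for a 0.5 m depth step")
```

The quadratic fall-off with distance is exactly why disparity dominates at short range while relative size, whose reliability does not degrade with distance, becomes the more useful cue in action space.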
  • Published: 2014
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English