1-31 out of 31 results
35:43 IS&T Electronic Imaging (EI) Symposium English 2016

Two shipwrecks, 2500 metres underwater, six 3D cameras

In April/May 2015, a team led by Curtin University, the WA Museum and DOF Subsea conducted a 3D imaging survey of the two historic shipwrecks HMAS Sydney (II) and HSK Kormoran. The Australian vessel HMAS Sydney and the German vessel HSK Kormoran encountered each other off the Western Australian coast on 19 November 1941, in the midst of World War II. After a fierce battle in which each ship fatally damaged the other, both sank; they now lie in 2500 m (8200 feet) of water, 200 km (125 miles) offshore from Shark Bay. The event remains Australia's largest loss of life in a single maritime disaster: the entire crew of 645 perished on the Sydney, and 82 crew were lost on the Kormoran. The exact location of the two wrecks remained unknown for almost 70 years until they were discovered in 2008. The aim of the 2015 expedition was to conduct a detailed 3D imaging survey of the two wrecks and their extensive debris fields. A custom underwater lighting and camera package was developed for fitment to two work-class remotely operated vehicles (ROVs) of the kind often used in the offshore oil and gas industry. The package comprised six 3D cameras and fourteen digital still cameras fitted across the two ROVs, intended to capture feature photography, cinematography and 3D reconstruction photography. The six underwater stereoscopic cameras (three on each ROV) captured a mix of 3D HD video footage, 3D stills, and 3D 4K video footage. High light levels are key to successful underwater photography, and the system used a suite of ten LED underwater lights on each ROV to achieve artistic and effective lighting. At the conclusion of four days of diving, the team had collected over 500,000 stills and over 300 hours of HD footage. The collected materials will contribute to the development of museum exhibitions at the WA Museum and partner institutions, and to a feature documentary. Another key technology deployed on this project is photogrammetric 3D reconstruction, which generates photo-realistic digital 3D models from a series of 2D photographs. These digital 3D models can be visualised in stereoscopic 3D and potentially 3D printed in full colour to create physical reproductions of items from the sea floor. This presentation will provide an overview of the expedition, a summary of the technology deployed, and an insight into the 3D imaging materials captured. © 2016, Society for Imaging Science and Technology (IS&T).
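The photogrammetric reconstruction mentioned above builds 3D geometry purely from overlapping 2D photographs. As a rough illustration of the underlying principle (not the expedition's actual pipeline, which involves many thousands of images, bundle adjustment and dense meshing), a minimal two-view triangulation with OpenCV might look like this; `K` is an assumed camera intrinsic matrix:

```python
import cv2
import numpy as np

def triangulate_pair(img1, img2, K):
    """Tiny two-view photogrammetry sketch: match features between two
    grayscale images, recover the relative camera pose, and triangulate
    a sparse 3D point cloud (up to an unknown global scale)."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Robustly estimate the essential matrix, then the relative pose.
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, inliers = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return (pts4[:3] / pts4[3]).T  # N x 3 points
```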
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
15:07 IS&T Electronic Imaging (EI) Symposium English 2016

3D autostereoscopic display image generation using direct light field rendering

The rapid development of 3D display technologies allows consumers to experience the 3D world visually through different display systems such as stereoscopic, multiview, and light field displays. However, the conventional multiview-synthesis-based 3D rendering technique demands more memory and computation time as the number of views increases in multiview or light field displays, and the rendering pipeline grows more complex as it tries to generate realistic 3D display images. This paper proposes a novel method that generates the light rays of a 3D display not with a conventional multiview rendering technique but by direct light field rendering. Our algorithm interprets the light rays of the 3D display and the input color and disparity values in the light field domain, which significantly reduces computational complexity and memory usage. Since the direct light field rendering algorithm differs from general multiview image processing algorithms, we also propose new 3D image generation algorithms for hole filling, boundary matting, and view filtering from common stereo input images. © 2016, Society for Imaging Science and Technology (IS&T).
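As a loose illustration of the direct-rendering idea (our reading of the abstract, not the authors' algorithm), the sketch below forward-shifts one color image by view-scaled disparity to populate every view of a hypothetical n-view display; holes and overlaps are left untreated, which is exactly where the paper's hole filling, boundary matting and view filtering would come in:

```python
import numpy as np

def render_views_from_disparity(color, disparity, n_views, max_shift):
    """Sketch: synthesize the rays of an n_views display (n_views >= 2)
    by forward-shifting one H x W x 3 image according to its per-pixel
    disparity, scaled per view, instead of running a full multiview
    rendering pipeline for each view."""
    h, w, _ = color.shape
    views = np.zeros((n_views, h, w, 3), dtype=color.dtype)
    xs = np.arange(w)
    for v in range(n_views):
        # View-dependent scale in [-max_shift, max_shift]; 0 = centre view.
        scale = (2.0 * v / (n_views - 1) - 1.0) * max_shift
        for y in range(h):
            # Forward-warp each pixel to its target column in this view;
            # colliding pixels simply overwrite (no z-ordering here).
            tx = np.clip(np.round(xs + scale * disparity[y]).astype(int), 0, w - 1)
            views[v, y, tx] = color[y]
    return views
```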
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
18:16 IS&T Electronic Imaging (EI) Symposium English 2016

An efficient approach to playback of stereoscopic videos using a wide field-of-view

The affordability of head-mounted displays and high-resolution cameras has prompted the need for efficient playback of stereoscopic videos using a wide field-of-view (FOV). The MARquette Visualization Lab (MARVL) focuses on the display of stereoscopic content that has been filmed or computer-generated, using a large-scale immersive visualization system as well as head-mounted and augmented reality devices. Traditional approaches to video playback on a plane fall short with larger immersive FOVs. We developed an approach to playback of stereoscopic videos in a 3D world where depth is determined by the video content. Objects in the 3D world receive the same video texture, but computational efficiency is gained by using UV texture offsets to address the opposing halves of a frame-packed 3D video. Left and right cameras are configured in Unity via culling masks so that each uniquely shows the texture for the corresponding eye. The camera configuration is constructed through code at runtime, using MiddleVR for Unity 4 and natively in Unity 5. This approach becomes more difficult with multiple cameras and maintaining stereo alignment across the full FOV, but it has been used successfully in MARVL for applications including employee wellness initiatives, interactivity with high-performance computing results, and navigation within the physical world. © 2016, Society for Imaging Science and Technology (IS&T).
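The UV-offset trick is independent of any particular engine: both eye cameras sample one shared frame-packed texture, each through its own UV rectangle, so only a single video stream is ever decoded. A minimal sketch of the per-eye rectangles (illustrative conventions; which half carries which eye varies by content):

```python
def eye_uv_rect(eye, packing):
    """Return (u0, v0, u_scale, v_scale) for sampling one eye's half of
    a frame-packed stereo texture. An eye camera's material applies
    this rect instead of decoding a second video stream."""
    assert eye in ("left", "right") and packing in ("side_by_side", "top_bottom")
    if packing == "side_by_side":
        return (0.0 if eye == "left" else 0.5, 0.0, 0.5, 1.0)
    # Top-bottom packing: the left eye is assumed to occupy the top half.
    return (0.0, 0.5 if eye == "left" else 0.0, 1.0, 0.5)
```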
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
15:13 IS&T Electronic Imaging (EI) Symposium English 2016

360-degree three-dimensional display with the virtual display surface

We propose an omnidirectional 3D display system that displays directly touchable 3D images. The display surface of the proposed display is cylindrical, and the displayed 3D images can be observed from all around it. The proposed system is composed of multiple basic display units, each consisting of an LCD, a microlens array (or an HOE), and relay optics. The display surface of the system is a virtual screen composed of multiple light-focusing points (3D pixels) equally spaced in a cylindrical shape. The display surface is therefore not a physical obstruction when observers touch the 3D images directly. We constructed a prototype system to verify the effectiveness of the proposed design. The virtual cylindrical display surface was composed of 24 basic display units. The angle of view of each 3D pixel forming the virtual cylindrical display surface was 15°, and each 3D pixel irradiated 36 light rays at 0.4° intervals. The diameter and the height of the virtual cylindrical display surface were both 5 cm. The displayed 3D images were directly touchable and could be observed from 360° of directions. This paper describes the principle of the proposed omnidirectional 3D display and presents the experimental results. © 2016, Society for Imaging Science and Technology (IS&T).
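The prototype's numbers are self-consistent: 24 units of 15° each cover the full 360°, and 36 rays at 0.4° steps span 14.4°, roughly each 3D pixel's 15° angle of view. A small geometric sketch of one 3D pixel's ray fan (our simplified reading, horizontal plane only):

```python
import numpy as np

def pixel_ray_directions(pixel_azimuth_deg, n_rays=36, step_deg=0.4):
    """Illustrative geometry for one 3D pixel on the virtual cylinder:
    n_rays directions fanned about the pixel's outward azimuth at
    step_deg intervals (36 rays x 0.4 deg ~= 14.4 deg, close to the
    15 deg angle of view of each of the 24 prototype units)."""
    offsets = (np.arange(n_rays) - (n_rays - 1) / 2.0) * step_deg
    angles = np.radians(pixel_azimuth_deg + offsets)
    # Unit direction vectors in the horizontal plane.
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)
```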
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
17:13 IS&T Electronic Imaging (EI) Symposium English 2016

An adaptive blur in peripheral vision to reduce visual fatigue in stereoscopic vision

In recent years, a large amount of stereoscopic 3D content has been released. Even when the depth sensation is realistic, the experience is still imperfect and can be uncomfortable. The objective of our work is to use the gaze of the user to bring artificial vision closer to natural vision, increasing the precision of perception and decreasing visual fatigue. One difference in artificial vision concerns the accommodation point and the convergence point of the eyes. In natural vision these points coincide, whereas in artificial vision, even if the convergence point is on the observed object, the accommodation point remains on the screen. This difference brings visual fatigue. In this article, we propose and evaluate the effect of an artificial blur in peripheral vision intended to reduce the accommodation-vergence conflict and thus the strain. We found that adding a blur in peripheral vision decreases visual fatigue, but this blur cannot currently be used in practice due to eye-tracker latency. In future work, we will investigate the effect of vertical parallaxes on shape perception, distance perception and visual fatigue. © 2016, Society for Imaging Science and Technology (IS&T).
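A gaze-contingent peripheral blur of this kind can be prototyped in a few lines. The sketch below is illustrative only: the fovea radius, blur strength and single blur level are assumptions, and a real system would drive `gaze_xy` from a live eye tracker, whose latency is precisely the limitation the abstract reports:

```python
import cv2
import numpy as np

def foveated_blur(image, gaze_xy, fovea_radius=80, max_sigma=6.0):
    """Blend a sharp and a blurred copy of an H x W x 3 image by
    eccentricity from the gaze point: sharp inside fovea_radius,
    increasingly blurred towards the periphery."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    # Blend weight: 0 in the fovea, ramping to 1 in the far periphery.
    t = np.clip((ecc - fovea_radius) / (2.0 * fovea_radius), 0.0, 1.0)[..., None]
    blurred = cv2.GaussianBlur(image, (0, 0), max_sigma)
    return (image * (1 - t) + blurred * t).astype(image.dtype)
```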
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
17:32 IS&T Electronic Imaging (EI) Symposium English 2016

Light field modulation using a double-lenticular liquid crystal panel

The ultimate goal of any auto-stereoscopic display is to reproduce the exact light fields of 3D scenes on the display's surface. However, most existing displays can only reproduce inexact light fields, and closing the gap between the two has been a major target of research. In this work, we present a light field modulator consisting of an LC (liquid crystal) panel, a light diffuser, and a pair of lenticular sheets. The modulator modifies the intensity of light passing through it; combined with a color filter, it can also modify the color tone. Since the modification depends on the light's direction, the modulator can be tuned to improve the light field from inexact to nearly exact. To further investigate the modulator's capability, we placed it in front of a multi-layer display. The light fields reproduced by a multi-layer display are only approximate, especially when the display is tailored to cover a wide viewing zone. We observe that the modulator can mitigate artifacts in the output light fields, and that monochromatic fields can be converted into color fields using the modulator. © 2016, Society for Imaging Science and Technology (IS&T).
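Mathematically, a direction-dependent modulator applies a per-ray transmittance to whatever light field sits behind it. A hypothetical tuning rule (our illustration, not the paper's optimization) might set that transmittance from the ratio of the target field to the approximate one:

```python
import numpy as np

def tune_modulator(approx_lf, target_lf, eps=1e-6):
    """Hypothetical tuning sketch: per-ray transmittance pushing an
    inexact light field towards the target. Arrays are indexed
    (x, y, direction). A passive modulator can only attenuate, so the
    ratio is normalized to its maximum and clipped to [0, 1]."""
    ratio = target_lf / (approx_lf + eps)
    return np.clip(ratio / ratio.max(), 0.0, 1.0)

# The corrected field is then simply approx_lf * transmittance.
```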
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
20:21 IS&T Electronic Imaging (EI) Symposium English 2016

Blue noise sampling of surfaces from stereoscopic images

We propose an original sampling technique for surfaces generated by stereoscopic acquisition systems. The idea is to sample these surfaces directly on the pair of stereoscopic images, instead of on the meshes created by triangulation of the point clouds given by the acquisition system. Point clouds are generally dense, and consequently the resulting meshes are oversampled (which is why a re-sampling of the meshes is often performed). Moving the sampling stage into the 2D image domain greatly simplifies the classical sampling pipeline, allows the number of points to be controlled from the beginning of the sampling/reconstruction process, and optimizes the size of the generated data. More precisely, we developed a feature-preserving Poisson-disk sampling technique applied in the 2D image domain, which can be seen as a parameterization domain, with inter-sample distances still computed in 3D space to reduce the distortion due to the embedding in R^3. Experimental results show that our method generates 3D sampling patterns with good blue noise properties in R^3 (comparable to direct 3D sampling methods), while keeping the geometrical features of the scanned surface. © 2016, Society for Imaging Science and Technology (IS&T).
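The key move, drawing candidates in 2D while testing Poisson-disk distances in 3D, can be sketched with naive dart throwing (a simplification of the paper's feature-preserving method; `backproject` is a hypothetical helper mapping an image point to 3D, e.g. via the stereo disparity):

```python
import numpy as np

def poisson_disk_2d_with_3d_metric(points2d, backproject, r, seed=0):
    """Dart-throwing sketch: candidates live in the 2D image domain,
    but the Poisson-disk rejection test uses their 3D back-projections,
    so sample spacing is uniform on the scanned surface rather than in
    the image."""
    rng = np.random.default_rng(seed)
    kept2d, kept3d = [], []
    for i in rng.permutation(len(points2d)):
        p3 = np.asarray(backproject(points2d[i]))
        # Accept only if every kept sample is at least r away in 3D.
        if all(np.linalg.norm(p3 - q) >= r for q in kept3d):
            kept2d.append(points2d[i])
            kept3d.append(p3)
    return np.asarray(kept2d)
```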
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
16:21 IS&T Electronic Imaging (EI) Symposium English 2016

Geometrically constrained sub-pixel disparity estimation from stereo images of the retinal fundus

The aim of this study is to help ophthalmologists and opticians during the diagnostic examination of the retinal fundus. We propose a computer-vision-based solution that allows, from stereo images, the extraction of clinical parameters and/or the generation of multi-viewpoint images of the retinal fundus. This goal can be achieved by estimating the disparity map of the stereo images. For more precise clinical parameter extraction, a sub-pixel approach can be used. Additionally, a priori knowledge of the fundus's geometric shape provides useful information for the disparity map estimation process. In this paper we propose a sub-pixel disparity estimation algorithm that takes the geometric shape of the retinal fundus into consideration. Different stereo images, with known and unknown ground truth, are used to compare the proposed algorithm to state-of-the-art algorithms and to demonstrate the efficiency of our method. © 2016, Society for Imaging Science and Technology (IS&T).
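Sub-pixel disparity is commonly obtained by interpolating the matching cost around the best integer candidate; the classic parabola fit below is a generic sketch of that step (the paper's contribution, constraining the result with the fundus's known geometry, is not shown here):

```python
def subpixel_disparity(costs, d_best):
    """Parabola-fit refinement used in block matching: fit a parabola
    through the costs at d_best - 1, d_best, d_best + 1 and return the
    disparity of its minimum. `costs` is indexable by integer disparity
    and d_best must not sit on the border of the search range."""
    c0, c1, c2 = costs[d_best - 1], costs[d_best], costs[d_best + 1]
    denom = c0 - 2.0 * c1 + c2
    if denom == 0:  # flat cost curve: no sub-pixel information
        return float(d_best)
    return d_best + 0.5 * (c0 - c2) / denom
```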
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
23:24 IS&T Electronic Imaging (EI) Symposium English 2016

Beyond fun and games: VR as a tool of the trade

The recent resurgence of VR is exciting and encouraging because the technology is at a point where it will soon be available to a very large audience in the consumer market. However, it has also been a little disappointing to see VR technology portrayed mostly as the ultimate gaming environment and the new way to experience movies. VR is much more than that: for the past twenty years, a large number of groups around the world have been using VR in engineering, design, training, medical treatment and many other areas beyond gaming and entertainment, areas that seem to have been forgotten in the public perception. Furthermore, VR technology is much more than goggles: there are many ways to build devices and systems that immerse users in virtual environments. Finally, there are many challenges in creating engaging, effective, and safe VR applications. This talk will present our experiences in developing VR technology, creating applications in many industry fields, exploring the effects of VR exposure on users, and experimenting with different immersive interaction models. The talk will provide a much wider perspective on what VR is, its benefits and limitations, and how it has the potential to become a key technology for improving many aspects of human life. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
17:35 IS&T Electronic Imaging (EI) Symposium English 2016

Application of light field displays to vision correction and accommodation support

Light field displays have primarily been targeted at auto-stereoscopic viewing: with multiple views across a wide viewing angle, people at different positions see slightly different content. Recently, by showing a high-angular-resolution light field to a single viewer, new applications and algorithms have been developed to enhance the visual experience. Vision-correcting displays let eyes with aberrations see the display in sharp focus without eyeglasses; this is enabled by approximating the inverse aberrations using a dense light field. Another application of the high-angular-resolution light field addresses the vergence-accommodation conflict by supporting focus cues in a VR head mount. We showed that, by exploiting the compressibility of near-eye light fields, the perceived spatial resolution of the display can be greatly enhanced compared to prior work. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
18:22 IS&T Electronic Imaging (EI) Symposium English 2016

Evaluation of the perception of dynamic horizontal image translation and a gaze adaptive approach

In stereo 3D, dynamic horizontal image translation (DHIT) is an important technique for mitigating visually stressful depth discontinuities at scene cuts: the stereo 3D views are slowly shifted in opposite directions just before and after the cut, thereby adjusting the disparity of the objects of interest. This kind of scene cut is also known as an "active depth cut". DHIT can also be applied to reduce the accommodation-vergence conflict, which today is the main source of visual fatigue. In this work, the perception of DHIT by human observers is investigated, and design recommendations are given for DHIT in stereo production and for the parameterization of automatic DHIT systems. An example of an automatic system is our previously proposed eye-tracking-based approach "GACS3D", in which the viewer's current point of gaze is brought to the zero-parallax setting by applying DHIT. This kind of gaze-adaptive processing is expected to reduce visual fatigue in a single-user environment. The effectiveness of this approach, as well as its implications for perception, is also investigated. © 2016, Society for Imaging Science and Technology (IS&T).
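Conceptually, a gaze-adaptive DHIT controller is a rate-limited servo driving the disparity at the gaze point to zero. A minimal sketch of that control loop (our illustration of the idea with a sign convention and rate limit assumed, not the GACS3D implementation):

```python
def dhit_step(gaze_disparity_px, current_shift_px, rate_px_per_s, dt):
    """One control tick of a gaze-adaptive horizontal image translation.
    Sign convention assumed: adding 1 px of shift reduces all on-screen
    disparities by 1 px. The shift slews towards the value that puts
    the gazed-at point at zero parallax, rate-limited so the
    translation itself stays unobtrusive."""
    target_shift = current_shift_px + gaze_disparity_px
    max_step = rate_px_per_s * dt
    step = min(max(target_shift - current_shift_px, -max_step), max_step)
    # The result would be applied as +shift/2 to one view, -shift/2 to the other.
    return current_shift_px + step
```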
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
13:25 IS&T Electronic Imaging (EI) Symposium English 2016

Depth extraction from a single image based on block-matching and robust regression

In this paper, we propose a data-driven approach for automatically estimating a plausible depth map from a single monocular image. Instead of using a complicated parametric model, we cast the estimation as a simple yet effective regression problem. We first retrieve semantically similar RGB and depth candidates from a database using an activation descriptor. Then, initial estimates are synthesized based on block-matching and robust patch regression. Finally, a weighted median filter (WMF) is applied to further align depth boundaries with RGB edges. We explicitly take a texture-removal technique into consideration for visually plausible results. Experimental results on natural images show that the proposed method outperforms existing approaches in terms of both qualitative and quantitative evaluations. © 2016, Society for Imaging Science and Technology (IS&T).
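The weighted median filter at the end of that pipeline reduces, per pixel, to the computation below: the depth values in a window vote with weights that, in the paper's setting, would come from RGB similarity to the centre pixel (here the weights are simply passed in; a sketch, not the paper's exact filter):

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the smallest value whose cumulative weight
    reaches half the total weight. Applied per pixel over a local
    window, this snaps depth boundaries to image edges."""
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return values[order][np.searchsorted(cum, 0.5 * cum[-1])]
```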
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
23:18 IS&T Electronic Imaging (EI) Symposium English 2016

3D will be back but not as we know it

GoPro launched the “Dual-Hero 2.0” stereo rig in 2014. It offered amazing synchronization (pixel-level), low cost and high resolution (17:9 2.7K at 30p, which looks amazing when viewed on a 4K 3D monitor). But the consumer stereo 3D market had already crashed. 3D continues to be a strong attraction at the cinema because the viewing experience is carefully controlled. The same challenge is now plainly visible in the emerging VR, AR and MR technologies, yet there are compelling reasons why 3D, either as stereo output or as a depth map, will play an essential role in the coming video technologies. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
16:13 IS&T Electronic Imaging (EI) Symposium English 2016

A high resolution aerial 3D display using a directional backlight

This paper describes a high-resolution aerial 3D display using a time-division multiplexing directional backlight. In this system, an aerial real image is generated with a pair of large convex lenses. The directional backlight is controlled based on the detected face position so that binocular stereoscopy is maintained for a moving observer. By using the directional backlight, the proposed system attains autostereoscopy without any moving parts. A wide viewing zone is realized by placing a large-aperture convex lens between the backlight and the LCD panel. Thanks to time-division multiplexing, a high-resolution 3D image is presented to the viewer. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
14:59 IS&T Electronic Imaging (EI) Symposium English 2016

Capturing and rendering light-field video: Approaches and challenges

Lytro is building a revolutionary system to record and render the light-field of live-action video, enabling viewers to immerse themselves in 3D cinematic VR experiences. In this presentation, we will describe our system design for capturing, processing, and rendering light-field video and discuss the significant data and computing challenges to be solved on our journey. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
21:22 IS&T Electronic Imaging (EI) Symposium English 2016

3DTV: past, present and future

  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
18:53 IS&T Electronic Imaging (EI) Symposium English 2016

New visual coding exploration in MPEG: Super-multiView and free navigation in free viewpoint TV

ISO/IEC MPEG and ITU-T VCEG have recently jointly issued a new multiview video compression standard, called 3D-HEVC, which reaches unprecedented compression performance for linear, dense camera arrangements. For instance, 80 full-HD views for so-called Super-MultiView (SMV) autostereoscopic 3D displays can be transmitted with 3D-HEVC at 15 to 60 Mbps, comparable to the bandwidth requirements of 4k/8k video. Novel SMV displays capable of showing a couple of hundred full-HD views, already prototyped in R&D labs, would benefit from an additional two-fold compression gain. Transmitting depth maps along with coded video in a single 3D-HEVC stream and synthesizing additional output views using Depth Image Based Rendering (DIBR) techniques opens opportunities for omitting some camera views to achieve higher compression. However, high quality-bitrate penalties have been observed in applications where the multiview content is captured by an arc camera arrangement surrounding the scene, e.g. in sports events. Moreover, there is currently no out-of-the-box technology that can provide high-quality virtual views synthesized from relatively sparse, arbitrarily arranged cameras for Free Navigation (FN), e.g. the Matrix bullet-time effect with only a dozen cameras. The MPEG standardization committee has therefore issued a Call for Evidence in June 2015 [N15348], calling for improved compression technologies to support near-future SMV and FN applications. OBJECTIVE: The main objective is to improve view prediction/synthesis for better SMV compression performance when omitting/decimating some of the input views during transmission, as well as to support FN functionality with non-linear, sparse camera arrangements. Visually pleasant DIBR view synthesis therefore requires multi-camera depth estimation and inpainting approaches that are currently not supported in the MPEG reference software, which has historically been confined mainly to stereoscopic scene analysis/prediction/synthesis methods. METHOD: Multi-camera plane sweeping, epipolar plane image and inpainting techniques that coherently integrate all available camera information into a single data representation drastically improve the visual coherence between successive virtual views. Moreover, Human Visual System (HVS) masking effects in spatio-temporally adjacent views provide a high degree of forgiveness when decimating the multi-camera input information, similar to what was done in the pioneering era of TV to insert low-bandwidth chrominance data into the settled luminance spectrum of B&W TV. RESULTS: While omitting some input views in the transmission chain and re-synthesizing them at the decoder incurs a huge objective PSNR penalty (5 to 10 dB), limited subjective MOS impact has been observed with improved non-linear multi-camera processing tools (color calibration, depth estimation and view synthesis), proper view decimation and Group of Views (GoV) data interleaving. NOVELTY: Continued work on [Jorissen2015] and [Dricot2015], with the inclusion of the aforementioned tools deep in the 3D-HEVC coding chain, provides substantial visual quality gains. New subjective quality metrics that account for stereoscopic and angular-velocity viewpoint transitions, as opposed to the fixed viewpoint of traditional TV, give additional HVS masking, reaching higher MOS scores. Further validation on Holografika SMV displays with a more extensive set of dozens of video sequences is being pursued. © 2016, Society for Imaging Science and Technology (IS&T).
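DIBR, the mechanism that lets the decoder re-synthesize omitted views, boils down to warping pixels sideways by a disparity proportional to inverse depth and resolving collisions by keeping the nearest surface. A minimal single-view sketch (illustrative; real 3D-HEVC renderers blend two anchor views and inpaint the disocclusion holes this version leaves black):

```python
import numpy as np

def dibr_forward_warp(color, depth, baseline_px):
    """Forward-warp an H x W x 3 view sideways: disparity is
    baseline_px / depth, and a z-buffer keeps the nearest pixel when
    several land on the same target column."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    disp = baseline_px / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            tx = int(round(x + disp[y, x]))
            if 0 <= tx < w and depth[y, x] < zbuf[y, tx]:
                zbuf[y, tx] = depth[y, x]
                out[y, tx] = color[y, x]
    return out  # holes (disocclusions) remain zero, awaiting inpainting
```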
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
15:58 IS&T Electronic Imaging (EI) Symposium English 2016

Effect of inter-lens distance on fusional limit in stereoscopic vision

This study investigated the effect of the frame design of a simple smartphone HMD on stereoscopic vision and considered the design requirements for a comfortable viewing environment. We focused mainly on the lens spacing used for screen enlargement and extension of the focal length. To investigate differences in the fusional limit attributable to lens spacing, three HMDs with left/right lens spacings of 57.5, 60, and 62.5 mm were used. When the three types of HMD were compared with a direct-view display, the positive- and negative-direction fusional limits were closer for all HMDs than for the display. In particular, the limit in the 62.5 mm condition was shifted significantly closer than in the control condition. The results showed a trend for the fusional range to move nearer in a simple HMD. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
18:35 IS&T Electronic Imaging (EI) Symposium English 2016

Emotional arousal by stereoscopic images and the effects on time perception

In this research, the effect of enhancing arousal through 2D-to-3D conversion and disparity modification of emotional images was examined in terms of time perception. The experimental results showed a lengthening of estimated duration in the longer duration range for the 3D condition and the disparity-modification condition, and the tendency was significant for high-arousal images. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
20:01 IS&T Electronic Imaging (EI) Symposium English 2016

Investigating intermittent stereoscopy: Its effects on perception and visual fatigue

In a context where virtual reality is becoming ubiquitous in certain industries, and given the substantial literature on the visual fatigue it causes, we wondered whether presenting intermittent S3D stimuli would lead to improved depth perception (over monoscopic viewing) while reducing subjects’ visual asthenopia. In a between-subjects design, 60 individuals under 40 years old were tested in four different conditions, with head-tracking enabled: two intermittent S3D conditions (Stereo @ beginning: S3D at task onset, transitioning linearly to mono over 3 seconds; Stereo @ end: monoscopic at task onset for 4 seconds, transitioning linearly to S3D over 3 seconds) and two control conditions (Mono: monoscopic images only; Stereo: constant S3D). Several optometric variables were measured pre- and post-experiment, and a subjective questionnaire assessing discomfort was administered. Our results suggest a difference between simple scenes (containing few static objects, or slow, linear movement along one axis only) and more complex environments with more diverse movement. In the former case, Stereo @ beginning leads to depth perception as accurate as Stereo, and any condition involving S3D leads to more precision than Mono. We posit that the brain might build an initial depth map of the environment, which it keeps using after the suppression of disparity cues. In more complex scenes, Stereo @ end leads to more accurate decisions: the brain may need additional depth cues to reach an accurate decision. Stereo and Stereo @ beginning also significantly decrease response times, suggesting that the presence of disparity cues at task onset boosts the brain’s confidence in its initial evaluation of the environment’s depth map. Our results concerning fatigue, while not definitive, hint that it is proportional to the amount of exposure to S3D stimuli. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
18:00 IS&T Electronic Imaging (EI) Symposium English 2016

Stereoscopic space map – A semi-immersive navigation interface for 3D multi-display presentations

Public presentations in large-scale stereoscopic 3D environments such as CAVEs are usually accompanied by strong side effects, such as unexpected movements or even motion sickness caused by, for example, imprecisely tracked wands and disturbed stereoscopic vision. On the one hand, 3D navigation is required to enable appropriate interaction with the spatial objects. On the other hand, in most cases only one person is the navigator, while everyone else forms the audience, and both usually lack an overview of a complex environment. Therefore, a new approach is proposed here, enabling 1) 3D navigation on a precise navigation screen showing an overview map (also known as a world in miniature), and 2) transfer of the movement information to a large-scale environment representing the real world. The interactive virtual map is stereoscopically visualized on a zSpace 200® (using the CELLmicrocosmos 1.2 CellExplorer), while the virtual world is shown on a panoramic 330° CAVE2™ (using Omegalib). We will show that the separation of the navigation interface from the virtual world environment is reasonable for stereoscopic 3D presentation and exploration purposes, because the stereoscopic rendering of the virtual world can be optimized with respect to the different tour points, extending our previously published interactive projection plane approach. © 2016, Society for Imaging Science and Technology (IS&T).
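At its core, a world-in-miniature interface needs one mapping: a point picked on the miniature map must be converted into the corresponding position in the full-scale world. A minimal sketch of that transform (our illustration, assuming a uniform scale plus translation; the actual system additionally handles orientation and tracking):

```python
import numpy as np

def map_to_world(p_map, map_origin, world_origin, scale):
    """World-in-miniature mapping sketch: a point on the miniature map
    becomes the corresponding full-scale world position via a uniform
    scale about the map origin plus a translation."""
    return np.asarray(world_origin) + scale * (np.asarray(p_map) - np.asarray(map_origin))
```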
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
11:41 IS&T Electronic Imaging (EI) Symposium English 2016

Linear optimization approach for depth range adaption of stereoscopic videos

Depth-Image Based Rendering (DIBR) techniques enable the creation of virtual views from color images and corresponding depth images. In stereoscopic 3D film making, the ability of DIBR to render views at arbitrary viewing positions allows a 3D scene’s depth budget to be adapted to the physical depth limitations of the display and optimized for visual viewing comfort. This rendering of stereoscopic videos requires the determination of optimal depth range adaptions, which typically depend on the scene content, the display system and the viewers’ experience. We show that this configuration problem can be modeled as a linear optimization problem that maximizes the overall quality of experience (QoE) achievable through depth range adaption. Rules from the literature are refined by data analysis and feature extraction based on datasets from the film industry and on human attention models. We discuss our approach in terms of practical feasibility, generalizability with respect to different content, subjective image quality, visual discomfort and stereoscopic effects, and demonstrate its performance in a user study on publicly available and self-recorded datasets. © 2016, Society for Imaging Science and Technology (IS&T).
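To make the linear-programming formulation concrete, here is a deliberately tiny instance (our toy stand-in; the paper's objective is a refined QoE model, not the raw depth range): choose an affine disparity mapping d → s·d + o that keeps the mapped range inside the display's comfort zone while retaining as much depth as possible.

```python
from scipy.optimize import linprog

def fit_depth_budget(d_min, d_max, comfort_min, comfort_max):
    """Toy LP: variables x = [s, o]. Maximize the retained depth scale s
    subject to s*d_min + o >= comfort_min and s*d_max + o <= comfort_max,
    with 0 <= s <= 1 (only compress depth, never expand it)."""
    res = linprog(
        c=[-1.0, 0.0],                        # linprog minimizes, so -s
        A_ub=[[-d_min, -1.0], [d_max, 1.0]],  # the two comfort-zone constraints
        b_ub=[-comfort_min, comfort_max],
        bounds=[(0.0, 1.0), (None, None)],
    )
    if not res.success:
        raise ValueError("infeasible comfort zone")
    s, o = res.x
    return s, o
```

For example, fit_depth_budget(-40, 60, -20, 30) compresses the scene's disparities to half their range (s = 0.5, o = 0).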
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
17:50 IS&T Electronic Imaging (EI) Symposium English 2016

Hybrid reality: Using 2D and 3D together in a mixed mode display

Critical collaborative work sessions rely on sharing 2D and 3D information, yet limitations in display options make it difficult to share and interact with multiple content types simultaneously. As a result, displays capable of showing stereoscopic content are predominantly used for 2D applications. This presentation will illustrate Hybrid Reality, a strategy for showing and interacting with 2D and 3D content simultaneously on the same display. Example displays, use cases, and case studies will be discussed. By using the Hybrid Reality environment, manufacturing organizations have achieved ROI through time and cost savings as well as improved collaboration on complex design problems. In higher education, Hybrid Reality displays support instruction and curriculum design by providing a way to bring a wide spectrum of 2D and 3D media and applications into the classroom. This presentation will share detailed case studies of both applications and demonstrate how a Hybrid Reality display system can be used to effectively combine 2D and 3D content and applications for improved understanding, insight, decision making, and collaboration. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
20:29 IS&T Electronic Imaging (EI) Symposium English 2016

Stereoscopic remote vision system aerial refueling visual performance

The performance and comfort of aircrew using stereoscopic displays viewed at near distance over long periods of time are now an important operational factor to consider with the introduction of aerial refueling tankers using remote vision system technology. Out of concern that the current USAF vision standards and test procedures may not be adequate for accurately identifying aircrew medically fit to operate this new technology over long mission durations, we investigated performance with a simulated remote vision system, and the ability of different vision tests to predict performance and reported discomfort. The results showed that the use of stereoscopic cameras generally improved performance, but that individuals with poorer vision test scores performed more poorly and reported greater discomfort. In general, newly developed computer-based vision tests were more predictive of both performance and reported discomfort than standard optometric tests. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
08:35 IS&T Electronic Imaging (EI) Symposium English 2016

Optical realization for the computer-generated cylindrical hologram

A real 360-degree holographic display method based on computer-generated cylindrical holograms (CGCHs), using high-speed projection and a rotating screen, is proposed. A laser beam reflects off a high-speed DMD while the DMD displays computer-generated holograms computed for a cylindrical surface, and 3D images are reconstructed on the rotating screen. The reconstructed 3D images for the corresponding cylindrical holograms are assembled along the horizontal direction as the rotating screen is synchronized with the DMD projection. The horizontally assembled 3D image can be observed from anywhere around the display, successfully demonstrating the CGCH. © 2016, Society for Imaging Science and Technology (IS&T).
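For orientation, a computer-generated hologram of the point-source kind is the superposition of spherical waves from scene points evaluated on the hologram surface. The sketch below shows only the generic planar case (an illustration; the paper's cylindrical-surface derivation and the binary encoding a DMD requires are not captured here):

```python
import numpy as np

def point_cloud_hologram(points, amplitudes, grid_x, grid_y, z, wavelength=532e-9):
    """Superpose spherical waves from 3D points onto a planar hologram
    grid at depth z and keep the resulting phase pattern (a phase-only
    simplification; a DMD would need a binarized amplitude hologram)."""
    k = 2 * np.pi / wavelength
    field = np.zeros(grid_x.shape, np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((grid_x - px) ** 2 + (grid_y - py) ** 2 + (z - pz) ** 2)
        field += a * np.exp(1j * k * r) / r  # spherical wave contribution
    return np.angle(field)
```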
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
20:02 IS&T Electronic Imaging (EI) Symposium English 2016

LEIA 3D: holographic reality

Ever since Doug Engelbart presented the first modern computer interface in his famous “Mother of All Demos,” we have strived for more intuitive ways to interact with digital information. That interface has not fundamentally changed over the past half-century, though, even as the scope of information continues to increase exponentially. As we now look to computerized AI to help us navigate and make sense of all our shared data, we also require a new way to present information that is intuitive and useful to us. Holographic Reality (HR) is based on “holographic” 3D screens that do not require any eyewear to function. These screens must produce realistic, full-parallax 3D imagery that can be manipulated in mid-air by finger or hand gestures. They must provide the same high-quality imagery throughout the field of view, with no jumps, bad spots or other visual artefacts. Augmented by haptic technology (tactile feedback), these screens will even let us “feel” the holographic content physically at our fingertips. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
14:45 IS&T Electronic Imaging (EI) Symposium English 2016

Curtin HIVE – Hub for Immersive Visualization and eResearch

  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
18:10 IS&T Electronic Imaging (EI) Symposium English 2016

Towards perceptually coherent depth maps in 2D-to-3D conversion

We propose a semi-automatic 2D-to-3D conversion algorithm that is embedded in an efficient optimization framework, i.e., cost volume filtering, which assigns pixels to depth values initialized by user-given scribbles. The proposed algorithm is capable of capturing depth changes of objects that move towards or farther away from the camera. We achieve this by determining a rough depth order between objects in each frame, according to the motion observed in the video, and incorporate this depth order into the depth interpolation process. In contrast to previous publications, our algorithm focuses on avoiding conflicts between the generated depth maps and monocular depth cues that are present in the video, i.e., motion-caused occlusions, and thus takes a step towards the generation of perceptually coherent depth maps. We demonstrate the capabilities of our proposed algorithm on synthetic and recorded video data and by comparison with depth ground truth. Experimental evaluations show that we obtain temporally and perceptually coherent 2D-to-3D conversions in which temporal and spatial edges coincide with edges in the corresponding input video. Our proposed depth interpolation can clearly improve the conversion results for videos that contain objects which exhibit motion in depth, compared to commonly performed naïve depth interpolation techniques. © 2016, Society for Imaging Science and Technology (IS&T).
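The optimization framework named here, cost volume filtering, is compact enough to sketch. The toy version below covers only the single-frame backbone (Hosni-style, assuming OpenCV's contrib module for the guided filter; the paper's motion-derived depth ordering and temporal handling are not shown): each pixel gets the scribble label whose edge-aware-filtered cost is lowest.

```python
import cv2  # requires opencv-contrib-python for cv2.ximgproc
import numpy as np

def scribble_depth_cost_volume(frame, scribbles, depth_labels, radius=9, eps=1e-3):
    """Build one cost slice per depth label from user scribbles, smooth
    each slice with an edge-aware guided filter, and take the cheapest
    label per pixel. `scribbles` holds a label index per pixel, -1 where
    unmarked; `depth_labels` maps label index to depth value."""
    guide = frame.astype(np.float32) / 255.0
    volume = []
    for k in range(len(depth_labels)):
        # Unmarked pixels are neutral (0.5); marked pixels vote 0 or 1.
        cost = np.full(scribbles.shape, 0.5, np.float32)
        cost[scribbles == k] = 0.0
        cost[(scribbles >= 0) & (scribbles != k)] = 1.0
        volume.append(cv2.ximgproc.guidedFilter(guide, cost, radius, eps))
    labels = np.argmin(np.stack(volume), axis=0)
    return np.asarray(depth_labels, np.float32)[labels]
```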
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
18:14 IS&T Electronic Imaging (EI) Symposium English 2016

Trends in S3D movies quality as evaluated on 105 movies and 10 quality metrics

1) OBJECTIVE: The main objective of this large-scale quality analysis of S3D movies is to gain a better understanding of how quality control was performed across different movies. Several novel quality metrics are also presented, including channel-swap detection, evaluation of temporal shifts between stereoscopic views, and depth continuity. 2) METHOD: The main technical obstacle we had to overcome was the enormous amount of computation and disk space such an analysis requires. Evaluating one movie could take up to 4 weeks and required over 40 GB for the source Blu-ray alone. To maximize efficiency we rewrote all of our metrics to exploit the multicore architecture of contemporary CPUs. We also developed a system that efficiently distributes the computations across a cluster of up to 17 computers working in parallel, which enabled us to finish the evaluation of 105 movies in about 6 months. 3) RESULTS: An evaluation of the technical quality of 105 S3D movies spanning over 50 years of stereoscopic cinema history has been conducted. Our main observations are as follows. According to our measurements, “Avatar” indeed had superior technical quality compared to most S3D movies of the previous decade, so it is not surprising that it was positively received by viewers. S3D quality improvement over the years is evident from the evaluation; e.g., the results of average-quality movies from 2010 correspond to the results of the 2014 movies with nearly the worst technical quality. A more important conclusion, however, is that it is gradually becoming possible to produce low-budget movies with excellent technical quality, which was previously within reach only for high-budget blockbusters. We hope that new objective quality metrics such as the channel-mismatch metric will find applications in production pipelines; this could further decrease the number of viewers experiencing discomfort and give a start to a new surge of S3D popularity. 4) CONCLUSION: Objective S3D quality metrics make it easier to find problematic frames or entire shots that could potentially cause discomfort for a significant fraction of the audience. Our analysis has already revealed thousands of such scenes in real S3D movies, but to estimate this discomfort directly, subjective evaluations are necessary. We have organized several such evaluations with the help of volunteers, who were asked to watch some of the scenes with the worst technical quality according to our analysis. These experiments allow us to further improve the metrics and to work towards a universal metric that could directly predict the percentage of the audience experiencing noticeable discomfort. It is already clear that developing such a universal metric is a very challenging problem, so we are looking for collaboration. It is also clear to us that the majority of problems could be fixed in post-production with minimal user intervention, if not entirely automatically; some of these techniques are not widely employed simply because the problem itself is not considered important enough to require correction. We hope our work helps shed light on the problem and draws more attention to correcting S3D production issues. © 2016, Society for Imaging Science and Technology (IS&T).
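Of the metrics listed, the temporal-shift check is the easiest to illustrate: test which frame offset between the two views makes them most similar. A naive sketch (our illustration; the production metrics are far more elaborate and multicore-optimized):

```python
import numpy as np

def temporal_shift(left_frames, right_frames, max_offset=3):
    """Estimate a temporal shift between stereo views: try offsets k and
    pick the one minimizing the mean absolute difference between
    left[t] and right[t + k]. Frames are grayscale arrays of equal
    shape; both lists have the same length."""
    n = len(left_frames)
    best_k, best_err = 0, np.inf
    for k in range(-max_offset, max_offset + 1):
        ts = range(max(0, -k), min(n, n - k))  # valid t for this offset
        err = np.mean([np.abs(left_frames[t].astype(float)
                              - right_frames[t + k].astype(float)).mean()
                       for t in ts])
        if err < best_err:
            best_k, best_err = k, err
    return best_k  # 0 means the views are temporally aligned
```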
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
21:39 IS&T Electronic Imaging (EI) Symposium English 2016

Stereoscopy-based procedural generation of virtual environments

Procedural generation of virtual scenes (e.g., complex cities with buildings of different sizes and heights) is widely used in the CG movie and videogame industries. Although such scenes are often visualized using stereoscopy, to our knowledge stereoscopy is not currently used as a tool in the procedural generation itself, even though a deeper integration of stereoscopic parameters can play a relevant role in the automatic creation and placement of virtual models. In this paper, we show how to use stereoscopic parameters to guide the procedural generation of a scene in an open-source modeling package. Virtual objects can be automatically placed inside the stereoscopic volume so as to reach the maximum amount of on-screen parallax, given a particular interocular distance, convergence plane and display size. The proposed approach allows a virtual scene to be regenerated for a particular viewing context, avoiding problems related to excessive positive parallax in the final rendering. Moreover, the approach can also be used to automatically detect window violations, by determining overlaps in the negative-parallax area between models and the view frustums of the stereoscopic camera, and to apply proper solutions, such as the automatic placement of a floating window. © 2016, Society for Imaging Science and Technology (IS&T).
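The placement constraint rests on standard parallax geometry: on-screen parallax approaches the interocular distance as objects recede and goes negative in front of the convergence plane. One common formulation (a sketch with a 63 mm interocular assumed; a generator of this kind would compare the value against the display's parallax budget before accepting an object position):

```python
def screen_parallax_mm(depth_m, convergence_m, interocular_mm=63.0):
    """On-screen parallax for a point at depth_m when the convergence
    plane lies at convergence_m: p = e * (z - c) / z. Positive values
    appear behind the screen, negative in front, and p tends to the
    interocular distance e as z goes to infinity."""
    return interocular_mm * (depth_m - convergence_m) / depth_m
```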
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English
25:51 IS&T Electronic Imaging (EI) Symposium English 2016

3-D movie rarities

Stereoscopic motion pictures have existed for 100 years, and the 3-D Film Archive, founded in 1990, has played a key role in saving and preserving these historic elements. Greg Kintz will discuss the many obstacles and challenges in locating and saving these precious stereo images. The Archive's scanning, panel-matching, and stereoscopic image-matching techniques have been widely recognized for their efficiency and precision. As Greg will present, the full restoration process begins with 2k or 4k wet-gate scanning of the best surviving 35 mm elements. The films are then aligned shot by shot for precise alignment and panel matching of the left/right elements. The 3-D Film Archive's multi-step process also includes image stabilization, flicker reduction, color balancing, and dirt clean-up. At one time, the 3-D Film Archive held the largest collection of vintage stereoscopic film elements in the world, and Greg will show some of his favorite clips on the SD&A stereoscopic projection screen. The Archive's first four releases on Blu-ray 3D have enjoyed acclaim: Dragonfly Squadron, The Bubble, 3-D Rarities, and The Mask. For the first time, contemporary viewers are able to see these films at home in quality equal to or greater than the original theatrical experience. Greg will also discuss how the Archive is working to save and restore additional Golden Age 3-D films through licensing and partnerships. © 2016, Society for Imaging Science and Technology (IS&T).
  • Published: 2016
  • Publisher: IS&T Electronic Imaging (EI) Symposium
  • Language: English