ISMAR 2014 - Sep 10-12 - Munich, Germany


ISMAR Papers for Session "Rendering"

Session: Rendering
Date & Time: September 10, 04:15 pm - 06:30 pm
Location: HS1
Chair: Wolfgang Broll, TU Ilmenau
Papers:
Interactive Near-Field Illumination for Photorealistic Augmented Reality on Mobile Devices
Authors: Kai Rohmer, Wolfgang Büschel, Raimund Dachselt, Thorsten Grosch
Abstract:
Mobile devices are becoming increasingly important, especially for augmented reality (AR) applications in which the camera of the mobile device acts like a window into the mixed reality world. Up to now, no photorealistic augmentation has been possible, since the computational power of mobile devices is still too limited. Even a streaming solution from a stationary PC would introduce latency that considerably affects user interaction. Therefore, we introduce a differential illumination method that allows for a consistent illumination of the inserted virtual objects on mobile devices while avoiding such delays. The necessary computational effort is shared between a stationary PC and the mobile devices to make use of the capacities available on both sides. The method is designed such that only a minimal amount of data has to be transferred asynchronously between the stationary PC and one or multiple mobile devices. This allows for an interactive illumination of virtual objects with a consistent appearance under both temporally and spatially varying real illumination conditions. To describe the complex near-field illumination in an indoor scenario, multiple HDR video cameras are used to capture the illumination from multiple directions. In this way, sources of illumination can be considered that are not directly visible to the mobile device because of occlusions and the limited field of view of built-in cameras.
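
The differential compositing step that this kind of consistent-illumination approach builds on can be sketched as follows. This is a minimal numpy illustration under assumed inputs (the function name and image layout are hypothetical), not the authors' distributed PC/mobile pipeline:

# Hypothetical sketch of differential-rendering compositing: the camera frame
# is corrected by the difference between a render of the modelled scene with
# and without the virtual object. Not the paper's distributed implementation.
import numpy as np

def composite_differential(camera, render_with, render_without, mask):
    """camera, render_with, render_without: float HxWx3 images in [0, 1];
    mask: HxW bool array, True where the virtual object covers the pixel."""
    # On background pixels, add the illumination change caused by the object
    # (shadows, colour bleeding) to the real camera image.
    delta = render_with - render_without
    out = np.clip(camera + delta, 0.0, 1.0)
    # On pixels covered by the virtual object, show the full synthetic render.
    out[mask] = render_with[mask]
    return out

if __name__ == "__main__":
    h, w = 4, 4
    cam = np.full((h, w, 3), 0.5)
    with_obj = np.full((h, w, 3), 0.4)      # darker: the object casts a shadow
    without_obj = np.full((h, w, 3), 0.5)
    obj_mask = np.zeros((h, w), dtype=bool)
    obj_mask[1:3, 1:3] = True
    print(composite_differential(cam, with_obj, without_obj, obj_mask)[0, 0])
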
Delta Voxel Cone Tracing
Author: Tobias Alexander Franke
Abstract:
Mixed reality applications that must provide visual coherence between synthetic and real objects need relighting solutions for both: synthetic objects have to match the lighting conditions of their real counterparts, while real surfaces need to account for the change in illumination introduced by the presence of an additional synthetic object. In this paper we present a novel relighting solution called Delta Voxel Cone Tracing to compute both direct shadows and first-bounce mutual indirect illumination. We introduce a voxelized, pre-filtered representation of the combined real and synthetic surfaces together with the extracted illumination difference due to the augmentation. In a final gathering step this representation is cone-traced and superimposed onto both types of surfaces, adding additional light from indirect bounces and synthetic shadows from antiradiance present in the volume. The algorithm computes results at interactive rates, is temporally coherent and, to our knowledge, provides the first real-time rasterizer solution for mutual diffuse, glossy and perfect specular indirect reflections between synthetic and real surfaces in mixed reality.
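
The final gathering step rests on the generic voxel cone tracing idea: march a cone through a pre-filtered voxel grid, sampling coarser mip levels as the cone widens. The sketch below shows only that generic accumulation loop under assumed data layouts (the paper's delta/antiradiance volume and pre-filtering are not reproduced):

# Minimal sketch of cone marching through a pre-filtered voxel grid with
# front-to-back compositing. Function names, grid layout, and step sizes are
# illustrative assumptions, not the paper's implementation.
import numpy as np

def sample_volume(mip_levels, pos, level):
    """Nearest-neighbour lookup of (rgb, alpha) at `pos` in mip `level`.
    mip_levels: list of arrays of shape (N, N, N, 4); level 0 is the finest."""
    level = min(int(level), len(mip_levels) - 1)
    grid = mip_levels[level]
    n = grid.shape[0]
    idx = np.clip((pos * n).astype(int), 0, n - 1)
    voxel = grid[idx[0], idx[1], idx[2]]
    return voxel[:3], voxel[3]

def trace_cone(mip_levels, origin, direction, aperture, max_dist=1.0, step=0.02):
    """Accumulate radiance along a cone; wider cones sample coarser mips."""
    color = np.zeros(3)
    occlusion = 0.0
    t = step
    while t < max_dist and occlusion < 0.99:
        radius = aperture * t                      # cone footprint grows with distance
        level = np.log2(max(radius * mip_levels[0].shape[0], 1.0))
        rgb, alpha = sample_volume(mip_levels, origin + t * direction, level)
        color += (1.0 - occlusion) * alpha * rgb   # front-to-back compositing
        occlusion += (1.0 - occlusion) * alpha
        t += step
    return color, occlusion

if __name__ == "__main__":
    fine = np.zeros((8, 8, 8, 4)); fine[4, 4, 4] = [1.0, 0.5, 0.2, 0.8]
    coarse = fine.reshape(4, 2, 4, 2, 4, 2, 4).mean(axis=(1, 3, 5))  # 2x box filter
    print(trace_cone([fine, coarse], np.array([0.1, 0.55, 0.55]),
                     np.array([1.0, 0.0, 0.0]), aperture=0.3))
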
Importance Weighted Image Enhancement for Prosthetic Vision: An Augmentation Framework
Authors: Chris McCarthy, Nick Barnes
Abstract:
Augmentations to enhance perception in prosthetic vision (also known as bionic eyes) have the potential to significantly improve functional outcomes for implantees. In current (and near-term) implantable electrode arrays, resolution and dynamic range are highly constrained in comparison to images from modern head-mounted cameras. In this paper, we propose a novel, generally applicable adaptive contrast augmentation framework for prosthetic vision that addresses the specific perceptual needs of low-resolution and low-dynamic-range displays. The scheme accepts an externally defined pixel-wise importance weighting describing the features of the image to enhance in the output dynamic range. Our approach explicitly incorporates the logarithmic scaling of enhancement required by human visual perception to ensure that all contrast augmentations remain perceivable. It requires no pre-existing contrast, and thus extends previous work on local contrast enhancement to a formulation for general image augmentation. We demonstrate the generality of our augmentation scheme for scene structure and looming object enhancement using simulated prosthetic vision.
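
The core idea of importance-weighted enhancement with perceptual (logarithmic/Weber) scaling can be illustrated with a simplified stand-in: boost each pixel in proportion to its local luminance, modulated by an externally supplied importance map. The parameter names and the exact scaling rule below are assumptions, not the authors' formulation:

# Hedged illustration of importance-weighted enhancement with Weber-law scaling.
import numpy as np

def augment(image, importance, weber_fraction=0.2, floor=0.02):
    """image: HxW luminance in [0, 1]; importance: HxW weights in [0, 1]."""
    # Weber's law: a just-noticeable increment grows with the base luminance,
    # so scale the boost by max(image, floor) instead of adding a constant.
    boost = weber_fraction * np.maximum(image, floor) * importance
    return np.clip(image + boost, 0.0, 1.0)

if __name__ == "__main__":
    img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
    weights = np.zeros((4, 4)); weights[:, 2:] = 1.0   # enhance the right half only
    print(augment(img, weights))
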
P-HRTF: Efficient Personalized HRTF Computation for High-Fidelity Spatial Sound
Authors: Alok Meshram, Ravish Mehra, Hongsheng Yang, Enrique Dunn, Jan-Michael Frahm, Dinesh Manocha
Abstract:
Accurate rendering of 3D spatial audio for interactive virtual auditory displays requires the use of personalized head-related transfer functions (HRTFs). We present a new approach to compute personalized HRTFs for any individual using a method that combines state-of-the-art image-based 3D modeling with an efficient numerical simulation pipeline. Our 3D modeling framework enables capture of the listener's head and torso using consumer-grade digital cameras to estimate a high-resolution, non-parametric surface representation of the head, including the extended vicinity of the listener's ear. We leverage sparse structure-from-motion and dense surface reconstruction techniques to generate a 3D mesh. This mesh is used as input to a numeric sound propagation solver, which uses acoustic reciprocity and the Kirchhoff surface integral representation to efficiently compute an individual's personalized HRTF. The overall computation takes tens of minutes on a multi-core desktop machine. We have used our approach to compute the personalized HRTFs of a few individuals, and we present our preliminary evaluation here. To the best of our knowledge, this is the first commodity technique that can be used to compute personalized HRTFs in a lab or home setting.
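
Once such personalized filters are available, applying them is straightforward: convolve a mono source with the left/right head-related impulse responses for the source direction to obtain a binaural signal. The sketch below shows only that application step with placeholder data (the paper's contribution is computing the filters from photos, which is not reproduced here):

# Sketch of using an HRTF pair (as time-domain HRIRs) for binaural rendering.
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """mono: 1-D source signal; hrir_*: head-related impulse responses."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)   # shape (samples, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.standard_normal(1024)        # placeholder mono signal
    hrir_l = rng.standard_normal(256) * 0.01  # placeholder HRIRs; real ones would
    hrir_r = rng.standard_normal(256) * 0.01  # come from the measured/simulated HRTF
    print(binaural_render(source, hrir_l, hrir_r).shape)
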
Visibility-Based Blending for Real-Time Applications
Authors: Taiki Fukiage, Takeshi Oishi, Katsushi Ikeuchi
Abstract:
There are many situations in which virtual objects are presented half-transparently over a background in real-time applications. In such cases, we often want to show the object with constant visibility. However, with conventional alpha blending, the visibility of a blended object varies substantially depending on the colors, textures, and structures of the background scene. To overcome this problem, we present a framework for blending images based on a subjective metric of visibility. In our method, a blending parameter is locally and adaptively optimized so that the visibility at each location achieves the targeted level. To predict the visibility of an object blended with an arbitrary parameter, we utilize one of the error visibility metrics that have been developed for image quality assessment. In this study, we demonstrate that the metric we use can linearly predict the visibility of a blended pattern on various texture images, and show that the proposed blending method works in practical augmented reality scenarios.
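
The per-location optimization this implies can be sketched as a one-dimensional search: choose the blending weight so that a visibility score of the blended patch against the background hits a target level. In the sketch below, a plain RMS difference stands in for the perceptual error-visibility metric used in the paper, and the function names are illustrative assumptions:

# Per-patch alpha search by bisection against a stand-in visibility score.
import numpy as np

def visibility(background, blended):
    """Stand-in visibility score: RMS difference to the unmodified background."""
    return float(np.sqrt(np.mean((blended - background) ** 2)))

def solve_alpha(background, overlay, target, iters=30):
    """Bisection on alpha in [0, 1]; visibility grows monotonically with alpha."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        alpha = 0.5 * (lo + hi)
        blended = (1.0 - alpha) * background + alpha * overlay
        if visibility(background, blended) < target:
            lo = alpha      # not visible enough: increase the blend weight
        else:
            hi = alpha      # too visible: decrease the blend weight
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    bg = np.full((8, 8), 0.6)
    ov = np.full((8, 8), 0.1)              # dark virtual overlay
    print(round(solve_alpha(bg, ov, target=0.1), 3))   # alpha hitting the target visibility
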
