ISMAR Papers for Session "Layout and Head-Worn Displays VST"
Session: Layout and Head-Worn Displays - Video See-Through
Date & Time: September 12, 2:00 pm - 3:30 pm
Location: HS1
Chair: Tobias Hoellerer, University of California, Santa Barbara
Papers:
Creating Automatically Aligned Consensus Realities for AR Videoconferencing
Authors: Nicolas Lehment, Daniel Merget, Gerhard Rigoll
Abstract: This paper presents an AR videoconferencing approach merging two remote rooms
into a shared workspace. Such bilateral AR telepresence inherently suffers
from breaks in immersion stemming from the different physical layouts of
participating spaces. As a remedy, we develop an automatic alignment scheme
which ensures that participants share a maximum of common features in their
physical surroundings. The system optimizes alignment with regard to initial
user position, free shared floor space, camera positioning and other factors.
Thus we can reduce discrepancies between different room and furniture layouts
without actually modifying the rooms themselves. We describe and discuss
our alignment scheme and demonstrate an example implementation on
real-world datasets.
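The alignment idea lends itself to a toy illustration: treat each room's free floor space as a set of grid cells and search over 2D rigid transforms for the one that maximizes overlap. This is a heavily simplified sketch, not the paper's optimizer; the grid representation, the cost function, and all names (`overlap_cost`, `align_rooms`) are illustrative assumptions, and the paper's further terms (initial user position, camera positioning) are omitted.

```python
import math
import random

def overlap_cost(room_a, room_b, theta, tx, ty):
    """Score an alignment: apply a 2D rigid transform (rotation theta,
    translation tx, ty) to room B's free-floor cells and count how many
    of room A's cells remain uncovered. Rooms are sets of integer
    (x, y) grid cells marking free floor space."""
    transformed = set()
    for (x, y) in room_b:
        xr = x * math.cos(theta) - y * math.sin(theta) + tx
        yr = x * math.sin(theta) + y * math.cos(theta) + ty
        transformed.add((round(xr), round(yr)))
    return len(room_a - transformed)  # uncovered cells of room A

def align_rooms(room_a, room_b, iterations=2000, seed=0):
    """Random search over candidate rigid transforms, keeping the one
    that maximizes shared free floor space (minimizes uncovered cells)."""
    rng = random.Random(seed)
    best = (0.0, 0, 0)
    best_cost = overlap_cost(room_a, room_b, *best)
    for _ in range(iterations):
        cand = (rng.choice([0.0, math.pi / 2, math.pi, 3 * math.pi / 2]),
                rng.randint(-5, 5), rng.randint(-5, 5))
        cost = overlap_cost(room_a, room_b, *cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

For instance, aligning a 3x3 room with a copy shifted two cells along x should recover a transform leaving no cell of the first room uncovered.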
FLARE: Fast Layout for Augmented Reality Applications
Authors: Ran Gal, Lior Shapira, Eyal Ofek, Pushmeet Kohli
Abstract: Creating a layout for an augmented reality (AR) application which embeds
virtual objects in a physical environment is difficult as it must adapt to
any physical space. We propose a rule-based framework for generating object
layouts for AR applications. Under our framework, the developer of an AR
application specifies a set of rules (constraints) which enforce
self-consistency (rules regarding the inter-relationships of application
components) and scene-consistency (application components are consistent with
the physical environment they are placed in). When a user enters a new
environment, we create, in real time, a layout for the application that
satisfies the defined constraints as far as possible. We find the
optimal configurations for each object by solving a constraint-satisfaction
problem. Our stochastic move-making algorithm is domain-aware and allows us
to efficiently converge to a solution for most rule sets. In the paper we
demonstrate several augmented reality applications that automatically adapt
to different rooms and changing circumstances in each room.
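The rule-based optimization can be caricatured in a few lines: represent each rule as a penalty function over a candidate layout and apply random single-object moves, keeping those that do not increase the total penalty. This is a minimal sketch under assumed representations (objects as names, positions as grid cells); the function names and the greedy acceptance rule are illustrative stand-ins, not FLARE's actual domain-aware moves.

```python
import random

def layout_cost(layout, free_cells, rules):
    """Sum of scene-consistency penalties (objects must sit on free
    cells of the physical environment) and self-consistency penalties
    (pairwise rules between application components)."""
    cost = sum(0 if pos in free_cells else 10 for pos in layout.values())
    for rule in rules:
        cost += rule(layout)
    return cost

def stochastic_layout(objects, free_cells, rules, iterations=5000, seed=1):
    """Stochastic move making, in miniature: repeatedly move one
    randomly chosen object to a random free cell and keep the move
    if it does not raise the total rule-violation cost."""
    rng = random.Random(seed)
    cells = sorted(free_cells)
    layout = {obj: rng.choice(cells) for obj in objects}
    cost = layout_cost(layout, free_cells, rules)
    for _ in range(iterations):
        obj = rng.choice(objects)
        old = layout[obj]
        layout[obj] = rng.choice(cells)
        new_cost = layout_cost(layout, free_cells, rules)
        if new_cost <= cost:
            cost = new_cost
        else:
            layout[obj] = old
    return layout, cost
```

A self-consistency rule such as "screen and keyboard must be adjacent" is then just a penalty function on the layout dictionary.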
Presence and Discernability in Conventional and Non-Photorealistic Immersive Augmented Reality
Authors: William Steptoe, Simon Julier, Anthony Steed
Abstract: Non-photorealistic rendering (NPR) has been shown to be a powerful way to
enhance both visual coherence and immersion in augmented reality (AR).
However, it has only been evaluated in idealized pre-rendered scenarios with
handheld AR devices. In this paper we investigate the use of NPR in an
immersive, stereoscopic, wide field-of-view head-mounted video see-through AR
display. This is a demanding scenario, which introduces many real-world
effects including latency, tracking failures, optical artifacts and
mismatches in lighting. We present the AR-Rift, a low-cost video see-through
AR system using an Oculus Rift and consumer webcams. We investigate the
themes of consistency and immersion as measures of psychophysical
non-mediation. An experiment measures discernability and presence in three
visual modes: conventional (unprocessed video and graphics), stylized
(edge-enhancement) and virtualized (edge-enhancement and color extraction).
The stylized mode results in chance-level discernability judgments,
indicating successful integration of virtual content to form a visually
coherent scene. Conventional and virtualized rendering bias judgments towards
correct and incorrect, respectively. Presence, as it may apply to immersive
AR, measured both behaviorally and subjectively, is similarly high across all
three conditions.
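Edge enhancement, the core of the stylized mode, can be illustrated with a minimal gradient-thresholding filter over a grayscale raster. The paper's stylization pipeline operates on live stereo video and rendered graphics together and is far more sophisticated; the function name, threshold, and central-difference gradient here are assumed simplifications.

```python
def edge_enhance(image, threshold=2, edge_value=0):
    """Stylize a grayscale image (list of equal-length rows of ints) by
    darkening pixels whose local gradient magnitude exceeds a threshold.
    Applying the same filter to camera video and rendered graphics gives
    both a shared visual style, the intuition behind edge-enhanced AR."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            # Clamped central differences in x and y.
            gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
            gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
            if abs(gx) + abs(gy) > threshold:
                out[y][x] = edge_value
    return out
```

On a tiny two-tone image, only the pixels straddling the intensity step are marked as edges.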
WeARHand: Head-Worn, RGB-D Camera-Based, Bare-Hand User Interface with Visually Enhanced Depth Perception
Authors: Taejin Ha, Steven Feiner, Woontack Woo
Abstract: We introduce WeARHand, which allows a user to manipulate virtual 3D objects
with a bare hand in a wearable augmented reality (AR) environment. Our method
uses no environmentally tethered tracking devices: by exploiting depth input
data, it localizes both a pair of near-range and far-range RGB-D cameras
mounted on a head-worn display and a moving bare hand in 3D space. Depth perception
is enhanced through egocentric visual feedback, including a semi-transparent
proxy hand. We implement a virtual hand interaction technique and feedback
approaches, and evaluate their performance and usability. The proposed method
can be applied to many 3D interaction scenarios using hands in a wearable AR
environment, such as AR information browsing, maintenance, design, and games.
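As a toy illustration of the kind of depth cue a semi-transparent proxy hand conveys, one can map the distance between a tracked fingertip and a virtual object's surface to a discrete feedback state. The thresholds, units, and function name below are illustrative assumptions, not WeARHand's implementation.

```python
def proxy_feedback(fingertip, obj_center, obj_radius, near=0.05):
    """Classify a fingertip's 3D position (meters) relative to a
    spherical virtual object: 'touching' on or inside the surface,
    'near' within a margin, else 'far'. Such a state could drive the
    opacity or highlight of a semi-transparent proxy hand."""
    d = sum((f - c) ** 2 for f, c in zip(fingertip, obj_center)) ** 0.5
    surface = d - obj_radius  # signed distance to the object surface
    if surface <= 0:
        return "touching"
    if surface <= near:
        return "near"
    return "far"
```

For example, a fingertip 2 cm outside an 8 cm sphere falls inside the 5 cm margin and reports "near".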