ISMAR 2014 - Sep 10-12 - Munich, Germany


ISMAR Papers for Session "Head-Worn Displays OST"

Session: Head-Worn Displays - Optical See-Through
Date & Time: September 12, 11:00 am - 12:30 pm
Location: HS1
Chair: Tom Furness, HITLab, University of Washington
Papers:
Performance and Sensitivity Analysis of INDICA: INteraction-free DIsplay CAlibration for Optical See-Through Head-Mounted Displays
Authors: Yuta Itoh, Gudrun Klinker
Abstract:
A key issue in AR applications with Optical See-Through Head-Mounted Displays (OST-HMDs) is to correctly project 3D information to the current viewpoint of the user. Manual calibration methods give the projection as a black box which explains observed 2D-3D relationships well (Fig. 1). Recently, we proposed an INteraction-free DIsplay CAlibration method (INDICA) for OST-HMDs, utilizing camera-based eye tracking [7]. It reformulates the projection in two ways: a black box combined with an actual eye model (Recycle Setup), and a combination of an explicit display model and an eye model (Full Setup). Although we have shown that the former performs more stably than repeated SPAAM calibration, we could not yet prove whether the same holds for the Full Setup. More importantly, it is still unclear how error in the calibration parameters affects the final results, so users cannot know how accurately they need to estimate each parameter in practice. We provide: (1) evidence that the Full Setup performs as accurately as the Recycle Setup under a marker-based display calibration, (2) an error sensitivity analysis for both SPAAM and INDICA over the on-/offline parameters, and (3) an investigation of the theoretical sensitivity on an OST-HMD, justified by real measurements.
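SPAAM-style manual calibration treats the eye-display projection as a single 3x4 matrix estimated from user-collected 2D-3D correspondences. The sketch below illustrates that black-box formulation with a standard Direct Linear Transform; it is a generic illustration, not the INDICA or SPAAM implementation from the paper, and the function names and the minimum of six correspondences are assumptions.

    import numpy as np

    def estimate_projection_dlt(points_3d, points_2d):
        """Estimate a 3x4 projection matrix from 2D-3D correspondences (DLT),
        i.e. the 'black box' mapping that SPAAM-style manual calibration recovers.
        points_3d: (N, 3) world points; points_2d: (N, 2) screen points; N >= 6."""
        A = []
        for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        A = np.asarray(A)
        # The projection is the direction of A's null space (smallest singular vector).
        _, _, vt = np.linalg.svd(A)
        return vt[-1].reshape(3, 4)

    def project(P, point_3d):
        """Apply the estimated projection to a 3D point, returning 2D screen coordinates."""
        x = P @ np.append(point_3d, 1.0)
        return x[:2] / x[2]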
Analysing the Effects of a Wide Field of View Augmented Reality Display on Search Performance in Divided Attention Tasks
Authors: Naohiro Kishishita, Kiyoshi Kiyokawa, Ernst Kruijff, Jason Orlosky, Tomohiro Mashita, Haruo Takemura
Abstract:
A wide field of view augmented reality display is a special type of head-worn device that enables users to view augmentations in the peripheral visual field. However, the actual effects of a wide field of view display on the perception of augmentations have not been widely studied. To improve our understanding of this type of display when conducting divided attention search tasks, we conducted an in-depth experiment testing two view management methods, in-view and in-situ labelling. With in-view labelling, search target annotations appear on the display border with a corresponding leader line, whereas in-situ annotations appear without a leader line, as if affixed to the referenced objects in the environment. Results show that target discovery rates consistently drop with in-view labelling and increase with in-situ labelling as the display approaches 100 degrees of field of view. Past this point, the performance of the two view management methods begins to converge, suggesting equivalent discovery rates at approximately 130 degrees of field of view. Results also indicate that users exhibited lower discovery rates for targets appearing in peripheral vision, and that field of view has little impact on response time and mental workload.
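The two view management methods compared in the paper can be pictured as follows: an in-situ label is drawn at the projected target position, while an in-view label is clamped to the display border and connected to the off-screen target by a leader line. The sketch below is a simplified illustration of that placement logic, not the authors' implementation; the rectangular clamping, margin value, and function name are assumptions.

    import numpy as np

    def place_label(target_px, display_w, display_h, margin=8):
        """Return (label position, leader line) for one annotation.
        In-situ: the label sits on the target and needs no leader line.
        In-view: a target outside the display is clamped to the border and
        connected back to its true position by a leader line.
        target_px: projected 2D target position in pixels (may be off-screen)."""
        x, y = target_px
        on_screen = 0 <= x < display_w and 0 <= y < display_h
        if on_screen:
            return (x, y), None                      # in-situ: no leader line
        # In-view: clamp to the display border, keep a leader line to the target.
        bx = float(np.clip(x, margin, display_w - margin))
        by = float(np.clip(y, margin, display_h - margin))
        return (bx, by), ((bx, by), (x, y))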
SmartColor: Real-Time Color Correction and Contrast for Optical See-Through Head-Mounted Displays
Authors: Juan David Hincapié-Ramos, Levko Ivanchuk, Srikanth Kirshnamachari Sridharan, Pourang Irani
Abstract:
Users of optical see-through head-mounted displays (OHMD) perceive color as a blend of the display color and the background. Color blending is a major usability challenge, as it leads to loss of color encodings and poor text legibility. Color correction aims to mitigate color blending by producing an alternative color which, when blended with the background, more closely approaches the color originally intended. To date, approaches to color correction do not yield optimal results or do not work in real time. This paper makes two contributions. First, we present QuickCorrection, a real-time color correction algorithm based on display profiles. We describe the algorithm, measure its accuracy, and analyze two implementations for the OpenGL graphics pipeline. Second, we present SmartColor, a middleware for color management of user-interface components in OHMDs. SmartColor uses color correction to provide three management strategies: correction, contrast, and show-up-on-contrast. Correction determines the alternative color which best preserves the original color. Contrast determines the color which best guarantees text legibility while preserving as much of the original hue as possible. Show-up-on-contrast makes a component visible when a related component does not have enough contrast to be legible. We describe SmartColor's architecture and illustrate the color strategies for various types of display content.
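The color-blending problem can be approximated with an additive model: on an optical see-through display, the perceived color is roughly the displayed color plus the background light. The sketch below shows a naive correction under that assumption (subtract the background and clamp); the paper's QuickCorrection instead searches a measured display profile, which this simplified model does not reproduce, and all names and values here are illustrative.

    import numpy as np

    def naive_correction(target_rgb, background_rgb):
        """Illustrative color correction for an additive optical see-through blend.
        Perceived ~= displayed + background, so display the clamped difference.
        Colors are linear RGB in [0, 1]; a real display needs a measured profile,
        which is what QuickCorrection uses instead of this naive model."""
        target = np.asarray(target_rgb, dtype=float)
        background = np.asarray(background_rgb, dtype=float)
        corrected = np.clip(target - background, 0.0, 1.0)
        perceived = np.clip(corrected + background, 0.0, 1.0)  # what the user would see
        return corrected, perceived

    # Example: intended mid-gray text over a bright greenish background.
    corrected, perceived = naive_correction([0.5, 0.5, 0.5], [0.2, 0.6, 0.1])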
Minimizing Latency for Augmented Reality Displays: Frames Considered Harmful
Authors: Feng Zheng, Turner Whitted, Anselmo Lastra, Peter Lincoln, Andrei State, Andrew Maimone, Henry Fuchs
Abstract:
We present initial results from a new image generation approach for low-latency displays such as those needed in head-worn AR devices. Avoiding the usual video interfaces, such as HDMI, we favor direct control of the internal display technology. We illustrate our new approach with a bench-top optical see-through AR proof-of-concept prototype that uses a Digital Light Processing (DLP) projector whose Digital Micromirror Device (DMD) imaging chip is directly controlled by a computer, similar to the way random access memory is controlled. We show that a perceptually-continuous-tone dynamic gray-scale image can be efficiently composed from a very rapid succession of binary (partial) images, each calculated from the continuous-tone image generated with the most recent tracking data. As the DMD projects only a binary image at any moment, it cannot instantly display this latest continuous-tone image, and conventional decomposition of a continuous-tone image into binary time-division-multiplexed values would induce just the latency we seek to avoid. Instead, our approach maintains an estimate of the image the user currently perceives, and at every opportunity allowed by the control circuitry, sets each binary DMD pixel to the value that will reduce the difference between that user-perceived image and the newly generated image from the latest tracking data. The resulting displayed binary image is "neither here nor there," but always approaches the moving target that is the constantly changing desired image, even when that image changes every 50µs. We compare our experimental results with imagery from a conventional DLP projector with similar internal speed, and demonstrate that AR overlays on a moving object are more effective with this kind of low-latency display device than with displays of similar speed that use a conventional video interface.
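The per-pixel update rule described in the abstract can be sketched as follows: keep a running estimate of the gray level the eye integrates, and at each binary frame set every mirror to whichever state moves that estimate closer to the latest desired image. The decay constant modeling the eye's temporal integration and the function name below are assumptions, not values from the paper.

    import numpy as np

    def next_binary_frame(perceived, desired, decay=0.9):
        """One step of the per-pixel update described in the abstract (sketch).
        perceived: current estimate of the gray level the eye integrates, in [0, 1].
        desired:   latest continuous-tone target image from the newest tracking data.
        Each mirror is set to whichever binary value (0 or 1) moves the perceived
        estimate toward the target; 'decay' models the eye's temporal integration
        and is an assumed constant."""
        with_on = decay * perceived + (1.0 - decay) * 1.0   # estimate if the mirror is on
        with_off = decay * perceived                        # estimate if the mirror is off
        bits = np.abs(with_on - desired) < np.abs(with_off - desired)
        new_perceived = np.where(bits, with_on, with_off)
        return bits.astype(np.uint8), new_perceived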
