29 SEP - 3 OCT, 2015 FUKUOKA, JAPAN





Augmentation Without Boundaries

The 14th IEEE International Symposium on Mixed and Augmented Reality
FUKUOKA, JAPAN

S&T Poster Session

Each poster will be presented in one of three teaser sessions: a regular poster is given a 1-minute teaser, and an extended poster a 3-minute teaser. Posters will be on display throughout the entire conference, as individually arranged by the authors. In addition, guaranteed presentation times are arranged in the following three batches:

Please prepare your poster presentation by following the Guidelines.

A map of the poster layout in Room 401-403 is available here.


30th September, 16:25-18:15, Room 401-402-403

Posters

Overlaying Navigation Signs on a Road Surface using a Head-Up Display
Kaho Ueno, Takashi Komuro
Teaser

In this paper, we propose a method for overlaying navigation signs on a road surface and displaying them on a head-up display (HUD). Accurate overlay is achieved by measuring 3D data of the surface in real time using a depth camera. In addition, the effect of head movement is reduced by performing face tracking with a camera placed in front of the HUD and by correcting the distortion of projection images according to the driver’s viewpoint position. Using an experimental system, we conducted an experiment to display a navigation sign and confirmed that the sign is correctly overlaid on the surface and appears fixed to the surface in real space.
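As a rough illustration of the viewpoint-dependent distortion correction described above, the sketch below pre-warps a sign image with a homography computed from where the sign's corners should appear from the tracked viewpoint. This is not the authors' implementation; the file name, corner coordinates, and HUD resolution are invented for illustration.

```python
import cv2
import numpy as np

sign = cv2.imread("arrow.png")           # navigation sign texture (assumed file)
h, w = sign.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
# HUD-plane pixels of the sign corners, which would be computed from the
# depth camera's road geometry and the face-tracked eye position
# (values here are purely illustrative).
dst = np.float32([[420, 310], [860, 305], [980, 560], [300, 570]])

H, _ = cv2.findHomography(src, dst)
# Pre-distorted image sent to the HUD (assumed 1280x720 panel).
warped = cv2.warpPerspective(sign, H, (1280, 720))
```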


Deformation Estimation of Elastic Bodies Using Multiple Silhouette Images for Endoscopic Image Augmentation
Akira Saito, Megumi Nakao, Yuki Uranishi, Tetsuya Matsuda
Teaser

This study proposes a method to estimate elastic deformation using silhouettes obtained from multiple endoscopic images. Our method makes it possible to estimate intraoperative deformation of organs using a volumetric mesh model reconstructed from preoperative CT data. We use silhouette information of elastic bodies not to model the shape but to estimate local displacements. The model shape is updated to satisfy the silhouette constraints while preserving the shape as much as possible. Experimental results showed that the proposed method can estimate the deformation with RMS errors of 5 mm to 1 cm.


Hands-free AR Work Support System Monitoring Work Progress with Point-cloud Data Processing
Hirohiko Sagawa, Hiroto Nagayoshi, Harumi Kiyomizu, Tsuneya Kurihara
Teaser

We present a hands-free AR work support system that provides work instructions to workers without interrupting normal work procedures. The system estimates work progress by monitoring the status of work objects solely on the basis of 3D data captured from a depth sensor mounted on a helmet, and it selects appropriate information to be displayed on a head-mounted display (HMD) on the basis of the estimated progress. We describe a prototype of the proposed system and the results of preliminary experiments carried out to evaluate its accuracy and performance.


Endoscopic Image Augmentation Reflecting Shape Changes During Cutting Procedures
Megumi Nakao, Shota Endo, Keiho Imanishi, Tetsuya Matsuda
Teaser

This paper introduces a concept of endoscopic image augmentation that overlays shape changes to support cutting procedures. The framework handles the history of the measured drill tip’s location as a volume label and visualizes the remaining volume to be cut, overlaid on the endoscopic image in real time. We performed a cutting experiment, and the efficacy of the cutting aid was evaluated in terms of shape similarity, total distance moved by the cutting tool, and required cutting time. The results of the experiments showed that cutting performance was significantly improved by the proposed framework.


Toward Enhancing Robustness of DR System: Ranking Model for Background Inpainting
Mariko Isogawa, Dan Mikami, Kosuke Takahashi, Akira Kojima
Teaser

A method for blindly predicting inpainted image quality is proposed for enhancing the robustness of diminished reality (DR), which uses inpainting to remove unwanted objects by replacing them with background textures in real time. The method maps from inpainted image features to subjective image quality scores without the need for reference images. It enables more complex background textures to be applied to DR.
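As a rough sketch of such a blind quality-ranking model (the feature set and regressor below are assumptions for illustration, not the paper's): extract no-reference features from each inpainted candidate, learn a linear map to subjective scores, and rank candidates by predicted quality.

```python
import numpy as np

def features(img):
    """Placeholder no-reference features: intensity and gradient statistics."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return np.array([img.mean(), img.std(), mag.mean(), mag.std()])

def fit_scores(F, y, lam=1e-3):
    """Ridge regression from features F (N,d) to subjective scores y (N,)."""
    d = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(d), F.T @ y)

def best_candidate(candidates, w):
    """Rank inpainted results by predicted quality; return the best one."""
    return max(candidates, key=lambda img: float(features(img) @ w))
```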


Interactive Visualizations for Monoscopic Eyewear to Assist in Manually Orienting Objects in 3D
Carmine Elvezio, Mengu Sukan, Steve Feiner, Barbara Tversky
Teaser

Assembly or repair tasks often require objects to be held in specific orientations to view or fit together. Research has addressed the use of AR to assist in these tasks, delivered as registered overlaid graphics on stereoscopic head-worn displays. In contrast, we are interested in using monoscopic head-worn displays, such as Google Glass. To accommodate their small monoscopic field of view, off center from the user's line of sight, we are exploring alternatives to registered overlays. We describe four interactive rotation guidance visualizations for tracked objects intended for these displays.


Movable Spatial AR On-The-Go
Ahyun Lee, Joo-Haeng Lee, Jaehong Kim
Teaser

We present a movable spatial augmented reality (SAR) system that can be easily installed in a user workspace. The proposed system aims to dynamically cover a wider projection area using a portable projector attached to a simple robotic device. It has a clear advantage over a conventional SAR scenario where, for example, a projector is installed with a fixed projection area in the workspace. In previous research [1], we proposed a data-driven kinematic control method for a movable SAR system. This method targets a SAR system integrated with a user-created robotic (UCR) device for which an explicit kinematic configuration, such as a CAD model, is unavailable. Our contribution in this paper is to show the feasibility of the data-driven control method by developing a practical application in which dynamic changes of the projection area matter. We outline the control method and demonstrate an assembly guide example using a casually installed movable SAR system.


2D-3D Co-segmentation for AR-based Remote Collaboration
Kuo-Chin Lien, Benjamin Nuernberger, Matthew Turk, Tobias Höllerer
Teaser

In Augmented Reality (AR) based remote collaboration, a remote user can draw a 2D annotation that emphasizes an object of interest to guide a local user in accomplishing a task. This annotation is typically performed only once and then sticks to the selected object in the local user's view, independent of his or her camera movement. In this paper, we present an algorithm to segment the selected object, including its occluded surfaces, such that the 2D selection can be appropriately interpreted in 3D and rendered as a useful AR annotation even when the local user moves and significantly changes the viewpoint.


Manipulating Haptic Shape Perception by Visual Surface Deformation and Finger Displacement in Spatial Augmented Reality
Toshio Kanamori, Daisuke Iwai, Kosuke Sato
Teaser

Many researchers are trying to realize pseudo-haptic systems that can visually manipulate a user's haptic shape perception when touching a physical object. In this paper, we focus on altering the perceived surface shape of a curved object touched with an index finger, by visually deforming the object's surface and displacing the visual representation of the user's index finger as if he or she were touching the deformed surface, using spatial augmented reality. An experiment was conducted with a projection system to confirm the effect of this visual feedback on the perceived shape of a curved surface. The results showed that the shape the participants perceived deviated from the actual shape they touched, demonstrating the possibility of manipulating a user's perceived shape of a curved surface using pseudo-haptics in spatial augmented reality.


Mixed-Reality Store on the Other Side of a Tablet
Masaya Ohta, Shunsuke Nagano, Hotaka Niwa, Katsumi Yamashita
Teaser

This paper proposes a mixed-reality shopping system for users who do not own a PC but do own a tablet. In this system, while viewing panoramic images photographed along the aisles of a real store, the user can move freely around the store. Products can be selected and freely viewed from any angle. Furthermore, by utilizing photo-based augmented reality (Photo AR) technology, a product can be displayed as if it were in the hands of the user. The results of a user evaluation showed that, even though the proposed system uses a tablet with a smaller screen, it was preferred over a conventional e-commerce site using a larger monitor.


Avatar-Mediated Contact Interaction between Remote Users for Social Telepresence
Jihye Oh, Yeonjoon Kim, Taeil Jin, Sukwon Lee, Youjin Lee, Sung-Hee Lee
Teaser

Social touch, such as a handshake, increases the sense of coexistence and closeness between remote users in a social telepresence environment, but creating such coordinated contact movements with a distant person is extremely difficult given only visual feedback, without haptic feedback. This paper presents a method to enable hand-contact interaction between remote users in an avatar-mediated telepresence environment. The key idea is that, while the avatar directly follows its owner’s motion under normal conditions, it adjusts its pose to maintain contact with the other user when the two users attempt contact interaction. To this end, we develop classifiers to recognize the users’ intention for contact interaction. The contact classifier identifies whether the users are trying to initiate contact when they are not in contact, and the separation classifier identifies whether two users in contact are attempting to break contact. The classifiers are trained on a set of geometric distance features. During the contact phase, inverse kinematics is solved to determine the pose of the avatar’s arm so as to initiate and maintain natural contact with the other user’s hand. Our system is unique in that two remote users can perform real-time hand-contact interaction in a social telepresence environment.


Towards Estimating Usability Ratings of Handheld Augmented Reality Using Accelerometer Data
Marc Ericson Santos, Takafumi Taketomi, Goshiro Yamamoto, Gudrun Klinker, Christian Sandor, Hirokazu Kato
Teaser

Usability evaluations are important to the development of augmented reality systems. However, conducting large-scale longitudinal studies remains challenging because of the lack of inexpensive but appropriate methods. In response, we propose a method for implicitly estimating usability ratings based on readily available sensor logs. To demonstrate our idea, we explored the use of features of accelerometer data in estimating usability ratings in an annotation task. Results show that our implicit method corresponds with explicit usability ratings at 79-84%. These results should be investigated further in other use cases, with other sensor logs.
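A minimal sketch of the idea, assuming simple summary statistics as features (the paper's actual feature set is not reproduced here): derive statistics from a session's accelerometer log and check how well each tracks the explicit usability ratings.

```python
import numpy as np

def accel_features(a, dt=0.01):
    """a: (N,3) accelerometer samples for one task session (assumed 100 Hz)."""
    mag = np.linalg.norm(a, axis=1)          # overall motion magnitude
    jerk = np.diff(mag) / dt                 # rate of change of motion
    return np.array([mag.mean(), mag.std(), np.abs(jerk).mean(), mag.max()])

def rating_correlations(sessions, ratings):
    """Correlate each feature with explicit usability ratings across sessions."""
    F = np.array([accel_features(s) for s in sessions])
    return [float(np.corrcoef(F[:, j], ratings)[0, 1]) for j in range(F.shape[1])]
```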


Abecedary tracking and mapping: a toolkit for tracking competition
Hideaki Uchiyama, Takafumi Taketomi, Sei Ikeda, João Lima
Teaser

This paper introduces a toolkit providing camera calibration, monocular Simultaneous Localization and Mapping (SLAM), and registration with a calibration marker. With the toolkit, users can perform the whole procedure of the 2015 ISMAR on-site tracking competition. Since the source code is designed to be well-structured and highly readable, users can easily install and modify the toolkit. By providing the toolkit, we encourage beginners to learn tracking techniques and participate in the competition.


Retrieving Lights Positions Using Plane Segmentation with Diffuse Illumination Reinforced with Specular Component
Paul-Émile Buteau, Hideo Saito
Teaser

We present a novel method to retrieve the positions of multiple point lights in real indoor scenes based on a 3D reconstruction. The method takes advantage of the illumination over planes detected by segmenting the reconstructed mesh of the scene. The estimation does not suffer from the presence of specular highlights; rather, this component is used to refine the final estimation. This allows consistent relighting throughout the entire scene for augmented reality purposes.


[S&T Short]
Simultaneous Direct and Augmented View Distortion Calibration of Optical See-Through Head-Mounted Displays

Yuta Itoh, Gudrun Klinker

In Augmented Reality (AR) with an Optical See-Through Head-Mounted Display (OST-HMD), the spatial calibration between a user's eye and the display screen is a crucial issue in realizing seamless AR experiences. A successful calibration hinges upon proper modeling of the display system which is conceptually broken down into an eye part and an HMD part. This paper breaks the HMD part down even further to investigate optical aberration issues. The display optics causes two different optical aberrations that degrade the calibration quality: the distortion of incoming light from the physical world, and that of light from the image source of the HMD. While methods exist for correcting either of the two distortions independently, there is, to our knowledge, no method which corrects for both simultaneously.
This paper proposes a calibration method that corrects both distortions simultaneously for an arbitrary eye position given an OST-HMD system. We expand a light-field (LF) correction approach [8] originally designed for the former distortion. Our method is camera-based and has an offline learning step and an online correction step. We verify our method in exemplary calibrations of two different OST-HMDs: a professional and a consumer OST-HMD. The results show that our method significantly improves the calibration quality compared to a conventional method, with accuracy comparable to 20/50 visual acuity. The results also indicate that the quality improves only when both distortions are corrected simultaneously.


[S&T Full]
Semi-Parametric Color Reproduction Method for Optical See-Through Head-Mounted Displays

Yuta Itoh, Maksym Dzitsiuk, Toshiyuki Amano, Gudrun Klinker

The fundamental issue in Augmented Reality (AR) is how to naturally mediate reality, as seen by users, with virtual content. In AR applications with Optical See-Through Head-Mounted Displays (OST-HMDs), this often raises the problem of rendering colors on the OST-HMD consistently with the input colors. However, due to various display constraints and eye properties, it is still a challenging task to reproduce colors on OST-HMDs indistinguishably. An approach to solving this problem is to pre-process the input color so that a user perceives the output color on the display to be the same as the input.
We propose a color calibration method for OST-HMDs. We start by modeling the physical optics in the rendering and perception process between the HMD and the eye. We treat the color distortion as a semi-parametric model which separates the non-linear color distortion from the linear color shift. We demonstrate that calibrated images regain their original appearance on two OST-HMD setups with both synthetic and real datasets. Furthermore, we analyze the limitations of the proposed method and the remaining problems of color reproduction in OST-HMDs. We then discuss how to realize more practical color reproduction methods for future HMD-eye systems.
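To make the pre-processing idea concrete, here is a toy numeric sketch assuming the distortion decomposes as a 3x3 color-mixing matrix followed by a monotone per-channel response; the ordering, matrix entries, and gamma value are illustrative assumptions, not the calibrated model from the paper.

```python
import numpy as np

M = np.array([[0.9, 0.1, 0.0],           # assumed 3x3 color-mixing matrix
              [0.05, 0.85, 0.1],
              [0.0, 0.1, 0.8]])

xs = np.linspace(0.0, 1.0, 256)
f = xs ** 2.2                             # assumed monotone channel response

def precompensate(c_target):
    """c_target in [0,1]^3; returns the color to feed the display so that
    the perceived output f(M c) matches c_target."""
    lin = np.interp(c_target, f, xs)      # f^-1 via inverse table lookup
    return np.linalg.solve(M, lin)        # apply M^-1 (caller clips to gamut)
```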




1st October, 13:50-15:50, Room 401-402-403

Extended Posters

Augmented Reality for Radiation Awareness
Nicola Leucht, Severine Habert, Patrick Wucherer, Simon Weidert, Nassir Navab, Pascal Fallavollita
Teaser

C-arm fluoroscopes are frequently used during surgeries for intra-operative guidance. Unfortunately, due to X-ray emission and scattering, increased radiation exposure occurs in the operating theatre. The objective of this work is to sensitize surgeons to their radiation exposure, enable them to check their exposure over time, and help them choose the best position relative to the C-arm gantry during surgery. First, we simulate the amount of radiation that reaches the surgeon using the Geant4 software, a toolkit developed by CERN. Using a flexible setup in which two RGB-D cameras are mounted on the mobile C-arm, the scene is captured and modeled. After the simulation of particles with specific energies, the dose at the surgeon’s position, determined by the depth cameras, can be measured. Validation was performed by comparing the simulation results to both theoretical values from the C-arm’s user manual and real measurements made with a QUART didoSVM dosimeter. The average error was 16.46% and 16.39%, respectively. The proposed flexible setup and high simulation precision, achieved without calibration against measured dosimeter values, have great potential to be directly used and integrated intraoperatively for dose measurement.


Remote Mixed Reality System Supporting Interactions with Virtualized Objects
Peng Yang, Itaru Kitahara, Yuichi Ohta
Teaser

Mixed Reality (MR) can merge real and virtual worlds seamlessly. This paper proposes a method to realize smooth collaboration using remote MR, which makes it possible for geographically distributed users to share the same objects and communicate in real time as if they were in the same place. In this paper, we consider a situation in which users at local and remote sites perform collaborative work, and the real objects to be operated on exist only at the local site. It is necessary to share the real objects between the two sites. In prior studies, sharing real objects by duplication is either too costly or unrealistic. Therefore, we propose a method to share the objects by virtualizing them using Computer Vision (CV) and then rendering the virtualized objects using MR. We also realize interaction with the virtualized objects by the remote-site user and construct a remote collaborative work system. Through experiments, we confirmed the effectiveness of our approach.


Fusion of Vision and Inertial Sensing for Accurate and Efficient Pose Tracking on Smartphones
Xin Yang, Tim Cheng
Teaser

This paper aims at accurate and efficient pose tracking of planar targets on modern smartphones. Existing methods, relying on either visual features or motion sensing based on built-in inertial sensors, are either too computationally expensive to achieve real-time performance on a smartphone or too noisy to achieve sufficient tracking accuracy. In this paper we present a hybrid tracking method which achieves real-time performance with high accuracy. Based on the framework of a state-of-the-art visual feature tracking algorithm [5], which ensures accurate and reliable pose tracking, the proposed hybrid method significantly reduces the computational cost with the assistance of a phone’s built-in inertial sensors. However, noise in the inertial sensors and abrupt errors in feature tracking due to severe motion blur can destabilize the hybrid tracking system. To address this problem, we employ an adaptive Kalman filter with abrupt error detection to robustly fuse the inertial and feature tracking results. We evaluated the proposed method on a dataset consisting of 16 video clips with synchronized inertial sensing data. Experimental results demonstrate our method’s superior performance and accuracy on smartphones compared to a state-of-the-art vision tracking method [5]. The dataset will be made publicly available with the publication of this paper.
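A minimal sketch of an adaptive Kalman filter with abrupt-error detection, in the spirit of the fusion described above (a toy 1D constant-velocity filter, not the authors' implementation): inertial data drive the prediction, the visual tracker supplies measurements, and an innovation gate down-weights implausible measurements, e.g. those corrupted by motion blur.

```python
import numpy as np

class AdaptiveKF:
    def __init__(self, q=1e-3, r=1e-2, gate=9.0):
        self.x = np.zeros(2)             # state: [pose, velocity]
        self.P = np.eye(2)               # state covariance
        self.Q = q * np.eye(2)           # process noise
        self.r = r                       # nominal measurement noise
        self.gate = gate                 # chi-square threshold (1 dof)

    def predict(self, accel, dt):
        """Inertial prediction step (constant-velocity model + accel input)."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x
        self.x[1] += accel * dt
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z_visual):
        """Visual measurement update with abrupt-error gating."""
        innov = z_visual - self.x[0]
        S = self.P[0, 0] + self.r
        # If the innovation is implausible, inflate R instead of trusting it.
        r_eff = self.r if innov * innov / S < self.gate else self.r * 100.0
        S = self.P[0, 0] + r_eff
        K = self.P[:, 0] / S             # Kalman gain for a scalar measurement
        self.x = self.x + K * innov
        self.P = self.P - np.outer(K, self.P[0, :])
```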


Augmenting mobile C-arm fluoroscopes via Stereo-RGBD sensors for multimodal visualization
Severine Habert, Meng Ma, Wadim Kehl, Xiang Wang, Federico Tombari, Pascal Fallavollita, Nassir Navab
Teaser

Fusing intraoperative X-ray data with real-time video in a common reference frame is not trivial, since both modalities have to be acquired from the same viewpoint. The goal of this work is to design a flexible system comprising two RGBD sensors that can be attached to any mobile C-arm, with the objective of synthesizing projective images from the X-ray source viewpoint. To achieve this, we calibrate the RGBD sensors and then the X-ray source with a 3D calibration object. We then synthesize the projective image from the X-ray viewpoint by applying a volumetric rendering method. Finally, the X-ray image is overlaid on the projective image without any further registration, offering a multimodal visualization of X-ray and color images. In this paper we present the different steps of development (i.e., hardware setup, calibration, and rendering algorithm) and discuss clinical applications for the new video-augmented C-arm. By placing X-ray markers on a patient’s hand and a spine model, we show that the overlay accuracy between the X-ray image and the synthesized image is on average 1.7 mm.


INCAST: Interactive Camera Streams for Surveillance Cams AR
István Szentandrási, Michal Zachariáš, Rudolf Kajan, Jan Tinka, Markéta Dubská, Jakub Sochor, Adam Herout
Teaser

Augmented reality does not make any sense for fixed cameras. Or does it? In this work, we deal with static cameras and their usability for augmented reality applications. Knowing that the camera does not move makes camera pose estimation both less and more difficult: one does not have to deal with pose changes over time, but on the other hand, obtaining some level of understanding of the scene from a single viewpoint is challenging. We propose several ways to take advantage of the camera being static, along with a pipeline for broadcasting a video stream enriched with the information needed for its visual augmentation - Interactive Camera Streams (INCAST). We present a proof-of-concept system showing the usability of INCAST on several use cases: non-interactive demos and simple AR games.


Natural 3D Interaction using a See-through Mobile AR System
Yuko Unuma, Takashi Komuro
Teaser

In this paper, we propose an interaction system in which the appearance of the image displayed on a mobile display is consistent with that of the real space, and which enables a user to interact with virtual objects overlaid on the image using his or her hand. The three-dimensional scene obtained by a depth camera is projected according to the user’s viewpoint position obtained by face tracking, and a see-through image whose appearance is consistent with the scene outside the mobile display is generated. Interaction with virtual objects is realized using the depth information obtained by the depth camera. To make virtual objects behave as if they were in real space, they are rendered in a world coordinate system that remains fixed to the real scene even when the mobile display moves, and the direction of the gravitational force applied to the virtual objects is made consistent with that of the world coordinate system. The former is realized using the ICP (Iterative Closest Point) algorithm, and the latter using the information obtained by an accelerometer. Thus, natural interaction with virtual objects using the user’s hand is realized.


Augmented Wire Routing Navigation for Wire Assembly
Mark Rice, Hong Huei Tay, Jamie Ng, Calvin Lim, Senthil Selvaraj, Ellick Wu
Teaser

Within manufacturing, high value digital solutions are needed to optimize and aid shop floor processes. This includes agile technologies that can be easily integrated into factory environments to facilitate manufacturing tasks. In this paper, we present a dynamic system to support the electrical wiring assembly of commercial aircraft. Specifically, we describe the system design, which aims to improve the productivity of factory operators through the integration of wearable and mobile solutions. An evaluation of the augmented component of our system using a pair of smart glasses is reported with 12 participants, as we describe important interaction issues in the ongoing development of this work.


Marker Identification Using IR LEDs and RGB Color Descriptors
Gou Koutaki, Shodai Hirata, Hiromu Sato, Keiichi Uchimura
Teaser

In optical motion capture systems, it is difficult to correctly recognize markers by their unique identifiers (IDs) in a single frame. In this paper, we propose using two types of light-emitting diodes (LEDs) and cameras, infrared (IR) and RGB, to correctly detect and identify all markers on tracked objects in a given system. To detect and estimate the three-dimensional (3D) position of a marker, we measure the IR LEDs using IR stereo cameras. Furthermore, to identify each marker, we calculate and compare an RGB color descriptor in the vicinity of its center. Our system consists of general-purpose IR and RGB cameras and is easy to extend by increasing the number of cameras. We implemented an IR/RGB LED marker circuit and constructed a simple motion capture system to test the effectiveness of our approach. The results show that our system can detect the 3D positions and unique IDs of markers in one frame.
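As a simple illustration of the identification step (the descriptor and matching rule below are assumptions, not the paper's exact formulation): average the RGB values in a small window around each detected marker center and assign the ID of the nearest reference descriptor.

```python
import numpy as np

def color_descriptor(img, center, radius=5):
    """Mean RGB in a window around the marker center (img: HxWx3 array;
    assumes the center is not at the image border)."""
    x, y = center
    patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return patch.reshape(-1, 3).mean(axis=0)

def identify(img, centers, ref_descriptors):
    """ref_descriptors: (K,3) array, one row per known marker ID."""
    ids = []
    for c in centers:
        d = color_descriptor(img, c)
        ids.append(int(np.argmin(np.linalg.norm(ref_descriptors - d, axis=1))))
    return ids
```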


RGB-D/C-arm Calibration and Application in Medical Augmented Reality
Xiang Wang, Severine Habert, Meng Ma, Chun-Hao Huang, Pascal Fallavollita, Nassir Navab
Teaser

Calibration and registration are the first steps for augmented reality and mixed reality applications. In the medical field, the calibration between an RGB-D camera and a mobile C-arm fluoroscope is a new topic which introduces challenges. In this paper, we propose a precise 3D/2D calibration method to achieve a video augmented fluoroscope. With the design of a suitable calibration phantom for RGB-D/C-arm calibration, we calculate the projection matrix from the depth camera coordinates to the X-ray image. Through a comparison experiment by combining different steps leading to the calibration, we evaluate the effect of every step of our calibration process. Results demonstrated that we obtain a calibration RMS error of 0.54±1.40 mm which is promising for surgical applications. We conclude this paper by showcasing two clinical applications. One is a markerless registration application, the other is an RGB-D camera augmented mobile C-arm visualization.
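The mapping from depth-camera coordinates to X-ray pixels can be illustrated with a standard DLT estimate from phantom correspondences; this is a generic sketch, not the paper's exact calibration pipeline.

```python
import numpy as np

def dlt_projection(X, u):
    """Estimate the 3x4 projection matrix from X: (N,3) 3D points to
    u: (N,2) pixels, N >= 6 (e.g. markers on a calibration phantom)."""
    A = []
    for (x, y, z), (px, py) in zip(X, u):
        A.append([x, y, z, 1, 0, 0, 0, 0, -px * x, -px * y, -px * z, -px])
        A.append([0, 0, 0, 0, x, y, z, 1, -py * x, -py * y, -py * z, -py])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)          # right singular vector, smallest value

def reprojection_rms(P, X, u):
    """RMS pixel error of the estimated projection, as used for validation."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.sqrt(np.mean(np.sum((proj - u) ** 2, axis=1)))
```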


Transforming your website to an augmented reality view
Dimitrios Ververidis, Spiros Nikolopoulos, Ioannis Kompatsiaris
Teaser

In this paper we present FastAR, a software component capable of transforming Joomla based websites into AR-channels compatible with the most popular augmented reality browsers (i.e. Junaio, Layar, Wikitude). FastAR exploits the consistency of the data structure across multiple sites that have been developed using the same content management system, so as to automate the transformation process of an internet website to an augmented reality channel. The proposed component abstracts all related programming tasks and significantly reduces the time required to generate and publish AR-content, making the entire process manageable by non-experts. In verifying the usefulness and effectiveness of FastAR, we conducted a survey to solicit the opinion of users who carried out the installation and transformation process.



Posters

Improved SPAAM Robustness Through Stereo Calibration
Kenneth Moser, J. Edward Swan II
Teaser

We are investigating methods for improving the robustness and consistency of the Single Point Active Alignment Method (SPAAM) calibration procedure for optical see-through (OST) head-mounted displays (HMDs). Our investigation focuses on two variants of SPAAM. The first utilizes a standard monocular alignment strategy to calibrate the left and right eye separately, while the second leverages stereoscopic cues available from binocular HMDs to calibrate both eyes simultaneously. We compare results from repeated calibrations between methods using eye location estimates and interpupillary distance (IPD) measures. Our findings indicate that the stereo SPAAM method produces more accurate and consistent results than the monocular variant.


Road Maintenance MR System Using LRF and PDR
ChingTzun Chang, Ryosuke Ichikari, Takashi Okuma, Takeshi Kurata, Koji Makita
Teaser

We have been developing an MR system for supporting road maintenance using overlaid visual aids. In this situation, we need a positioning method that provides sub-meter accuracy and works even if the appearance of the road surface changes completely owing to factors such as construction phase, time (day or night), and weather. Therefore, we are developing a real-time worker positioning method that can be applied in these situations by fusing data from a laser range finder (LRF) and pedestrian dead-reckoning (PDR). In a real field, several workers are working and moving around the workspace, so we need to establish correspondence between PDR-based trajectories and LRF-based trajectories by defining a similarity measure between trajectories. Corresponding pairs of trajectories are then fused to acquire the position and direction of each worker. In this paper, we propose a method to calculate the similarity between trajectories and a procedure to fuse them.
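A minimal sketch of one plausible similarity measure (an assumption; the paper's definition is not reproduced here): RMS distance between time-synchronized trajectories after an optimal 2D rigid alignment, followed by greedy pairing of PDR and LRF tracks.

```python
import numpy as np

def rigid_rms(A, B):
    """A, B: (N,2) trajectories resampled to a common time base.
    Rotation is allowed because PDR headings are relative."""
    a, b = A - A.mean(0), B - B.mean(0)
    U, _, Vt = np.linalg.svd(a.T @ b)
    R = (U @ Vt).T                       # no scaling: both tracks are metric
    if np.linalg.det(R) < 0:             # keep a proper rotation
        Vt[-1] *= -1
        R = (U @ Vt).T
    resid = b - a @ R.T
    return np.sqrt((resid ** 2).sum(1).mean())

def match(pdr_tracks, lrf_tracks):
    """Greedily pair each PDR trajectory with its most similar LRF trajectory."""
    pairs, free = [], set(range(len(lrf_tracks)))
    for i, p in enumerate(pdr_tracks):
        j = min(free, key=lambda k: rigid_rms(p, lrf_tracks[k]))
        pairs.append((i, j))
        free.discard(j)
    return pairs
```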


Geometric Mapping for Color Compensation using Scene Adaptive Patches
Jong Hun Lee, Yong Hwi Kim, Yong Yi Lee, Kwan Heng Lee
Teaser

The SAR technique using a projector-camera system allows us to produce various effects on a real scene without physically altering it. In order to project content onto a textured scene without color imperfections, geometric and radiometric compensation of the projection image should be conducted as preprocessing. In this paper, we present a new geometric mapping method for color compensation in a projector-camera system. We capture the scene and segment it into adaptive patches according to the scene structure using SLIC segmentation. A piecewise polynomial function is evaluated for each patch to find pixel-to-pixel correspondences between the measured and projection images. Finally, color compensation is performed using a color mixing matrix. Experimental results show that our geometric mapping method establishes accurate correspondences and that the color compensation alleviates the color imperfections caused by the texture of a general scene.


Pseudo Printed Fabrics through Projection Mapping
Yuichiro Fujimoto, Goshiro Yamamoto, Takafumi Taketomi, Christian Sandor, Hirokazu Kato
Teaser

Projection-based Augmented Reality commonly projects onto rigid objects, while only a few systems project onto deformable objects. In this paper, we present Pseudo Printed Fabrics (PPF), which enables projection onto a deforming piece of cloth. This can be applied to previewing a cloth design while manipulating its shape. We support challenging manipulations, including heavy occlusions and stretching of the cloth. In previous work, we developed a similar system based on a novel marker pattern; PPF extends it in two important aspects. First, we improved performance by two orders of magnitude to achieve interactive rates. Second, we developed a new interpolation algorithm to maintain registration during challenging manipulations. We believe that PPF can be applied to domains including virtual try-on and fashion design.


[S&T Short]
Augmented Reality during Cutting and Tearing of Deformable Objects

Christoph Paulus, Nazim HAOUCHINE, David Cazier, Stephane Cotin

Current methods dealing with non-rigid augmented reality only provide an augmented view when the topology of the tracked object is not modified, which is an important limitation. In this paper we solve this shortcoming by introducing a method for physics-based non-rigid augmented reality. Singularities caused by topological changes are detected by analyzing the displacement field of the underlying deformable model. These topological changes are then applied to the physics-based model to approximate the real cut. All these steps, from deformation to cutting simulation, are performed in real-time. This significantly improves the coherence between the actual view and the model, and provides added value.


[S&T Short]
Efficient Computation of Absolute Pose for Gravity-Aware Augmented Reality

Chris Sweeney, John Flynn, Benjamin Nuernberger, Matthew Turk, Tobias Höllerer

We propose a novel formulation for determining the absolute pose of a single or multi-camera system given a known vertical direction. The vertical direction may be easily obtained by detecting the vertical vanishing points with computer vision techniques, or with the aid of IMU sensor measurements from a smartphone. Our solver is general and able to compute absolute camera pose from two 2D-3D correspondences for single or multi-camera systems. We run several synthetic experiments that demonstrate our algorithm's improved robustness to image and IMU noise compared to the current state of the art. Additionally, we run an image localization experiment that demonstrates the accuracy of our algorithm in real-world scenarios. Finally, we show that our algorithm provides increased performance for real-time model-based tracking compared to solvers that do not utilize the vertical direction and show our algorithm in use with an augmented reality application running on a Google Tango tablet.
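To see why the known vertical helps: once the camera's roll and pitch are fixed by gravity, the rotation reduces to a single yaw angle, leaving four unknowns (yaw plus translation) recoverable from two 2D-3D correspondences. The sketch below solves this with a small nonlinear least-squares fit as a stand-in for the authors' efficient solver; the parametrization and initialization are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def solve_pose(R_v, X, u, K):
    """R_v: rotation aligning the gravity direction (from IMU or vanishing
    points), X: (2,3) world points, u: (2,2) pixels, K: 3x3 intrinsics."""
    def residual(p):
        yaw, t = p[0], p[1:]
        proj = (K @ (R_v @ rot_z(yaw) @ X.T + t[:, None])).T
        return (proj[:, :2] / proj[:, 2:3] - u).ravel()   # 4 residuals

    # Initialize with the points in front of the camera (assumed scene scale).
    sol = least_squares(residual, x0=np.array([0.0, 0.0, 0.0, 5.0]))
    return R_v @ rot_z(sol.x[0]), sol.x[1:]
```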


[Doctoral Consortium]
User Study on Augmented Reality User Interfaces for 3D Media Production
Max Krichenbauer

One of the most intuitive concepts for applications of Augmented Reality (AR) user interfaces (UIs) is the possibility to create virtual 3D media content, such as 3D models and animations for movies and games. Even though this idea has been repeatedly suggested over the last decades, and in spite of recent technological advancements, very little progress has been made towards an actual real-world application of AR in professional media production. To this day, no immersive 3D UI has been commonly used by professionals for 3D computer graphics (CG) content creation. In our recent paper published at ISMAR 2014, we analyzed the current state of 3D media content creation, including a survey on professional 3D media design work, a requirements analysis for prospective 3D UIs, and a UI concept to meet the identified challenges of real-world application of AR to the production pipeline. Our current research continues this work by validating our approach in a user study with both amateur and professional 3D artists. We aim to show that several characteristics of UI design common in academic research are highly problematic for research focused on real-world applications: placing the primary focus on intuitiveness, neglecting the typical demerits of a novel technology, and relying on easy-to-acquire samples from the general population for both qualitative and quantitative data. The results of this user study will produce new insights helpful for future research, design, and development of 3D UIs for media creation.


[Doctoral Consortium]
SPAROGRAM: The Spatial Augmented Reality Holographic Display for 3D Interactive Visualization
Minju Kim

Augmented reality (AR) technology has become popular and is widely used in various fields. Recent advances in display and computer graphics technology have minimized the barriers of the screen, bringing physical and virtual space closer together. In spite of this, there are still restrictions and difficulties in representing pervasive information in physical space, which is the ultimate goal of AR. The central research question of this dissertation is how to present realistic spatial AR visualization through tight integration of three-dimensional visual forms and the real space. In this paper, we propose SPAROGRAM, an experimental prototype capable of visualizing augmented information by fully exploiting the space surrounding the real object in a coherent way. In order to present information effectively and to explore interaction issues enabled by the new display capabilities, we approach this topic with two methodologies: 3D spatial visualization and interaction. In the early stages of the study, we solved basic problems. First, we designed the SPAROGRAM prototype. Then, to express spatial visualization, stereoscopic images were applied to a multi-layer display, while spatial user interaction was applied in order to provide a coherent spatial experience. Our initial investigation suggests that the newly conceived spatial AR holographic display can create seamless 3D space perception and express 3D AR visualization effectively and in a more realistic way.


[Doctoral Consortium]
Situated Analytics: Interactive Analytical Reasoning In Physical Space
Neven ElSayed

Our research aim is to develop and examine techniques to support analytical reasoning in physical space. We present novel interaction and visualization techniques for supporting such reasoning, which we call Situated Analytics. Our approach comprises situated and abstract information representation with real-time analytical interaction. This research draws on two research areas: Visual Analytics and Augmented Reality.


[Doctoral Consortium]
AR Guided Capture and Modeling for Improved Virtual Navigation
Benjamin Nuernberger

Using photographs to model the visual world for virtual navigation involves the three steps of collecting photos, performing image-based modeling with those photos, and finally giving the user controls to navigate the final model. In this dissertation, we enhance each step in this pipeline in specific ways for the ultimate goal of augmented and virtual navigation of visual reality. In so doing, this thesis will open new perspectives both in the process of collecting real world content for virtual navigation experiences and in providing novel guided virtual tours. First, we describe augmented reality (AR) interfaces that effectively guide users to capture the visual world, directing them to cover the most important areas of the scene. Next, while most image-based modeling techniques focus on accurate geometry reconstruction, we instead propose modeling methods that alternatively focus on the motor and cognitive components of navigation. Finally, we propose constrained user interfaces for both augmented and virtual navigation. By enhancing each step in this pipeline, our approach enables a more efficient and intuitive virtual navigation experience.


[Doctoral Consortium]
Free-Hand Gesture-based Interaction for Handheld Augmented Reality
Huidong Bai

In this research, we investigate mid-air free-hand gesture-based interfaces for handheld Augmented Reality (AR). We prototype our gesture-based handheld AR interfaces by combining visual AR tracking and free-hand gesture detection. We describe how each method is implemented and evaluated in user experiments that compare them with traditional device-centric methods like touch input. Results from our user studies show that the proposed interaction methods are natural and intuitive, and provide a more fun and engaging user experience. We discuss the implications of this research and directions for further work.


[Doctoral Consortium]
Supporting Asynchronous Collaboration within Spatial Augmented Reality
Andrew Irlitti

Collaboration is seen as a promising area of investigation for the utilization of augmented reality. However, exploration has concentrated on synchronous solutions, both co-located and remote, with limited investigation into asynchronous practices. Asynchronous collaboration differs from its synchronous counterparts in that the associated knowledge plays a much more vital role in the strength and accuracy of the associated instructions. This dissertation focuses on investigating the applicability and acceptance of augmented reality, and its corresponding interactions, for producer-consumer asynchronous collaboration, where the underlying communication plays as important a role as the provided augmentations. Initial results demonstrated the benefits of augmented reality for procedural guidance, motivating further investigation into asynchronous techniques such as providing physical anchors for virtual content, mobile handheld spatial augmented reality, and the effect on performance of a reduced field of view in multi-faceted environment search tasks. In this paper we provide the reasoning behind our approaches and present our existing work to test and validate our proposition.




2nd October, 13:30-15:30, Room 401-402-403

Extended Posters

A Step Closer To Reality: Closed Loop Dynamic Registration Correction in SAR
Hemal Naik, Federico Tombari, Christoph Resch, Peter Keitler, Nassir Navab
Teaser

In Spatial Augmented Reality (SAR) applications, real-world objects are augmented with virtual content by means of a calibrated camera-projector system. The virtual content to be projected is prepared from a computer-generated model (CAD) of the real object. It is often the case that the real object deviates from its CAD model, resulting in misregistered augmentations. We propose a new method to dynamically correct the planned augmentation by accounting for unknown deviations in the object geometry. We use a closed-loop approach in which the projected features are detected in the camera image and used as feedback. As a result, the registration misalignment is identified and the augmentations are corrected in the areas affected by the deviation. Our work is especially focused on SAR applications in the industrial domain, where this problem is omnipresent. We show that our method is effective and beneficial for multiple industrial applications.


Design Guidelines for Generating Augmented Reality Instructions
Cledja Karina Rolim da Silva, Dieter Schmalstieg, Denis Kalkofen, Veronica Teichrieb
Teaser

Most work on instructions in Augmented Reality (AR) does not follow established patterns or design rules; each approach defines its own method of conveying instructions. This work describes our initial results and experiences towards defining design guidelines for AR instructions. The guidelines were derived from a survey of the most common visualization techniques and instruction types applied in AR. We also studied how 2D and 3D instructions can be applied in the AR context.


Haptic Ring Interface Enabling Air-Writing in Virtual Reality Environment
Kiwon Yeom, Jounghuem Kwon, Sang-Hun Nam, Bum-Jae You
Teaser

We introduce a novel finger-worn ring interface that enables complex spatial interactions through 3D hand movement in a virtual reality environment. Users receive physical feedback in the form of vibrations from the wearable ring interface as their finger reaches a certain 3D position. The positions of the fingertip are extracted, linked, and then reconstructed as a trajectory. This system allows the wearer to write characters in midair as if they were using an imaginary whiteboard. Users can freely write in the air in real time using Korean characters, English letters (both upper and lower case), and digits, with an accuracy rate of over 92%. Thus, it is now conceivable that anything people can do on contemporary touch-based devices, they could do in midair with a pseudo-contact interface.


Remote Welding Robot Manipulation using Multi-view Images
Yuichi Hiroi, Kei Obata, Katsuhiro Suzuki, Naoto Ienaga, Maki Sugimoto, Hideo Saito, Tadashi Takamaru
Teaser

This paper proposes a remote welding robot manipulation system using multi-view images. After an operator specifies a two-dimensional path on the images, the system transforms it into a three-dimensional path and displays the movement of the robot by overlaying graphics on the images. The accuracy of our system is sufficient for welding objects when combined with a sensor on the robot. The system allows a non-expert operator to weld objects remotely and intuitively, without the need to create a 3D model of the processed object beforehand.


A Particle Filter Approach to Outdoor Localization using Image-based Rendering
Christian Poglitsch, Clemens Arth, Dieter Schmalstieg, Jonathan Ventura
Teaser

We propose an outdoor localization system using a particle filter. In our approach, a textured, geo-registered model of the outdoor environment is used as a reference to estimate the pose of a smartphone. The device position and orientation obtained from a Global Positioning System (GPS) receiver and an inertial measurement unit (IMU) are used as a first estimate of the true pose. Then, multiple pose hypotheses are randomly distributed about the GPS/IMU measurement and used to produce renderings of the virtual model. With vision-based methods, the rendered images are compared with the image received from the smartphone, and the matching scores are used to update the particle filter. Our system improves the camera pose estimate in real time without user assistance.
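A compact sketch of one filtering step under stated assumptions: the render function and the image-similarity score below are placeholders standing in for the paper's model rasterization and vision-based matching.

```python
import numpy as np

def similarity(rendered, frame):
    """Placeholder image score: normalized cross-correlation."""
    a = (rendered - rendered.mean()) / (rendered.std() + 1e-9)
    b = (frame - frame.mean()) / (frame.std() + 1e-9)
    return float((a * b).mean())

def pf_step(n, gps_imu_pose, frame, render, sigma):
    """render(pose) -> image is assumed to rasterize the geo-registered model."""
    # Diffuse n pose hypotheses around the coarse GPS/IMU estimate.
    particles = gps_imu_pose + sigma * np.random.randn(n, gps_imu_pose.size)
    scores = np.array([similarity(render(p), frame) for p in particles])
    w = np.exp(scores - scores.max())    # soft weighting of matching scores
    w /= w.sum()
    # Systematic resampling concentrates particles on good hypotheses.
    idx = np.searchsorted(np.cumsum(w), (np.arange(n) + np.random.rand()) / n)
    return particles[idx], particles[np.argmax(scores)]
```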


AR4AR: Using Augmented Reality for guidance in Augmented Reality Systems setup
Frieder Pankratz, Gudrun Klinker
Teaser

AR systems have been developed for many years now, ranging from systems consisting of a single sensor and output device to systems with a multitude of sensors and/or output devices. With the increasing complexity of the setup, the complexity of handling the different sensors, as well as the necessary calibrations and registrations, increases accordingly. A much needed (yet missing) class of augmented reality applications supports AR system engineers in setting up and maintaining an AR system by providing visual guides and giving immediate feedback on the current quality of their calibration measurements.


Exploiting Photogrammetric Targets for Industrial AR
Hemal Naik, Yuji Oyamada, Peter Keitler, Nassir Navab
Teaser

In this work, we advocate the use of photogrammetric targets for object tracking in Industrial Augmented Reality (IAR). Photogrammetric targets, especially uncoded circular targets, are widely used in industry to perform 3D surface measurements. Therefore, an AR solution based on uncoded circular targets can improve workflow integration by reusing existing targets and saving time. These circular targets do not have coded patterns to establish unique 2D-3D correspondences between the targets on the model and their image projections. We solve this particular problem of 2D-3D correspondence of non-coplanar circular targets from a single image. We introduce a Conic Pair Descriptor, which computes Euclidean invariants from circular targets both in model space and in image space. A three-stage method is used to compare the descriptors and compute the correspondences with up to 100% precision and 89% recall. We achieve tracking performance of 3 FPS (2560x1920 px) to 8 FPS (640x480 px), depending on the camera resolution.


Rubix: Dynamic Spatial Augmented Reality by Extraction of Plane Regions with a RGB-D Camera
Masayuki Sano, Kazuki Matsumoto, Bruce Thomas, Hideo Saito
Teaser

Dynamic spatial augmented reality requires accurate real-time 3D pose information of the physical objects that are to be projected onto. Previous depth-based methods for tracking objects required strong features to enable recognition, making it difficult to estimate an accurate 6DOF pose for physical objects with a small set of recognizable features (such as a non-textured cube). We propose a more accurate method with fewer limitations for the pose estimation of a tangible object that has known planar faces, using only depth data from an RGB-D camera. In this paper, the physical object's shape is limited to cubes of different sizes. We apply this new tracking method to achieve dynamic projections onto these cubes. In our method, 3D points from the RGB-D camera are clustered into planar regions, and the point cloud on each face of the object is fitted to an already-known geometric model of a cube. With the 6DOF pose of the physical object, SAR-generated imagery is then projected correctly onto the physical object. The 6DOF tracking is designed to support tangible interactions with the physical object. We implemented example interactive applications with one or multiple cubes to show the capability of our method.
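The plane-extraction stage might look like the following RANSAC sketch (an assumed formulation; the paper's clustering details differ): repeatedly fit a plane to the depth point cloud and peel off its inliers, yielding candidate planar regions such as cube faces.

```python
import numpy as np

def ransac_plane(pts, iters=200, tol=0.005):
    """Fit one plane to pts (N,3) with RANSAC; tol is in meters."""
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        p = pts[np.random.choice(len(pts), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                  # degenerate (collinear) sample
            continue
        n /= norm
        dist = np.abs((pts - p[0]) @ n)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

def extract_planes(pts, min_pts=500, max_planes=6):
    """Peel off planar regions one at a time (e.g. visible cube faces)."""
    planes, remaining = [], pts
    for _ in range(max_planes):
        inl = ransac_plane(remaining)
        if inl.sum() < min_pts:
            break
        planes.append(remaining[inl])
        remaining = remaining[~inl]
    return planes
```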


Content Completion in Lower Dimensional Feature Space through Feature Reduction and Compensation
Mariko Isogawa, Dan Mikami, Kosuke Takahashi, Akira Kojima
Teaser

A novel framework for image/video content completion comprising three stages is proposed. First, input images/videos are converted to a lower dimensional feature space, which is done to achieve effective restoration even in cases where a damaged region includes complex structures and changes in color. Second, the damaged region is restored in the converted feature space. Finally, an inverse conversion from the lower dimensional feature space to the original feature space is performed to generate the completed image in the original feature space. This three-step solution offers two advantages. First, it enhances the possibility of applying patches dissimilar to those in the original color space. Second, it enables the use of many existing restoration methods, each having various advantages, because only the feature space for retrieving similar patches is extended. Experiments verify the effectiveness of the proposed framework.


ARPML: The Augmented Reality Process Modeling Language
Tobias Müller, Tim Rieger
Teaser

The successful application of augmented reality as a guidance tool for procedural tasks like maintenance or repair requires an easily usable way of modeling support processes. Even though some suggestions have already been made to address this problem, they still have shortcomings and do not provide all the needed features. Thus, in a first step, the requirements that a solution has to meet are collected and presented. Based on these, the Augmented Reality Process Modeling Language (ARPML) is developed, which consists of four building blocks: (i) templates, (ii) sensors, (iii) work steps, and (iv) tasks. Unlike existing approaches, it facilitates the creation of multiple views on one process. This makes it possible to select specifically the instructions and information needed in targeted work contexts, and to combine multiple variants of one process into one model with a minimum of redundancy. The application of ARPML is shown with a practical example.


Authoring Tools in Augmented Reality: An Analysis and Classification of Content Design Tools
Roberta Cabral Mota, Rafael Roberto, Veronica Teichrieb
Teaser

Augmented Reality authoring tools are important instruments that can help foster widespread use of AR. They can be classified as programming or content design tools, the latter of which completely removes the need for programming skills to develop an AR solution. Several solutions have been developed in recent years; however, few works aim to identify patterns and general models for such tools. This work performs a trend analysis on content design tools in order to identify their functionalities regarding AR, authoring paradigms, deployment strategies, and general dataflow models. Because it is intended to assist developers who want to create authoring tools, it focuses on the last three aspects. In total, 19 tools were analyzed, and through this evaluation two authoring paradigms and two deployment strategies were identified. Moreover, from their combination it was possible to elaborate four generic dataflow models into which every tool could fit.


Affording Visual Feedback for Natural Hand Interaction in AR to Assess Upper Extremity Motor Dysfunction
Marina A. Cidota, Rory M.S. Clifford, Paul Dezentje, Stephan G. Lukosch, Paulina J.M. Bank
Teaser

For the clinical community, there is great need for objective, quantitative, and valid measures of the factors contributing to motor dysfunction. Currently, there are no standard protocols for assessing motor dysfunction in various patient groups; each medical discipline uses subjectively scored clinical tests, qualitative video analysis, or cumbersome marker-based motion capture. We therefore investigate the potential of Augmented Reality (AR), combined with serious gaming and markerless hand tracking, to facilitate efficient, cost-effective, and patient-friendly methods for evaluating upper extremity motor dysfunction in various patient groups. First, the design process of the game and the system architecture of the AR framework are described. To provide unhindered assessment of motor dysfunction, patients should operate the system freely in a natural way and be able to understand their actions in the virtual AR world. To test this in our system, we conducted a pilot usability study with five healthy people (aged 57-63) on three different modalities of visual feedback for natural hand interaction: no virtual hand, a partial virtual hand (tips of the index finger and thumb), and a full virtual hand model. The results of the study show that a virtual representation of the fingertips or hand improves the usability of natural hand interaction.



Posters

On-site AR Interface with Web-based 3D Archiving System for Archaeological Project
Ryosuke Matsushita, Tokihisa Higo, Hiroshi Suita, Yoshihiro Yasumuro
Teaser

This paper proposes an AR interface for on-site use in an archaeological project. We have been developing a web-based 3D archiving system to support the diverse specialties and nationalities needed to drive the surveys and restoration work of the archaeological project. Our 3D archiving system is designed for the spontaneous updating, accumulation, and sharing of information on findings to better enable frequent discussions, through a 3D virtual copy of the field site that a user can visit, explore, and embed information in over the Internet. Here we present an AR-based human interface that enhances access to the archiving system from mobile devices at the actual site. Using SfM (structure from motion) and solving the PnP problem, a photo taken at the site can be stably matched to the pre-registered photo collection in the archive system, and access to the archiving system starts smoothly by associating the 3D coordinates between the system and the actual user viewpoint. Our implementation works effectively on an ongoing project at the Mastaba of Idout in Saqqara, Egypt.
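As an illustration of the localization step, assuming 2D-3D matches against the SfM reconstruction are already available (matching code omitted), the on-site viewpoint can be recovered with a RANSAC PnP solve; a hedged sketch using OpenCV:

```python
import cv2
import numpy as np

def localize(pts3d, pts2d, K):
    """pts3d: (N,3) SfM points, pts2d: (N,2) matched pixels in the site photo,
    K: 3x3 camera intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, None,
        reprojectionError=4.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)           # rotation in archive coordinates
    cam_center = (-R.T @ tvec).ravel()   # user viewpoint in archive coordinates
    return R, cam_center
```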


Photo Billboarding: A Simple Method to Provide Clues that Relate Camera Views and a 2D Map for Mobile Pedestrian Navigation
Junta Watanabe, Shingo Kagami, Koichi Hashimoto
Teaser

This paper describes a mobile pedestrian navigation system that provides users with clues that help them understand the spatial relationship between mobile camera views and a 2D map. The proposed method draws upright billboards on the map that correspond to the basal planes of the viewing frustums of the camera. The user can take photographs of arbitrary landmarks along the way, and billboards bearing those photographs are built on the map.


Automatic Visual Feedback from Multiple Views for Motor Learning
Dan Mikami, Mariko Isogawa, Kosuke Takahashi, Akira Kojima
Teaser

A visual feedback system that presents a trainee's movements for effective motor learning is proposed. It automatically provides visual feedback of a trainee's movement, synchronized with a reference movement and shown from multiple view angles, with a delay of only a few seconds. Because the feedback is automatic, a trainee can obtain it without any manual operation while the memory of the motion is still fresh. By employing features with low computational cost, the proposed system achieves synchronized video feedback from four cameras on a consumer tablet PC.


[S&T Short]
RGBDX: First Design and Experimental Validation of a Mirror-based RGBD Xray Imaging System

Severine Habert, Jose Gardiazabal, Pascal Fallavollita, Nassir Navab

This paper presents the first design of a mirror based RGBD X-ray imaging system and includes an evaluation study of the depth errors induced by the mirror when used in combination with an infrared pattern-emission RGBD camera. Our evaluation consisted of three experiments. The first demonstrated almost no difference in depth measurements of the camera with and without the use of the mirror. The final two experiments demonstrated that there were no relative and location-specific errors induced by the mirror showing the feasibility of the RGBDX-ray imaging system. Lastly, we showcase the potential of the RGBDX-ray system towards a visualization application in which an X-ray image is fused to the 3D reconstruction of the surgical scene via the RGBD camera, using automatic C-arm pose estimation.


[S&T Short]
Introducing Augmented Reality to Optical Coherence Tomography in Ophthalmic Microsurgery

Hessam Roodaki, Konstantinos Filippatos, Abouzar Eslami, Nassir Navab

Augmented Reality (AR) in microscopic surgery has been the subject of several studies in the past two decades. Nevertheless, AR has not found its way into everyday microsurgical workflows. The introduction of new surgical microscopes equipped with Optical Coherence Tomography (OCT) enables surgeons to perform multimodal (optical and OCT) imaging in the operating room. Taking full advantage of such an elaborate source of information requires sophisticated intraoperative image fusion, information extraction, guidance, and visualization methods. Medical AR is a unique approach to facilitate the utilization of multimodal medical imaging devices. Here we propose a novel medical AR solution to the long-known problem of determining the distance between the surgical instrument tip and the underlying tissue in ophthalmic surgery, to further pave the way of AR into the surgical theater. Our method brings augmented reality to OCT for the first time by augmenting the surgeon's view of the OCT images with an estimated instrument cross-section shape and distance to the retinal surface, using only information from the shadow of the instrument in intraoperative OCT images. We demonstrate the applicability of our method in retinal surgery using a phantom eye and evaluate the accuracy of the augmented information using a micromanipulator.


[S&T Full]
SoftAR: Visually Manipulating Haptic Softness Perception in Spatial Augmented Reality

Parinya Punpongsanon, Daisuke Iwai, Kosuke Sato

We present SoftAR, a novel spatial augmented reality (AR) technique based on a pseudo-haptics mechanism in the human brain that visually manipulates the sense of softness perceived by a user pushing a soft physical object. Considering the limitations of projection-based approaches that change only the surface appearance of a physical object, we propose two projection visual effects, i.e., surface deformation effect (SDE) and body appearance effect (BAE), on the basis of the observations of humans pushing physical objects. The SDE visualizes a two-dimensional deformation of the object surface with a controlled softness parameter, and BAE changes the color of the pushing hand. Through psychophysical experiments, we confirm that the SDE can manipulate softness perception such that the participant perceives significantly greater softness than the actual softness. Furthermore, fBAE, in which BAE is applied only for the finger area, significantly enhances manipulation of the perception of softness. On the basis of the experimental results, we create a computational model that estimates perceived softness when SDE+fBAE is applied. We construct a prototype SoftAR system in which two application frameworks are implemented, i.e., softness adjustment and softness transfer. The former framework allows a user to adjust the softness parameter of a physical object, and the latter allows the user to replace the softness with that of another object. Through a user study of the prototype, we confirm that perceived softness can be manipulated with accuracy that is less than the just noticeable difference of softness perception. SoftAR does not require user-worn/hand-held equipment and allows users to feel significantly different softness perception without changing materials; therefore, we believe that it will be useful for various applications, particularly the design process of soft products, such as furniture, plush toys, and imitation materials.


