29 SEP - 3 OCT, 2015 FUKUOKA, JAPAN





Augmentation Without Boundaries

The 14th IEEE International Symposium on Mixed and Augmented Reality
FUKUOKA, JAPAN

Teaser Session

All posters and demos will be presented in one of four teaser sessions.

We use an online voting system in the S&T teaser sessions. Please check the PDF file for instructions on how to use it.

Please prepare your poster presentation by following the Guidelines.


[S&T] 30th September, 15:25-16:25, Main Hall

Overlaying Navigation Signs on a Road Surface using a Head-Up Display
Kaho Ueno, Takashi Komuro
Poster

In this paper, we propose a method for overlaying navigation signs on a road surface and displaying them on a head-up display (HUD). Accurate overlaying is realized by measuring 3D data of the surface in real time using a depth camera. In addition, the effect of head movement is reduced by performing face tracking with a camera that is placed in front of the HUD, and by performing distortion correction of projection images according to the driver's viewpoint position. Using an experimental system, we conducted an experiment to display a navigation sign and confirmed that the sign is overlaid on the surface. We also confirmed that the sign appears to be fixed to the surface in real space.
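
The viewpoint-dependent distortion correction lends itself to a short illustration. The sketch below is a rough approximation, not the authors' implementation: it warps a sign texture with a homography so that it appears to lie on the road surface when viewed from the tracked eye position. The intrinsics, eye pose, and road coordinates are all made-up values.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # virtual eye camera
eye = np.array([0.0, 1.2, 0.0])                              # tracked eye position (m)
R = np.eye(3)                                                # assumed eye orientation

# Four corners of the navigation sign laid on the road surface (world, metres)
road_quad = np.array([[-0.5, 0.0, 5.0], [0.5, 0.0, 5.0],
                      [0.5, 0.0, 7.0], [-0.5, 0.0, 7.0]])

def project(pts3d):
    """Project 3D points into the virtual camera placed at the driver's eye."""
    cam = (R @ (pts3d - eye).T).T
    uv = (K @ cam.T).T
    return (uv[:, :2] / uv[:, 2:3]).astype(np.float32)

# A stand-in sign texture, generated so the example is self-contained
sign = np.zeros((100, 200, 3), np.uint8)
cv2.putText(sign, "TURN LEFT", (10, 60), cv2.FONT_HERSHEY_SIMPLEX,
            0.8, (255, 255, 255), 2)

h, w = sign.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
H = cv2.getPerspectiveTransform(src, project(road_quad))
hud_image = cv2.warpPerspective(sign, H, (640, 480))  # image sent to the HUD
```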


Deformation Estimation of Elastic Bodies Using Multiple Silhouette Images for Endoscopic Image Augmentation
Akira Saito, Megumi Nakao, Yuki Uranishi, Tetsuya Matsuda
Poster

This study proposes a method to estimate elastic deformation using silhouettes obtained from multiple endoscopic images. Our method makes it possible to estimate intraoperative deformation of organs using a volumetric mesh model reconstructed from preoperative CT data. We use silhouette information of elastic bodies not to model the shape but to estimate local displacements. The model shape is updated to satisfy the silhouette constraints while preserving the shape as much as possible. The results of the experiments showed that the proposed method could estimate the deformation with RMS errors of 5 mm to 1 cm.


Hands-free AR Work Support System Monitoring Work Progress with Point-cloud Data Processing
Hirohiko Sagawa, Hiroto Nagayoshi, Harumi Kiyomizu, Tsuneya Kurihara
Poster

We present a hands-free AR work support system that provides work instructions to workers without interrupting normal work procedures. This system estimates the work progress by monitoring the status of work objects solely on the basis of 3D data captured from a depth sensor mounted on a helmet, and it selects appropriate information to be displayed on a head-mounted display (HMD) on the basis of the estimated work progress. We describe a prototype of the proposed system and the results of preliminary experiments carried out to evaluate the accuracy and performance of the system.


Endoscopic Image Augmentation Reflecting Shape Changes During Cutting Procedures
Megumi Nakao, Shota Endo, Keiho Imanishi, Tetsuya Matsuda
Poster

This paper introduces a concept of endoscopic image augmentation that overlays shape changes to support cutting procedures. This framework handles the history of the measured drill tip's location as a volume label, and visualizes the region remaining to be cut, overlaid on the endoscopic image in real time. We performed a cutting experiment, and the efficacy of the cutting aid was evaluated in terms of shape similarity, total distance moved by the cutting tool, and required cutting time. The results of the experiments showed that cutting performance was significantly improved by the proposed framework.


Toward Enhancing Robustness of DR System: Ranking Model for Background Inpainting
Mariko Isogawa, Dan Mikami, Kosuke Takahashi, Akira Kojima
Poster

A method for blindly predicting inpainted image quality is proposed for enhancing the robustness of diminished reality (DR), which uses inpainting to remove unwanted objects by replacing them with background textures in real time. The method maps from inpainted image features to subjective image quality scores without the need for reference images. It enables more complex background textures to be applied to DR.
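
The core idea, mapping no-reference image features to subjective quality scores, can be illustrated with a small regression sketch. The features and the ridge model below are illustrative assumptions, not the authors' trained predictor.

```python
import numpy as np

def image_features(img):
    """Toy no-reference features: intensity statistics and gradient energy."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.std(), (gx**2 + gy**2).mean()])

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression from features to quality scores."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Train on (inpainted image, subjective score) pairs, then predict
rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, (64, 64)) for _ in range(20)]   # stand-in images
scores = rng.uniform(1, 5, 20)                               # stand-in ratings
X = np.stack([image_features(im) for im in imgs])
w = fit_ridge(X, scores)
predicted_quality = image_features(imgs[0]) @ w
```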


Interactive Visualizations for Monoscopic Eyewear to Assist in Manually Orienting Objects in 3D
Carmine Elvezio, Mengu Sukan, Steve Feiner, Barbara Tversky
Poster

Assembly or repair tasks often require objects to be held in specific orientations to view or fit together. Research has addressed the use of AR to assist in these tasks, delivered as registered overlaid graphics on stereoscopic head-worn displays. In contrast, we are interested in using monoscopic head-worn displays, such as Google Glass. To accommodate their small monoscopic field of view, off center from the user's line of sight, we are exploring alternatives to registered overlays. We describe four interactive rotation guidance visualizations for tracked objects intended for these displays.


Movable Spatial AR On-The-Go
Ahyun Lee, Joo-Haeng Lee, Jaehong Kim
Poster

We present a movable spatial augmented reality (SAR) system that can be easily installed in a user workspace. The proposed system aims to dynamically cover a wider projection area using a portable projector attached to a simple robotic device. It has a clear advantage over a conventional SAR scenario where, for example, a projector is installed with a fixed projection area in the workspace. In previous research [1], we proposed a data-driven kinematic control method for a movable SAR system. This method targets a SAR system integrated with a user-created robotic (UCR) device for which an explicit kinematic configuration such as a CAD model is unavailable. Our contribution in this paper is to show the feasibility of the data-driven control method by developing a practical application where dynamic change of the projection area matters. We outline the control method and demonstrate an assembly guide example using a casually installed movable SAR system.


2D-3D Co-segmentation for AR-based Remote Collaboration
Kuo-Chin Lien, Benjamin Nuernberger, Matthew Turk, Tobias Höllerer
Poster

In Augmented Reality (AR) based remote collaboration, a remote user can draw a 2D annotation that emphasizes an object of interest to guide a local user in accomplishing a task. This annotation is typically performed only once and then sticks to the selected object in the local user's view, independent of his or her camera movement. In this paper, we present an algorithm to segment the selected object, including its occluded surfaces, such that the 2D selection can be appropriately interpreted in 3D and rendered as a useful AR annotation even when the local user moves and significantly changes the viewpoint.


Manipulating Haptic Shape Perception by Visual Surface Deformation and Finger Displacement in Spatial Augmented Reality
Toshio Kanamori, Daisuke Iwai, Kosuke Sato
Poster

Many researchers are trying to realize a pseudo-haptic system that can visually manipulate a user's haptic shape perception when touching a physical object. In this paper, we focus on altering the perceived surface shape of a curved object when touching it with an index finger, by visually deforming its surface shape and displacing the visual representation of the user's index finger as if he or she were touching the deformed surface, using spatial augmented reality. An experiment was conducted with a projection system to confirm the effect of the visual feedback on altering the perceived shape of a curved surface. The results showed that the shape the participants perceived was deformed from the actual shape they touched. The results demonstrate the possibility of manipulating a user's perceived shape of a curved surface by using pseudo-haptics in spatial augmented reality.


Mixed-Reality Store on the Other Side of a Tablet
Masaya Ohta, Shunsuke Nagano, Hotaka Niwa, Katsumi Yamashita
Poster

This paper proposes a mixed-reality shopping system for users who do not own a PC but do own a tablet. In this system, the user can move freely around the store while viewing panoramic images photographed along the aisles of a real store. Products can be selected and freely viewed from any angle. Furthermore, by utilizing photo-based augmented reality (Photo AR) technology, the product can be displayed as if it were in the hands of the user. The results of a user evaluation showed that even though the proposed system uses a tablet with a smaller screen, it was preferred over a conventional e-commerce site using a larger monitor.


Avatar-Mediated Contact Interaction between Remote Users for Social Telepresence
Jihye Oh, Yeonjoon Kim, Taeil Jin, Sukwon Lee, Youjin Lee, Sung-Hee Lee
Poster

Social touch such as a handshake increases the sense of coexistence and closeness between remote users in a social telepresence environment, but creating such coordinated contact movements with a distant person is extremely difficult if given only visual feedback, without haptic feedback. This paper presents a method to enable hand-contact interaction between remote users in an avatar-mediated telepresence environment. The key approach is, while the avatar directly follows its owner’s motion in normal conditions, it adjusts the pose to maintain contact with the other user when the two users attempt to make contact interaction. To this end, we develop classifiers to recognize the users’ intention for the contact interaction. The contact classifier identifies whether the users try to initiate contact when they are not in contact, and the separation classifier identifies whether the two in contact attempt to break contact. The classifiers are trained based on a set of geometric distance features. During the contact phase, inverse kinematics is solved to determine the pose of the avatar’s arm so as to initiate and maintain natural contact with the other user’s hand. Our system is unique in that two remote users can perform real time hand contact interaction in a social telepresence environment.
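
To make the classifier idea concrete, here is a heavily simplified sketch: a logistic model trained on toy geometric distance features to predict whether two hands are attempting contact. The feature set, model, and data are assumptions for illustration, not the paper's trained classifiers.

```python
import numpy as np

rng = np.random.default_rng(5)

def distance_features(hand_a, hand_b, vel_a, vel_b):
    """Geometric features: hand separation, closing speed, bias term."""
    gap = hand_b - hand_a
    d = np.linalg.norm(gap)
    closing = np.dot(vel_a - vel_b, gap) / max(d, 1e-6)   # >0 when approaching
    return np.array([d, closing, 1.0])

def train_logistic(X, y, lr=0.5, iters=2000):
    """Plain logistic regression fitted by gradient ascent."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Toy training set: label 1 means "users are attempting contact"
X = np.stack([distance_features(rng.normal(0, 0.3, 3), rng.normal(0, 0.3, 3),
                                rng.normal(0, 0.1, 3), rng.normal(0, 0.1, 3))
              for _ in range(200)])
y = (X[:, 0] < 0.3).astype(float)        # stand-in ground-truth labels
w = train_logistic(X, y)
prob_contact = 1.0 / (1.0 + np.exp(-distance_features(
    np.zeros(3), np.full(3, 0.1), np.zeros(3), np.zeros(3)) @ w))
print(prob_contact)
```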


Towards Estimating Usability Ratings of Handheld Augmented Reality Using Accelerometer Data
Marc Ericson Santos, Takafumi Taketomi, Goshiro Yamamoto, Gudrun Klinker, Christian Sandor, Hirokazu Kato
Poster

Usability evaluations are important to the development of augmented reality systems. However, conducting large-scale longitudinal studies remains challenging because of the lack of inexpensive but appropriate methods. In response, we propose a method for implicitly estimating usability ratings based on readily available sensor logs. To demonstrate our idea, we explored the use of features of accelerometer data in estimating usability ratings in an annotation task. Results show that our implicit method corresponds with explicit usability ratings at 79-84%. These results should be investigated further in other use cases, with other sensor logs.
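
As a hedged illustration of the idea, the following sketch computes simple summary features from a tri-axial accelerometer log of the kind that could be correlated with usability ratings; the actual features used in the paper may differ.

```python
import numpy as np

def accel_features(acc):
    """Summary features from an (N, 3) array of x/y/z accelerometer samples."""
    mag = np.linalg.norm(acc, axis=1)    # magnitude of each sample
    jerk = np.diff(mag)                  # sample-to-sample change ("shakiness")
    return {
        "mean_magnitude": float(mag.mean()),
        "std_magnitude": float(mag.std()),
        "mean_abs_jerk": float(np.abs(jerk).mean()),
        "stillness_ratio": float((np.abs(mag - 9.81) < 0.5).mean()),
    }

rng = np.random.default_rng(1)
log = rng.normal([0, 0, 9.81], 0.3, size=(1000, 3))   # synthetic sensor log
print(accel_features(log))
```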


Abecedary tracking and mapping: a toolkit for tracking competition
Hideaki Uchiyama, Takafumi Taketomi, Sei Ikeda, João Lima
Poster

This paper introduces a toolkit with camera calibration, monocular Simultaneous Localization and Mapping (SLAM) and registration with a calibration marker. With the toolkit, users can perform the whole procedure of the ISMAR on-site tracking competition in 2015. Since the source code is designed to be well-structured and highly-readable, users can easily install and modify the toolkit. By providing the toolkit, we encourage beginners to learn tracking techniques and participate in the competition.
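
The toolkit itself is not shown here, but the camera calibration step it covers can be illustrated with OpenCV's standard chessboard calibration; the board size and image directory below are assumptions.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner chessboard corners
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):              # assumed image directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]                    # (width, height)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts,
                                                 size, None, None)
print("reprojection RMS:", rms)
print("camera intrinsics:\n", K)
```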


Retrieving Lights Positions Using Plane Segmentation with Diffuse Illumination Reinforced with Specular Component
Paul-Émile Buteau, Hideo Saito
Poster

We present a novel method to retrieve multiple positions of point lights in real indoor scenes based on a 3D reconstruction. This method takes advantage of illumination over planes detected using a segmentation of the reconstructed mesh of the scene. Our estimation does not suffer from the presence of specular highlights; rather, we use this component to refine the final estimation. This allows consistent relighting throughout the entire scene for augmented reality purposes.


Maintaining appropriate interpersonal distance using virtual body size
Masaki Maeda, Nobuchika Sakata
Demo

Securing one’s personal space is quite important in leading a comfortable social life. However, it is difficult to maintain an appropriate interpersonal distance all the time. Therefore, we propose an interpersonal distance control system with a video see-through system, consisting of a head-mounted display (HMD), depth sensor, and RGB camera. The proposed system controls the interpersonal distance by changing the size of the person in the HMD view. In this paper, we describe the proposed system and conduct an experiment to confirm the capability of the proposed system. Finally, we show and discuss the results of the experiment.


Vergence-based AR X-ray Vision
Yuki Kitajima, Sei Ikeda, Kosuke Sato
Demo

The ideal AR x-ray vision should enable users to clearly observe and grasp not only occludees, but also occluders. We propose a novel selective visualization method of both occludee and occluder layers with dynamic opacity depending on the user's gaze depth. Using the gaze depth as a trigger to select the layers has an essential advantage over using other gestures or spoken commands, in the sense of avoiding collision between a user's intentional commands and unintentional actions. Our experiment using a visual paired-comparison task shows that our method achieved a 20% higher success rate and reduced the average task completion time by 30% compared to a non-selective method using constant half transparency.


Live Texturing of Augmented Reality Characters from Colored Drawings
Stéphane Magnenat, Dat Tien Ngo, Fabio Zünd, Mattia Ryffel, Gioacchino Noris, Gerhard Röthlin, Alessia Marra, Maurizio Nitti, Pascal Fua, Markus Gross, Robert W. Sumner
Oral Demo

Coloring books capture the imagination of children and provide them with one of their earliest opportunities for creative expression. However, given the proliferation and popularity of digital devices, real-world activities like coloring can seem unexciting, and children become less engaged in them. Augmented reality holds unique potential to impact this situation by providing a bridge between real-world activities and digital enhancements. In this paper, we present an augmented reality coloring book App in which children color characters in a printed coloring book and inspect their work using a mobile device. The drawing is detected and tracked, and the video stream is augmented with an animated 3-D version of the character that is textured according to the child’s coloring. This is possible thanks to several novel technical contributions. We present a texturing process that applies the captured texture from a 2-D colored drawing to both the visible and occluded regions of a 3-D character in real time. We develop a deformable surface tracking method designed for colored drawings that uses a new outlier rejection algorithm for real-time tracking and surface deformation recovery. We present a content creation pipeline to efficiently create the 2-D and 3-D content. And, finally, we validate our work with two user studies that examine the quality of our texturing algorithm and the overall App experience.


Local Geometric Consensus: a general purpose point pattern-based tracking algorithm
Liming Yang, Jean-Marie Normand, Guillaume Moreau
Oral Demo

We present a method which can quickly and robustly match 2D and 3D point patterns based solely on their spatial distribution, though it can also handle other cues if available. This method can be easily adapted to many transformations, such as similarity transformations in 2D/3D, and affine and perspective transformations in 2D. It is based on local geometric consensus among several local matchings and a refinement scheme. We provide two implementations of this general scheme, one for the 2D homography case (which can be used for marker or image tracking) and one for the 3D similarity case. We demonstrate the robustness and speed of our proposal on both synthetic and real images and show that our method can be used to augment any (textured/textureless) planar object as well as 3D objects.


An Adaptive Augmented Reality Interface for Hand based on Probabilistic Approach
Jinki Jung, Hyeopwoo Lee, Hyun Seung Yang
Demo

In this paper we propose an adaptive Augmented Reality interface for general hand gestures based on a probabilistic model. The proposed interface provides multiple interfaces and the corresponding gesture inputs by recognizing the context of the hand shape, which requires accurate recognition of static and dynamic hand states. To achieve this accuracy, we present a hand representation that is robust to hand shape variation, and an extraction of hand features based on fingertip posteriors from a Gaussian mixture model. Experimental results show that both context-sensitivity and accurate hand gesture recognition are achieved, as demonstrated through quantitative evaluation and an implementation as a three-in-one virtual interface.


Realtime Shape-from-Template: System and Applications
Toby Collins, Adrien Bartoli
Demo

An important yet unsolved problem in computer vision and Augmented Reality (AR) is to compute the 3D shape of nonrigid objects from live RGB videos. When the object's shape is provided in a rest pose, this is the Shape-from-Template (SfT) problem. We present a general framework for realtime SfT. It handles generic objects, complex deformations and most of the difficulties present in real imaging conditions. Achieving this has required new solutions to two core sub-problems in SfT: robust registration and fast 3D shape inference. For registration we propose Deformable Render-based Block Matching (DRBM), a tracking-based solution that combines the advantages of feature-based and direct approaches without their main disadvantages. Shape inference is achieved by solving a single sparse linear least squares system for each frame, which is done quickly with a Geometric Multi-Grid method. On a standard desktop PC we achieve up to 21 fps depending on the object. Code will be released to the community.


Natural user interface for ambient objects
Meng Ma, Kevin Merckx, Pascal Fallavollita, Nassir Navab
Demo

We present a natural gesture interface for ambient objects using a wearable RGB-D sensor. The aim of this work is to propose a methodology that accurately determines where a user is pointing when gesturing with their finger. First, the wearable RGB-D sensor is affixed to the user's forehead. A calibration between the user's eyes and the RGB-D camera is performed by having the user move their fingers along their line of sight. We detect the fingertip in the depth camera and then find the direction of the line of sight. Finally, we estimate where the user is pointing in the RGB image in different scenarios with a depth map, a detected object, and a controlled virtual element. To validate our methods, we perform a point-to-screen experiment. Results demonstrate that when a user is interacting with a display up to 1.5 meters away, our natural gesture interface has an average error of 2.1 cm. In conclusion, the presented technique is a viable option for reliable user interaction.
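
The pointing geometry reduces to a ray-plane intersection: cast a ray from the calibrated eye position through the detected fingertip and intersect it with the display plane. The sketch below illustrates this under assumed coordinates; the eye, fingertip, and screen values are placeholders.

```python
import numpy as np

def point_on_screen(eye, fingertip, plane_point, plane_normal):
    """Intersect the eye->fingertip ray with the screen plane."""
    d = fingertip - eye
    d = d / np.linalg.norm(d)
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:
        return None                     # ray parallel to the screen
    t = np.dot(plane_normal, plane_point - eye) / denom
    return eye + t * d if t > 0 else None

eye = np.array([0.0, 0.0, 0.0])         # calibrated eye position (m)
tip = np.array([0.05, -0.02, 0.4])      # detected fingertip (m)
hit = point_on_screen(eye, tip, np.array([0.0, 0.0, 1.5]),
                      np.array([0.0, 0.0, -1.0]))
print(hit)                              # pointed-at location on the screen
```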


A Comprehensive Interaction Model for Augmented Reality Systems
Mikel Salazar, Carlos Laorden, Pablo Bringas
Demo

In this extended abstract, we present a model that aims to provide developers with an extensive and extensible set of context-aware interaction techniques, greatly facilitating the creation of meaningful AR-based user experiences. To provide a complete view of the model, we detail the different aspects that form its theoretical foundations, while also discussing several considerations for its correct implementation.


Tracking and Mapping with a Swarm of Heterogeneous Clients
Philipp Fleck, Clemens Arth, Christian Pirchheim, Dieter Schmalstieg
Demo

In this work, we propose a multi-user system for tracking and mapping, which accommodates mobile clients with different capabilities, mediated by a server capable of providing real-time structure from motion. Clients share their observations of the scene according to their individual capabilities. This can range from keyframe tracking alone to mapping and map densification, if more computational resources are available. Our contribution is a system architecture that lets heterogeneous clients contribute to a collaborative mapping effort, without prescribing fixed capabilities for the client devices. We investigate the implications that the clients' capabilities have on the collaborative reconstruction effort and its use for AR applications.


SlidAR: A 3D Positioning Technique for Handheld Augmented Reality
Jarkko Polvi, Takafumi Taketomi, Goshiro Yamamoto, Christian Sandor, Hirokazu Kato
Demo

Tablet system for visual overlay of rectangular virtual object onto real environment
Hiroyuki Yoshida, Takuya Okamoto, Hideo Saito
Demo

Accurate Passive Eye-Pose Estimation through Corneal Imaging
Alexander Plopski, Christian Nitschke, Kiyoshi Kiyokawa, Dieter Schmalstieg, Haruo Takemura
Demo

EyeAR: Physically-Based Depth of Field through Eye Measurements
Damien Rompapas, Kohei Oshima, Sei Ikeda, Goshiro Yamamoto, Takafumi Taketomi, Christian Sandor, Hirokazu Kato
Demo

R-V Dynamics Illusion Experience System in Mixed Reality Space
Yuta Kataoka, Satoshi Hashiguchi, Taiki Yamada, Fumihisa Shibata, Asako Kimura
Demo

Diminished Reality for Hiding a Pedestrian using Hand-held Camera
Kunihiro Hasegawa, Hideo Saito
Demo

SharpView: Improved Legibility of Defocussed Content on Optical See-Through Head-Mounted Displays
Kohei Oshima, Damien Rompapas, Kenneth Moser, Edward Swan, Sei Ikeda, Goshiro Yamamoto, Takafumi Taketomi, Christian Sandor, Hirokazu Kato
Demo

DroneAR: Augmented Reality Supported Unmanned Aerial Vehicle (UAV) in Agriculture for Farmer Perspective
Yuan Wang, Henry Been-Lirn Duh, Hirokazu Kato, Takafumi Taketomi
Demo

DOMINO (Do Mixed-reality Non-stop) Toppling
Ryotaro Hirata, Tomoka Ishibashi, Jianing Qie, Shohei Mori, Fumihisa Shibata, Asako Kimura, Hideyuki Tamura
Demo

Imperceptible On-Screen Markers for Arbitrary Background Images
Goshiro Yamamoto, Luiz Sampaio, Takafumi Taketomi, Christian Sandor, Hirokazu Kato
Demo

Magical Mystery Room, 2nd Stage
Daiki Sakauchi, Yuichi Matsumi, Shohei Mori, Fumihisa Shibata, Asako Kimura, Hideyuki Tamura
Demo

Mobile Binocular Augmented Reality System for Museum
Jae-In Hwang, Elisabeth Adelia Widjojo, Seungmin Roh, Youna Lee, Jinwoo Lee, Junho Kim
Demo

Multiple Kinect for 3D Human Skeleton Posture Using Axis Replacement Method
Nuth Otanasap, Poonpong Boonbrahm
Demo

InstantReach: Virtual Hand Interaction using Smartphone
Yuta Ueda, Daisuke Iwai, Kosuke Sato
Demo

Improving Stability of Vision-based Camera Tracking by Smartphone Sensors
Jaejun Lee, Kei Obata, Maki Sugimoto, Hideo Saito
Demo

Study of the AR marker available on foldable surfaces
Hajime Sasanuma, Yoshitsugu Manabe, Noriko Yata
Demo

Immersive Virtual Tourism with Omnidirectional View Interpolation
Abdoulaye Maiga, Naoki Chiba, Tony Tung, Hideo Saito
Demo



[S&T] 1st October, 11:50-12:50, Main Hall

Improved SPAAM Robustness Through Stereo Calibration
Kenneth Moser, J. Edward Swan II
Poster

We are investigating methods for improving the robustness and consistency of the Single Point Active Alignment Method (SPAAM) optical see-through (OST) head-mounted display (HMD) calibration procedure. Our investigation focuses on two variants of SPAAM. The first utilizes a standard monocular alignment strategy to calibrate the left and right eye separately, while the second leverages stereoscopic cues available from binocular HMDs to calibrate both eyes simultaneously. We compare results from repeated calibrations between methods using eye location estimates and interpupillary distance (IPD) measures. Our findings indicate that the stereo SPAAM method produces more accurate and consistent results during calibration compared to the monocular variant.


Road Maintenance MR System Using LRF and PDR
ChingTzun Chang, Ryosuke Ichikari, Takashi Okuma, Takeshi Kurata, Koji Makita
Poster

We have been developing an MR system for supporting road maintenance using overlaid visual aids. In this situation, we need a positioning method that can provide sub-meter accuracy and work even if the appearance of the road surface is completely different, which can be caused by many factors such as construction phase, time (i.e., day and night), and weather. Therefore, we are developing a real-time worker positioning method that can be applied to these situations by fusing data from a laser range finder (LRF) and pedestrian dead-reckoning (PDR). In a real field, multiple workers are working and moving around the workspace, so we need to establish correspondence between PDR-based trajectories and LRF-based trajectories by defining the similarity of trajectories. Corresponding pairs of trajectories are then fused to acquire the position and direction of each worker. In this paper, we propose a method to calculate the similarity between trajectories and a procedure to fuse them.


Geometric Mapping for Color Compensation using Scene Adaptive Patches
Jong Hun Lee, Yong Hwi Kim, Yong Yi Lee, Kwan Heng Lee
Poster

The SAR technique using a projector-camera system allows us to create various effects on a real scene without physical reconstitution. In order to project content on a textured scene without color imperfections, geometric and radiometric compensation of the projection image should be conducted as preprocessing. In this paper, we present a new geometric mapping method for color compensation in the projector-camera system. We capture the scene and segment it into adaptive patches according to the scene structure using SLIC segmentation. A piecewise polynomial function is evaluated for each patch to find pixel-to-pixel correspondences between the measured and projection images. Finally, color compensation is performed by using a color mixing matrix. Experimental results show that our geometric mapping method establishes accurate correspondences and that the color compensation alleviates the color imperfections caused by the texture of a general scene.


Pseudo Printed Fabrics through Projection Mapping
Yuichiro Fujimoto, Goshiro Yamamoto, Takafumi Taketomi, Christian Sandor, Hirokazu Kato
Poster

Projection-based Augmented Reality commonly projects on rigid objects, while only few systems project on deformable objects. In this paper, we present Pseudo Printed Fabrics (PPF), which enables projection on a deforming piece of cloth. This can be applied to previewing a cloth design while manipulating its shape. We support challenging manipulations, including heavy occlusions and stretching of the cloth. In previous work, we developed a similar system based on a novel marker pattern; PPF extends it in two important aspects. First, we improved performance by two orders of magnitude to achieve interactive speed. Second, we developed a new interpolation algorithm to keep registration during challenging manipulations. We believe that PPF can be applied to domains including virtual try-on and fashion design.


Augmented Reality for Radiation Awareness
Nicola Leucht, Severine Habert, Patrick Wucherer, Simon Weidert, Nassir Navab, Pascal Fallavollita
Poster

C-arm fluoroscopes are frequently used during surgeries for intra-operative guidance. Unfortunately, due to X-ray emission and scattering, increased radiation exposure occurs in the operating theatre. The objective of this work is to sensitize surgeons to their radiation exposure, enable them to check on their exposure over time, and help them choose their best position relative to the C-arm gantry during surgery. First, we aim at simulating the amount of radiation that reaches the surgeon using the Geant4 software, a toolkit developed by CERN. Using a flexible setup in which two RGB-D cameras are mounted to the mobile C-arm, the scene is captured and modeled. After the simulation of particles with specific energies, the dose at the surgeon's position, determined by the depth cameras, can be measured. The validation was performed by comparing the simulation results to both theoretical values from the C-arm's user manual and real measurements made with a QUART didoSVM dosimeter. The average error was 16.46% and 16.39%, respectively. The proposed flexible setup and the high simulation precision achieved without calibration against measured dosimeter values have great potential to be directly used and integrated intraoperatively for dose measurement.


Remote Mixed Reality System Supporting Interactions with Virtualized Objects
Peng Yang, Itaru Kitahara, Yuichi Ohta
Poster

Mixed Reality (MR) can merge real and virtual worlds seamlessly. This paper proposes a method to realize smooth collaboration using remote MR, which makes it possible for geographically distributed users to share the same objects and communicate in real time as if they were at the same place. In this paper, we consider a situation in which users at local and remote sites perform a collaborative task, and the real objects to be operated exist only at the local site. It is necessary to share the real objects between the two sites. In prior studies, sharing real objects by duplication was either too costly or unrealistic. Therefore, we propose a method to share the objects by virtualizing the real objects using Computer Vision (CV) and then rendering the virtualized objects using MR. We also realize interaction with the virtualized objects by the remote-site user and construct a remote collaborative work system. Through experiments, we confirmed the effectiveness of our approach.


Fusion of Vision and Inertial Sensing for Accurate and Efficient Pose Tracking on Smartphones
Xin Yang, Tim Cheng
Poster

This paper aims at accurate and efficient pose tracking of planar targets on modern smartphones. Existing methods, relying on either visual features or motion sensing based on built-in inertial sensors, are either too computationally expensive to achieve real-time performance on a smartphone, or too noisy to achieve sufficient tracking accuracy. In this paper we present a hybrid tracking method which can achieve real-time performance with high accuracy. Based on the same framework as a state-of-the-art visual feature tracking algorithm [5], which ensures accurate and reliable pose tracking, the proposed hybrid method significantly reduces its computational cost with the assistance of a phone's built-in inertial sensors. However, noise in the inertial sensors and abrupt errors in feature tracking due to severe motion blur could result in instability of the hybrid tracking system. To address this problem, we propose to employ an adaptive Kalman filter with abrupt error detection to robustly fuse the inertial and feature tracking results. We evaluated the proposed method on a dataset consisting of 16 video clips with synchronized inertial sensing data. Experimental results demonstrated our method's superior performance and accuracy on smartphones, compared to a state-of-the-art vision tracking method [5]. The dataset will be made publicly available with the publication of this paper.
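
For illustration, a minimal constant-velocity Kalman filter with a crude innovation gate is sketched below, fusing a gyro-predicted angle with a vision-measured angle. This is a stand-in under assumed state and noise models, not the adaptive filter from the paper.

```python
import numpy as np

class PoseKalman1D:
    """Toy fusion of one pose parameter (e.g. yaw) from gyro and vision."""
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(2)              # state: [angle, angular rate]
        self.P = np.eye(2)                # state covariance
        self.Q = q * np.eye(2)            # process noise (inertial drift)
        self.R = r                        # measurement noise (vision)

    def predict(self, gyro_rate, dt):
        """Propagate the state using the gyro as the rate input."""
        self.x[1] = gyro_rate
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, vision_angle):
        """Fuse a vision-measured angle, rejecting abrupt outliers."""
        innovation = vision_angle - self.x[0]
        if abs(innovation) > 0.5:         # crude abrupt-error detection
            return                        # e.g. feature tracking failed on blur
        H = np.array([1.0, 0.0])
        S = H @ self.P @ H + self.R       # innovation covariance (scalar)
        K = self.P @ H / S                # Kalman gain, shape (2,)
        self.x = self.x + K * innovation
        self.P = self.P - np.outer(K, H) @ self.P

kf = PoseKalman1D()
kf.predict(gyro_rate=0.12, dt=1 / 30)     # inertial step (cheap, every frame)
kf.update(vision_angle=0.004)             # visual correction when available
print(kf.x)
```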


Augmenting mobile C-arm fluoroscopes via Stereo-RGBD sensors for multimodal visualization
Severine Habert, Meng Ma, Wadim Kehl, Xiang Wang, Federico Tombari, Pascal Fallavollita, Nassir Navab
Poster

Fusing intraoperative X-ray data with real-time video in a common reference frame is not trivial, since both modalities have to be acquired from the same viewpoint. The goal of this work is to design a flexible system comprising two RGBD sensors that can be attached to any mobile C-arm, with the objective of synthesizing projective images from the X-ray source viewpoint. To achieve this, we calibrate the RGBD sensors, followed by the X-ray source, with a 3D calibration object. Then, we synthesize the projective image from the X-ray viewpoint by applying a volumetric rendering method. Finally, the X-ray image is overlaid on the projective image without any further registration, offering a multimodal visualization of X-ray and color images. In this paper we present the different steps of development (i.e., hardware setup, calibration, and rendering algorithm) and discuss clinical applications for the new video augmented C-arm. By placing X-ray markers on a patient's hand and a spine model, we show that the overlay accuracy between the X-ray image and the synthesized image is on average 1.7 mm.


INCAST: Interactive Camera Streams for Surveillance Cams AR
István Szentandrási, Michal Zachariáš, Rudolf Kajan, Jan Tinka, Markéta Dubská, Jakub Sochor, Adam Herout
Poster

Augmented reality does not make any sense for fixed cameras. Or does it? In this work, we deal with static cameras and their usability for augmented reality applications. Knowing that the camera does not move makes camera pose estimation both less and more difficult: one does not have to deal with pose change over time, but on the other hand, obtaining some level of understanding of the scene from a single viewpoint is challenging. We propose several ways to gain advantage from the camera being static, and a pipeline for broadcasting a video stream enriched with the information needed for its visual augmentation: Interactive Camera Streams, INCAST. We present a proof-of-concept system showing the usability of INCAST on several use cases: non-interactive demos and simple AR games.


Natural 3D Interaction using a See-through Mobile AR System
Yuko Unuma, Takashi Komuro
Poster

In this paper, we propose an interaction system in which the appearance of the image displayed on a mobile display is consistent with that of the real space, and which enables a user to interact with virtual objects overlaid on the image using the user's hand. The three-dimensional scene obtained by a depth camera is projected according to the user's viewpoint position obtained by face tracking, and a see-through image whose appearance is consistent with the scene outside the mobile display is generated. Interaction with virtual objects is realized by using the depth information obtained by the depth camera. To make virtual objects behave as if they were in real space, virtual objects are rendered in a world coordinate system that stays fixed to the real scene even when the mobile display moves, and the direction of the gravitational force applied to virtual objects is made consistent with that of the world coordinate system. The former is realized by using the ICP (Iterative Closest Point) algorithm and the latter by using the information obtained by an accelerometer. Thus, natural interaction with virtual objects using the user's hand is realized.


Augmented Wire Routing Navigation for Wire Assembly
Mark Rice, Hong Huei Tay, Jamie Ng, Calvin Lim, Senthil Selvaraj, Ellick Wu
Poster

Within manufacturing, high-value digital solutions are needed to optimize and aid shop floor processes. This includes agile technologies that can be easily integrated into factory environments to facilitate manufacturing tasks. In this paper, we present a dynamic system to support the electrical wiring assembly of commercial aircraft. Specifically, we describe the system design, which aims to improve the productivity of factory operators through the integration of wearable and mobile solutions. An evaluation of the augmented component of our system using a pair of smart glasses with 12 participants is reported, and we describe important interaction issues in the ongoing development of this work.


Marker Identification Using IR LEDs and RGB Color Descriptors
Gou Koutaki, Shodai Hirata, Hiromu Sato, Keiichi Uchimura
Poster

In optical motion capture systems, it is difficult to correctly recognize markers based on their unique identifiers (IDs) in a single frame. In this paper, we propose using two types of light-emitting diodes (LEDs) and cameras, infrared (IR) and RGB, in order to correctly detect and identify all markers used to track objects in a given system. To detect and estimate the three-dimensional (3D) position of a marker, we measure IR LEDs using IR stereo cameras. Furthermore, in order to identify each marker, we calculate and compare the RGB color descriptor in the vicinity of its center. Our system consists of general IR and RGB cameras, and is easy to extend by increasing the number of cameras. We implemented an IR/RGB LED marker circuit and constructed a simple motion capture system to test the effectiveness of our system. The results show that our system can detect the 3D positions and unique IDs of markers in one frame.
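
A rough sketch of the identification step is shown below: sample an RGB descriptor in the vicinity of a detected marker center and match it to known marker colors by nearest distance. The window size, mean-color descriptor, and data are assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(2)

def color_descriptor(img, center, radius=5):
    """Mean RGB over a small window around a detected marker center."""
    x, y = center
    patch = img[max(y - radius, 0):y + radius + 1,
                max(x - radius, 0):x + radius + 1]
    return patch.reshape(-1, 3).mean(axis=0)

def identify(descriptor, known):
    """Return the marker ID whose reference color is nearest."""
    return min(known, key=lambda k: np.linalg.norm(descriptor - known[k]))

frame = rng.integers(0, 256, (480, 640, 3)).astype(float)  # stand-in RGB frame
known = {1: np.array([255.0, 0, 0]), 2: np.array([0, 255.0, 0])}
marker_id = identify(color_descriptor(frame, (320, 240)), known)
print("identified marker:", marker_id)
```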


RGB-D/C-arm Calibration and Application in Medical Augmented Reality
Xiang Wang, Severine Habert, Meng Ma, Chun-Hao Huang, Pascal Fallavollita, Nassir Navab
Poster

Calibration and registration are the first steps for augmented reality and mixed reality applications. In the medical field, the calibration between an RGB-D camera and a mobile C-arm fluoroscope is a new topic which introduces challenges. In this paper, we propose a precise 3D/2D calibration method to achieve a video augmented fluoroscope. With the design of a suitable calibration phantom for RGB-D/C-arm calibration, we calculate the projection matrix from the depth camera coordinates to the X-ray image. Through a comparison experiment by combining different steps leading to the calibration, we evaluate the effect of every step of our calibration process. Results demonstrated that we obtain a calibration RMS error of 0.54±1.40 mm which is promising for surgical applications. We conclude this paper by showcasing two clinical applications. One is a markerless registration application, the other is an RGB-D camera augmented mobile C-arm visualization.


Transforming your website to an augmented reality view
Dimitrios Ververidis, Spiros Nikolopoulos, Ioannis Kompatsiaris
Poster

In this paper we present FastAR, a software component capable of transforming Joomla-based websites into AR channels compatible with the most popular augmented reality browsers (i.e., Junaio, Layar, and Wikitude). FastAR exploits the consistency of the data structure across multiple sites that have been developed using the same content management system, so as to automate the transformation of a website into an augmented reality channel. The proposed component abstracts away all related programming tasks and significantly reduces the time required to generate and publish AR content, making the entire process manageable by non-experts. To verify the usefulness and effectiveness of FastAR, we conducted a survey to solicit the opinion of users who carried out the installation and transformation process.


[Doctoral Consortium]
User Study on Augmented Reality User Interfaces for 3D Media Production
Max Krichenbauer

One of the most intuitive application concepts for Augmented Reality (AR) user interfaces (UIs) is the creation of virtual 3D media content, such as 3D models and animations for movies and games. Even though this idea has been repeatedly suggested over the last decades, and in spite of recent technological advancements, very little progress has been made towards actual real-world application of AR in professional media production. To this day, no immersive 3D UI has been commonly used by professionals for 3D computer graphics (CG) content creation. In our recent paper published at ISMAR 2014, we analyzed the current state of 3D media content creation, including a survey on professional 3D media design work, a requirements analysis for prospective 3D UIs, and a UI concept to meet the identified challenges of real-world application of AR to the production pipeline. Our current research continues this work by validating our approach in a user study with both amateur and professional 3D artists. We aim to show that several characteristics of UI design common in academic research are highly problematic for research focused on real-world applications. These are: placing the primary focus on intuitiveness, neglecting the typical demerits of a novel technology, and relying on easy-to-acquire samples from the general population for both qualitative and quantitative data. The results of this user study will produce new insights helpful for future research, design, and development of 3D UIs for media creation.


[Doctoral Consortium]
SPAROGRAM: The Spatial Augmented Reality Holographic Display for 3D Interactive Visualization
Minju Kim

Augmented reality (AR) technology has become popular and is widely used in various fields. Recent advances in display and computer graphics technology have minimized the barriers of the screen, creating a closer relationship between physical and virtual space. In spite of this, there are still restrictions and difficulties in representing pervasive information in physical space, which is the ultimate goal of AR. The central research question of this dissertation is how to present realistic spatial AR visualization through tight integration of three-dimensional visual forms and the real space. In this paper, we propose SPAROGRAM, an experimental prototype that is capable of visualizing augmented information by fully and coherently exploiting the space surrounding the real object. In order to present information effectively and to explore interaction issues by taking advantage of the new display capabilities, we approach this topic with two methodologies: 3D spatial visualization and interaction. In the early stage of the study, we solved basic problems. First, we designed the SPAROGRAM prototype. Then, to express spatial visualization, stereoscopic imagery was applied in a multi-layer display, while spatial user interaction was applied in order to provide a coherent spatial experience. Our initial investigation suggests that the newly conceived spatial AR holographic display possesses the ability to create seamless 3D space perception and to express 3D AR visualization effectively and in a more realistic way.


[Doctoral Consortium]
Situated Analytics: Interactive Analytical Reasoning In Physical Space
Neven ElSayed

Our research aim is to develop and examine techniques to support analytical reasoning in physical space. We present novel interaction and visualization techniques for supporting such reasoning, which we call Situated Analytics. Our approach comprises situated and abstract information representation with real-time analytical interaction. This research draws on two research areas: Visual Analytics and Augmented Reality.


[Doctoral Consortium]
AR Guided Capture and Modeling for Improved Virtual Navigation
Benjamin Nuernberger

Using photographs to model the visual world for virtual navigation involves the three steps of collecting photos, performing image-based modeling with those photos, and finally giving the user controls to navigate the final model. In this dissertation, we enhance each step in this pipeline in specific ways for the ultimate goal of augmented and virtual navigation of visual reality. In so doing, this thesis will open new perspectives both in the process of collecting real world content for virtual navigation experiences and in providing novel guided virtual tours. First, we describe augmented reality (AR) interfaces that effectively guide users to capture the visual world, directing them to cover the most important areas of the scene. Next, while most image-based modeling techniques focus on accurate geometry reconstruction, we instead propose modeling methods that alternatively focus on the motor and cognitive components of navigation. Finally, we propose constrained user interfaces for both augmented and virtual navigation. By enhancing each step in this pipeline, our approach enables a more efficient and intuitive virtual navigation experience.


[Doctoral Consortium]
Free-Hand Gesture-based Interaction for Handheld Augmented Reality
Huidong Bai

In this research, we investigate mid-air free-hand gesture-based interfaces for handheld Augmented Reality (AR). We prototype our gesture-based handheld AR interfaces by combining visual AR tracking and free-hand gesture detection. We describe how each method is implemented and evaluated in user experiments that compare them with traditional device-centric methods like touch input. Results from our user studies show that the proposed interaction methods are natural and intuitive, and provide a more fun and engaging user experience. We discuss implications of this research and directions for further work.


[Doctoral Consortium]
Supporting Asynchronous Collaboration within Spatial Augmented Reality
Andrew Irlitti

Collaboration is seen as a promising area of investigation for the utilization of augmented reality. However, exploration has concentrated on synchronous solutions, both co-located and remote, with limited investigation into asynchronous practices. Asynchronous collaboration differs from its synchronous counterparts in that the associated knowledge plays a much more vital role in the strength and accuracy of the associated instructions. This dissertation focuses on investigating the applicability and acceptance of augmented reality and its corresponding interactions for producer-consumer asynchronous collaboration, where the underlying communication plays as important a role as the provided augmentations. Initial results demonstrated the benefits of augmented reality for procedural guidance, allowing further investigation into asynchronous techniques such as providing physical anchors for virtual content, mobile handheld spatial augmented reality, and the effect on performance in multi-faceted environment search tasks using a reduced field of view. In this paper we provide the reasoning behind our approaches, and present our existing work to test and validate our proposition.




[MASH'D] 1st October, 12:00-12:45, Room 411-412

A Novel Haptic Vibration Media and its application
Yasuhiro Suzuki, Rieko Suzuki
Poster

Tactile sense is a medium and has been of interest to engineering, the arts, design, and other fields. We have proposed the Tactile Score, a novel method for describing tactile sense based on the notation system of music scores, and we have investigated design principles for composing tactile senses. We propose a method for transforming a Tactile Score back into tactile sense by using haptic sound vibrations, SENSONIC = sense + sonic. We apply SENSONIC technology to a beauty device, which produces haptic sound vibrations (SENSONIC) translated from a Tactile Score of massage for beauty treatment.


The Use of Shadows on Real Floor as a Depth Correction of Stereoscopically Visualized Virtual Objects
Jounghuem Kwon, Sang-Hun Nam, Kiwon Yeom, Bum-Jae You
Poster

This article describes our initial experimental results on how shadow cues in real space affect an individual's ability to perceive the depth of objects in a 3D stereoscopic environment. During the course of developing direct touch interaction systems, we found that it was consistently difficult to control depth perception. We observed that users often had problems identifying the 3D location of virtual objects in binocular stereo vision, resulting in failure to touch the virtual object. To address this limitation, we developed and explored a technique to generate shadows in real space. The shadows of virtual objects were artificially generated and projected onto the floor of the real space corresponding to the 3D locations of the virtual objects. This setting was tested to evaluate whether these shadows improved an individual's ability to perceive the depth of virtual objects. The effectiveness of the proposed method is discussed with the results.


Light Detecting 3D User Interface-equipped System for Mixed and Augmented Reality Games
Jinwoo Park, Sungeun An, Woontack Woo
Poster

This paper suggests a novel and interactive system for three dimensional games in mixed and augmented reality (MAR). The proposed system allows a user to interact with 3D virtual objects with accuracy by locating a source of light using simple light detecting algorithms. From our pilot study, our tangible system proved to be highly usable and beneficial for 3D MAR gameplay with inexpensive accompanying equipment. Here, we provide implementation details, present results from a pilot study, and outline future directions.


Designing for Engagement in Augmented Reality Games to Assess Upper Extremity Motor Dysfunctions
Paul Dezentje, Marina Cidota, Rory Clifford, Stephan Lukosch, Paulina J.m. Bank, Heide Lukosch
Poster

Currently, each medical discipline involved in assessing motor disorders in various patient groups uses subjectively scored clinical tests, qualitative video analysis, or cumbersome marker-based motion capturing. This extended abstract investigates the potential of a serious game implemented as an Augmented Reality (AR) application to facilitate objective, quantitative, and valid measures of the factors contributing to upper extremity motor dysfunction. First, the design process of the game and the system architecture of the AR framework are described. Second, a study of game engagement is presented, based on an experiment we conducted with eight healthy people (aged 52-79). The results of the study show that there is potential for engagement, but the usability needs to be improved before it can emerge.


Augmented Reality Tool for Markerless Virtual Try-on around Human Arm
San Gunes, Okan Sanli, Ovgu Ozturk Ergun
Poster

We present a markerless 3D Augmented Reality application for virtual accessory try-on around the human arm. The system is based on a Kinect sensor and a multi-layer rendering framework that renders RGB data, depth data, and 3D models of accessories simultaneously. The aim is to support realistic visualization of virtual objects around the human arm, by detecting the wrist pose and handling occlusion, for various interactive marketing and retail applications such as virtual watch try-on.


Augmented live coding: towards semantically enhanced musical performances
Alo Allik
Demo

NarcissUs: machine learning from machine learning from machine learning from machine
Andrés Villa Torres, Eugen Danzinger
Demo

Wanderl_st: Dartboard as an Agent of Map Navigation
Nathan Guo
Demo

Development of Spatial Visualization Skills with Augmented Reality
Yoshikazu Fujita
Demo



[S&T] 2nd October, 11:30-12:30, Main Hall

On-site AR Interface with Web-based 3D Archiving System for Archaeological Project
Ryosuke Matsushita, Tokihisa Higo, Hiroshi Suita, Yoshihiro Yasumuro
Poster

This paper proposes an AR interface for on-site use in an archaeological project. We have been developing a web-based 3D archiving system for supporting the diverse specialties and nationalities needed to drive the surveys and restoration work of the archaeological project. Our 3D archiving system is designed for the spontaneous updating, accumulation, and sharing of information on findings to better enable frequent discussions, through a 3D virtual copy of the field site that a user can visit, explore, and embed information in over the Internet. Here we present an AR-based human interface to enhance access to the archiving system from mobile devices at the actual site. Using SfM (structure from motion) and solving the PnP problem, a photo taken at the site can be stably matched to the pre-registered photo collection in the archive system, and access to the archiving system starts smoothly by associating the 3D coordinates between the system and the actual user viewpoint. Our implementation works effectively on an ongoing project at the Mastaba of Idout in Saqqara, Egypt.
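
The localization step can be illustrated compactly: given 2D-3D matches between an on-site photo and the archive's SfM reconstruction, solving the PnP problem yields the user's viewpoint. The sketch below uses OpenCV with a synthetic ground-truth pose for the correspondences; it is an illustration, not the project's actual pipeline.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
pts3d = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0.5, 0.5, 1], [0.2, 0.8, 0.5]])           # archive 3D points

# Synthesize 2D observations from a ground-truth pose for this toy example
rvec_gt = np.array([[0.1], [0.2], [0.0]])
tvec_gt = np.array([[0.0], [0.0], [4.0]])
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)
pts2d = pts2d.reshape(-1, 2)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    cam_pos = (-R.T @ tvec).ravel()     # user viewpoint in archive coordinates
    print("estimated camera position:", cam_pos)
```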


Photo Billboarding: A Simple Method to Provide Clues that Relate Camera Views and a 2D Map for Mobile Pedestrian Navigation
Junta Watanabe, Shingo Kagami, Koichi Hashimoto
Poster

This paper describes a mobile pedestrian navigation system that provides users with clues that help them understand the spatial relationship between mobile camera views and a 2D map. The proposed method draws upright billboards on the map that correspond to the basal planes of the camera's viewing frustums. The user can take photographs of arbitrary landmarks along the way, building billboards on the map with the corresponding photographs.


Automatic Visual Feedback from Multiple Views for Motor Learning
Dan Mikami, Mariko Isogawa, Kosuke Takahashi, Akira Kojima
Poster

A visual feedback system of a trainee's movements for effective motor learning is proposed. It automatically provides visual feedback of a trainee's movement, synchronized with the reference movement and shown from multiple view angles, with a delay of a few seconds. Because the feedback is automatic, a trainee can obtain it without any operations, while the memory of the motion is still clear. By employing features with low computational cost, the proposed system achieves synchronized video feedback from four cameras on a consumer tablet PC.


A Step Closer To Reality: Closed Loop Dynamic Registration Correction in SAR
Hemal Naik, Federico Tombari, Christoph Resch, Peter Keitler, Nassir Navab
Poster

In Spatial Augmented Reality (SAR) applications, real-world objects are augmented with virtual content by means of a calibrated camera-projector system. The virtual content to be projected is prepared by means of a computer-generated model (CAD) of the real object. It is often the case that the real object deviates from its CAD model, resulting in misregistered augmentations. We propose a new method to dynamically correct the planned augmentation by accommodating the unknown deviations in the object geometry. We use a closed-loop approach in which the projected features are detected in the camera image and deployed as feedback. As a result, the registration misalignment is identified and the augmentations are corrected in areas affected by the deviation. Our work is especially focused on SAR applications in the industrial domain, where this problem is omnipresent. We show that our method is effective and beneficial for multiple industrial applications.


Design Guidelines for Generating Augmented Reality Instructions
Cledja Karina Rolim da Silva, Dieter Schmalstieg, Denis Kalkofen, Veronica Teichrieb
Poster

Most work on instructions in Augmented Reality (AR) does not follow established patterns or design rules; each approach defines its own method of conveying instructions. This work describes our initial results and experiences towards defining design guidelines for AR instructions. The guidelines were derived from a survey of the most common visualization techniques and instruction types applied in AR. We studied how 2D and 3D instructions can be applied in the AR context.


Haptic Ring Interface Enabling Air-Writing in Virtual Reality Environment
Kiwon Yeom, Jounghuem Kwon, Sang-Hun Nam, Bum-Jae You
Poster

We introduce a novel finger-worn ring interface that enables complex spatial interactions through 3D hand movement in a virtual reality environment. Users receive physical feedback in the form of vibrations from the wearable ring interface as their finger reaches a certain 3D position. The positions of the fingertip are extracted, linked, and then reconstructed as a trajectory. This system allows the wearer to write characters in midair as if they were using an imaginary whiteboard. Users can freely write in the air using Korean characters, upper- and lower-case English letters, and digits in real time, with an accuracy rate of over 92%. Thus, it is now conceivable that anything people can do on contemporary touch-based devices, they could do in midair with a pseudo-contact interface.


Remote Welding Robot Manipulation using Multi-view Images
Yuichi Hiroi, Kei Obata, Katsuhiro Suzuki, Naoto Ienaga, Maki Sugimoto, Hideo Saito, Tadashi Takamaru
Poster

This paper proposes a remote welding robot manipulation system using multi-view images. After an operator specifies a two-dimensional path on the images, the system transforms it into a three-dimensional path and displays the movement of the robot by overlaying graphics on the images. The accuracy of our system is sufficient to weld objects when combined with a sensor in the robot. The system allows a non-expert operator to weld objects remotely and intuitively, without the need to create a 3D model of the processed object beforehand.


A Particle Filter Approach to Outdoor Localization using Image-based Rendering
Christian Poglitsch, Clemens Arth, Dieter Schmalstieg, Jonathan Ventura
Poster

We propose an outdoor localization system using a particle filter. In our approach, a textured, geo-registered model of the outdoor environment is used as a reference to estimate the pose of a smartphone. The device position and orientation obtained from a Global Positioning System (GPS) receiver and an inertial measurement unit (IMU) are used as a first estimate of the true pose. Then, multiple pose hypotheses are randomly distributed around the GPS/IMU measurement and used to produce renderings of the virtual model. With vision-based methods, the rendered images are compared with the image received from the smartphone, and the matching scores are used to update the particle filter. Our system improves the camera pose estimate in real time without user assistance.
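
A minimal sketch of one filter iteration, with assumed helpers: render(pose) draws the geo-registered model from a hypothesized pose, and match_score compares that rendering with the camera frame:

    import numpy as np

    def particle_filter_step(prior_pose, frame, render, match_score,
                             n=200, sigma=(5.0, 5.0, 2.0, 0.05, 0.05, 0.1)):
        # Scatter pose hypotheses (x, y, z, roll, pitch, yaw) around the
        # GPS/IMU prior, weight each by its image-matching score, resample.
        particles = prior_pose + np.random.randn(n, 6) * np.asarray(sigma)
        w = np.array([match_score(render(p), frame) for p in particles])
        w = np.clip(w, 1e-12, None)
        w /= w.sum()
        particles = particles[np.random.choice(n, size=n, p=w)]
        return particles, particles.mean(axis=0)   # refined pose estimate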


AR4AR: Using Augmented Reality for guidance in Augmented Reality Systems setup
Frieder Pankratz, Gudrun Klinker
Poster

AR systems have been developed for many years now, ranging from systems consisting of a single sensor and output device to systems with a multitude of sensors and/or output devices. With the increasing complexity of the setup, the complexity of handling the different sensors, as well as the necessary calibrations and registrations, increases accordingly. A much needed (yet missing) class of augmented reality applications supports AR system engineers as they set up and maintain an AR system, by providing visual guides and giving immediate feedback on the current quality of their calibration measurements.


Exploiting Photogrammetric Targets for Industrial AR
Hemal Naik, Yuji Oyamada, Peter Keitler, Nassir Navab
Poster

In this work, we advocate the use of photogrammetric targets for object tracking in Industrial Augmented Reality (IAR). Photogrammetric targets, especially uncoded circular targets, are widely used in industry to perform 3D surface measurements. Therefore, an AR solution based on uncoded circular targets can improve workflow integration by reusing existing targets and saving time. These circular targets do not have coded patterns for establishing unique 2D-3D correspondences between the targets on the model and their image projections. We solve this particular problem of 2D-3D correspondence of non-coplanar circular targets from a single image. We introduce a Conic Pair Descriptor, which computes Euclidean invariants from circular targets both in the model space and in the image space. A three-stage method is used to compare the descriptors and compute the correspondences with up to 100% precision and 89% recall. We achieve a tracking performance of 3 FPS (2560x1920 px) to 8 FPS (640x480 px), depending on the camera resolution.
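
As a hypothetical front end (the descriptor itself is the paper's contribution and is not reproduced here), uncoded circular targets can be picked up as ellipses, since circles project to ellipses under perspective:

    import cv2

    def detect_circular_targets(gray):
        # Threshold, then fit an ellipse to each sufficiently long contour;
        # the fitted conics would feed the subsequent correspondence stages.
        # ([-2] keeps this working across OpenCV 3 and 4 return conventions.)
        _, bw = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours = cv2.findContours(bw, cv2.RETR_LIST,
                                    cv2.CHAIN_APPROX_NONE)[-2]
        return [cv2.fitEllipse(c) for c in contours if len(c) >= 20]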


Rubix: Dynamic Spatial Augmented Reality by Extraction of Plane Regions with a RGB-D Camera
Masayuki Sano, Kazuki Matsumoto, Bruce Thomas, Hideo Saito
Poster

Dynamic spatial augmented reality requires accurate real-time 3D pose information of the physical objects that are to be projected onto. Previous depth-based methods for tracking objects required strong features to enable recognition, making it difficult to estimate an accurate 6DOF pose for physical objects with few recognizable features (such as a non-textured cube). We propose a more accurate method with fewer limitations for the pose estimation of a tangible object that has known planar faces, using only depth data from an RGB-D camera. In this paper, the physical object's shape is limited to cubes of different sizes. We apply this new tracking method to achieve dynamic projections onto these cubes. In our method, 3D points from an RGB-D camera are divided into clusters of planar regions, and the point cloud of each visible face of the object is fitted to an already known geometric model of a cube. With the 6DOF pose of the physical object, SAR-generated imagery is then projected correctly onto the physical object. The 6DOF tracking is designed to support tangible interactions with the physical object. We implemented example interactive applications with one or more cubes to show the capability of our method.
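
The plane-extraction step might look like the following RANSAC sketch (our illustration, assuming an N x 3 point cloud); each extracted planar region would then be matched against a face of the known cube model:

    import numpy as np

    def fit_plane_ransac(pts, iters=200, tol=0.005):
        # Repeatedly fit a plane to 3 random points and keep the plane with
        # the most inliers (points within tol metres of the plane).
        best = (None, None)
        for _ in range(iters):
            p = pts[np.random.choice(len(pts), 3, replace=False)]
            n = np.cross(p[1] - p[0], p[2] - p[0])
            if np.linalg.norm(n) < 1e-9:
                continue          # degenerate (collinear) sample
            n = n / np.linalg.norm(n)
            inliers = np.abs((pts - p[0]) @ n) < tol
            if best[1] is None or inliers.sum() > best[1].sum():
                best = ((n, p[0]), inliers)
        return best   # ((normal, point on plane), inlier mask)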


Content Completion in Lower Dimensional Feature Space through Feature Reduction and Compensation
Mariko Isogawa, Dan Mikami, Kosuke Takahashi, Akira Kojima
Poster

A novel three-stage framework for image/video content completion is proposed. First, input images/videos are converted to a lower-dimensional feature space, which achieves effective restoration even when a damaged region includes complex structures and changes in color. Second, the damaged region is restored in the converted feature space. Finally, an inverse conversion from the lower-dimensional feature space to the original feature space generates the completed image in the original feature space. This three-step solution offers two advantages. First, it increases the chance of applying patches that would be dissimilar in the original color space. Second, it enables the use of many existing restoration methods, each with its own advantages, because only the feature space used for retrieving similar patches is changed. Experiments verify the effectiveness of the proposed framework.
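
The three stages can be summarized as a pipeline; to_feature, restore and from_feature are placeholders here, since the framework's point is that any feature conversion and any existing restoration method can be plugged in:

    def complete_in_feature_space(img, mask, to_feature, restore, from_feature):
        f = to_feature(img)               # 1) reduce to a lower-dim feature space
        f_done = restore(f, mask)         # 2) complete the damaged region there
        return from_feature(f_done, img)  # 3) invert and compensate back to color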


ARPML: The Augmented Reality Process Modeling Language
Tobias Müller, Tim Rieger
Poster

The successful application of augmented reality as a guidance tool for procedural tasks such as maintenance or repair requires an easily usable way of modeling the support processes. Even though some suggestions have already been made to address this problem, they still have shortcomings and do not provide all the needed features. Thus, in a first step, the requirements that a solution has to meet are collected and presented. Based on these, the augmented reality process modeling language (ARPML) is developed, which consists of four building blocks: (i) templates, (ii) sensors, (iii) work steps and (iv) tasks. Unlike existing approaches, it facilitates the creation of multiple views on one process. This makes it possible to select exactly the instructions and information needed in a targeted work context. It also allows multiple variants of one process to be combined into one model with a minimum of redundancy. The application of ARPML is demonstrated with a practical example.
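
Purely for illustration (the abstract does not give ARPML's concrete syntax), a process model built from the four blocks might be organized as follows, with work steps tagged by view so one model serves several work contexts:

    process = {
        "templates": {"highlight": {"shape": "arrow", "color": "red"}},
        "sensors":   ["torque_sensor"],          # detect step completion
        "tasks": [{
            "name": "replace_filter",
            "work_steps": [
                {"instruction": "Loosen the four bolts",
                 "template": "highlight", "views": ["novice", "expert"]},
                {"instruction": "Verify torque with torque_sensor",
                 "views": ["novice"]},           # detail shown to novices only
            ],
        }],
    }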


Authoring Tools in Augmented Reality: An Analysis and Classification of Content Design Tools
Roberta Cabral Mota, Rafael Roberto, Veronica Teichrieb
Poster

Augmented Reality authoring tools are important instruments that can help to spread the use of AR. They can be classified as programming tools or content design tools, the latter of which completely remove the need for programming skills to develop an AR solution. Several such tools have been developed over the past years; however, few works aim to identify patterns and general models for them. This work performs a trend analysis of content design tools in order to identify their AR functionalities, authoring paradigms, deployment strategies and general dataflow models. Since it is intended to assist developers who want to create authoring tools, it focuses on the last three aspects. In total, 19 tools were analyzed; this evaluation identified two authoring paradigms and two deployment strategies, and from their combination it was possible to elaborate four generic dataflow models into which every tool could be fitted.


Affording Visual Feedback for Natural Hand Interaction in AR to Assess Upper Extremity Motor Dysfunction
Marina A. Cidota, Rory M.S. Clifford, Paul Dezentje, Stephan G. Lukosch, Paulina J.M. Bank
Poster

For the clinical community, there is a great need for objective, quantitative and valid measures of the factors contributing to motor dysfunction. Currently, there are no standard protocols for assessing motor dysfunction in various patient groups, and each medical discipline uses subjectively scored clinical tests, qualitative video analysis, or cumbersome marker-based motion capturing. We therefore investigate the potential of Augmented Reality (AR) combined with serious gaming and markerless hand tracking to facilitate efficient, cost-effective and patient-friendly methods for evaluating upper extremity motor dysfunction in various patient groups. First, the design process of the game and the system architecture of the AR framework are described. For unhindered assessment of motor dysfunction, patients should be able to operate the system freely and naturally, and to understand their actions in the virtual AR world. To test this in our system, we conducted a pilot usability study with five healthy people (aged 57-63) on three different modalities of visual feedback for natural hand interaction: no virtual hand, a partial virtual hand (tips of the index finger and thumb), and a full virtual hand model. The results of the study show that a virtual representation of the fingertips or the hand improves the usability of natural hand interaction.



