Augmentation Without Boundaries

The 14th IEEE International Symposium on Mixed and Augmented Reality

Doctoral Consortium

We are proud to present the 4th ISMAR Doctoral Consortium (DC). We continue to build on the success of previous DCs, where students received invaluable input from their mentors to help improve their research. The goal of the DC is to create an opportunity for Ph.D. students to present their research, discuss their current progress and plans, and receive constructive criticism and guidance on their work from mentors and DC participants.
This year, six DC students will present their excellent and interesting work at the DC.

Schedule of Doctoral Consortium

Date: 3 October 2015
Place: Room 402-403 at the ISMAR 2015 conference site

9:05-10:00  Max Krichenbauer (Nara Institute of Science and Technology), 1st year
User Study on Augmented Reality User Interfaces for 3D Media Production
[Mentors] Gudrun Klinker, Takeshi Kurata
10:05-11:00  Minju Kim (KAIST), 2nd year
SPAROGRAM: The Spatial Augmented Reality Holographic Display for 3D Interactive Visualization
[Mentors] Henry Duh, Steven Feiner
11:05-12:00  Neven Elsayed (University of South Australia), 3rd year
Situated Analytics: Interactive Analytical Reasoning In Physical Space
[Mentors] Henry Duh, Takeshi Kurata
12:00-13:00  Lunch Break
13:00-13:55  Benjamin Nuernberger (University of California, Santa Barbara), 4th year
AR Guided Capture and Modeling for Improved Virtual Navigation
[Mentors] Bruce Thomas, Dieter Schmalstieg
14:00-14:55  Huidong Bai (HIT Lab NZ, University of Canterbury), final year
Free-Hand Gesture-based Interaction for Handheld Augmented Reality
[Mentors] Bruce Thomas, Gudrun Klinker
15:00-15:55  Andrew Irlitti (University of South Australia), final year
Supporting Asynchronous Collaboration within Spatial Augmented Reality
[Mentors] Dieter Schmalstieg, Steven Feiner

User Study on Augmented Reality User Interfaces for 3D Media Production
Max Krichenbauer (Nara Institute of Science and Technology)
Advisor(s): Goshiro Yamamoto, Takafumi Taketomi, Christian Sandor, Hirokazu Kato
Poster on 1 Oct; Teaser on 1 Oct

One of the most intuitive application concepts for Augmented Reality (AR) user interfaces (UIs) is the creation of virtual 3D media content, such as 3D models and animations for movies and games. Even though this idea has been suggested repeatedly over the past decades, and in spite of recent technological advancements, very little progress has been made toward an actual real-world application of AR in professional media production. To this day, no immersive 3D UI is in common use by professionals for 3D computer graphics (CG) content creation. In our recent paper published at ISMAR 2014, we analyzed the current state of 3D media content creation, including a survey of professional 3D media design work, a requirements analysis for prospective 3D UIs, and a UI concept to meet the identified challenges of applying AR to the real-world production pipeline.
Our current research builds on this by validating our approach in a user study with both amateur and professional 3D artists. We aim to show that several UI design practices common in academic research are highly problematic for research focused on real-world applications: placing the primary focus on intuitiveness, neglecting the typical demerits of a novel technology, and relying on easy-to-acquire samples from the general population for both qualitative and quantitative data.
The results of this user study will produce new insights to guide future research, design, and development of 3D UIs for media creation.

SPAROGRAM: The Spatial Augmented Reality Holographic Display for 3D Interactive Visualization
Minju Kim (KAIST)
Advisor(s): Kwangyun Wohn
Poster on 1 Oct; Teaser on 1 Oct

Augmented reality (AR) technology has become popular and is now widely used in various fields. Recent advances in display and computer graphics technology have minimized the barriers of the screen, bringing physical and virtual space into closer relationship. In spite of this, it remains difficult to represent pervasive information in physical space, which is the ultimate goal of AR. The central research question of our dissertation is how to present realistic spatial AR visualization through tight integration of three-dimensional visual forms and real space. In this paper, we propose SPAROGRAM, an experimental prototype capable of visualizing augmented information by coherently exploiting the space surrounding a real object. In order to present information effectively and to explore interaction issues opened up by the new display capabilities, we approach this topic with two methodologies: 3D spatial visualization and interaction. In the early stage of the study, we solved basic problems. First, we designed the SPAROGRAM prototype. Then, to express spatial visualization, stereoscopic imagery was applied to a multi-layer display, while spatial user interaction was added to provide a coherent spatial experience. Our initial investigation suggests that the newly conceived spatial AR holographic display can create seamless 3D space perception and express 3D AR visualization effectively and in a more realistic way.

Situated Analytics: Interactive Analytical Reasoning In Physical Space
Neven ElSayed (University of South Australia)
Advisor(s): Bruce H. Thomas, Ross T. Smith
Poster on 1 Oct; Teaser on 1 Oct

Our research aim is to develop and examine techniques to support analytical reasoning in physical space. We present novel interaction and visualization techniques for supporting such reasoning, which we call Situated Analytics. Our approach comprises situated and abstract information representation with real-time analytical interaction. This research draws on two research areas: Visual Analytics and Augmented Reality.

AR Guided Capture and Modeling for Improved Virtual Navigation
Benjamin Nuernberger (University of California, Santa Barbara)
Advisor(s): Tobias Höllerer, Matthew Turk
Poster on 1 Oct; Teaser on 1 Oct

Using photographs to model the visual world for virtual navigation involves the three steps of collecting photos, performing image-based modeling with those photos, and finally giving the user controls to navigate the final model. In this dissertation, we enhance each step in this pipeline in specific ways for the ultimate goal of augmented and virtual navigation of visual reality. In so doing, this thesis will open new perspectives both in the process of collecting real world content for virtual navigation experiences and in providing novel guided virtual tours. First, we describe augmented reality (AR) interfaces that effectively guide users to capture the visual world, directing them to cover the most important areas of the scene. Next, while most image-based modeling techniques focus on accurate geometry reconstruction, we instead propose modeling methods that alternatively focus on the motor and cognitive components of navigation. Finally, we propose constrained user interfaces for both augmented and virtual navigation. By enhancing each step in this pipeline, our approach enables a more efficient and intuitive virtual navigation experience.

Free-Hand Gesture-based Interaction for Handheld Augmented Reality
Huidong Bai (University of Canterbury)
Advisor(s): Mukundan Ramakrishnan, Mark Billinghurst
Poster on 1 Oct; Teaser on 1 Oct

In this research, we investigate mid-air free-hand gesture-based interfaces for handheld Augmented Reality (AR). We prototype our gesture-based handheld AR interfaces by combining visual AR tracking with free-hand gesture detection. We describe how each method is implemented and evaluated in user experiments that compare it with traditional device-centric methods such as touch input. Results from our user studies show that the proposed interaction methods are natural and intuitive, and provide a more fun and engaging user experience. We discuss the implications of this research and directions for further work.

Supporting Asynchronous Collaboration within Spatial Augmented Reality
Andrew Irlitti (University of South Australia)
Advisor(s): Bruce H. Thomas, Stewart Von Itzstein
Poster on 1 Oct; Teaser on 1 Oct

Collaboration is seen as a promising area of investigation for the utilization of augmented reality. However, exploration has concentrated on synchronous solutions, both co-located and remote, with limited investigation into asynchronous practices. Asynchronous collaboration differs from its synchronous counterparts in that the captured knowledge plays a much more vital role in the strength and accuracy of the resulting instructions. This dissertation investigates the applicability and acceptance of augmented reality and its corresponding interactions for producer-consumer asynchronous collaboration, where the underlying communication plays as important a role as the provided augmentations. Initial results demonstrated the benefits of augmented reality for procedural guidance, prompting further investigation into asynchronous techniques such as providing physical anchors for virtual content, mobile handheld spatial augmented reality, and the effect of a reduced field of view on performance in multi-faceted environment search tasks. In this paper we provide the reasoning behind our approaches, and present our existing work to test and validate our proposition.

Doctoral Consortium Chairs:
Dr. Mark Billinghurst, University of South Australia
Dr. Winyu Chinthammit, HIT Lab AU, University of Tasmania
Dr. Yoshinari Kameda, University of Tsukuba
