Date: September 29th, 2015
Organizer: Clemens Arth (Graz University of Technology), Jonathan Ventura (University of Colorado), Dieter Schmalstieg (Graz University of Technology)
Abstract: In this tutorial we review existing technologies for global 6DOF outdoor localization in urban environments, primarily using visual sensors. The goal is to provide a clear overview of the current state of the art in global positioning and orientation estimation, which spans a wide range of methods and algorithms from both the Computer Vision and the Augmented Reality communities. We propose a taxonomy that separates approaches along the continuum of accuracy and the number of DOF they provide, making the suitability of each class of approaches for a given AR application immediately visible and accessible.
In particular, we will focus on methods that are real-time capable, or that can at least be applied through a server-client infrastructure to instantly provide a full and accurate 6DOF pose in large-scale environments. We will discuss algorithms based on single images, panoramic images, SLAM maps, and sparse point cloud reconstructions from SfM. We will also emphasize approaches that allow localization in 6DOF with a minimum of prior information, i.e., using only a 2D city map, which promises to enable global-scale AR applications and experiences well beyond the current state of the art.
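To make the localization problem concrete, the following is a minimal sketch of the core pose estimation step shared by many single-image and point-cloud-based approaches of the kind discussed in this tutorial: recovering a camera's 6DOF rotation and translation from 2D-3D correspondences against a prior reconstruction. The function, its inputs, and the use of OpenCV's PnP solver are illustrative assumptions, not the specific methods covered in the tutorial.

```python
import numpy as np
import cv2


def estimate_6dof_pose(points_3d, points_2d, K):
    """Estimate a 6DOF camera pose from 2D-3D correspondences.

    points_3d: (N, 3) world points from an SfM/SLAM reconstruction.
    points_2d: (N, 2) matched image keypoints.
    K: (3, 3) camera intrinsic matrix (distortion assumed removed).
    """
    # Robustly solve the Perspective-n-Point problem with RANSAC
    # to tolerate outlier 2D-3D matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        K,
        None,  # no distortion coefficients (assumption)
        flags=cv2.SOLVEPNP_ITERATIVE,
        reprojectionError=4.0,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers
```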
Date: September 29th, 2015
Organizer: Shinsaku Hiura (Hiroshima City University), Hajime Nagahara (Kyushu University), Daisuke Iwai (Osaka University), Toshiyuki Amano (Wakayama University)
Abstract: In this tutorial, we introduce emerging technologies on computational imaging and light field projection to AR/MR researchers. Specifically, this tutorial provides basic knowledge as well as application perspectives on (but not limited to) the following topics: light field cameras and displays, unconventional depth sensing methods, extension of depth of focus for imaging and projection, modification of the appearance of real objects via light field projection, geometric and radiometric calibration and compensation of projection-based display, optimization in multi-projector environments.
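As a small taste of the radiometric calibration and compensation topic, here is a minimal sketch of per-pixel compensation under a simple linear projector-camera model (captured = surface_gain * projected + ambient). The model, array shapes, and names are simplifying assumptions for illustration only; the methods presented in the tutorial are considerably more sophisticated.

```python
import numpy as np


def compensate(target, surface_gain, ambient):
    """Per-pixel radiometric compensation under a simple linear model:

        captured = surface_gain * projected + ambient

    target, surface_gain, ambient: float arrays of identical shape (H, W, 3)
    with values in [0, 1], obtained from a prior calibration step (assumed).
    Returns the image to project so that the camera ideally observes `target`
    on the textured projection surface.
    """
    eps = 1e-6
    projected = (target - ambient) / np.maximum(surface_gain, eps)
    # Clip to the projector's physically realizable range; out-of-gamut
    # targets cannot be fully compensated.
    return np.clip(projected, 0.0, 1.0)
```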
Date: October 3rd, 2015
Organizer: Daniel Sonntag (DFKI)
Abstract: The tutorial will introduce you to the design and implementation of Intelligent User Interfaces (IUIs). IUIs aim to incorporate intelligent automated capabilities in human computer interaction, where the net impact is a human-computer interaction that improves performance or usability in critical ways. It also involves designing and implementing an artificial intelligence (AI) component that effectively leverages human skills and capabilities, so that human performance with an application excels. IUIs embody capabilities that have traditionally been associated more strongly with humans than with computers: how to perceive, interpret, learn, use language, reason, plan, and decide.
Web: http://www.dfki.de/~sonntag/ISMAR2015-IUI.htm
Date: October 3rd, 2015
Organizer: Eric Hawkinson (Seibi University), Martin Stack (University of Shiga Prefecture), Jay Klaphake, J.D. (TEDxKyoto Founder/Executive Producer and TEDx Ambassador in Japan)
Abstract: This tutorial presents a variety of use cases of AR in informal learning environments, drawn from a range of contexts including education, tourism, and event organizing. It is mainly geared toward people creating learning environments in any industry who want a foundation for starting to implement AR. The featured use case will be how AR was used at TEDxKyoto to engage participants. Several student projects that use AR will also be presented and available for demo.
This tutorial will include the following topics and demos:
Featured Use Case - TEDxKyoto
This year at TEDxKyoto, a new interactive team was assembled to get participants more engaged with speakers, vendors, and volunteers. We wanted to encourage more interaction among all stakeholders, both in person and online. Approaching the idea on several fronts and linking them together, we put together a series of activities that had never been seen at TEDx events before. The result was an interesting mix that drew a great reaction from participants.

The TEDx program is designed to help communities, organizations, and individuals spark conversation and connection through local TED-like experiences. The focus is on curating an interesting program of speakers and performers to engage audiences. Our team's focus was creating activities for participants that encouraged interaction. One of these activities revolved around the use of augmented reality and mobile technology: we created a smartphone application that allowed participants to explore the venue in a fun and interesting way. The app overlaid digital information on physical objects all over the event, such as signs, artwork, volunteer T-shirts, and the distributed speaker program.

User analytics and participant observations were used to analyze the activities. The results point to several opportunities for using this technology to bring people together in international social settings; future challenges surrounding technology acceptance and privacy also became apparent. We will explain how these technologies can be used and how they might affect how people of different cultures and backgrounds interact at larger events.
This will be a good introduction to how event organizers can use AR to engage audiences, and it may also interest AR developers seeking to understand barriers to adoption for first-time users and implementation issues for non-programmers.
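For readers curious how such overlays can work in principle, below is a minimal sketch of detecting a planar image target (such as a poster or the printed speaker program) in a camera frame and warping digital content onto it, using ORB features and a RANSAC homography via OpenCV. The function and its parameters are illustrative assumptions, not the actual TEDxKyoto app.

```python
import cv2
import numpy as np


def overlay_on_target(frame, target, overlay, min_matches=15):
    """Overlay digital content on a planar image target found in a camera
    frame, using ORB features and a homography. Illustrative only."""
    orb = cv2.ORB_create(1000)
    kp_t, des_t = orb.detectAndCompute(target, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    if des_t is None or des_f is None:
        return frame

    # Brute-force Hamming matching of binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return frame  # target not reliably visible

    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return frame

    # Warp the overlay into the frame at the target's detected location.
    h, w = frame.shape[:2]
    overlay_resized = cv2.resize(overlay, (target.shape[1], target.shape[0]))
    warped = cv2.warpPerspective(overlay_resized, H, (w, h))
    alpha_mask = cv2.warpPerspective(
        np.full(target.shape[:2], 255, np.uint8), H, (w, h))
    frame = frame.copy()
    frame[alpha_mask > 0] = warped[alpha_mask > 0]
    return frame
```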
Project videos found at: https://medium.com/@erichawkinson/high-and-low-tech-interactive-tedxkyoto-2398fc329f3e