Unobtrusive Augmentation of Physical Environments: Interaction Techniques, Spatial Displays and Ubiquitous Sensing
Olwal, Alex. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. ORCID iD: 0000-0003-2578-3403
2009 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The fundamental idea of Augmented Reality (AR) is to improve and enhance our perception of the surroundings, through the use of sensing, computing and display systems that make it possible to augment the physical environment with virtual computer graphics. AR is, however, often associated with user-worn equipment, whose current complexity and lack of comfort limit its applicability in many scenarios.

The goal of this work has been to develop systems and techniques for uncomplicated AR experiences that support sporadic and spontaneous interaction with minimal preparation on the user’s part.

This dissertation defines a new concept, Unobtrusive AR, which emphasizes an optically direct view of a visually unaltered physical environment, the avoidance of user-worn technology, and the preference for unencumbering techniques.

The first part of the work focuses on the design and development of two new AR display systems. They illustrate how AR experiences can be achieved through transparent see-through displays that are positioned in front of the physical environment to be augmented. The second part presents two novel sensing techniques for AR, which employ an instrumented surface for unobtrusive tracking of active and passive objects. These techniques have no visible sensing technology or markers, and are suitable for deployment in scenarios where it is important to maintain the visual qualities of the real environment. The third part of the work discusses a set of new interaction techniques for spatially aware handheld displays, public 3D displays, touch screens, and immaterial displays (which are not constrained by solid surfaces or enclosures). Many of the techniques are also applicable to human-computer interaction in general, as indicated by the accompanying qualitative and quantitative insights from user evaluations.

The thesis contributes a set of novel display systems, sensing technologies, and interaction techniques to the field of human-computer interaction, and brings new perspectives to the enhancement of real environments through computer graphics.

Place, publisher, year, edition, pages
Stockholm: KTH, 2009. xii, 72 p.
Series
TRITA-CSC-A, ISSN 1653-5723; 2009:09
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-10439
ISBN: 978-91-7415-339-2 (print)
OAI: oai:DiVA.org:kth-10439
DiVA: diva2:217821
Public defence
2009-06-05, E1, Lindstedtsvägen 3, KTH, 13:00 (English)
Note
QC 20100805
Available from: 2009-05-26. Created: 2009-05-14. Last updated: 2015-01-30. Bibliographically approved.
List of papers
1. ASTOR: An Autostereoscopic Optical See-through Augmented Reality System
2005 (English). In: ISMAR: IEEE and ACM International Symposium on Mixed and Augmented Reality, 2005, pp. 24-27. Conference paper (Refereed)
Abstract [en]

We present a novel autostereoscopic optical see-through system for Augmented Reality (AR). It uses a transparent holographic optical element (HOE) to separate the views produced by two or more digital projectors. It is a minimally intrusive AR system that does not require the user to wear special glasses or any other equipment, since the user sees different images depending on the point of view. The HOE itself is a thin glass plate or plastic film that can easily be incorporated into other surfaces, such as a window. The technology offers great flexibility, allowing the projectors to be placed where they are the least intrusive. ASTOR's capability of sporadic AR visualization is currently ideal for smaller physical workspaces, such as our prototype setup in an industrial environment.

Keyword
Augmented reality; Autostereoscopy; Holographic optical element; Optical see-through; Projection-based; System
National Category
Computer Science; Atom and Molecular Physics and Optics
Identifiers
urn:nbn:se:kth:diva-10459 (URN)
10.1109/ISMAR.2005.15 (DOI)
000233695600003
2-s2.0-33750952090 (Scopus ID)
Note
QC 20100804. Available from: 2009-05-15. Created: 2009-05-15. Last updated: 2010-12-06. Bibliographically approved.
2. Spatial Augmented Reality on Industrial CNC-Machines
2008 (English). In: SPIE 2008 Electronic Imaging, Vol. 6804: The Engineering Reality of Virtual Reality 2008, 2008, article 680409. Conference paper (Refereed)
Abstract [en]

In this work we show how Augmented Reality (AR) can be used to create an intimate integration of process data with the workspace of an industrial CNC (computer numerical control) machine. AR allows us to combine interactive computer graphics with real objects in a physical environment - in this case, the workspace of an industrial lathe. ASTOR is an autostereoscopic optical see-through spatial AR system, which provides real-time 3D visual feedback without the need for user-worn equipment, such as head-mounted displays or sensors for tracking. The use of a transparent holographic optical element, overlaid onto the safety glass, allows the system to simultaneously provide bright imagery and clear visibility of the tool and workpiece. The system makes it possible to enhance visibility of occluded tools as well as to visualize real-time data from the process in the 3D space. The graphics are geometrically registered with the workspace and provide an intuitive representation of the process, amplifying the user's understanding and simplifying machine operation.
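The phrase "geometrically registered with the workspace" comes down to a coordinate-frame transform: process data measured in machine coordinates is mapped into the display's frame. A minimal sketch of that mapping follows; the matrix name and the assumption of a one-time calibration are illustrative, not details from the paper.

```python
# Hypothetical sketch of geometric registration: mapping points from the
# lathe's workspace frame into the display frame with a calibration matrix.
import numpy as np

# Workspace -> display transform; identity here as a placeholder, in practice
# estimated once from corresponding points during calibration.
T_WORKSPACE_TO_DISPLAY = np.eye(4)

def to_display(p_workspace):
    """Map an (x, y, z) point in machine coordinates to display coordinates."""
    p = np.append(np.asarray(p_workspace, dtype=float), 1.0)  # homogeneous
    q = T_WORKSPACE_TO_DISPLAY @ p
    return q[:3] / q[3]  # back to Cartesian coordinates
```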

Keyword
ASTOR; Augmented reality; Autostereoscopic display; CNC; Holographic optical element; Mixed reality; Optical see-through; Spatial augmented reality
National Category
Computer Science; Atom and Molecular Physics and Optics
Identifiers
urn:nbn:se:kth:diva-10460 (URN)
10.1117/12.760960 (DOI)
000255679500007
2-s2.0-40749160865 (Scopus ID)
Note
QC 20100804. Available from: 2009-05-15. Created: 2009-05-15. Last updated: 2010-12-06. Bibliographically approved.
3. POLAR: Portable, optical see-through, low-cost augmented reality
2006 (English). In: Proc. ACM Symp. Virtual Reality Softw. Technol. (VRST), 2006, pp. 227-230. Conference paper (Refereed)
Abstract [en]

We describe POLAR, a portable, optical see-through, low-cost augmented reality system, which allows a user to see annotated views of small to medium-sized physical objects in an unencumbered way. No display or tracking equipment needs to be worn. We describe the system design, including a hybrid IR/vision head-tracking solution, and present examples of simple augmented scenes. POLAR's compactness could allow it to be used as a lightweight and portable PC peripheral for providing mobile users with on-demand AR access in field work.

Series
Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, 2006
Keyword
Augmented reality, Compact, Low-cost, Optical see-through, Portable, Projection, Personal computers, Systems analysis, Tracking (position), Low cost, Virtual reality
National Category
Software Engineering
Identifiers
urn:nbn:se:kth:diva-155995 (URN)
2-s2.0-33748699854 (Scopus ID)
1595930981 (ISBN)
9781595930989 (ISBN)
Conference
VRST'05 - ACM Symposium on Virtual Reality Software and Technology 2005, 7-9 November 2005, Monterey, CA, USA
Note
QC 20141125
Available from: 2014-11-25. Created: 2014-11-17. Last updated: 2015-01-30. Bibliographically approved.
4. LightSense: Enabling Spatially Aware Handheld Interaction Devices
2007 (English). In: Proceedings - ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality, 2007, pp. 119-122. Conference paper (Refereed)
Abstract [en]

The vision of spatially aware handheld interaction devices has been hard to realize. Several research groups have addressed the difficulties of solving the general tracking problem for small devices; issues include performance, hardware availability and platform independence. We present LightSense, an approach that employs commercially available components to achieve robust tracking of cell phone LEDs, without any modifications to the device. Cell phones can thus be promoted to interaction and display devices in ubiquitous installations of systems such as the ones we present here. This could enable a new generation of spatially aware handheld interaction devices that would unobtrusively empower and assist us in our everyday tasks.
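The abstract does not spell out the tracking pipeline, but the core idea of locating a bright phone LED with an off-the-shelf camera can be sketched as thresholding plus a centroid. The threshold value and grayscale input below are assumptions for illustration, not the published implementation.

```python
# Illustrative LED-spot detection for LightSense-style tracking (a sketch).
import numpy as np

def led_centroid(gray_frame: np.ndarray, threshold: int = 240):
    """Return the (x, y) centroid of the brightest pixels, or None.

    gray_frame: 2D uint8 camera image of the surface; a phone LED is
    typically much brighter than its surroundings and saturates the sensor.
    """
    ys, xs = np.nonzero(gray_frame >= threshold)
    if xs.size == 0:
        return None  # no LED visible in this frame
    return float(xs.mean()), float(ys.mean())
```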

Keyword
Augmented reality, Cell phone, Handheld, LED, Mixed reality, Mobile, Portable, Spatially aware, Ubiquitous
National Category
Computer Science; Telecommunications
Identifiers
urn:nbn:se:kth:diva-10462 (URN)
10.1109/ISMAR.2006.297802 (DOI)
000245714200029
2-s2.0-45149104843 (Scopus ID)
978-142440650-0 (ISBN)
1424406501 (ISBN)
Conference
ISMAR 2006: 5th IEEE and ACM International Symposium on Mixed and Augmented Reality; Santa Barbara, CA; United States; 22 October 2006 through 25 October 2006
Note
QC 20100804
Available from: 2009-05-15. Created: 2009-05-15. Last updated: 2014-11-17. Bibliographically approved.
5. LUMAR: A Hybrid Spatial Display System for 2D and 3D Handheld Augmented Reality
2007 (English). In: ICAT: International Conference on Artificial Reality and Telexistence, 2007, pp. 63-70. Conference paper (Refereed)
Abstract [en]

LUMAR is a hybrid system for spatial displays, allowing cell phones to be tracked in 2D and 3D through combined egocentric and exocentric techniques based on the LightSense and UMAR frameworks. LUMAR differs from most other phone-based spatial display systems in its three-layered information space. The hybrid spatial display system consists of printed matter that is augmented with context-sensitive, dynamic 2D media when the device is on the surface, and with overlaid 3D visualizations when it is held in mid-air.
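The three-layered information space can be read as a mode switch driven by where the device is tracked. The layer names and boolean inputs below are assumptions made for this sketch, not terminology from the paper.

```python
# Hypothetical mode switch for LUMAR's three information layers.
from enum import Enum, auto

class Layer(Enum):
    PRINT = auto()       # device away from the page: plain printed matter
    SURFACE_2D = auto()  # device resting on the page: dynamic 2D media
    MIDAIR_3D = auto()   # device held above the page: overlaid 3D view

def select_layer(on_surface: bool, tracked_in_midair: bool) -> Layer:
    if on_surface:
        return Layer.SURFACE_2D   # exocentric tracking (LightSense-style)
    if tracked_in_midair:
        return Layer.MIDAIR_3D    # egocentric tracking (UMAR-style)
    return Layer.PRINT
```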

Keyword
spatially aware; portable; mobile; handheld; cell phone; augmented reality; mixed reality; ubiquitous
National Category
Telecommunications; Computer Science
Identifiers
urn:nbn:se:kth:diva-10463 (URN)
10.1109/ICAT.2007.38 (DOI)
000253933000008
2-s2.0-48349094389 (Scopus ID)
978-076953056-7 (ISBN)
Note
QC 20100804. Available from: 2009-05-15. Created: 2009-05-15. Last updated: 2010-12-06. Bibliographically approved.
6. SurfaceFusion: Unobtrusive Tracking of Everyday Objects in Tangible User Interfaces
2008 (English). In: Graphics Interface 2008, 2008, pp. 235-242. Conference paper (Refereed)
Abstract [en]

Interactive surfaces and related tangible user interfaces often involve everyday objects that are identified, tracked, and augmented with digital information. Traditional approaches for recognizing these objects typically rely on complex pattern recognition techniques, or on the addition of active electronics or fiducials that alter the visual qualities of those objects, making them less practical for real-world use. Radio Frequency Identification (RFID) technology provides an unobtrusive method of sensing the presence of nearby tagged objects and identifying them, but has no inherent means of determining their position. Computer vision, on the other hand, is an established approach to tracking objects with a camera. While shapes and movement on an interactive surface can be determined with classic image processing techniques, object recognition tends to be complex, computationally expensive and sensitive to environmental conditions. We present a set of techniques in which movement and shape information from the computer vision system is fused with RFID events that identify what objects are in the image. By synchronizing these two complementary sensing modalities, we can associate changes in the image with events in the RFID data, in order to recover the position, shape and identity of the objects on the surface, while avoiding complex computer vision processes and exotic RFID solutions.
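One way to picture the fusion step is event association by temporal proximity: a tag read from the RFID reader is paired with the vision blob whose appearance is closest in time. The event structures and time window below are illustrative assumptions, not the paper's actual data model.

```python
# Sketch of RFID/vision fusion by temporal correlation (illustrative only).
from dataclasses import dataclass

@dataclass
class RfidArrival:
    tag_id: str
    t: float            # time the tag was first read, in seconds

@dataclass
class BlobAppeared:
    blob_id: int
    t: float            # time a new blob showed up on the surface
    xy: tuple           # blob centroid from the vision system

def associate(tag: RfidArrival, blobs: list, window: float = 0.5):
    """Pair a tag read with the temporally closest newly appeared blob."""
    nearby = [b for b in blobs if abs(b.t - tag.t) <= window]
    if not nearby:
        return None     # no image change near this read; defer the decision
    best = min(nearby, key=lambda b: abs(b.t - tag.t))
    return tag.tag_id, best.blob_id, best.xy  # identity fused with position
```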

Keyword
Computer vision; Fusion; RFID; Surface computing; Tabletop; Tangible user interface
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-10464 (URN)
2-s2.0-63549147778 (Scopus ID)
Conference
Graphics Interface 2008; Windsor, ON, 28 May 2008 through 30 May 2008
Note
QC 20100804. Available from: 2009-05-15. Created: 2009-05-15. Last updated: 2010-08-04. Bibliographically approved.
7. Rubbing and Tapping for Precise and Rapid Selection on Touch-Screen Displays
2008 (English). In: CHI: SIGCHI Conference on Human Factors in Computing Systems, 2008, pp. 295-304. Conference paper (Refereed)
Abstract [en]

We introduce two families of techniques, rubbing and tapping, that use zooming to make precise interaction on passive touch screens possible. Rub-Pointing uses a diagonal rubbing gesture to integrate pointing and zooming in a single-handed technique. In contrast, Zoom-Tapping is a two-handed technique in which the dominant hand points, while the non-dominant hand taps to zoom, simulating multi-touch functionality on a single-touch display. Rub-Tapping is a hybrid technique that integrates rubbing with the dominant hand to point and zoom, and tapping with the non-dominant hand to confirm selection. We describe the results of a formal user study comparing these techniques with each other and with the well-known Take-Off and Zoom-Pointing selection techniques. Rub-Pointing and Zoom-Tapping had significantly fewer errors than Take-Off for small targets, and were significantly faster than Take-Off and Zoom-Pointing. We show how the techniques can be used for fluid interaction in an image viewer and in existing applications, such as Google Maps.
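As a rough sketch of how a rubbing gesture might drive zooming: accumulate the finger's motion into diagonal strokes and treat each direction reversal as one zoom step. The thresholds and trigger condition are guesses for illustration; the paper's detection logic is not given in the abstract.

```python
# Hypothetical rub-to-zoom trigger for Rub-Pointing-style interaction.
def rub_zoom(points, zoom, min_stroke=20.0, zoom_step=1.25):
    """points: chronological (x, y) samples of one continuous touch.

    Accumulates motion into diagonal strokes; every back-and-forth reversal
    multiplies the zoom factor by zoom_step. Returns the updated zoom.
    """
    acc_dx = acc_dy = 0.0
    last_dir = 0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        acc_dx += x1 - x0
        acc_dy += y1 - y0
        # count a stroke once the accumulated diagonal motion is long enough
        if abs(acc_dx) >= min_stroke and abs(acc_dy) >= min_stroke:
            direction = 1 if acc_dx > 0 else -1
            if last_dir and direction != last_dir:
                zoom *= zoom_step          # reversal detected: zoom one step
            last_dir = direction
            acc_dx = acc_dy = 0.0          # start the next stroke
    return zoom
```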

Keyword
Interaction techniques; Pointing; Rub-Pointing; Rub-Tapping; Rubbing; Tapping; Touch screens; Zoom-Tapping
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-10465 (URN)
10.1145/1357054.1357105 (DOI)
000268586100039
2-s2.0-57649178619 (Scopus ID)
Conference
26th Annual CHI Conference on Human Factors in Computing Systems, CHI 2008; Florence; 5 April 2008 through 10 April 2008
Note
QC 20100804. Available from: 2009-05-15. Created: 2009-05-15. Last updated: 2010-12-06. Bibliographically approved.
8. Unencumbered 3D Interaction with See-through Displays
2008 (English). In: NordiCHI: Nordic Conference on Human–Computer Interaction, 2008, pp. 527-530. Conference paper (Refereed)
Abstract [en]

Augmented Reality (AR) systems that employ user-worn display and sensor technology can be problematic for certain applications as the technology might, for instance, be encumbering to the user or limit the deployment options of the system. Spatial AR systems instead use stationary displays that provide augmentation to an on-looking user. They could avoid issues with damage, breakage and wear, while enabling ubiquitous installations in unmanned environments, through protected display and sensing technology. Our contribution is an exploration of compatible interfaces for public AR environments. We investigate interactive technologies, such as touch, gesture and head tracking, which are specifically appropriate for spatial optical see-through displays. A prototype system for a digital museum display was implemented and evaluated. We present the feedback from domain experts, and the results from a qualitative user study of seven interfaces for public spatial optical see-through displays.

Keyword
3D; Augmented reality; Gesture; Interaction; Interface; Mixed reality; Pose; Public display; See-through; Spatial display; Touch
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-10466 (URN)
10.1145/1463160.1463236 (DOI)
2-s2.0-70049099185 (Scopus ID)
Note
QC 20100805. Available from: 2009-05-15. Created: 2009-05-15. Last updated: 2010-08-05. Bibliographically approved.
9. An Immaterial Pseudo-3D Display System with 3D Interaction
2008 (English). In: Three-Dimensional Television: Capture, Transmission, and Display, Springer, 2008, pp. 505-528. Chapter in book (Other academic)
Abstract [en]

We present a novel walk-through pseudo-3D display, which enables 3D interaction and interesting possibilities for advanced user interface designs. Our work is based on the patented FogScreen, an "immaterial" indoor 2D projection screen which enables high-quality projected images in free space. We extend the basic 2D FogScreen setup with stereoscopic imagery and two-sidedness, in addition to the use of head tracking to provide correct perspective 3D rendering for a single user. We also add support for 2D and 3D interaction for multiple users with the objects on the screen, via a number of wireless input technologies that let us experiment with interaction with or without encumbering devices. We evaluate the usability of these interaction techniques by observing non-expert use in real settings to quantify the effects they have on 3D perception. The result is a wall-sized, immaterial pseudo-3D display that enables engaging 3D visuals with intriguing 3D interaction.
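Head tracking for "correct perspective 3D rendering for a single user" is conventionally done with an off-axis (asymmetric) view frustum anchored to the fixed screen. The chapter's rendering math is not given in the abstract, so the following is the standard textbook construction, assuming a screen centered at the origin in the z = 0 plane and the eye at positive z.

```python
# Standard off-axis projection for a head-tracked fixed screen (a sketch of
# the conventional technique, not code from the chapter).
import numpy as np

def off_axis_projection(eye, screen_w, screen_h, near=0.1, far=100.0):
    """Build an OpenGL-style frustum matrix for an eye at (ex, ey, ez)."""
    ex, ey, ez = eye
    # frustum edges on the near plane, scaled from the screen edges by near/ez
    left   = (-screen_w / 2.0 - ex) * near / ez
    right  = ( screen_w / 2.0 - ex) * near / ez
    bottom = (-screen_h / 2.0 - ey) * near / ez
    top    = ( screen_h / 2.0 - ey) * near / ez
    return np.array([
        [2*near/(right-left), 0.0, (right+left)/(right-left), 0.0],
        [0.0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0.0],
        [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```

Combined with a view matrix that translates the world by the negated eye position, this keeps the virtual imagery registered to the screen plane as the tracked viewer moves.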

Place, publisher, year, edition, pages
Springer, 2008
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-10467 (URN)
978-3-540-72531-2 (ISBN)
Note
QC 20100805. Available from: 2009-05-15. Created: 2009-05-15. Last updated: 2010-08-05. Bibliographically approved.
10. Consigalo: Multi-user, Face-to-face Interaction with Adaptive Audio, on an Immaterial Display
2008 (English). In: INTETAIN: International Conference on Intelligent Technologies for Interactive Entertainment, 2008. Conference paper (Refereed)
Abstract [en]

In this paper, we describe and discuss interaction techniques and interfaces enabled by immaterial displays. Dual-sided projection allows casual face-to-face interaction between users, with computer-generated imagery in between them. The immaterial display imposes minimal restrictions on the movements or communication of the users. As an example of these novel possibilities, we provide a detailed description of our Consigalo gaming system, which creates an enhanced gaming experience featuring sporadic and unencumbered interaction. Consigalo utilizes a robust 3D tracking system, which supports multiple simultaneous users on either side of the projection surface. Users manipulate graphics that are floating in mid-air with natural gestures. We have also added a responsive and adaptive soundtrack to further immerse the users in the interactive experience. We describe the technology used in the system, the innovative aspects compared to previous large-screen gaming systems, the gameplay, and our lessons learned from designing and implementing the interactions, visuals and auditory feedback.

Keyword
Multi-user; interaction; dual-sided; FogScreen; immaterial; adaptive audio
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-10468 (URN)
978-963-9799-13-4 (ISBN)
Note
QC 20100805. Available from: 2009-05-15. Created: 2009-05-15. Last updated: 2010-08-05. Bibliographically approved.
11. Spatially Aware Handhelds for High-Precision Tangible Interaction with Large Displays
2009 (English). In: TEI 2009: International Conference on Tangible and Embedded Interaction, 2009, pp. 181-188. Conference paper (Refereed)
Abstract [en]

While touch-screen displays are becoming increasingly popular, many factors affect user experience and performance. Surface quality, parallax, input resolution, and robustness, for instance, can vary with sensing technology, hardware configurations, and environmental conditions.

We have developed a framework for exploring how we could overcome some of these dependencies, by leveraging the higher visual and input resolution of small, coarsely tracked mobile devices for direct, precise, and rapid interaction on large digital displays.

The results from a formal user study show no significant differences in performance when comparing four techniques we developed for a tracked mobile device, where two existing touch-screen techniques served as baselines. The mobile techniques, however, had more consistent performance and smaller variations among participants, and an overall higher user preference in our setup. Our results show the potential of spatially aware handhelds as an interesting complement or substitute for direct touch-interaction on large displays.

Keyword
Interaction technique; LightSense; Mobile; MobileButtons; MobileDrag; MobileGesture; MobileRub; Spatially aware; Tangible; Touch; Touch-screen
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-10437 (URN)
10.1145/1517664.1517705 (DOI)
2-s2.0-70349092988 (Scopus ID)
Note
QC 20100805. Available from: 2009-05-15. Created: 2009-05-14. Last updated: 2010-08-05. Bibliographically approved.

Open Access in DiVA
fulltext: FULLTEXT02.pdf (application/pdf, 5940 kB)
