SurfaceFusion: Unobtrusive Tracking of Everyday Objects in Tangible User Interfaces
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. ORCID iD: 0000-0003-2578-3403
2008 (English). In: Graphics Interface 2008, 2008, 235-242 p. Conference paper, Published paper (Refereed)
Abstract [en]

Interactive surfaces and related tangible user interfaces often involve everyday objects that are identified, tracked, and augmented with digital information. Traditional approaches to recognizing these objects typically rely on complex pattern recognition techniques, or on the addition of active electronics or fiducials that alter the objects' visual qualities, making them less practical for real-world use. Radio Frequency Identification (RFID) technology provides an unobtrusive method of sensing the presence of nearby tagged objects and identifying them, but has no inherent means of determining their position. Computer vision, on the other hand, is an established approach to tracking objects with a camera. While shapes and movement on an interactive surface can be determined with classic image processing techniques, object recognition tends to be complex, computationally expensive, and sensitive to environmental conditions. We present a set of techniques in which movement and shape information from the computer vision system is fused with RFID events that identify which objects are in the image. By synchronizing these two complementary sensing modalities, we can associate changes in the image with events in the RFID data, and thereby recover the position, shape, and identity of the objects on the surface, while avoiding complex computer vision processes and exotic RFID solutions.
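The fusion idea in the abstract, pairing an RFID appearance event with the vision blob that emerged closest in time, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the event records, field names, and the fixed synchronization window are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical event records; the names and fields are illustrative,
# not taken from the paper.
@dataclass
class RfidEvent:
    t: float        # timestamp in seconds when a tag was detected
    tag_id: str     # identity of the tagged object

@dataclass
class BlobEvent:
    t: float        # timestamp in seconds when a new blob appeared
    position: tuple # (x, y) centroid reported by the vision system
    shape: str      # coarse shape descriptor from image processing

def fuse(rfid_events, blob_events, window=0.5):
    """Associate each RFID appearance with the unmatched vision blob
    that appeared closest in time, within a synchronization window.
    Returns (tag_id, position, shape) triples: identity from RFID,
    position and shape from vision."""
    fused = []
    unmatched = list(blob_events)
    for r in rfid_events:
        candidates = [b for b in unmatched if abs(b.t - r.t) <= window]
        if not candidates:
            continue  # no blob change near this RFID event
        best = min(candidates, key=lambda b: abs(b.t - r.t))
        unmatched.remove(best)
        fused.append((r.tag_id, best.position, best.shape))
    return fused
```

For example, an RFID event for tag "mug" at t=1.0 s and a new round blob at t=1.1 s fall within the window and fuse into ("mug", (x, y), "round"), while a blob appearing seconds later stays unmatched. The window size trades off robustness to clock skew against the risk of mis-associating near-simultaneous placements.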

Place, publisher, year, edition, pages
2008. 235-242 p.
Keyword [en]
Computer vision; Fusion; RFID; Surface computing; Tabletop; Tangible user interface
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-10464
Scopus ID: 2-s2.0-63549147778
OAI: oai:DiVA.org:kth-10464
DiVA: diva2:217819
Conference
Graphics Interface 2008; Windsor, ON, 28 May 2008 through 30 May 2008
Note
QC 20100804. Available from: 2009-05-15 Created: 2009-05-15 Last updated: 2010-08-04. Bibliographically approved
In thesis
1. Unobtrusive Augmentation of Physical Environments: Interaction Techniques, Spatial Displays and Ubiquitous Sensing
2009 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The fundamental idea of Augmented Reality (AR) is to improve and enhance our perception of the surroundings, through the use of sensing, computing and display systems that make it possible to augment the physical environment with virtual computer graphics. AR is, however, often associated with user-worn equipment, whose current complexity and lack of comfort limit its applicability in many scenarios.

The goal of this work has been to develop systems and techniques for uncomplicated AR experiences that support sporadic and spontaneous interaction with minimal preparation on the user’s part.

This dissertation defines a new concept, Unobtrusive AR, which emphasizes an optically direct view of a visually unaltered physical environment, the avoidance of user-worn technology, and the preference for unencumbering techniques.

The first part of the work focuses on the design and development of two new AR display systems. They illustrate how AR experiences can be achieved through transparent see-through displays that are positioned in front of the physical environment to be augmented. The second part presents two novel sensing techniques for AR, which employ an instrumented surface for unobtrusive tracking of active and passive objects. These techniques have no visible sensing technology or markers, and are suitable for deployment in scenarios where it is important to maintain the visual qualities of the real environment. The third part of the work discusses a set of new interaction techniques for spatially aware handheld displays, public 3D displays, touch screens, and immaterial displays (which are not constrained by solid surfaces or enclosures). Many of the techniques are also applicable to human-computer interaction in general, as indicated by the accompanying qualitative and quantitative insights from user evaluations.

The thesis contributes a set of novel display systems, sensing technologies, and interaction techniques to the field of human-computer interaction, and brings new perspectives to the enhancement of real environments through computer graphics.

Place, publisher, year, edition, pages
Stockholm: KTH, 2009. xii, 72 p.
Series
TRITA-CSC-A, ISSN 1653-5723 ; 2009:09
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-10439
ISBN: 978-91-7415-339-2
Public defence
2009-06-05, E1, Lindstedtsvägen 3, KTH, 13:00 (English)
Note

QC 20100805

Available from: 2009-05-26 Created: 2009-05-14 Last updated: 2015-01-30. Bibliographically approved

Open Access in DiVA

No full text

Other links

Scopus, portal.acm.org

Authority records

Olwal, Alex
