1 - 29 of 29
  • 1. DiVerdi, S.
    et al.
    Rakkolainen, I.
    Höllerer, T.
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI (closed 20111231).
    A novel walk-through 3D display. 2006. In: Proc SPIE Int Soc Opt Eng, 2006. Conference paper (Refereed)
    Abstract [en]

    We present a novel walk-through 3D display based on the patented FogScreen, an "immaterial" indoor 2D projection screen, which enables high-quality projected images in free space. We extend the basic 2D FogScreen setup in three major ways. First, we use head tracking to provide correct perspective rendering for a single user. Second, we add support for multiple types of stereoscopic imagery. Third, we present the front and back views of the graphics content on the two sides of the FogScreen, so that the viewer can cross the screen to see the content from the back. The result is a wall-sized, immaterial display that creates engaging 3D visuals.
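
    The head-tracked perspective rendering mentioned in the abstract is typically achieved with an off-axis (generalized) projection. The sketch below is a minimal Python/numpy illustration in the spirit of Kooima's generalized perspective projection, not code from the paper; the coordinate conventions and names are assumptions. It computes asymmetric frustum extents from three screen corners and a tracked eye position:

        import numpy as np

        def off_axis_frustum(pa, pb, pc, eye, near):
            """pa, pb, pc: lower-left, lower-right, upper-left screen corners,
            in the same tracker frame as the eye position."""
            vr = pb - pa; vr = vr / np.linalg.norm(vr)       # screen right axis
            vu = pc - pa; vu = vu / np.linalg.norm(vu)       # screen up axis
            vn = np.cross(vr, vu); vn = vn / np.linalg.norm(vn)  # screen normal

            va, vb, vc = pa - eye, pb - eye, pc - eye        # corners relative to eye
            d = -np.dot(va, vn)                              # eye-to-screen distance
            l = np.dot(vr, va) * near / d                    # frustum extents at near plane
            r = np.dot(vr, vb) * near / d
            b = np.dot(vu, va) * near / d
            t = np.dot(vu, vc) * near / d
            return l, r, b, t   # combine with near/far in e.g. glFrustum(l, r, b, t, ...)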

  • 2. DiVerdi, Stephen
    et al.
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Rakkolainen, Ismo
    Höllerer, Tobias
    An Immaterial Pseudo-3D Display System with 3D Interaction. 2008. In: Three-Dimensional Television: Capture, Transmission, and Display, Springer, 2008, 505-528 p. Chapter in book (Other academic)
    Abstract [en]

    We present a novel walk-through pseudo-3D display, which enables 3D interaction and interesting possibilities for advanced user interface designs. Our work is based on the patented FogScreen, an “immaterial” indoor 2D projection screen which enables high-quality projected images in free space. We extend the basic 2D FogScreen setup with stereoscopic imagery and two-sidedness, in addition to the use of head tracking to provide correct perspective 3D rendering for a single user. We also add support for 2D and 3D interaction for multiple users with the objects on the screen, via a number of wireless input technologies that let us experiment with interaction with or without encumbering devices. We evaluate the usability of these interaction techniques by observing non-expert use in real settings to quantify the effects they have on 3D perception. The result is a wall-sized, immaterial pseudo-3D display that enables engaging 3D visuals with intriguing 3D interaction.

  • 3.
    Ericson, Finn
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC).
    Interaction and rendering techniques for handheld phantograms. 2011. In: Conf Hum Fact Comput Syst Proc, 2011, 1339-1344 p. Conference paper (Refereed)
    Abstract [en]

    We present a number of rendering and interaction techniques that exploit the user's viewpoint for improved realism and immersion in 3D applications on handheld devices. Unlike 3D graphics on stationary screens, graphics on handheld devices are seldom regarded from a fixed perspective. This is particularly true for recent mobile platforms, where it is increasingly popular to use device orientation for interaction. We describe a set of techniques for improved perception of rendered 3D content. Viewpoint-correct anamorphosis and stereoscopy are discussed along with ways to approximate the spatial relationship between the user and the device. We present the design and implementation of a prototype phantogram viewer that was used to explore these methods for interaction with real-time photorealistic 3D models on commercially available mobile devices.

  • 4.
    Ioakeimidou, Foteini
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC).
    Nordberg, Axel
    KTH, School of Technology and Health (STH), Neuronic Engineering.
    von Holst, Hans
    KTH, School of Technology and Health (STH), Neuronic Engineering.
    3D Visualization and Interaction with Spatiotemporal X-ray Data to Minimize Radiation in Image-guided Surgery. 2011. In: 2011 24th International Symposium on Computer-Based Medical Systems (CBMS) / [ed] Olive, M; Solomonides, T, New York, NY: IEEE, 2011. Conference paper (Refereed)
    Abstract [en]

    Image-guided surgery (IGS) often depends on X-ray imaging, since pre-operative MRI, CT and PET scans do not provide an up-to-date internal patient view during the operation. X-rays introduce hazardous radiation, but long exposures for monitoring are often necessary to increase accuracy in critical situations. Surgeons often also take multiple X-rays from different angles, as X-rays only provide a distorted 2D perspective from the current viewpoint. We introduce a prototype IGS system that augments 2D X-ray images with spatiotemporal information using a motion tracking system, such that the use of X-rays can be reduced. In addition, an interactive visualization allows exploring 2D X-rays in timeline views and 3D clouds where they are arranged according to the viewpoint at the time of acquisition. The system could be deployed and used without time-consuming calibration, and has the potential to improve surgeons' spatial awareness, while increasing efficiency and patient safety.
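
    As an illustration of the spatiotemporal augmentation described above, the following hypothetical sketch (not the authors' code) stores each exposure with its tracked acquisition pose and timestamp, so that one record set can drive both the timeline view and the 3D "cloud" view. Field names and the beam-axis convention are assumptions:

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class XRayRecord:
            image: np.ndarray      # 2D X-ray pixels
            pose: np.ndarray       # 4x4 tracker pose of the source at acquisition
            timestamp: float       # seconds since start of procedure

        class XRayLog:
            def __init__(self):
                self.records = []

            def add(self, record):
                self.records.append(record)

            def timeline(self):
                # chronological view of all exposures
                return sorted(self.records, key=lambda r: r.timestamp)

            def cloud_placement(self, distance=0.3):
                """Yield (record, position): each image placed as a quad
                'distance' metres along its view axis (assume +Z is the beam)."""
                for r in self.records:
                    origin = r.pose[:3, 3]
                    view_dir = r.pose[:3, 2]
                    yield r, origin + distance * view_dir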

  • 5. Lakatos, D.
    et al.
    Blackshaw, M.
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz). Google, Mountain View, United States.
    Barryte, Z.
    Perlin, K.
    Ishii, H.
    T(ether): Spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation. 2014. In: SUI 2014 - Proceedings of the 2nd ACM Symposium on Spatial User Interaction, Association for Computing Machinery (ACM), 2014, 90-93 p. Conference paper (Refereed)
    Abstract [en]

    T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users' heads, hands, fingers and pinching, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand's position above, behind or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model in addition to controlling environment properties. We report on initial user observations from an experiment for 3D modeling, which indicate T(ether)'s potential for embodied viewport control and 3D modeling interactions.
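
    The proprioceptive adaptation described above, where the UI changes with the hand's position above, behind, or on the display, can be pictured as a simple classification against the tracked display plane. A hedged sketch with invented names and thresholds, not the system's implementation:

        import numpy as np

        ON_SURFACE_MM = 15.0   # assumed tolerance for "touching" the display plane

        def ui_mode(hand_pos, display_center, display_normal):
            """display_normal points out of the screen toward the user;
            all positions are in the shared tracker frame."""
            signed_dist = np.dot(hand_pos - display_center, display_normal)
            if abs(signed_dist) < ON_SURFACE_MM:
                return "on_surface"    # e.g. touch-screen manipulation
            # e.g. mid-air gestures in front of vs. behind the window
            return "in_front" if signed_dist > 0 else "behind"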

  • 6. Leithinger, D.
    et al.
    Follmer, S.
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz). Google (X), Mountain View, United States.
    Ishii, H.
    Physical Telepresence: Shape capture and display for embodied, computer-mediated remote collaboration. 2014. In: UIST 2014 - Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, Association for Computing Machinery (ACM), 2014, 461-470 p. Conference paper (Refereed)
    Abstract [en]

    We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users' body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.
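
    Shape transmission as described above can be boiled down to resampling a captured depth image onto the pin grid of a shape display. The sketch below is a minimal illustration under assumed resolutions and ranges, not the system's actual pipeline:

        import numpy as np

        def depth_to_pins(depth_m, pins=(24, 24), z_range=(0.4, 1.2), travel_mm=50.0):
            """Downsample a metric depth image to per-pin extension heights (mm)."""
            h, w = depth_m.shape
            py, px = pins
            # average depth over the patch of pixels that each pin covers
            patch = depth_m[: h // py * py, : w // px * px]
            patch = patch.reshape(py, h // py, px, w // px).mean(axis=(1, 3))
            near, far = z_range
            closeness = np.clip((far - patch) / (far - near), 0.0, 1.0)
            return closeness * travel_mm   # nearer surfaces raise the pins further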

  • 7. Leithinger, Daniel
    et al.
    Follmer, Sean
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz). Massachusetts Institute of Technology, United States.
    Ishii, Hiroshi
    Shape Displays: Spatial Interaction with Dynamic Physical Form. 2015. In: IEEE Computer Graphics and Applications, ISSN 0272-1716, E-ISSN 1558-1756, Vol. 35, no 5, 5-11 p. Article in journal (Refereed)
  • 8.
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI.
    Augmenting Surface Interaction through Context-Sensitive Mobile Devices. 2009. In: Human-Computer Interaction - INTERACT 2009, Pt II, Proceedings / [ed] Gross T; Gulliksen J; Kotze P; Oestreicher L; Palanque P; Prates RO; Winckler M, 2009, Vol. 5727, 336-339 p. Conference paper (Refereed)
    Abstract [en]

    We discuss the benefits of using a mobile device to expand and improve the interactions on a large touch-sensitive surface. The mobile device's denser arrangement of pixels and touch-sensor elements, and its rich set of mechanical on-board input controls, can be leveraged for increased expressiveness, visual feedback and more precise direct manipulation. We also show how these devices can support unique input from multiple simultaneous users in collaborative scenarios. Handheld mobile devices and large interactive surfaces can be mutually beneficial in numerous ways, while their complementary nature allows them to preserve the behavior of the original user interface.

  • 9.
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    LightSense: Enabling Spatially Aware Handheld Interaction Devices. 2007. In: Proceedings - ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality, 2007, 119-122 p. Conference paper (Refereed)
    Abstract [en]

    The vision of spatially aware handheld interaction devices has been hard to realize. The difficulties in solving the general tracking problem for small devices have been addressed by several research groups; among the open issues are performance, hardware availability and platform independence. We present LightSense, an approach that employs commercially available components to achieve robust tracking of cell phone LEDs, without any modifications to the device. Cell phones can thus be promoted to interaction and display devices in ubiquitous installations of systems such as the ones we present here. This could enable a new generation of spatially aware handheld interaction devices that would unobtrusively empower and assist us in our everyday tasks.
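
    The geometric core of tracking a phone's LED, as described in the abstract above, can be illustrated as bright-spot detection in a camera frame. This is a bare-bones sketch; a real deployment would add temporal filtering and calibration, and the threshold is an assumption:

        import numpy as np

        def led_centroid(gray, threshold=240):
            """Return the (x, y) centroid of bright pixels in an 8-bit grayscale
            frame, or None if no reliable LED detection is present."""
            ys, xs = np.nonzero(gray >= threshold)
            if xs.size < 5:               # too few pixels: no reliable detection
                return None
            return float(xs.mean()), float(ys.mean())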

  • 10.
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Unencumbered 3D Interaction with See-through Displays. 2008. In: NordiCHI: Nordic Conference on Human-Computer Interaction, 2008, 527-530 p. Conference paper (Refereed)
    Abstract [en]

    Augmented Reality (AR) systems that employ user-worn display and sensor technology can be problematic for certain applications as the technology might, for instance, be encumbering to the user or limit the deployment options of the system. Spatial AR systems instead use stationary displays that provide augmentation to an on-looking user. They could avoid issues with damage, breakage and wear, while enabling ubiquitous installations in unmanned environments, through protected display and sensing technology. Our contribution is an exploration of compatible interfaces for public AR environments. We investigate interactive technologies, such as touch, gesture and head tracking, which are specifically appropriate for spatial optical see-through displays. A prototype system for a digital museum display was implemented and evaluated. We present the feedback from domain experts, and the results from a qualitative user study of seven interfaces for public spatial optical see-through displays.

  • 11.
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Unobtrusive Augmentation of Physical Environments: Interaction Techniques, Spatial Displays and Ubiquitous Sensing. 2009. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The fundamental idea of Augmented Reality (AR) is to improve and enhance our perception of the surroundings, through the use of sensing, computing and display systems that make it possible to augment the physical environment with virtual computer graphics. AR is, however, often associated with user-worn equipment, whose current complexity and lack of comfort limit its applicability in many scenarios.

    The goal of this work has been to develop systems and techniques for uncomplicated AR experiences that support sporadic and spontaneous interaction with minimal preparation on the user’s part.

    This dissertation defines a new concept, Unobtrusive AR, which emphasizes an optically direct view of a visually unaltered physical environment, the avoidance of user-worn technology, and the preference for unencumbering techniques.

    The first part of the work focuses on the design and development of two new AR display systems. They illustrate how AR experiences can be achieved through transparent see-through displays that are positioned in front of the physical environment to be augmented. The second part presents two novel sensing techniques for AR, which employ an instrumented surface for unobtrusive tracking of active and passive objects. These techniques have no visible sensing technology or markers, and are suitable for deployment in scenarios where it is important to maintain the visual qualities of the real environment. The third part of the work discusses a set of new interaction techniques for spatially aware handheld displays, public 3D displays, touch screens, and immaterial displays (which are not constrained by solid surfaces or enclosures). Many of the techniques are also applicable to human-computer interaction in general, as indicated by the accompanying qualitative and quantitative insights from user evaluations.

    The thesis contributes a set of novel display systems, sensing technologies, and interaction techniques to the field of human-computer interaction, and brings new perspectives to the enhancement of real environments through computer graphics.

  • 12.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    DiVerdi, Stephen
    Rakkolainen, Ismo
    Höllerer, Tobias
    Consigalo: Multi-user, Face-to-face Interaction with Adaptive Audio, on an Immaterial Display. 2008. In: INTETAIN: International Conference on Intelligent Technologies for Interactive Entertainment, 2008. Conference paper (Refereed)
    Abstract [en]

    In this paper, we describe and discuss interaction techniques and interfaces enabled by immaterial displays. Dual-sided projection allows casual face-to-face interaction between users, with computer-generated imagery in-between them. The immaterial display imposes minimal restrictions on the movements or communication of the users. As an example of these novel possibilities, we provide a detailed description of our Consigalo gaming system, which creates an enhanced gaming experience featuring sporadic and unencumbered interaction. Consigalo utilizes a robust 3D tracking system, which supports multiple simultaneous users on either side of the projection surface. Users manipulate graphics that are floating in mid-air with natural gestures. We have also added a responsive and adaptive sound track to further immerse the users in the interactive experience. We describe the technology used in the system, the innovative aspects compared to previous large-screen gaming systems, the gameplay and our lessons learned from designing and implementing the interactions, visuals and the auditory feedback.

  • 13.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    DiVerdi, S.
    Candussi, N.
    Rakkolainen, I.
    Höllerer, T.
    An immaterial, dual-sided display system with 3D interaction. 2006. In: Proc. IEEE Virtual Real., 2006. Conference paper (Refereed)
    Abstract [en]

    We present an interactive wall-sized immaterial display that introduces a number of interesting possibilities for advanced interface design. The immaterial nature of a thin sheet of fog allows users to penetrate and even walk through the screen, while its dual-sided nature allows for new possibilities in multi-user face-to-face collaboration and pseudo-3D visualization.

  • 14.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC.
    DiVerdi, S.
    Rakkolainen, I.
    Höllerer, T.
    Consigalo: Multi-user face-to-face interaction on immaterial displays. 2008. In: INTETAIN 2008 - 2nd International Conference on INtelligent TEchnologies for Interactive EnterTAINment / [ed] Kruger A., Rehg J., Feiner S., ICST, 2008. Conference paper (Refereed)
    Abstract [en]

    In this paper, we describe and discuss interaction techniques and interfaces enabled by immaterial displays. Dual-sided projection allows casual face-to-face interaction between users, with computer-generated imagery in-between them. The immaterial display imposes minimal restrictions on the movements or communication of the users. As an example of these novel possibilities, we provide a detailed description of our Consigalo gaming system, which creates an enhanced gaming experience featuring sporadic and unencumbered interaction. Consigalo utilizes a robust 3D tracking system, which supports multiple simultaneous users on either side of the projection surface. Users manipulate graphics that are floating in mid-air with natural gestures. We have also added a responsive and adaptive sound track to further immerse the users in the interactive experience. We describe the technology used in the system, the innovative aspects compared to previous large-screen gaming systems, the gameplay and our lessons learned from designing and implementing the interactions, visuals and the auditory feedback.

  • 15.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Feiner, S.
    Interaction techniques using prosodic features of speech and audio localization. 2005. In: Int Conf Intell User Interfaces Proc IUI, 2005, 284-286 p. Conference paper (Refereed)
    Abstract [en]

    We describe several approaches for using prosodic features of speech and audio localization to control interactive applications. This information can be applied to parameter control, as well as to speech disambiguation. We discuss how characteristics of spoken sentences can be exploited in the user interface; for example, by considering the speed with which a sentence is spoken and the presence of extraneous utterances. We also show how coarse audio localization can be used for low-fidelity gesture tracking, by inferring the speaker's head position.
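
    One prosodic feature mentioned above, the speed with which a sentence is spoken, can be roughly estimated from the audio energy envelope, counting energy peaks as a syllable proxy. This is a hedged sketch with illustrative constants, not the paper's method:

        import numpy as np

        def speaking_rate(samples, sr=16000, frame=400, hop=160, rel_thresh=0.3):
            """Approximate syllable peaks per second for a mono float signal."""
            energy = np.array([
                np.sum(samples[i:i + frame] ** 2)
                for i in range(0, len(samples) - frame, hop)
            ])
            if energy.size < 3 or energy.max() == 0:
                return 0.0
            e = energy / energy.max()
            # local maxima of the normalized envelope above a relative threshold
            peaks = np.sum((e[1:-1] > e[:-2]) & (e[1:-1] > e[2:]) & (e[1:-1] > rel_thresh))
            return peaks / (len(samples) / sr)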

  • 16.
    Olwal, Alex
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Feiner, S.
    Unit: Modular development of distributed interaction techniques for highly interactive user interfaces. 2004. In: Proc. GRAPHITE Int. Conf. Comput. Graph. Interact. Tech. Australas. S. East Asia, 2004, 131-138 p. Conference paper (Refereed)
    Abstract [en]

    The Unit framework uses a dataflow programming language to describe interaction techniques for highly interactive environments, such as augmented, mixed, and virtual reality. Unit places interaction techniques in an abstraction layer between the input devices and the application, which allows the application developer to separate application functionality from interaction techniques and behavior. Unit's modular approach leads to the design of reusable application-independent interaction control components, portions of which can be distributed across different machines. Unit makes it possible at run time to experiment with interaction technique behavior, as well as to switch among different input device configurations. We provide both a visual interface and a programming API for the specification of the dataflow. To demonstrate how Unit works and to show the benefits to the interaction design process, we describe a few interaction techniques implemented using Unit. We also show how Unit's distribution mechanism can offload CPU intensive operations, as well as avoid costly special-purpose hardware in experimental setups.
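
    The dataflow idea behind Unit can be pictured as units with named ports whose connections push values from device drivers through interaction-technique nodes into the application. The toy graph below imitates that layering; it is not Unit's actual API, and all names are invented:

        class Unit:
            def __init__(self):
                self.listeners = {}                   # out-port -> [(unit, in-port)]

            def connect(self, out_port, unit, in_port):
                self.listeners.setdefault(out_port, []).append((unit, in_port))

            def emit(self, out_port, value):
                for unit, in_port in self.listeners.get(out_port, []):
                    unit.receive(in_port, value)

            def receive(self, in_port, value):
                raise NotImplementedError

        class Scale(Unit):                            # interaction-technique layer
            def __init__(self, factor):
                super().__init__()
                self.factor = factor
            def receive(self, in_port, value):
                self.emit("out", value * self.factor)

        class Cursor(Unit):                           # application layer
            def receive(self, in_port, value):
                print("cursor moved to", value)

        mouse = Unit()                                # stands in for a device driver
        gain = Scale(2.0)
        mouse.connect("pos", gain, "in")
        gain.connect("out", Cursor(), "pos")
        mouse.emit("pos", 10)                         # -> cursor moved to 20

    Because the application only ever connects to ports, swapping the input device or retuning the interaction technique at run time amounts to rewiring connections, which mirrors the separation the abstract describes.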

  • 17.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Feiner, Steven
    Spatially Aware Handhelds for High-Precision Tangible Interaction with Large Displays. 2009. In: TEI 2009: International Conference on Tangible and Embedded Interaction, 2009, 181-188 p. Conference paper (Refereed)
    Abstract [en]

    While touch-screen displays are becoming increasingly popular, many factors affect user experience and performance. Surface quality, parallax, input resolution, and robustness, for instance, can vary with sensing technology, hardware configurations, and environmental conditions.

    We have developed a framework for exploring how we could overcome some of these dependencies, by leveraging the higher visual and input resolution of small, coarsely tracked mobile devices for direct, precise, and rapid interaction on large digital displays.

    The results from a formal user study show no significant differences in performance when comparing four techniques we developed for a tracked mobile device, where two existing touch-screen techniques served as baselines. The mobile techniques, however, had more consistent performance and smaller variations among participants, and an overall higher user preference in our setup. Our results show the potential of spatially aware handhelds as an interesting complement or substitute for direct touch-interaction on large displays.

  • 18.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Feiner, Steven
    Heyman, Susanna
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Rubbing and Tapping for Precise and Rapid Selection on Touch-Screen Displays. 2008. In: CHI: SIGCHI Conference on Human Factors in Computing Systems, 2008, 295-304 p. Conference paper (Refereed)
    Abstract [en]

    We introduce two families of techniques, rubbing and tapping, that use zooming to make precise interaction on passive touch screens possible. Rub-Pointing uses a diagonal rubbing gesture to integrate pointing and zooming in a single-handed technique. In contrast, Zoom-Tapping is a two-handed technique in which the dominant hand points, while the non-dominant hand taps to zoom, simulating multi-touch functionality on a single-touch display. Rub-Tapping is a hybrid technique that integrates rubbing with the dominant hand to point and zoom, and tapping with the non-dominant hand to confirm selection. We describe the results of a formal user study comparing these techniques with each other and with the well-known Take-Off and Zoom-Pointing selection techniques. Rub-Pointing and Zoom-Tapping had significantly fewer errors than Take-Off for small targets, and were significantly faster than Take-Off and Zoom-Pointing. We show how the techniques can be used for fluid interaction in an image viewer and in existing applications, such as Google Maps.
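
    Detecting a diagonal rubbing gesture on a single-touch screen can be reduced to counting rapid direction reversals along the screen diagonal. The following is a hypothetical sketch with invented thresholds, not the study's implementation:

        def is_rub(points, min_reversals=2, min_stroke=20):
            """points: chronological (x, y) touch samples from one contact."""
            # project each movement onto the (1, 1) screen-diagonal direction
            proj = [x + y for x, y in points]
            reversals, extent, direction = 0, 0, 0
            for a, b in zip(proj, proj[1:]):
                step = b - a
                extent += abs(step)
                # a sign change after enough travel counts as one rub stroke
                if step * direction < 0 and extent >= min_stroke:
                    reversals += 1
                    extent = 0
                if step:
                    direction = step
            return reversals >= min_reversals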

  • 19.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI (closed 20111231).
    Frykholm, Oscar
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI (closed 20111231).
    Groth, Kristina
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI (closed 20111231).
    Moll, Jonas
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI (closed 20111231).
    Design and Evaluation of Interaction Technology for Medical Team Meetings. 2011. In: 13th IFIP TC 13 International Conference, Lisbon, Portugal, September 5-9, 2011, Proceedings, Part I, Springer, 2011, 505-522 p. Conference paper (Refereed)
    Abstract [en]

    Multi-disciplinary team meetings (MDTMs) are essential in health-care, where medical specialists discuss diagnosis and treatment of patients. We introduce a prototype multi-display groupware system, intended to augment the discussions of medical imagery, through a range of input mechanisms, multi-user interfaces and interaction techniques on multi-touch devices and pen-based technologies. Observations of MDTMs, as well as interviews and observations of surgeons and radiologists, serve as a foundation for guidelines and a set of implemented techniques. We present a detailed analysis of a study where the techniques’ potential was explored with radiologists and surgeons of different specialties and varying expertise. The results show that the implemented technologies have the potential to bring numerous benefits to the team meetings with minimal modification to the current workflow. We discuss how they can augment the expressiveness and communication between meeting participants, facilitate understanding for novices, and improve remote collaboration.

  • 20.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Gustafsson, Jonny
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Lindfors, Christoffer
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Spatial Augmented Reality on Industrial CNC-Machines. 2008. In: SPIE 2008 Electronic Imaging, Vol. 6804: The Engineering Reality of Virtual Reality 2008, 2008, 680409 p. Conference paper (Refereed)
    Abstract [en]

    In this work we present how Augmented Reality (AR) can be used to create an intimate integration of process data with the workspace of an industrial CNC (computer numerical control) machine. AR allows us to combine interactive computer graphics with real objects in a physical environment - in this case, the workspace of an industrial lathe. ASTOR is an autostereoscopic optical see-through spatial AR system, which provides real-time 3D visual feedback without the need for user-worn equipment, such as head-mounted displays or sensors for tracking. The use of a transparent holographic optical element, overlaid onto the safety glass, allows the system to simultaneously provide bright imagery and clear visibility of the tool and workpiece. The system makes it possible to enhance visibility of occluded tools as well as to visualize real-time data from the process in the 3D space. The graphics are geometrically registered with the workspace and provide an intuitive representation of the process, amplifying the user's understanding and simplifying machine operation.

  • 21.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Henrysson, Anders
    LUMAR: A Hybrid Spatial Display System for 2D and 3D Handheld Augmented Reality. 2007. In: ICAT: International Conference on Artificial Reality and Telexistence, 2007, 63-70 p. Conference paper (Refereed)
    Abstract [en]

    LUMAR is a hybrid system for spatial displays, allowing cell phones to be tracked in 2D and 3D through combined egocentric and exocentric techniques based on the LightSense and UMAR frameworks. LUMAR differs from most other spatial display systems based on mobile phones in its three-layered information space. The hybrid spatial display system consists of printed matter that is augmented with context-sensitive, dynamic 2D media when the device is on the surface, and with overlaid 3D visualizations when it is held in mid-air.
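
    The three-layered information space described above can be summarized as a simple dispatch on how the phone is currently tracked. The sketch below is an illustration with an assumed height threshold, not the system's code:

        LIFT_THRESHOLD_MM = 30.0   # assumed height at which mid-air 3D takes over

        def active_layer(surface_hit, phone_height_mm):
            """surface_hit: 2D tracking result from the surface (or None)."""
            if surface_hit is not None:
                return "dynamic_2d"    # surface tracks the phone (exocentric)
            if phone_height_mm > LIFT_THRESHOLD_MM:
                return "overlay_3d"    # phone camera tracks the scene (egocentric)
            return "static_print"      # no augmentation; plain printed matter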

  • 22.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Höllerer, T.
    POLAR: Portable, optical see-through, low-cost augmented reality. 2006. In: Proc. ACM Symp. Virtual Reality Softw. Technol. VRST, 2006, 227-230 p. Conference paper (Refereed)
    Abstract [en]

    We describe POLAR, a portable, optical see-through, low-cost augmented reality system, which allows a user to see annotated views of small to medium-sized physical objects in an unencumbered way. No display or tracking equipment needs to be worn. We describe the system design, including a hybrid IR/vision head-tracking solution, and present examples of simple augmented scenes. POLAR's compactness could allow it to be used as a lightweight and portable PC peripheral for providing mobile users with on-demand AR access in field work.

  • 23.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Lachanas, Dimitrios
    KTH, School of Computer Science and Communication (CSC).
    Zacharouli, Ermioni
    KTH, School of Computer Science and Communication (CSC).
    OldGen: Mobile phone personalization for older adults. 2011. In: Conference on Human Factors in Computing Systems, 2011, 3393-3396 p. Conference paper (Refereed)
    Abstract [en]

    Mobile devices are currently difficult to customize for the usability needs of elderly users. The elderly are instead referred to specially designed "senior phones" or software add-ons. These tend to compromise on functionality as they attempt to address many disabilities in a single solution. We present OldGen, a prototype framework where a novel concept enables accessibility features on generic mobile devices, by decoupling the software user interface from the phone's physical form factor. This opens up better customization of the user interface, its functionality and behavior, and makes it possible to adapt it to the specific needs of each individual. OldGen makes the user interface portable, such that it could be moved between different phone hardware, regardless of model and brand. Preliminary observations and evaluations with elderly users indicate that this concept could address individual user interface related accessibility issues on general-purpose devices.

  • 24.
    Olwal, Alex
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindfors, Christoffer
    KTH, Superseded Departments, Production Engineering.
    Gustafsson, Jonny
    KTH, Superseded Departments, Production Engineering.
    An Autostereoscopic Optical See-through Display for Augmented Reality. 2004. In: Proceedings of ACM SIGGRAPH 2004 Sketches (SIGGRAPH '04), 2004, 108 p. Conference paper (Refereed)
  • 25.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Lindfors, Christoffer
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Gustafsson, Jonny
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Kjellberg, Torsten
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Mattson, Lars
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    ASTOR: An Autostereoscopic Optical See-through Augmented Reality System. 2005. In: ISMAR: IEEE and ACM International Symposium on Mixed and Augmented Reality, 2005, 24-27 p. Conference paper (Refereed)
    Abstract [en]

    We present a novel autostereoscopic optical see-through system for Augmented Reality (AR). It uses a transparent holographic optical element (HOE) to separate the views produced by two, or more, digital projectors. It is a minimally intrusive AR system that does not require the user to wear special glasses or any other equipment, since the user will see different images depending on the point of view. The HOE itself is a thin glass plate or plastic film that can easily be incorporated into other surfaces, such as a window. The technology offers great flexibility, allowing the projectors to be placed where they are the least intrusive. ASTOR's capability of sporadic AR visualization is currently ideal for smaller physical workspaces, such as our prototype setup in an industrial environment.

  • 26.
    Olwal, Alex
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Wilson, Andrew
    SurfaceFusion: Unobtrusive Tracking of Everyday Objects in Tangible User Interfaces. 2008. In: Graphics Interface 2008, 2008, 235-242 p. Conference paper (Refereed)
    Abstract [en]

    Interactive surfaces and related tangible user interfaces often involve everyday objects that are identified, tracked, and augmented with digital information. Traditional approaches for recognizing these objects typically rely on complex pattern recognition techniques, or the addition of active electronics or fiducials that alter the visual qualities of those objects, making them less practical for real-world use. Radio Frequency Identification (RFID) technology provides an unobtrusive method of sensing the presence of and identifying tagged nearby objects, but has no inherent means of determining the position of tagged objects. Computer vision, on the other hand, is an established approach to track objects with a camera. While shapes and movement on an interactive surface can be determined from classic image processing techniques, object recognition tends to be complex, computationally expensive and sensitive to environmental conditions. We present a set of techniques in which movement and shape information from the computer vision system is fused with RFID events that identify what objects are in the image. By synchronizing these two complementary sensing modalities, we can associate changes in the image with events in the RFID data, in order to recover position, shape and identification of the objects on the surface, while avoiding complex computer vision processes and exotic RFID solutions.
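
    The fusion described above can be illustrated by pairing RFID tag-arrival events with newly appeared camera blobs by temporal proximity, attaching an identity to a tracked shape without any visual markers. This is a minimal sketch under assumed event formats; the matching window is an invented value:

        def fuse(rfid_events, blob_events, window_s=0.5):
            """rfid_events: chronological (tag_id, time) tuples;
            blob_events: chronological (blob, time) tuples.
            Returns (tag_id, blob) pairs matched by temporal proximity."""
            pairs, used = [], set()
            for tag_id, t_tag in rfid_events:
                best, best_dt = None, window_s
                for i, (blob, t_blob) in enumerate(blob_events):
                    dt = abs(t_blob - t_tag)
                    if i not in used and dt <= best_dt:
                        best, best_dt = i, dt
                if best is not None:
                    used.add(best)                       # each blob gets one identity
                    pairs.append((tag_id, blob_events[best][0]))
            return pairs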

  • 27. Rakkolainen, Ismo
    et al.
    Höllerer, Tobias
    DiVerdi, Stephen
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI.
    Mid-air display experiments to create novel user interfaces. 2009. In: Multimedia Tools and Applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 44, no 3, 389-405 p. Article in journal (Refereed)
    Abstract [en]

    Displays are the most visible part of most computer applications. Novel display technologies strongly influence and inspire new forms of computer use and interaction. We are particularly interested in the interplay of novel displays and interaction for ubiquitous computing or ambient media environments, as emerging display technologies may become game-changers in how we define and use computers, possibly changing the context of computing fundamentally. We present some of our experiments and lessons learnt with a new category of displays, the "immaterial" FogScreen. It can be described as a novel media platform, exhibiting some fundamental differences to and advantages over other displays. It also enables novel kinds of user interfaces and experiences. In this paper we give insights about the special properties and strengths of the FogScreen by looking at a set of successfully demonstrated interfaces and applications. We also discuss its future potential for user interface design.

  • 28. Sandor, C.
    et al.
    Bell, B.
    Olwal, Alex
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Temiyabutr, S.
    Feiner, S.
    Visual end user configuration of hybrid user interfaces. 2004. In: Proc. ACM SIGMM Workshop Eff. Telepresence ETP, 2004, 67-68 p. Conference paper (Refereed)
    Abstract [en]

    Hybrid user interfaces are a promising paradigm for human-computer interaction, employing a range of displays and devices. However, most experimental hybrid user interfaces use a relatively rigid configuration. Our demo explores the possibilities of end users configuring the setup of a hybrid user interface, using novel interaction techniques and visualizations, based on a shared augmented reality.

  • 29. Sandor, C.
    et al.
    Olwal, Alex
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Bell, B.
    Feiner, S.
    Immersive mixed-reality configuration of hybrid user interfaces. 2005. In: Proceedings: Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR 2005, 2005, 110-113 p. Conference paper (Refereed)
    Abstract [en]

    Information in hybrid user interfaces can be spread over a variety of different, but complementary, displays, with which users interact through a potentially equally varied range of interaction devices. Since the exact configuration of these displays and devices may not be known in advance, it is desirable for users to be able to reconfigure at runtime the data flow between interaction devices and objects on the displays. To make this possible, we present the design and implementation of a prototype mixed reality system that allows users to immersively reconfigure a running hybrid user interface.
