1 - 42 of 42
  • 1. Aronsson, Sanna
    et al.
    Artman, Henrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Lindquist, Sinna
    Mitchell, Mikael
    Persson, Tomas
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Ramberg, Robert
    Stockholms Universitet.
    Romero, Mario
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    van de Vehn, Pontus
    Supporting after action review in simulator mission training: Co-creating visualization concepts for training of fast-jet fighter pilots, 2019. In: The Journal of Defence Modeling and Simulation: Applications, Methodology, Technology, ISSN 1548-5129, E-ISSN 1557-380X, Vol. 16, no 3, p. 219-231. Article in journal (Refereed)
    Abstract [en]

    This article presents the design and evaluation of visualization concepts supporting After Action Review (AAR) in simulator mission training of fast-jet fighter pilots. The visualization concepts were designed based on three key characteristics of representations: re-representation, graphical constraining, and computational offloading. The visualization concepts represent combined parameters of missile launch and threat range, the former meant to elicit discussions about the prerequisites for launching missiles, and the latter to present details of what threats a certain aircraft is facing at a specific moment. The visualization concepts were designed to: 1) perceptually and cognitively offload mental workload from participants in support of determining relevant situations to discuss; 2) re-represent parameters in a format that facilitates reading-off of crucial information; and 3) graphically constrain plausible interpretations. Through a series of workshop iterations, two visualization concepts were developed and evaluated with 11 pilots and instructors. All pilots were unanimous in their opinion that the visualization concepts should be implemented as part of the AAR. Offloading, in terms of finding interesting events in the dynamic and unique training sessions, was the most important guiding concept, while re-representation and graphical constraining enabled a more structured and grounded collaboration during the AAR.

  • 2. Berrada, Dounia
    et al.
    Romero, Mario
    Georgia Institute of Technology, US.
    Abowd, Gregory
    Blount, Marion
    Davis, John
    Automatic Administration of the Get Up and Go Test, 2007. In: HealthNet'07: Proceedings of the 1st ACM SIGMOBILE International Workshop on Systems and Networking Support for Healthcare and Assisted Living Environments, ACM Digital Library, 2007, p. 73-75. Conference paper (Refereed)
    Abstract [en]

    In-home monitoring using sensors has the potential to improve the life of elderly and chronically ill persons, assist their family and friends in supervising their status, and provide early warning signs to the person's clinicians. The Get Up and Go test is a clinical test used to assess the balance and gait of a patient. We propose a way to automatically apply an abbreviated version of this test to patients in their residence using video data without body-worn sensors or markers.

  • 3.
    de Giorgio, Andrea
    et al.
    KTH.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Onori, Mauro
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Wang, Lihui
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Human-machine Collaboration in Virtual Reality for Adaptive Production Engineering, 2017. In: Procedia Manufacturing, ISSN 2351-9789, Vol. 11, p. 1279-1287. Article in journal (Refereed)
    Abstract [en]

    This paper outlines the main steps towards an open and adaptive simulation method for human-robot collaboration (HRC) in production engineering supported by virtual reality (VR). The work builds on the latest software developments in the gaming industry, in addition to commercially available hardware that is robust and reliable. This makes it possible to overcome the VR limitations of the industrial software provided by manufacturing machine producers. The approach is based on open-source community programming, which brings significant advantages such as interfacing with the latest hardware for a realistic user experience in immersive VR, as well as the possibility to share adaptive algorithms. A practical implementation in Unity is provided as a functional prototype for feasibility tests. At the time of writing, however, no controlled human-subject studies of the implementation have been conducted; the prototype is provided solely as a preliminary proof of concept. Future work will formally address the questions raised in this first run.

  • 4.
    Elblaus, Ludvig
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Tsaknaki, Vasiliki
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Lewandowski, Vincent
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Hwang, Sungjae
    Song, John
    Gim, Junghyeon
    Griggio, Carla
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Leiva, Germán
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). Georgia Institute of Technology.
    Sweeney, David
    Regan, Tim
    Helmes, John
    Vlachokyriakos, Vasillis
    Lindley, Siân
    Taylor, Alex
    Demo Hour, 2015. In: interactions, ISSN 1072-5520, E-ISSN 1558-3449, Vol. 22, no 5, p. 6-9. Article in journal (Refereed)
    Abstract [en]

    Interactivity is a unique forum of the ACM CHI Conference that showcases hands-on demonstrations, novel interactive technologies, and artistic installations. At CHI 2015 in Seoul we hosted more than 30 exhibits, including an invited digital interactive art exhibit. Interactivity highlights the diverse group of computer scientists, sociologists, designers, psychologists, artists, and many more who make up the CHI community.

  • 5. Frey, Brian
    et al.
    Rosier, Kate
    Southern, Caleb
    Romero, Mario
    Georgia Institute of Technology.
    From Texting App to Braille Literacy, 2012. In: CHI ’12 Extended Abstracts on Human Factors in Computing Systems, Association for Computing Machinery (ACM), 2012, p. 2495-2500. Conference paper (Refereed)
    Abstract [en]

    We report the results of a pilot study that explores potential uses for BrailleTouch in the instruction of braille literacy for the visually impaired. BrailleTouch is an eyes-free text entry application for smart phones. We conducted individual semi-structured interviews and a focus group with four domain expert participants.

  • 6. Frey, Brian
    et al.
    Southern, Caleb
    Romero, Mario
    Georgia Institute of Technology, USA.
    Brailletouch: Mobile Texting for the Visually Impaired, 2011. In: Proceedings of the 6th International Conference on Universal Access in Human-computer Interaction: Context Diversity - Volume Part III, ACM Digital Library, 2011, p. 19-25. Conference paper (Refereed)
    Abstract [en]

    BrailleTouch is an eyes-free text entry application for mobile devices. Currently, there exist a number of hardware and software solutions for eyes-free text entry. Unfortunately, the hardware solutions are expensive and the software solutions do not offer adequate performance. BrailleTouch bridges this gap. We present our design rationale and our explorative evaluation of BrailleTouch with HCI experts and visually impaired users.

  • 7. Gómez Zamora, Paula
    et al.
    Romero, Mario
    Georgia Institute of Technology, United States.
    Do, Ellen Yi-Luen
    Activity Shapes: Analysis methods of video-recorded human activity in a co-visible space, 2012. In: Eighth International Space Syntax Symposium / [ed] Margarita Greene, José Reyes, Andrea Castro, Santiago de Chile: PUC, 2012. Conference paper (Refereed)
    Abstract [en]

    The aim of this research is to develop two methods to help us understand the fundamental distinctions among human activities in terms of spatial occupancy. To characterize the features of the distribution of human activities in a space (and over time), we introduce the concept of “activity shapes.” To obtain a distinctive analysis of activity shapes, we ran an experiment in which a group of six adults shared a fully co-visible space and sequentially performed three specific activities characterized as eccentric, concentric, or distributed. We video recorded the three scenarios using overhead cameras that allowed us to closely map participants’ positions on the floor layout, obtaining the data in two formats: 1) a sequence of images from the overhead videos, automatically stored and pre-computed to extract and aggregate motion; and 2) a dataset of individuals’ identification and positions over time, manually annotated after repeated observations of the videos. Using the image sequence, we qualitatively analyzed the activity shapes using Viz-A-Vis, a tool for visualizing activity through computer vision (Romero et al., 2008; 2011). Using the dataset, we performed two analyses: 1) the geometry and the topology of the activity shapes; and 2) their spatiotemporal configurations, introducing the use of statistical analysis of space occupancy patterns. While it is not possible to generalize to all activity conditions from these three samples, we discovered some tendencies in the activity shapes. Our findings revealed several main distinctions in terms of geometry, topology, dispersion, gravitation, and clustering, supporting the development of the methods presented in this work and directions for future implementation of these analyses in more complex spaces and scenarios that complement space syntax analysis.
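The motion-extraction step described above, aggregating frame-to-frame differences from an overhead image sequence into a single occupancy map, can be sketched roughly as follows. This is our own minimal illustration, not the authors' Viz-A-Vis implementation; the function name, the plain nested-list image format, and the threshold value are all assumptions.

```python
def motion_map(frames, threshold=10):
    """Aggregate per-pixel frame differences from an overhead image
    sequence into a motion/occupancy map.

    frames: list of 2D grayscale images (lists of lists of ints).
    Returns a 2D map counting how often each pixel changed by more
    than `threshold` between consecutive frames.
    """
    rows, cols = len(frames[0]), len(frames[0][0])
    acc = [[0] * cols for _ in range(rows)]
    for prev, curr in zip(frames, frames[1:]):
        for r in range(rows):
            for c in range(cols):
                if abs(curr[r][c] - prev[r][c]) > threshold:
                    acc[r][c] += 1
    return acc

# Toy example: a bright blob "moves" across a 1x3 corridor.
frames = [
    [[0, 0, 0]],
    [[255, 0, 0]],
    [[0, 255, 0]],
]
print(motion_map(frames))  # → [[2, 1, 0]]
```

Summing such maps over a whole recording yields the kind of spatial "activity shape" the abstract refers to: pixels that change often accumulate high counts, tracing where activity concentrated on the floor plan.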

  • 8.
    Griggio, Carla F.
    et al.
    KTH. Université Paris -Sud XI Orsay, France; EIT ICT Labs, France.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz). Georgia Institute of Technology.
    Canvas Dance: An Interactive Dance Visualization for Large-Group Interaction, 2015. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, ACM Digital Library, 2015, p. 379-382. Conference paper (Refereed)
    Abstract [en]

    We present Canvas Dance, a prototype of an interactive dance visualization for large-group interaction that targets non-professional dancers in informal environments such as parties or nightclubs, and uses the smartphones of the dancers as the input device for the motion signal. The visualization is composed of individual representations for each dancer, and the visual mappings designed for their dance moves have three main goals: to help the users identify their own representation, to uncover and inspire imitation among dancers, and to support unpredictable dance moves.

  • 9.
    Griggio, Carla
    et al.
    KTH.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). Georgia Institute of Technology.
    A real-time dance visualization framework for the design of mappings that favor user appropriation, 2015. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a real-time dance visualization framework with the goal of easily mapping motion data from an accelerometer and a gyroscope into visual effects that users can compose and appropriate to their own dancing style. We used this framework to design a set of dance-to-visuals mappings through a user-centered approach. As a result, we conclude with a list of factors that help users to understand how to interact with a real-time dance visualization with no prior instructions.

  • 10.
    Griggio, Carla
    et al.
    KTH. EIT ICT Labs Master School, Sweden.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Leiva, Germán
    KTH. EIT ICT Labs Master School, Sweden.
    Towards an Interactive Dance Visualization for Inspiring Coordination Between Dancers, 2015. Conference paper (Refereed)
    Abstract [en]

    In this work in progress we present early results in the process of understanding how interactive dance visualizations can inspire coordinated dance moves between dancers in informal contexts. Inspired by observations at nightclubs and parties with DJs, we designed an interactive dance visualization prototype called "Canvas Dance" and evaluated it in a user study with 3 small groups of people. We conclude by offering a set of design considerations for future work on interactive dance visualizations for non-professional dancers in informal contexts.

  • 11. Gómez, Paula
    et al.
    Do, Ellen Yi-Luen
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). Georgia Institute of Technology.
    Activity Shapes: towards a spatiotemporal analysis in architecture, 2014. In: DEARQ: Journal of Architecture, ISSN 2011-3188, E-ISSN 2215-969X, no 26. Article in journal (Refereed)
    Abstract [en]

    Computational spatial analyses play an important role in architectural design processes, providing feedback about spatial configurations that may inform design decisions. Current spatial analyses convey geometrical aspects of space, but aspects such as space use are not encompassed within the analyses, although they are fundamental for architectural programming. Through this study, we initiate the discussion of including human activity as an input that will change the focus of current computational spatial analyses toward a detailed understanding of activity patterns in space and time. We envision that the emergent insights will serve as guidelines for future evaluation of design intents motivated by spatial occupancy, since we, designers, mentally construct a model of the situation and the activities in it (Eastman, 2001).

  • 12.
    Hasselqvist, Hanna
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bogdan, Cristian
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Shafqat, Omar
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Applied Thermodynamics and Refrigeration.
    Supporting Energy Management as a Cooperative Amateur Activity, 2015. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, ACM Digital Library, 2015, p. 1483-1488. Conference paper (Refereed)
    Abstract [en]

    There is increasing concern regarding current energy feedback approaches as they focus on the individual level, and mostly on household electricity, while the bulk of energy use often lies in heating and cooling. The aim is typically to change user routines, which does not bring a long-lasting impact. In our case study, we address these concerns for apartment buildings by looking at housing cooperatives, the dominant form of apartment ownership in the Nordic countries. These cooperatives manage the heating costs in common and therefore have a large potential for energy saving through long-lasting improvements and investments. We also emphasise the amateur nature of energy work within such cooperatives and consider the implications of our field study findings, interpreted through these amateur and cooperative perspectives, for the design of interactive artefacts.

  • 13.
    Kasperi, Johan
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Picha Edwardsson, Malin
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Occlusion in outdoor Augmented Reality using geospatial building data, 2017. In: VRST '17 Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, Association for Computing Machinery (ACM), 2017, Vol. Part F131944, article id a30. Conference paper (Refereed)
    Abstract [en]

    Aligning virtual and real objects in Augmented Reality (AR) is essential for the user experience. Without alignment, the user loses suspension of disbelief and the sense of depth, distance, and size. Occlusion is a key feature to be aligned. Virtual content should be partially or fully occluded if real world objects are in its line-of-sight. The challenge for simulating occlusion is to construct the geometric model of the environment. Earlier studies have aimed to create realistic occlusions, yet most have either required depth-sensing hardware or a static predefined environment. This paper proposes and evaluates an alternative model-based method for dynamic outdoor AR of virtual buildings rendered on non-depth-sensing smartphones. It uses geospatial data to construct the geometric model of real buildings surrounding the virtual building. The method removes the target regions from the virtual building using masks constructed from real buildings. While the method is not pixel-perfect, meaning that the simulated occlusion is not fully realistic, results from the user study indicate that it fulfilled its goal. A majority of the participants expressed that their experience and depth perception improved with the method activated. The result from this study has applications to mobile AR since the majority of smartphones are not equipped with depth sensors. Using geospatial data for simulating occlusions is a sufficiently effective solution until depth-sensing AR devices are more widely available.
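The core of the masking idea can be illustrated with a geometric test: a virtual point is occluded when the sight line from the camera passes through a real building's footprint below its roof height. This 2D-footprint-plus-height simplification and all function names are our own sketch, not the paper's renderer, which masks rendered regions rather than testing individual points.

```python
def _seg_intersect_t(p, q, a, b):
    """Parameter t in [0, 1] along segment p->q where segment a->b is
    crossed, or None if the segments do not intersect."""
    (px, py), (qx, qy) = p, q
    (ax, ay), (bx, by) = a, b
    rX, rY = qx - px, qy - py          # direction of the sight line
    sX, sY = bx - ax, by - ay          # direction of the wall edge
    denom = rX * sY - rY * sX
    if denom == 0:
        return None                    # parallel segments
    t = ((ax - px) * sY - (ay - py) * sX) / denom
    u = ((ax - px) * rY - (ay - py) * rX) / denom
    return t if 0 <= t <= 1 and 0 <= u <= 1 else None

def occluded(cam, cam_h, point, point_h, footprint, roof_h):
    """True if the sight line from the camera (at height cam_h) to a
    virtual point (at height point_h) crosses a building footprint
    (list of 2D vertices) below the roof height."""
    n = len(footprint)
    for i in range(n):
        t = _seg_intersect_t(cam, point, footprint[i], footprint[(i + 1) % n])
        if t is not None:
            # interpolate the sight line's height at the wall crossing
            h = cam_h + t * (point_h - cam_h)
            if h <= roof_h:
                return True
    return False

# A 10 m building between camera and virtual point hides it at eye level,
# but a point 50 m up is seen over the roof.
square = [(4, -1), (6, -1), (6, 1), (4, 1)]
print(occluded((0, 0), 1.5, (10, 0), 1.5, square, 10.0))   # → True
print(occluded((0, 0), 1.5, (10, 0), 50.0, square, 10.0))  # → False
```

In the paper's setting the footprints and heights would come from the geospatial building data, and the occluded regions would be cut from the virtual building's rendering as a mask rather than evaluated point by point.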

  • 14. Kinnaird, Peter
    et al.
    Romero, Mario
    Georgia Institute of Technology.
    Focus Groups for Functional InfoVis Prototype Evaluation: A Case Study, 2010. Conference paper (Refereed)
    Abstract [en]

    In this position paper, we describe our experience conducting a focus group for evaluating an Information Visualization system prototype. We concentrate on the method used and how it differs from traditional focus group methodology. Our position is that Information Visualization system prototypes provide exceptional grounds for customized focus group methodologies due to the exploratory nature of many of the tasks these systems are designed to support.

  • 15. Kinnaird, Peter
    et al.
    Romero, Mario
    Georgia Institute of Technology.
    Abowd, Gregory
    Connect 2 Congress: Visual Analytics for Civic Oversight, 2010. In: CHI ’10 Extended Abstracts on Human Factors in Computing Systems, ACM Digital Library, 2010, p. 2853-2862. Conference paper (Refereed)
    Abstract [en]

    Strong representative democracies rely on educated, informed, and active citizenry to provide oversight of the government. We present Connect 2 Congress (C2C), a novel, high temporal-resolution and interactive visualization of legislative behavior. We present the results of focus group and domain expert interviews that demonstrate how different stakeholders use C2C for a variety of investigative activities. The evaluation provided evidence that users are able to support or reject claims made by candidates and conduct free-form, low-cost, exploratory analysis into the legislative behavior of representatives across time periods.

  • 16. Munteanu, C.
    et al.
    Molyneaux, H.
    Moncur, W.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). Georgia Institute of Technology.
    O'Donnell, S.
    Vines, J.
    Situational ethics: Re-thinking approaches to formal ethics requirements for human-computer interaction, 2015. In: Conference on Human Factors in Computing Systems - Proceedings, Association for Computing Machinery (ACM), 2015, p. 105-114. Conference paper (Refereed)
    Abstract [en]

    Most Human-Computer Interaction (HCI) researchers are accustomed to the process of formal ethics review for their evaluation or field trial protocol. Although this process varies by country, the underlying principles are universal. While this process is often a formality, for field research or lab-based studies with vulnerable users, formal ethics requirements can be challenging to navigate, a common occurrence in the social sciences yet, in many cases, foreign to HCI researchers. Nevertheless, with the increase in new areas of research such as mobile technologies for marginalized populations or assistive technologies, this is a current reality. In this paper we present our experiences and challenges in conducting several studies that evaluate interactive systems in difficult settings, from the perspective of the ethics process. Based on these, we draft recommendations for mitigating the effect of such challenges on the ethical conduct of research. We then issue a call for interaction researchers, together with policy makers, to refine existing ethics guidelines and protocols in order to more accurately capture the particularities of such field-based evaluations, qualitative studies, challenging lab-based evaluations, and ethnographic observations.

  • 17. Nazneen, N.
    et al.
    Rozga, Agata
    Romero, Mario
    Georgia Institute of Technology, USA.
    Findley, Addie J.
    Call, Nathan A.
    Abowd, Gregory D.
    Arriaga, Rosa I.
    Supporting Parents for In-home Capture of Problem Behaviors of Children with Developmental Disabilities, 2012. In: Personal and Ubiquitous Computing, ISSN 1617-4909, E-ISSN 1617-4917, Vol. 16, no 2, p. 193-207. Article in journal (Refereed)
    Abstract [en]

    Ubiquitous computing has shown promise in applications for health care in the home. In this paper, we focus on a study of how a particular ubicomp capability, selective archiving, can be used to support behavioral health research and practice. Selective archiving technology, which allows the capture of a window of data prior to and after an event, can enable parents of children with autism and related disabilities to record video clips of events leading up to and following an instance of problem behavior. Behavior analysts later view these video clips to perform a functional assessment. In contrast to the current practice of direct observation, a powerful method to gather data about child problem behaviors but costly in terms of human resources and liable to alter behavior in the subjects, selective archiving is cost effective and has the potential to provide rich data with minimal intrusion into the natural environment. To assess the effectiveness of parent data collection through selective archiving in the home, we developed a research tool, CRAFT (Continuous Recording And Flagging Technology), and conducted a study by installing CRAFT in eight households of children with developmental disabilities and severe behavior concerns. The results of this study show the promise and remaining challenges for this technology. We have also shown that careful attention to the design of a ubicomp system for use by other domain specialists or non-technical users is key to moving ubicomp research forward.

  • 18. Pousman, Zachary
    et al.
    Romero, Mario
    Georgia Institute of Technology, USA.
    Smith, Adam
    Mateas, Michael
    Living with Tableau Machine: A Longitudinal Investigation of a Curious Domestic Intelligence, 2008. In: Proceedings of the 10th International Conference on Ubiquitous Computing, ACM Press, 2008, p. 370-379. Conference paper (Refereed)
    Abstract [en]

    We present a longitudinal investigation of Tableau Machine, an intelligent entity that interprets and reflects the lives of occupants in the home. We created Tableau Machine (TM) to explore the parts of home life that are unrelated to accomplishing tasks. Task support for "smart homes" has inspired many researchers in the community. We consider design for experience, an orthogonal dimension to task-centric home life. TM produces abstract visualizations on a large LCD every few minutes, driven by a set of four overhead cameras that capture a sense of the social life of a domestic space. The openness and ambiguity of TM allow for a cycle of co-interpretation with householders. We report on three longitudinal deployments of TM for a period of six weeks. Participant families engaged with TM at the outset to understand how their behaviors were influencing the machine, and, while TM remained puzzling, householders interacted richly with TM and its images. We extract some key design implications for an experience-focused smart home.

  • 19. Rehg, J. M.
    et al.
    Abowd, G. D.
    Rozga, A.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Clements, M. A.
    Sclaroff, S.
    Essa, I.
    Ousley, O. Y.
    Li, Y.
    Kim, C.
    Rao, H.
    Kim, J. C.
    Presti, L. L.
    Zhang, J.
    Lantsman, D.
    Bidwell, J.
    Ye, Z.
    Decoding children's social behavior, 2013. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2013, p. 3414-3421. Conference paper (Refereed)
    Abstract [en]

    We introduce a new problem domain for activity recognition: the analysis of children's social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1-2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly-available dataset containing over 160 sessions of a 3-5 minute child-adult interaction. In each session, the adult examiner followed a semi-structured play interaction protocol which was designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors, and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and show preliminary results for multi-modal activity recognition.

  • 20.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    A Case Study in Expo-Based Learning Applied to Information Visualization, 2015. In: SIGRAD 2015 / [ed] L. Kjelldahl and C. Peters, Linköping University Electronic Press, 2015, p. 4. Conference paper (Refereed)
    Abstract [en]

    We present preliminary results of the effect of Expo-Based Learning (EBL) applied to a course on information visualization. We define EBL as project-based learning (PBL) augmented with constructively-aligned large public demos [RTP14]. In this paper, we analyze the results of challenging enrolled students, as part of their grade, to compete and present their projects publicly at an open student competition organized by a second university. We surveyed the students at the end of the course, before the competition started, and again at the end of the competition. We present the impact of the student competition as it relates to the intended learning outcomes, from the perspective of the students.

  • 21.
    Romero, Mario
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). Georgia Institute of Technology.
    Flat is the New Pitch-Black: Discussing Blind use of Touchscreens, 2014. Conference paper (Refereed)
    Abstract [en]

    The increasingly ubiquitous touchscreen, from the smart phone to the treadmill, is a significant hurdle for blind individuals who cannot rely on their sense of touch for decoding its interface. Advances in smart phone screen readers, such as the iPhone’s Voice Over, have enabled blind users to effectively navigate touchscreens. While Voice Over efficiently outputs information, text input has remained a challenge. To address this, we previously introduced BrailleTouch, a soft braille keyboard for efficient blind text entry on touchscreens. In this position paper, we present the tailored touch-based user experience design and evaluation techniques we developed for BrailleTouch, which we have not previously discussed.
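BrailleTouch's chording principle, entering each character as a simultaneous combination of the six braille dots, can be illustrated with a minimal decoder. The table and function below are our own sketch based on the standard braille alphabet, not the BrailleTouch implementation.

```python
# Standard braille: letters a-j use combinations of dots 1, 2, 4, 5.
# A chord is the set of dots pressed simultaneously.
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode(chords):
    """Translate a sequence of dot chords into text; chords outside
    the table become '?'."""
    return "".join(BRAILLE.get(frozenset(c), "?") for c in chords)

print(decode([{1, 2}, {1}, {1, 4, 5}]))  # → bad
```

On a touchscreen the six dots map to six fixed finger regions, so the user types by touch position alone, which is what makes the keyboard usable eyes-free.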

  • 22.
    Romero, Mario
    KTH.
    My users taught me to read with my ears, 2018. In: interactions, ISSN 1072-5520, E-ISSN 1558-3449, Vol. 25, no 3, p. 6-7. Article in journal (Refereed)
    Abstract [en]

    The Interactions website (interactions.acm.org) hosts a stable of bloggers who share insights and observations on HCI, often challenging current practices. Each issue we'll publish selected posts from some of the leading and emerging voices in the field.

  • 23. Romero, Mario
    Project-Based Learning of Advanced Computer Graphics and Interaction, 2013. In: Eurographics 2013 - Education Papers, 2013, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    This paper presents an educational case study and its pedagogical lessons. It is a project-based course in advanced computer graphics and interaction, DH2413, conducted in the fall of 2012 at the Royal Institute of Technology (KTH), Stockholm, Sweden. The students and the teacher, the author, learned through a constructivist approach. The students defined and researched the material covered in class through their theme selection of original research projects which consisted of interactive graphics systems. The students demonstrated, taught, and discussed with each other what they had learned. Finally, the students openly presented their work to hundreds of people in large public venues. The teacher's role was to design the learning environment, guide the research, provide in-depth lectures on the research material chosen by the students, and organize and motivate the students to produce accountable results. In synthesis, the pedagogical lessons are: 1) learning means building with self-motivation, guidance, and accountability; 2) self-motivation means trust and independence; 3) guidance means asking for less, not more; and 4) accountability means public presentations of working systems.

  • 24.
    Romero, Mario
    Georgia Institute of Technology.
    Supporting human interpretation and analysis of activity captured through overhead video2009Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Many disciplines spend considerable resources studying behavior. Tools range from pen-and-paper observation to biometric sensing. A tool's appropriateness depends on the goal and justification of the study, the observable context and feature set of target behaviors, the observers' resources, and the subjects' tolerance to intrusiveness. We present two systems: Viz-A-Vis and Tableau Machine. Viz-A-Vis is an analytical tool appropriate for onsite, continuous, wide-coverage and long-term capture, and for objective, contextual, and detailed analysis of the physical actions of subjects who consent to overhead video observation. Tableau Machine is a creative artifact for the home. It is a long-lasting, continuous, interactive, and abstract Art installation that captures overhead video and visualizes activity to open opportunities for creative interpretation. We focus on overhead video observation because it affords a near one-to-one correspondence between pixels and floor plan locations, naturally framing the activity in its spatial context. Viz-A-Vis is an information visualization interface that renders and manipulates computer vision abstractions. It visualizes the hidden structure of behavior in its spatiotemporal context. We demonstrate the practicality of this approach through two user studies. In the first user study, we show an important search performance boost when compared against standard video playback and against the video cube. Furthermore, we determine a unanimous user choice for overviewing and searching with Viz-A-Vis. In the second study, a domain expert evaluation, we validate a number of real discoveries of insightful environmental behavior patterns by a group of senior architects using Viz-A-Vis. Furthermore, we determine clear influences of Viz-A-Vis over the resulting architectural designs in the study. Tableau Machine is a sensing, interpreting, and painting artificial intelligence. It is an Art installation with a model of perception and personality that continuously and enduringly engages its co-occupants in the home, creating an aura of presence. It perceives the environment through overhead cameras, interprets its perceptions with computational models of behavior, maps its interpretations to generative abstract visual compositions, and renders its compositions through paintings. We validate the goal of opening a space for creative interpretation through a study that included three long-term deployments in real family homes.

  • 25.
    Romero, Mario
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Andrée, Jonas
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Thuresson, Björn
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Designing and Evaluating Embodied Sculpting: a Touching Experience2014Conference paper (Refereed)
    Abstract [en]

    We discuss the design and evaluation of embodied sculpting, the mediated experience of creating a virtual object with volume which users can see, hear, and touch as they mold the material with their body. Users’ digitized bodies share the virtual space of the digital model through a depth-sensor camera. They can use their hands, bodies, or any object to shape the sculpture. As they mold the model, they see a real-time rendering of it and receive sound and haptic feedback of the interaction. We discuss the opportunities and challenges of both designing for haptic embodiment and evaluating it through haptic experimentation.

  • 26.
    Romero, Mario
    et al.
    Georgia Institute of Technology, USA.
    Bigham, Jeffrey
    Guerreiro, Tiago
    Kane, Shaun
    Votis, Konstantinos
    Mascetti, Sergio
    Southern, Caleb
    Zimmermann, Gottfried
    Frontiers in Accessible Interfaces for Pervasive Computing2012Conference paper (Refereed)
  • 27.
    Romero, Mario
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). Georgia Institute of Technology.
    Thuresson, Björn
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Landazuri, Natalia
    Expo-Based Learning (EBL): Augmenting Project-Based Learning with Large Public Presentations2015Conference proceedings (editor) (Refereed)
  • 28.
    Romero, Mario
    et al.
    Georgia Institute of Technology.
    Bobick, Aaron
    Tracking Head Yaw by Interpolation of Template Responses2004In: Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’04) Volume 5 - Volume 05, Washington DC: IEEE Computer Society, 2004, Vol. 5, p. 83-Conference paper (Refereed)
    Abstract [en]

    We propose an appearance-based machine learning architecture that estimates and tracks in real time large-range head yaw given a single non-calibrated monocular grayscale low-resolution image sequence of the head. The architecture is composed of five parallel template detectors, a Radial Basis Function Network, and two Kalman filters. The template detectors are five view-specific images of the head ranging across full profiles in discrete steps of 45 degrees. The Radial Basis Function Network interpolates the response vector from the normalized correlation of the input image and the five template detectors. The first Kalman filter models the position and velocity of the response vector in five-dimensional space. The second is a running average that filters the scalar output of the network. We assume the head image has been closely detected and segmented, that it undergoes only limited roll and pitch, and that there are no sharp contrasts in illumination. The architecture is person-independent and is robust to changes in appearance, gesture, and global illumination. The goals of this paper are, one, to measure the performance of the architecture; two, to assess the impact the temporal information gained from video has on accuracy and stability; and three, to determine the effects of relaxing our assumptions.
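    The interpolation step this abstract describes can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' code: the five template yaw angles, the Gaussian basis width, and all function names are assumptions, and a real Radial Basis Function Network would be trained on labeled response vectors rather than hand-weighted.

    ```python
    import numpy as np

    # Five view-specific templates spanning full profiles in 45-degree
    # steps, as in the paper. Basis width and weighting are assumptions.
    TEMPLATE_YAWS = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])

    def normalized_correlation(image, template):
        """Zero-mean normalized cross-correlation of two equal-size patches."""
        a = np.asarray(image, float) - np.mean(image)
        b = np.asarray(template, float) - np.mean(template)
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a.ravel() @ b.ravel() / denom) if denom else 0.0

    def rbf_yaw_estimate(responses, width=0.5):
        """Map the 5-D template-response vector to a continuous yaw angle.

        Each template votes for its own yaw, weighted by a Gaussian radial
        basis on how far its response falls below the strongest response.
        """
        r = np.asarray(responses, float)
        weights = np.exp(-((r.max() - r) ** 2) / (2.0 * width ** 2))
        return float(weights @ TEMPLATE_YAWS / weights.sum())
    ```

    A response vector that is symmetric around the frontal template yields a yaw of 0 degrees, while responses skewed toward the 45-degree template pull the estimate to the right of center.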

  • 29.
    Romero, Mario
    et al.
    Georgia Institute of Technology.
    Frey, Brian
    Southern, Caleb
    Abowd, Gregory D.
    BrailleTouch: Designing a Mobile Eyes-free Soft Keyboard2011In: Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, ACM Digital Library, 2011, p. 707-709Conference paper (Refereed)
    Abstract [en]

    Texting is the essence of mobile communication and connectivity, as evidenced by today's teenagers, tomorrow's workforce. Fifty-four percent of American teens contact each other daily by texting, as compared to face-to-face (33%) and talking on the phone (30%) according to the Pew Research Center's Internet & American Life Project, 2010. Arguably, today's technologies support mobile text input poorly, primarily due to the size constraints of mobile devices. This is the case for everyone, but it is particularly relevant to the visually impaired. According to the World Health Organization, 284 million people are visually impaired worldwide. In order to connect these users to the global mobile community, we need to design effective and efficient methods for eyes-free text input on mobile devices. Furthermore, everyone would benefit from effective mobile texting for safety and speed. This design brief presents BrailleTouch, our working prototype solution for eyes-free mobile text input.

  • 30.
    Romero, Mario
    et al.
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz). Georgia Institute of Technology.
    Hasselqvist, Hanna
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Svensson, Gert
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Supercomputers Keeping People Warm in the Winter2014Conference paper (Refereed)
    Abstract [en]

    We present the design and evaluation of the heat recovery system for KTH's Lindgren, Stockholm's fastest supercomputer, a Cray XE6. Lindgren came into service in 2010 and has since been primarily used for complex numeric simulations of fluid mechanics and computational chemistry and biology. The heat exchange system collects the wasted heat from Lindgren's 36,384 CPU cores and transfers it via the standard district heating and cooling system to a neighboring building which houses the Chemistry laboratories. We analyze the impact of Lindgren's heat recovery system as a function of outside temperature, and we estimate the system's carbon emission savings. Since the original installation of Lindgren in 2010, it has become common practice to use water cooling systems for supercomputers, as water is a better heat transfer medium than air. We discuss the relevant design lessons from Lindgren as they relate to practical and sustainable waste heat recovery designs for today's platforms. Finally, we estimate that the recovered heat from Lindgren reduced carbon emissions by nearly 50 tons over the 2012-13 winter, the sample period of our analysis.

  • 31.
    Romero, Mario
    et al.
    The Georgia Institute of Technology, USA.
    Mateas, Michael
    A preliminary investigation of Alien Presence2005In: Proceedings of Human-Computer Interaction International (HCII 2005), Las Vegas, NV, USA, July 2005, HCII, 2005, p. 9Conference paper (Refereed)
    Abstract [en]

    Work in ubiquitous computing and ambient intelligence tends to focus on information access and task support systems informed by the office environment, which tend to view the whole world as an office, or on surveillance systems that feature asymmetric information access, providing interpretations of activity to a central authority. The alien presence provides an alternative model of ambient intelligence; an alien presence actively interprets abstract qualities of human activity (e.g., mood, social energy) and reports these interpretations, not to a central authority, but back to the users themselves in the form of ambient, possibly physical displays. The goal of an alien presence is not task accomplishment and efficient access to information, but rather to open unusual viewpoints onto everyday human activity, create pleasure, and provide opportunities for contemplation and wonder. The design of an alien presence is an interdisciplinary endeavor drawing on artificial intelligence techniques, art practices of creation and critique, and HCI methods of design and evaluation. In this paper we present preliminary work on the Tableaux Machine, an alien presence designed for the home environment, and discuss a number of general design issues of alien presence, including co-interpretation, authorship, richness of expression vs. system complexity, tensions between viewing computation as a medium vs. as a model, issues of privacy, and evaluation.

  • 32.
    Romero, Mario
    et al.
    Georgia Institute of Technology.
    Pousman, Zachary
    Mateas, Michael
    Alien Presence in the Home: A Formative Evaluation of Collage Machine2006Conference paper (Refereed)
  • 33.
    Romero, Mario
    et al.
    Georgia Institute of Technology, USA.
    Pousman, Zachary
    Mateas, Michael
    Alien Presence in the Home: The Design of Tableau Machine2008In: Personal and Ubiquitous Computing, ISSN 1617-4909, E-ISSN 1617-4917, Vol. 12, no 5, p. 373-382Article in journal (Refereed)
    Abstract [en]

    We introduce a design strategy, alien presence, which combines work in human-computer interaction, artificial intelligence, and media art to create enchanting experiences involving reflection over and contemplation of daily activities. An alien presence actively interprets and characterizes daily activity and reflects it back via generative, ambient displays that avoid simple one-to-one mappings between sensed data and output. We describe the alien presence design strategy for achieving enchantment, and report on Tableau Machine, a concrete example of an alien presence design for domestic spaces. We report on an encouraging formative evaluation indicating that Tableau Machine does indeed support reflection and actively engages users in the co-construction of meaning around the display.

  • 34.
    Romero, Mario
    et al.
    Georgia Institute of Technology, USA.
    Pousman, Zachary
    Mateas, Michael
    Tableau Machine: an Alien Presence in the Home2006In: Extended Abstracts on Human Factors in Computing Systems: CHI EA '06 / [ed] ACM, Montreal, Canada: Association for Computing Machinery (ACM), 2006, p. 1265-1270Conference paper (Refereed)
    Abstract [en]

    We present Tableau Machine, a non-human social actor for the home. The machine senses, interprets and reports abstract qualities of human activity through the language of visual art. The goal of the machine is to serve as a strange mirror of everyday life, open unusual viewpoints and generate engaging and long lasting conversations and reflections. We introduce new models for sensing, interpreting, and reporting human activity and we describe results of our formative evaluation which suggest reflection and social engagement among participants.

  • 35.
    Romero, Mario
    et al.
    Georgia Institute of Technology.
    Summet, Jay
    Stasko, John
    Abowd, Gregory
    Viz-A-Vis: Toward Visualizing Video Through Computer Vision2008In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 14, no 6, p. 1261-1268Article in journal (Refereed)
    Abstract [en]

    In the established procedural model of information visualization, the first operation is to transform raw data into data tables [1]. The transforms typically include abstractions that aggregate and segment relevant data and are usually defined by a human, user or programmer. The theme of this paper is that for video, data transforms should be supported by low level computer vision. High level reasoning still resides in the human analyst, while part of the low level perception is handled by the computer. To illustrate this approach, we present Viz-A-Vis, an overhead video capture and access system for activity analysis in natural settings over variable periods of time. Overhead video provides rich opportunities for long-term behavioral and occupancy analysis, but it poses considerable challenges. We present initial steps addressing two challenges. First, overhead video generates overwhelmingly large volumes of video impractical to analyze manually. Second, automatic video analysis remains an open problem for computer vision.
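    The low-level vision transform this abstract argues for can be illustrated with a minimal sketch. This is our assumption of a plausible pipeline, not the Viz-A-Vis implementation: it aggregates thresholded frame differences from the overhead video into a single per-pixel activity count, which is the kind of abstraction an analyst could then visualize in its spatial context.

    ```python
    import numpy as np

    # Hypothetical sketch of aggregating motion over an overhead video:
    # count, per pixel, how many frame transitions changed by more than a
    # threshold. The threshold value and frame format are assumptions.
    def motion_heatmap(frames, threshold=10.0):
        """frames: sequence of equal-shape 2-D grayscale arrays.

        Returns an array counting, per pixel, the number of consecutive
        frame pairs whose absolute difference exceeded the threshold.
        """
        frames = [np.asarray(f, dtype=float) for f in frames]
        heat = np.zeros_like(frames[0])
        for prev, cur in zip(frames, frames[1:]):
            heat += np.abs(cur - prev) > threshold
        return heat
    ```

    Rendering such a map over the floor plan turns hours of footage into one image in which high counts mark the locations of sustained activity.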

  • 36.
    Romero, Mario
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Thuresson, Björn
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Kis, Filip
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Coppard, J.
    Andrée, Jenny
    KTH.
    Landázuri, N.
    Augmenting PBL with large public presentations: A case study in interactive graphics pedagogy2014In: ITICSE 2014 - Proceedings of the 2014 Innovation and Technology in Computer Science Education Conference, 2014, p. 15-20Conference paper (Refereed)
    Abstract [en]

    We present a case study analyzing and discussing the effects of introducing the requirement of public outreach of original student work into the project-based learning of Advanced Graphics and Interaction (AGI) at KTH Royal Institute of Technology. We propose Expo-Based Learning as Project-Based Learning augmented with the constructively aligned goal of achieving public outreach beyond the course. We promote this outreach through three challenges: 1) large public presentations; 2) multidisciplinary collaboration; and 3) professional portfolio building. We demonstrate that the introduction of these challenges, especially the public presentations, had a lasting positive impact on the intended technical learning outcomes of AGI, with the added benefit of learning teamwork, presentation skills, timeliness, accountability, self-motivation, technical expertise, and professionalism.

  • 37.
    Romero, Mario
    et al.
    Georgia Institute of Technology.
    Vialard, Alice
    Peponis, John
    Stasko, John
    Abowd, Gregory
    Evaluating Video Visualizations of Human Behavior2011In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2011, p. 1441-1450Conference paper (Refereed)
    Abstract [en]

    Previously, we presented Viz-A-Vis, a VIsualiZation of Activity through computer VISion [17]. Viz-A-Vis visualizes behavior as aggregate motion over observation space. In this paper, we present two complementary user studies of Viz-A-Vis measuring its performance and discovery affordances. First, we present a controlled user study aimed at comparatively measuring behavioral analysis preference and performance for observation and search tasks. Second, we describe a study with architects measuring discovery affordances and potential impacts on their work practices. We conclude: 1) Viz-A-Vis significantly reduced search time; and 2) it increased the number and quality of insightful discoveries.

  • 38. Shin, Grace
    et al.
    Choi, Taeil
    Rozga, Agata
    Romero, Mario
    Georgia Institute of Technology.
    VizKid: A Behavior Capture and Visualization System of Adult-child Interaction2011In: Proceedings of the 1st International Conference on Human Interface and the Management of Information: Interacting with Information - Volume Part II, 2011, p. 190-198Conference paper (Refereed)
    Abstract [en]

    We present VizKid, a capture and visualization system for supporting the analysis of social interactions between two individuals. The development of this system is motivated by the need for objective measures of social approach and avoidance behaviors of children with autism. VizKid visualizes the position and orientation of an adult and a child as they interact with one another over an extended period of time. We report on the design of VizKid and its rationale.

  • 39. Smith, Adam M
    et al.
    Romero, Mario
    Georgia Institute of Technology.
    Pousman, Zachary
    Mateas, Michael
    Tableau Machine: A Creative Alien Presence.2008In: AAAI Spring Symposium: Creative Intelligent Systems, AAAI , 2008, p. 82-89Conference paper (Refereed)
    Abstract [en]

    We present the design of Tableau Machine (TM), an AI-based, interactive, visual art generator for shared living spaces. TM is an instance of what we call "alien presence": an ambient, non-human, embodied, intelligent agent. From overhead video in key public spaces, TM interprets its environment, including its human audience, and expresses its interpretation by displaying a sequence of abstract images of its own design. This paper is a case study in the design of an art generator with deep and long-term connections to its physical and social environment.

  • 40. Southern, Caleb
    et al.
    Clawson, James
    Frey, Brian
    Abowd, Gregory
    Romero, Mario
    Georgia Institute of Technology.
    An Evaluation of BrailleTouch: Mobile Touchscreen Text Entry for the Visually Impaired2012In: Proceedings of the 14th International Conference on Human-computer Interaction with Mobile Devices and Services, Association for Computing Machinery (ACM), 2012, p. 317-326Conference paper (Refereed)
    Abstract [en]

    We present the evaluation of BrailleTouch, an accessible keyboard for blind users on touchscreen smartphones. Based on the standard Perkins Brailler, BrailleTouch implements a six-key chorded braille soft keyboard. Eleven blind participants typed for 165 twenty-minute sessions on three mobile devices: 1) BrailleTouch on a smartphone; 2) a soft braille keyboard on a touchscreen tablet; and 3) a commercial braille keyboard with physical keys. Expert blind users averaged 23.2 words per minute (wpm) on the BrailleTouch smartphone. The fastest participant, a touchscreen novice, achieved 32.1 wpm during his first session. Overall, participants were able to transfer their existing braille typing skills to a touchscreen device within an hour of practice. We report the speed for braille text entry on three mobile devices, an in-depth error analysis, and the lessons learned for the design and evaluation of accessible and eyes-free soft keyboards.
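    The speed figures in this abstract use the conventional text-entry metric, words per minute, where a "word" is defined as five characters including spaces. The formula below is that standard definition, not code from the study.

    ```python
    def words_per_minute(transcribed_chars: int, seconds: float) -> float:
        """wpm = (characters / 5) / minutes, the standard text-entry metric."""
        return (transcribed_chars / 5.0) / (seconds / 60.0)
    ```

    For example, 232 correctly transcribed characters in a two-minute trial correspond to 23.2 wpm.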

  • 41. Southern, Caleb
    et al.
    Clawson, James
    Frey, Brian
    Abowd, Gregory
    Romero, Mario
    Georgia Institute of Technology.
    Braille Touch: Mobile Touchscreen Text Entry for the Visually Impaired2012In: Proceedings of the 14th International Conference on Human-computer Interaction with Mobile Devices and Services Companion, 2012, p. 155-156Conference paper (Refereed)
    Abstract [en]

    We present a demonstration of BrailleTouch, an accessible keyboard for blind users on a touchscreen smartphone (see Figure 1). Based on the standard Perkins Brailler, BrailleTouch implements a six-key chorded braille soft keyboard [1]. We will briefly introduce audience members to the braille code, and then allow them to hold the BrailleTouch prototype and enter text, with the aid of a visual chart of the braille alphabet.

  • 42. Vines, John
    et al.
    McNaney, Roisin
    Clarke, Rachel
    Lindsay, Stephen
    McCarthy, John
    Howard, Steve
    Romero, Mario
    Wallace, Jayne
    Designing For- and With- Vulnerable People2013In: CHI ’13 Extended Abstracts on Human Factors in Computing Systems, 2013, p. 3231-3234Conference paper (Refereed)
    Abstract [en]

    Ubiquitous technology, coupled with a surge in empirical research engaging people who face multiple challenges in their lives, is increasingly revealing the potential for HCI to enrich the lives of vulnerable people. Designing for people with vulnerabilities requires an approach to participation that is sensitive to the risks of possible stigmatization and an awareness of the challenges for participant involvement. This workshop will bring together researchers and practitioners to explore the critical issues surrounding designing with and for vulnerable individuals. We aim to provoke discussion about how 'vulnerability' is defined in HCI, what methodological and ethical concerns are raised when working with specific cases, and ways of designing future technologies that support vulnerable people in novel and sensitive ways.
