1 - 2 of 2
  • 1.
    Lungaro, Pietro; Tollmar, Konrad; Mittal, Ashutosh; Fanghella Valero, Alfredo
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Gaze- and QoE-aware video streaming solutions for mobile VR (2017). In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, Association for Computing Machinery, 2017. Conference paper (Refereed)
    Abstract [en]

    This demo showcases a novel approach to content delivery for 360° video streaming. It exploits information from connected eye-trackers embedded in the users' VR HMDs. The presented technology enables the delivery of high quality, in real time, around the users' fixation points while lowering the image quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user-perceived quality. The network connection between the VR system and the content server is emulated in this demo, allowing users to experience the QoE performance achievable with data rates and RTTs in the range of current 4G and upcoming 5G networks. Users can further control additional service parameters, including video type, content resolution in the foveal region and in the background, and the size of the foveal region. At the end of each run, users are presented with a summary of the amount of bandwidth consumed with the chosen system settings and a comparison with the cost of current content delivery solutions. The overall goal of this demo is to provide a tangible experience of the tradeoffs among bandwidth, RTT and QoE for the mobile provision of future data-intensive VR services.
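    The core idea of the abstract — full quality around the gaze point, reduced quality elsewhere — can be sketched as a simple per-tile quality assignment. This is a hypothetical simplification, not the authors' implementation; the tile layout, radius, and quality labels are illustrative assumptions.

    ```python
    import math

    def assign_tile_quality(tiles, gaze, foveal_radius, hi_q, lo_q):
        """Assign a quality level to each video tile: high quality for tiles
        whose centre lies within the foveal region around the gaze point,
        low quality for all others (illustrative sketch only)."""
        qualities = {}
        for tile_id, (cx, cy) in tiles.items():
            dist = math.hypot(cx - gaze[0], cy - gaze[1])
            qualities[tile_id] = hi_q if dist <= foveal_radius else lo_q
        return qualities

    # Four tiles on a normalized viewport; gaze rests near tile "a".
    tiles = {"a": (0.2, 0.2), "b": (0.8, 0.2), "c": (0.2, 0.8), "d": (0.8, 0.8)}
    q = assign_tile_quality(tiles, gaze=(0.25, 0.25), foveal_radius=0.2,
                            hi_q="4K", lo_q="480p")
    ```

    Only the foveal tile is fetched at full resolution, which is where the bandwidth saving in the demo comes from.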

  • 2.
    Mateu Gisbert, Conrado
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Novel synthetic environment to design and validate future onboard interfaces for self-driving vehicles (2018). Report (Refereed)
    Abstract [en]

    This thesis presents a novel synthetic environment for supporting advanced explorations of user interfaces and interaction modalities for future transport systems. The main goal of the work is the definition of novel interface solutions designed to increase trust in self-driving vehicles. The basic idea is to provide passengers with insights into the information available to the Artificial Intelligence (AI) modules on board the car, including the driving behaviour of the vehicle and its decision making.

    Most currently existing academic and industrial testbeds and vehicular simulators are designed to reproduce with high fidelity the ergonomic aspects of the driving experience. However, they have very low degrees of realism with respect to the digital components of the various traffic scenarios. These include the visuals of the driving simulator and the behaviours of both other vehicles on the road and pedestrians. High visual testbed fidelity is an important prerequisite for supporting the design and evaluation of future on-board interfaces. An innovative experimental testbed based on the hyper-realistic video game GTA V has been developed to satisfy this need. To showcase its experimental flexibility, a set of selected user studies, presenting novel self-driving interfaces and associated user experience results, are described. These explore the testbed's capability to induce trust in autonomous vehicles using Head-Up Displays (HUDs), Augmented Reality (AR) and directional audio solutions.

    The work includes three core phases: the development of software for the testbed, the definition of relevant interfaces and experiments, and focused testing with panels comprising different user demographics.

    Specific investigations will focus on the design and exploration of a set of alternative visual feedback mechanisms (adopting AR visualizations) to convey information about the surrounding environment and the AI's decision making. The performance of these will be assessed with real users with respect to their capability to foster trust in the vehicle and the understandability of the provided signals.

    Moreover, additional accessory studies will focus on the exploration of different designs for triggering driving handover, i.e. the transfer of vehicle control from the AI to a human driver, which is a central problem in current embodiments of self-driving vehicles.
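    The driving handover described above — control passing from the AI to a human only after an explicit trigger and acknowledgement — can be sketched as a minimal state machine. All names and transitions here are illustrative assumptions, not the thesis's actual design.

    ```python
    from enum import Enum, auto

    class Mode(Enum):
        AUTONOMOUS = auto()          # AI is driving
        HANDOVER_REQUESTED = auto()  # AI has signalled the passenger (e.g. via HUD)
        MANUAL = auto()              # human driver has control

    class HandoverController:
        """Minimal AI-to-human handover state machine (hypothetical sketch)."""

        def __init__(self):
            self.mode = Mode.AUTONOMOUS

        def request_handover(self):
            # The AI asks the human to take over; control is NOT yet transferred.
            if self.mode is Mode.AUTONOMOUS:
                self.mode = Mode.HANDOVER_REQUESTED

        def driver_confirms(self):
            # Control transfers only after explicit driver acknowledgement.
            if self.mode is Mode.HANDOVER_REQUESTED:
                self.mode = Mode.MANUAL

    ctrl = HandoverController()
    ctrl.request_handover()
    ctrl.driver_confirms()
    ```

    Requiring an explicit confirmation step before the mode changes is one way a design could avoid transferring control to an unprepared driver.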
