1 - 3 of 3
  • 1.
    Lungaro, Pietro (KTH); Sjöberg, Rickard (Ericsson Research, Stockholm, Sweden); Valero, Alfredo Jose Fanghella (KTH); Mittal, Ashutosh (KTH); Tollmar, Konrad (KTH)
    Gaze-Aware Streaming Solutions for the Next Generation of Mobile VR Experiences. 2018. In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 24, no. 4, p. 1535-1544. Article in journal (Refereed).
    Abstract [en]

    This paper presents a novel approach to content delivery for video streaming services that exploits information from connected eye-trackers embedded in the next generation of VR Head Mounted Displays (HMDs). The proposed solution delivers high visual quality, in real time, around the users' fixation points while lowering the quality everywhere else, with the goal of substantially reducing the overall bandwidth requirements of VR video experiences while maintaining a high level of user-perceived quality. Two prerequisites must be met to achieve these results: (1) mechanisms that can cope with different degrees of latency in the system and (2) solutions that support fast adaptation of video quality in different parts of a frame, without requiring a large increase in bitrate. A novel codec configuration, capable of near-instantaneous video quality adaptation in specific portions of a video frame, is presented. The proposed method exploits built-in properties of HEVC encoders and, while it introduces a moderate amount of error, these errors are undetectable by users. Fast adaptation is the key to enabling gaze-aware streaming and its bandwidth reductions. A testbed implementing gaze-aware streaming, together with a prototype HMD with a built-in eye tracker, is presented and was used for testing with real users. The studies quantified the bandwidth savings achievable by the proposed approach and characterized the relationship between Quality of Experience (QoE) and network latency. The results showed that up to 83% less bandwidth is required to deliver high QoE levels to users, compared to conventional solutions.
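
    A minimal sketch of the kind of per-region quality allocation the abstract describes: a per-tile HEVC quantization parameter (QP) map driven by distance from the fixation point. The tile grid, QP range, and falloff radius below are illustrative assumptions, not values from the paper.

        # Illustrative sketch, not the paper's implementation: assign a per-tile
        # quantization parameter (QP) from the distance between each tile and
        # the current gaze point, so tiles near the fixation keep high quality
        # (low QP) and the periphery degrades (high QP).
        import math

        TILES_X, TILES_Y = 8, 4           # assumed tile grid over the frame
        QP_FOVEAL, QP_PERIPHERY = 22, 42  # assumed high/low quality settings
        FOVEAL_RADIUS = 0.15              # assumed radius, fraction of frame size

        def tile_qp_map(gaze_x, gaze_y):
            """Return a QP per tile for normalized gaze coordinates in [0, 1]."""
            qp_map = []
            for ty in range(TILES_Y):
                row = []
                for tx in range(TILES_X):
                    # Distance from the tile center to the gaze point.
                    cx = (tx + 0.5) / TILES_X
                    cy = (ty + 0.5) / TILES_Y
                    d = math.hypot(cx - gaze_x, cy - gaze_y)
                    # Full quality inside the foveal radius, lowest quality
                    # beyond twice the radius, linear ramp in between.
                    t = min(max((d - FOVEAL_RADIUS) / FOVEAL_RADIUS, 0.0), 1.0)
                    row.append(round(QP_FOVEAL + t * (QP_PERIPHERY - QP_FOVEAL)))
                qp_map.append(row)
            return qp_map

        for row in tile_qp_map(0.5, 0.5):  # gaze at the frame center
            print(row)

    Recomputing this map on every gaze sample is what makes the near-instantaneous, low-bitrate quality switching described above essential: each fixation change invalidates the previous allocation.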

  • 2.
    Lungaro, Pietro; Tollmar, Konrad; Mittal, Ashutosh; Fanghella Valero, Alfredo (all: KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab))
    Gaze- and QoE-aware video streaming solutions for mobile VR. 2017. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, Association for Computing Machinery, 2017. Conference paper (Refereed).
    Abstract [en]

    This demo showcases a novel approach to content delivery for 360° video streaming. It exploits information from connected eye-trackers embedded in the users' VR HMDs. The presented technology delivers high quality, in real time, around the users' fixation points while lowering the image quality everywhere else, with the goal of substantially reducing the overall bandwidth requirements of VR video experiences while maintaining a high level of user-perceived quality. The network connection between the VR system and the content server is emulated in this demo, allowing users to experience the QoE achievable with data rates and RTTs in the range of current 4G and upcoming 5G networks. Users can further control additional service parameters, including the video type, the content resolution in the foveal region and in the background, and the size of the foveal region. At the end of each run, users are presented with a summary of the bandwidth consumed under the chosen system settings and a comparison with the cost of current content delivery solutions. The overall goal of this demo is to provide a tangible experience of the trade-offs among bandwidth, RTT, and QoE for the mobile provision of future data-intensive VR services.
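
    As a rough illustration of the end-of-run bandwidth summary the demo reports, the sketch below compares an estimate of gaze-aware delivery against sending the whole frame at uniform quality. The bitrate, foveal fraction, and background quality factor are assumed numbers, not figures from the demo.

        # Illustrative estimate, not the demo's actual code: bandwidth when only
        # a small foveal fraction of the frame is sent at full quality and the
        # background at a reduced rate, versus conventional uniform delivery.

        def foveated_bitrate(full_mbps, foveal_fraction, background_factor):
            """Bitrate (Mbit/s) with full quality in the fovea and a
            quality-reduced background."""
            foveal = full_mbps * foveal_fraction
            background = full_mbps * (1 - foveal_fraction) * background_factor
            return foveal + background

        FULL = 50.0  # assumed bitrate for uniform full-quality 360-degree video
        fov = foveated_bitrate(FULL, foveal_fraction=0.10, background_factor=0.15)
        print(f"conventional: {FULL:.1f} Mbit/s")
        print(f"gaze-aware:   {fov:.1f} Mbit/s ({100 * (1 - fov / FULL):.0f}% savings)")

    Under these assumed settings the estimate lands at roughly 77% savings, in the same ballpark as the up-to-83% reduction reported in the journal article above; widening the foveal region or raising the background quality trades savings for robustness to latency.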

  • 3.
    Tollmar, Konrad; Lungaro, Pietro; Valero, Alfredo Fanghella; Mittal, Ashutosh (all: KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Mobile Service Laboratory (MS Lab))
    Beyond foveal rendering: Smart eye-tracking enabled networking (SEEN). 2017. In: ACM SIGGRAPH 2017 Talks, SIGGRAPH 2017, Association for Computing Machinery (ACM), 2017, article id 3085163. Conference paper (Refereed).
    Abstract [en]

    Smart Eye-tracking Enabled Networking (SEEN) is a novel end-to-end framework that uses real-time eye-gaze information to go beyond state-of-the-art solutions. Our approach effectively combines the computational savings of foveal rendering with the bandwidth savings required to enable future mobile VR content provision.
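
    To make the rendering side of that combination concrete, here is a back-of-the-envelope sketch of the shading work saved when only the foveal region is rendered at full resolution; the foveal fraction and periphery scale are assumed values, not SEEN's.

        # Illustrative arithmetic, not SEEN's results: fraction of full-resolution
        # shading work remaining when the periphery is rendered at a reduced
        # resolution scale (0.25 = quarter resolution per axis).

        def shaded_pixel_fraction(foveal_fraction, periphery_scale):
            return foveal_fraction + (1 - foveal_fraction) * periphery_scale ** 2

        work = shaded_pixel_fraction(foveal_fraction=0.10, periphery_scale=0.25)
        print(f"shading work: {100 * work:.0f}% of full-resolution rendering")

    The bandwidth side of the trade-off follows the same per-region logic as the streaming sketches above.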
