Lungaro, Pietro
Publications (6 of 6)
Lungaro, P. & Tollmar, K. (2019). Immersivemote: Combining foveated AI and streaming for immersive remote operations. In: ACM SIGGRAPH 2019 Talks, SIGGRAPH 2019. Paper presented at ACM SIGGRAPH 2019 Talks - International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2019; Los Angeles; United States; 28 July 2019 through 1 August 2019. Association for Computing Machinery (ACM), Article ID 3329895.
Immersivemote: Combining foveated AI and streaming for immersive remote operations
2019 (English). In: ACM SIGGRAPH 2019 Talks, SIGGRAPH 2019, Association for Computing Machinery (ACM), 2019, article id 3329895. Conference paper, Published paper (Refereed)
Abstract [en]

Immersivemote is a novel technology combining our previously presented foveated streaming solution with our novel foveated AI concept. While we have previously shown that foveated streaming can achieve 90% bandwidth savings compared to existing streaming solutions, foveated AI is designed to enable real-time video augmentations that are controlled through eye-gaze. The combined solution is therefore capable of effectively interfacing remote operators with mission-critical information obtained, in real time, from task-aware machine understanding of the scene and IoT data.
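As a rough illustration only (not the authors' implementation; all names and parameters below are hypothetical), a gaze-gated augmentation step of this kind could be sketched in Python as:

```python
# Minimal sketch of gaze-gated augmentation: only the object under the
# operator's fixation is annotated. All names and values are hypothetical.
from dataclasses import dataclass
from math import hypot

GAZE_RADIUS_PX = 120  # assumed foveal radius in screen pixels

@dataclass
class Detection:
    label: str   # object class from a scene-understanding model
    x: float     # detection centre, screen coordinates
    y: float
    info: str    # task-relevant annotation, e.g. a live IoT reading

def pick_fixated(detections, gaze_x, gaze_y):
    """Return the detection under the operator's gaze, if any.

    Only this detection receives an on-screen overlay, so augmentations
    follow the eye-gaze instead of cluttering the periphery.
    """
    near = [d for d in detections
            if hypot(d.x - gaze_x, d.y - gaze_y) <= GAZE_RADIUS_PX]
    return min(near,
               key=lambda d: hypot(d.x - gaze_x, d.y - gaze_y),
               default=None)
```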

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019
Keywords
Eye-tracking, Foveated streaming, Remote operations
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-258137 (URN), 10.1145/3306307.3329895 (DOI), 2-s2.0-85071335635 (Scopus ID), 9781450363174 (ISBN)
Conference
ACM SIGGRAPH 2019 Talks - International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2019; Los Angeles; United States; 28 July 2019 through 1 August 2019
Note

QC 20191002

Available from: 2019-10-02 Created: 2019-10-02 Last updated: 2019-10-02. Bibliographically approved
Lungaro, P., Tollmar, K., Saeik, F., Mateu Gisbert, C. & Dubus, G. (2018). Demonstration of a low-cost hyper-realistic testbed for designing future onboard experiences. In: Adjunct Proceedings - 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018. Paper presented at 10th ACM International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018, 23 September 2018 through 25 September 2018 (pp. 235-238). Association for Computing Machinery, Inc.
Demonstration of a low-cost hyper-realistic testbed for designing future onboard experiences
2018 (English). In: Adjunct Proceedings - 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018, Association for Computing Machinery, Inc, 2018, p. 235-238. Conference paper, Published paper (Refereed)
Abstract [en]

This demo presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most currently existing vehicular testbeds and simulators are designed to reproduce with high fidelity the ergonomic aspects of the driving experience. However, with the increasing deployment of self-driving and remotely controlled or monitored vehicles, the digital components of the driving experience are expected to become more relevant, because users will be less engaged in the actual driving task and more involved in oversight activities. In this respect, high visual fidelity becomes an important prerequisite for a testbed supporting the design and evaluation of future interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of self-driving interfaces has been implemented, including heads-up displays (HUDs), augmented reality (AR) overlays and directional audio.
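Purely as a hypothetical sketch of how such a testbed might describe an experimental condition (DriverSense's actual configuration format is not given here; every name below is illustrative):

```python
# Hypothetical experiment-condition description for a testbed of this
# kind; not DriverSense's published API or configuration format.
from dataclasses import dataclass, field
from enum import Enum

class Modality(Enum):
    HUD = "heads-up display"
    AR = "augmented-reality overlay"
    DIRECTIONAL_AUDIO = "directional audio"

@dataclass
class Condition:
    """One experimental condition rendered on top of the game feed."""
    name: str
    modalities: list[Modality] = field(default_factory=list)
    vehicle_mode: str = "self-driving"  # or "remotely controlled"

# Example: evaluating AR plus directional audio during an oversight task.
condition = Condition("ar_audio",
                      [Modality.AR, Modality.DIRECTIONAL_AUDIO])
```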

Place, publisher, year, edition, pages
Association for Computing Machinery, Inc, 2018
Keywords
AR, Experimental assessment, HUD, Self-driving, Testbed, Augmented reality, Costs, Testbeds, Design and evaluations, Digital components, Driving experiences, Experimental platform, Heads-up display, Pre-requisites, User interfaces
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-252258 (URN), 10.1145/3239092.3267850 (DOI), 2-s2.0-85063139649 (Scopus ID), 9781450359474 (ISBN)
Conference
10th ACM International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018, 23 September 2018 through 25 September 2018
Note

QC 20190614

Available from: 2019-06-14 Created: 2019-06-14 Last updated: 2019-06-14. Bibliographically approved
Saeik, F., Lungaro, P. & Tollmar, K. (2018). Demonstration of Gaze-Aware Video Streaming Solutions for Mobile VR. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Paper presented at the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18-22 March 2018.
Demonstration of Gaze-Aware Video Streaming Solutions for Mobile VR
2018 (English). In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2018. Conference paper, Published paper (Refereed)
Abstract [en]

This demo features an embodiment of Smart Eye-tracking Enabled Networking (SEEN), a novel content delivery method for optimizing the provision of 360° video streaming. SEEN relies on eye-gaze information from connected eye trackers to provide high quality, in real time, in the proximity of users' fixation points, while lowering the quality at the periphery of the users' fields of view. The goal is to exploit the characteristics of human vision to reduce the bandwidth required for the mobile provision of future data-intensive services in Virtual Reality (VR). This demo provides a tangible experience of the tradeoffs among bandwidth consumption, network performance (round-trip time, RTT) and Quality of Experience (QoE) associated with SEEN's novel content provision mechanisms.
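A minimal sketch of the gaze-contingent quality mapping described above, assuming a tiled 360° stream; the radius and quality levels are illustrative, not SEEN's published parameters:

```python
# Gaze-contingent tile quality: high near the fixation, low elsewhere.
# The radius and the three quality levels are assumptions for illustration.
from math import hypot

FOVEAL_RADIUS = 0.15  # assumed radius of the high-quality region,
                      # as a fraction of viewport width

def tile_quality(tile_cx, tile_cy, gaze_x, gaze_y):
    """Map a tile's distance from the fixation point to a quality level.

    Tiles near the fixation stream at full quality; peripheral tiles
    drop to low quality, which is where the bandwidth saving comes from.
    All coordinates are normalized to the viewport.
    """
    d = hypot(tile_cx - gaze_x, tile_cy - gaze_y)
    if d <= FOVEAL_RADIUS:
        return "high"
    if d <= 2 * FOVEAL_RADIUS:
        return "medium"  # assumed transition band between fovea and periphery
    return "low"
```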

Keywords
Human-centered computing, User interface management systems, Eye-tracking, Video streaming, QoE
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-234832 (URN), 10.1109/VR.2018.8447551 (DOI), 2-s2.0-85053845286 (Scopus ID), 978-1-5386-3365-6 (ISBN), 978-1-5386-3366-3 (ISBN)
Conference
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18-22 March 2018
Projects
SEEN - Smart Eye-tracking Enabled Networking
Note

QC 20180914

Available from: 2018-09-11 Created: 2018-09-11 Last updated: 2018-10-30. Bibliographically approved
Lungaro, P., Tollmar, K., Saeik, F., Mateu Gisbert, C. & Dubus, G. (2018). DriverSense: A hyper-realistic testbed for the design and evaluation of novel user interfaces in self-driving vehicles. In: Adjunct Proceedings - 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018. Paper presented at 10th ACM International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018, 23 September 2018 through 25 September 2018 (pp. 127-131). Association for Computing Machinery, Inc.
DriverSense: A hyper-realistic testbed for the design and evaluation of novel user interfaces in self-driving vehicles
2018 (English). In: Adjunct Proceedings - 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018, Association for Computing Machinery, Inc, 2018, p. 127-131. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most currently existing academic and industrial testbeds and vehicular simulators are designed to reproduce with high fidelity the ergonomic aspects of the driving experience. However, with the increasing deployment of self-driving and remotely controlled vehicles, the digital components of the driving experience are expected to become increasingly relevant, because users will be less engaged in the actual driving task and more involved in oversight activities. In this respect, high visual fidelity becomes an important prerequisite for a testbed supporting the design and evaluation of future onboard interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of selected case studies is presented, including heads-up displays (HUDs), augmented reality (AR) and directional audio solutions.

Place, publisher, year, edition, pages
Association for Computing Machinery, Inc, 2018
Keywords
AR, Autonomous vehicular systems, Trust, Augmented reality, Remote control, Testbeds, Design and evaluations, Digital components, Driving experiences, Driving tasks, Experimental platform, Pre-requisites, Vehicular systems, User interfaces
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-252259 (URN), 10.1145/3239092.3265955 (DOI), 2-s2.0-85063134845 (Scopus ID), 9781450359474 (ISBN)
Conference
10th ACM International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018, 23 September 2018 through 25 September 2018
Note

QC 20190607

Available from: 2019-06-07 Created: 2019-06-07 Last updated: 2019-06-07. Bibliographically approved
Tollmar, K., Lungaro, P., Valero, A. F. & Mittal, A. (2017). Beyond foveal rendering: Smart eye-tracking enabled networking (SEEN). In: ACM SIGGRAPH 2017 Talks, SIGGRAPH 2017. Paper presented at ACM SIGGRAPH 2017 Talks - International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2017, Los Angeles Convention Center, Los Angeles, United States, 30 July 2017 through 3 August 2017. Association for Computing Machinery (ACM), Article ID 3085163.
Beyond foveal rendering: Smart eye-tracking enabled networking (SEEN)
2017 (English). In: ACM SIGGRAPH 2017 Talks, SIGGRAPH 2017, Association for Computing Machinery (ACM), 2017, article id 3085163. Conference paper, Published paper (Refereed)
Abstract [en]

Smart Eye-tracking Enabled Networking (SEEN) is a novel end-to-end framework that uses real-time eye-gaze information to go beyond state-of-the-art solutions. Our approach can effectively combine the computational savings of foveal rendering with the bandwidth savings required to enable future mobile VR content provision.
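A back-of-the-envelope calculation shows why foveation saves bandwidth; the bitrates and foveal fraction below are assumptions for illustration, not figures reported by the authors:

```python
# If only a small fraction of the view streams at full quality and the
# periphery streams at a heavily reduced bitrate, total bandwidth drops
# sharply. All numbers here are illustrative assumptions.
full_rate = 20.0        # Mbit/s for uniformly high-quality delivery (assumed)
foveal_fraction = 0.05  # assumed share of pixels inside the foveal region
periphery_scale = 0.05  # assumed bitrate scale factor for the periphery

foveated_rate = (foveal_fraction * full_rate
                 + (1 - foveal_fraction) * full_rate * periphery_scale)
saving = 1 - foveated_rate / full_rate
print(f"{foveated_rate:.2f} Mbit/s, saving {saving:.0%}")
# prints: 1.95 Mbit/s, saving 90% -- consistent in spirit with the ~90%
# bandwidth-saving figure cited for foveated streaming elsewhere on this page
```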

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2017
Keywords
Eye-tracking, Foveal rendering, Foveated Content delivery, Mobile systems, QoE, User-experience
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-217846 (URN), 10.1145/3084363.3085163 (DOI), 000441139200078 (), 2-s2.0-85033382788 (Scopus ID), 9781450350082 (ISBN)
Conference
ACM SIGGRAPH 2017 Talks - International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2017, Los Angeles Convention Center, Los Angeles, United States, 30 July 2017 through 3 August 2017
Funder
VINNOVA, 2016-01854
Note

QC 20171122

Available from: 2017-11-22 Created: 2017-11-22 Last updated: 2018-08-28. Bibliographically approved
Lungaro, P., Tollmar, K., Mittal, A. & Fanghella Valero, A. (2017). Gaze- and QoE-aware video streaming solutions for mobile VR. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST. Paper presented at 23rd ACM Conference on Virtual Reality Software and Technology, VRST 2017, 8 November 2017 through 10 November 2017. Association for Computing Machinery.
Gaze- and QoE-aware video streaming solutions for mobile VR
2017 (English). In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, Association for Computing Machinery, 2017. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

This demo showcases a novel approach to content delivery for 360° video streaming. It exploits information from connected eye-trackers embedded in the users' VR HMDs. The presented technology enables the delivery of high quality, in real time, around the users' fixation points, while lowering the image quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user-perceived quality. The network connection between the VR system and the content server is emulated in this demo, allowing users to experience the QoE performance achievable with data rates and RTTs in the range of current 4G and upcoming 5G networks. Users can further control additional service parameters, including video type, content resolution in the foveal region and in the background, and the size of the foveal region. At the end of each run, users are presented with a summary of the amount of bandwidth consumed with the chosen system settings and a comparison with the cost of current content delivery solutions. The overall goal of this demo is to provide a tangible experience of the tradeoffs among bandwidth, RTT and QoE for the mobile provision of future data-intensive VR services.
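As a hypothetical sketch of the kind of per-session bandwidth summary the demo reports (the function name and all parameter values are illustrative, not the demo's actual settings):

```python
# Compare data consumed over a session by foveated vs. conventional
# delivery; parameters (duration, rates, region size) are assumptions.
def session_summary(duration_s, full_rate_mbps,
                    foveal_fraction, periphery_scale):
    """Return (foveated_MB, conventional_MB) for one viewing session."""
    foveated_rate = (foveal_fraction
                     + (1 - foveal_fraction) * periphery_scale) * full_rate_mbps
    to_mb = duration_s / 8  # Mbit/s * seconds / 8 = megabytes
    return foveated_rate * to_mb, full_rate_mbps * to_mb

foveated_mb, baseline_mb = session_summary(
    duration_s=120, full_rate_mbps=25,
    foveal_fraction=0.08, periphery_scale=0.10)
print(f"foveated: {foveated_mb:.0f} MB, conventional: {baseline_mb:.0f} MB")
```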

Place, publisher, year, edition, pages
Association for Computing Machinery, 2017
Keywords
Eye-tracking, Foveated content delivery, QoE, Video streaming, VR, Bandwidth, Communication channels (information theory), Eye movements, Virtual reality, Bandwidth requirement, Content delivery, Content servers, Data intensive, Network connection, Perceived quality, Service parameters
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-227053 (URN), 10.1145/3139131.3141782 (DOI), 000455354500085 (), 2-s2.0-85038603036 (Scopus ID), 9781450355483 (ISBN)
Conference
23rd ACM Conference on Virtual Reality Software and Technology, VRST 2017, 8 November 2017 through 10 November 2017
Note

QC 20180503

Available from: 2018-05-03 Created: 2018-05-03 Last updated: 2019-09-23. Bibliographically approved