3D Visualization of Single Images using Patch Level Depth
Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. ORCID iD: 0000-0003-3779-5647
2011 (English). In: SIGMAP 2011, 2011, 61-66 p. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we consider the task of 3D photo visualization using a single monocular image. The main idea is to take single photos captured by devices such as ordinary cameras, mobile phones, and tablet PCs, and visualize them in 3D on normal displays. A supervised learning approach is employed to retrieve depth information from single images. The algorithm is based on a hierarchical multi-scale Markov Random Field (MRF), which models depth from multi-scale global and local features, and the relations between them, in a monocular image. The estimated depth image is then used to assign the specified depth parameters to each pixel in the 3D map. Accordingly, multi-level depth adjustment and coding for color anaglyphs is performed. Our system receives a single 2D image as input and produces an anaglyph-coded 3D image as output. Depending on the coding technology, viewers use matching low-cost anaglyph glasses.
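The final stage the abstract describes, turning a per-pixel depth estimate into an anaglyph-coded image, can be sketched roughly as follows. This is not the authors' implementation; it is a minimal illustration of the general technique, assuming a simple model in which each pixel is shifted horizontally in proportion to its (inverse) depth to synthesize left and right views, which are then merged channel-wise into a red-cyan anaglyph. The function name and the `max_shift` parameter are hypothetical.

```python
import numpy as np

def depth_to_anaglyph(image, depth, max_shift=8):
    """Sketch of depth-based anaglyph coding (illustrative, not the
    paper's algorithm).

    image: (H, W, 3) uint8 RGB image.
    depth: (H, W) float depth map, larger values = farther away.
    max_shift: maximum horizontal disparity in pixels (assumed knob).
    """
    h, w, _ = image.shape

    # Normalize depth to [0, 1]; nearer pixels get larger disparity.
    d = depth.astype(np.float64)
    d = (d - d.min()) / (d.max() - d.min() + 1e-9)
    disparity = ((1.0 - d) * max_shift).astype(int)

    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        # Resample each row: shift columns left/right by the disparity,
        # clamping at the image borders (a crude view synthesis).
        lx = np.clip(cols - disparity[y], 0, w - 1)
        rx = np.clip(cols + disparity[y], 0, w - 1)
        left[y] = image[y, lx]
        right[y] = image[y, rx]

    # Red-cyan coding: red channel from the left view,
    # green and blue channels from the right view.
    return np.dstack([left[..., 0], right[..., 1], right[..., 2]])

# Usage with synthetic data: a depth ramp increasing left to right.
img = np.random.randint(0, 256, (16, 24, 3), dtype=np.uint8)
depth = np.tile(np.linspace(1.0, 5.0, 24), (16, 1))
anaglyph = depth_to_anaglyph(img, depth, max_shift=4)
```

A real system would fill the disocclusion gaps that plain column shifting leaves behind; the sketch simply clamps at the borders to stay short.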

Place, publisher, year, edition, pages
2011. 61-66 p.
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:kth:diva-141824
OAI: oai:DiVA.org:kth-141824
DiVA: diva2:698786
Conference
SIGMAP 2011, International Conference on Signal Processing and Multimedia Applications (SIGMAP), July 18-21, 2011, Seville, Spain
Note

QC 20140226

Available from: 2014-02-25 Created: 2014-02-25 Last updated: 2016-06-10. Bibliographically approved
In thesis
1. 3D Gesture Recognition and Tracking for Next Generation of Smart Devices: Theories, Concepts, and Implementations
2014 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The rapid development of mobile devices during the recent decade has been greatly driven by interaction and visualization technologies. Although touchscreens have significantly enhanced interaction technology, it is predictable that with future mobile devices, e.g., augmented-reality glasses and smart watches, users will demand more intuitive inputs such as free-hand interaction in 3D space. Specifically, for manipulation of digital content in augmented environments, 3D hand/body gestures will be in high demand. Therefore, 3D gesture recognition and tracking are highly desired features for interaction design in future smart environments. Due to the complexity of hand/body motions, and the limited capacity of mobile devices for expensive computations, 3D gesture analysis remains an extremely difficult problem to solve.

This thesis aims to introduce new concepts, theories, and technologies for natural and intuitive interaction in future augmented environments. The contributions of this thesis support the concept of bare-hand 3D gestural interaction and interactive visualization on future smart devices. The introduced technical solutions enable effective interaction in the 3D space around the smart device. High-accuracy, robust 3D motion analysis of hand/body gestures is performed to facilitate 3D interaction in various application scenarios. The proposed technologies enable users to control, manipulate, and organize digital content in 3D space.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2014. xii, 101 p.
Series
TRITA-CSC-A, ISSN 1653-5723 ; 14:02
Keyword
3D gestural interaction, gesture recognition, gesture tracking, 3D visualization, 3D motion analysis, augmented environments
National Category
Engineering and Technology
Research subject
Media Technology
Identifiers
urn:nbn:se:kth:diva-141938 (URN)
978-91-7595-031-0 (ISBN)
Public defence
2014-03-17, F3, Lindstedtsvägen 26, KTH, Stockholm, 13:15 (English)
Opponent
Supervisors
Note

QC 20140226

Available from: 2014-02-26 Created: 2014-02-26 Last updated: 2014-02-26. Bibliographically approved

Open Access in DiVA

No full text

Authority records BETA

Li, Haibo

Search in DiVA

By author/editor
Yousefi, Shahrouz; Li, Haibo
By organisation
Media Technology and Interaction Design, MID
Signal Processing

Search outside of DiVA

Google
Google Scholar
