Stereoscopic visualization of monocular images in photo collections
Umeå University, Sweden.
Umeå University, Sweden.
Umeå University, Sweden. ORCID iD: 0000-0003-3779-5647
2011 (English). In: Wireless Communications and Signal Processing (WCSP), 2011 International Conference on, IEEE, 2011, 1-5 p. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we propose a novel approach for 3D video/photo visualization using an ordinary digital camera. The idea is to turn any 2D camera into a 3D one based on data derived from a collection of captured photos or a recorded video. For a given monocular input, the information retrieved from overlapping photos provides what is needed to produce 3D output. Robust feature detection and matching between images is used to find the transformation between overlapping frames. The transformation matrix maps the images onto the same horizontal baseline. The projected images are then adjusted to the stereoscopic model, and finally the stereo views are coded into 3D channels for visualization. This approach enables us to generate 3D output from randomly taken photos of a scene or from a recorded video. Our system receives 2D monocular input and produces double-layer coded 3D output. Depending on the coding technology, different low-cost 3D glasses can be used by viewers.
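A minimal sketch of the pipeline described above, under assumptions the abstract does not specify: ORB features with brute-force matching for the "robust feature detection and matching" step, a RANSAC-estimated homography as the transformation that aligns one overlapping photo with the other, and a red-cyan anaglyph as one possible form of the double-layer coded 3D output. The function and parameter choices below are illustrative, not the authors' implementation.

import cv2
import numpy as np

def stereo_from_pair(left_path, right_path):
    """Build a red-cyan anaglyph from two overlapping photos of a scene."""
    left = cv2.imread(left_path)
    right = cv2.imread(right_path)
    gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    # 1. Robust feature detection and matching between overlapping frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray_l, None)
    kp2, des2 = orb.detectAndCompute(gray_r, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # 2. Estimate the transformation between the two views; RANSAC discards
    #    outlier matches. query = left image, train = right image.
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 3. Warp the second photo so both views share the same baseline.
    h, w = left.shape[:2]
    right_aligned = cv2.warpPerspective(right, H, (w, h))

    # 4. Code the stereo pair into colour channels: red from the left view,
    #    green/blue from the aligned right view (viewable with red-cyan glasses).
    anaglyph = right_aligned.copy()
    anaglyph[:, :, 2] = left[:, :, 2]  # OpenCV stores images as BGR
    return anaglyph

# Example usage (hypothetical file names):
# cv2.imwrite("anaglyph.png", stereo_from_pair("photo1.jpg", "photo2.jpg"))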

Place, publisher, year, edition, pages
IEEE, 2011. 1-5 p.
Keyword [en]
cameras, feature extraction, image matching, matrix algebra, stereo image processing, video coding, video retrieval, 3D channel, 3D glasses, 3D video-photo visualization, coding technology, digital camera, feature detection, information retrieval, monocular images, overlapping frames, overlapping photos, photo collections, stereoscopic visualization, transformation matrix, Image color analysis, Robustness, Three dimensional displays, Visualization
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:kth:diva-141822
DOI: 10.1109/WCSP.2011.6096688
Scopus ID: 2-s2.0-84555194972
ISBN: 978-1-4577-1008-7 (print)
OAI: oai:DiVA.org:kth-141822
DiVA: diva2:698784
Conference
WCSP 2011
Note

QC 20140226

Available from: 2014-02-25. Created: 2014-02-25. Last updated: 2016-04-26. Bibliographically approved.
In thesis
1. 3D Gesture Recognition and Tracking for Next Generation of Smart Devices: Theories, Concepts, and Implementations
2014 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The rapid development of mobile devices during the recent decade has been greatly driven by interaction and visualization technologies. Although touchscreens have significantly enhanced interaction technology, it is predictable that future mobile devices, e.g., augmented-reality glasses and smart watches, will demand more intuitive inputs such as free-hand interaction in 3D space. Specifically, 3D hand/body gestures will be essential for manipulating digital content in augmented environments. Therefore, 3D gesture recognition and tracking are highly desired features for interaction design in future smart environments. Due to the complexity of hand/body motions and the limitations of mobile devices with respect to expensive computations, 3D gesture analysis remains an extremely difficult problem to solve.

This thesis aims to introduce new concepts, theories and technologies for natural and intuitive interaction in future augmented environments. The contributions of this thesis support the concept of bare-hand 3D gestural interaction and interactive visualization on future smart devices. The introduced technical solutions enable effective interaction in the 3D space around the smart device. Highly accurate and robust 3D motion analysis of hand/body gestures is performed to facilitate 3D interaction in various application scenarios. The proposed technologies enable users to control, manipulate, and organize digital content in 3D space.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2014. xii, 101 p.
Series
TRITA-CSC-A, ISSN 1653-5723 ; 14:02
Keyword
3D gestural interaction, gesture recognition, gesture tracking, 3D visualization, 3D motion analysis, augmented environments
National Category
Engineering and Technology
Research subject
Media Technology
Identifiers
URN: urn:nbn:se:kth:diva-141938
ISBN: 978-91-7595-031-0
Public defence
2014-03-17, F3, Lindstedtsvägen 26, KTH, Stockholm, 13:15 (English)
Note

QC 20140226

Available from: 2014-02-26. Created: 2014-02-26. Last updated: 2014-02-26. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text, Scopus

Authority records

Li, Haibo

Search in DiVA

By author/editor
Yousefi, Shahrouz; Li, Haibo
Signal Processing
