1 - 23 of 23
  • 1. Abedan Kondori, Farid
    et al.
    Yousefi, Shahrouz
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. Department of Applied Physics and Electronics, Umeå University, Umeå, Sweden.
    Smart Baggage in Aviation (2011). In: Proceedings - 2011 IEEE International Conferences on Internet of Things and Cyber, Physical and Social Computing, 2011. Conference paper (Refereed)
    Abstract [en]

    Nowadays, the Internet has dramatically changed the way people go about their everyday activities. With the recent growth of the Internet, connecting different objects to users through mobile phones and computers is no longer a dream. The aviation industry is one of the areas with strong potential to benefit from the Internet of Things. Among the many problems related to air travel, delayed and lost luggage are the most common and irritating. This paper therefore proposes a new baggage control system in which users can simply track their baggage at the airport to avoid losing it. By attaching to the bag a particular pattern that can be detected and localized from a long distance by an ordinary camera, users are able to track their baggage. The proposed system is much cheaper than previous implementations and does not require sophisticated equipment.

  • 2. Abedan Kondori, Farid
    et al.
    Yousefi, Shahrouz
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Direct Head Pose Estimation Using Kinect-type Sensors (2014). In: Electronics Letters, ISSN 0013-5194, E-ISSN 1350-911X. Article in journal (Refereed)
  • 3. Abedan Kondori, Farid
    et al.
    Yousefi, Shahrouz
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Liu, Li
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. Nanjing University of Posts and Telecommunications, Nanjing, China.
    Head Operated Electric Wheelchair (2014). In: Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 2014, p. 53-56. Conference paper (Refereed)
    Abstract [en]

    Currently, the most common way to control an electric wheelchair is with a joystick. However, some individuals, such as patients with quadriplegia, are unable to operate joystick-driven electric wheelchairs due to severe physical disabilities. This paper proposes a novel head pose estimation method to assist such patients. Head motion parameters are employed to control and drive an electric wheelchair. We introduce a direct method for estimating user head motion, based on a sequence of range images captured by Kinect. In this work, we derive a new version of the optical flow constraint equation for range images and show how it can be used to estimate head motion directly. Experimental results reveal that the proposed system works with high accuracy in real time. We also show simulation results for navigating the electric wheelchair by recovering user head motion.

  • 4. Kondori, F. A.
    et al.
    Yousefi, Shahrouz
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Li, Haibo
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Direct three-dimensional head pose estimation from Kinect-type sensors (2014). In: Electronics Letters, ISSN 0013-5194, E-ISSN 1350-911X, Vol. 50, no 4, p. 268-269. Article in journal (Refereed)
    Abstract [en]

    A direct method for recovering three-dimensional (3D) head motion parameters from a sequence of range images acquired by Kinect sensors is presented. Based on the range images, a new version of the optical flow constraint equation is derived, which can be used to estimate 3D motion parameters directly, without the need to impose other constraints. Since all calculations with the new constraint equation are based on the range images Z(x, y, t), the existing techniques and experience developed and accumulated on the topic of motion from optical flow can be applied directly, simply by treating the range images as normal intensity images I(x, y, t). It is demonstrated how to employ the new optical flow constraint equation to recover the 3D motion of a moving head from sequences of range images and, furthermore, how to use an old trick to handle the case when the optical flow is large. It is shown, in the end, that the performance of the proposed approach is comparable with that of some of the state-of-the-art approaches that use range data to recover 3D motion parameters.
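
    The constraint equation itself is not reproduced in the abstract. By analogy with the classical brightness-constancy constraint, a plausible form of the range-image version is sketched below; the paper's exact derivation may differ.

```latex
% Classical optical flow constraint for an intensity image I(x,y,t):
%   I_x u + I_y v + I_t = 0.
% For a range image Z(x,y,t), a moving surface point also changes its
% depth, so the analogous constraint picks up a depth-rate term w:
\[
  Z_x\, u + Z_y\, v + Z_t = w ,
\]
% where (u,v) is the image-plane motion of the point and w = \dot{Z}.
% Substituting a rigid-body model V = \mathbf{t} + \boldsymbol{\omega}
% \times \mathbf{P} for the 3D velocity of each surface point
% \mathbf{P} = (X, Y, Z) makes this linear in the six unknowns
% (t_x, t_y, t_z, \omega_x, \omega_y, \omega_z): one equation per pixel.
```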

  • 5. Kondori, F. A.
    et al.
    Yousefi, Shahrouz
    Umeå University, Sweden.
    Li, Haibo
    Umeå University, Sweden.
    Real 3D interaction behind mobile phones for augmented environments (2011). Conference paper (Refereed)
    Abstract [en]

    The number of mobile devices such as mobile phones and PDAs has increased dramatically over recent years. New mobile devices are equipped with integrated cameras and large displays, which make interaction with the device easier and more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate the 3D space behind the device, in the camera's field of view. This paper suggests the use of particular patterns from the local orientation of the image, called Rotational Symmetries, to detect and localize hand gestures. The relative rotation and translation of the gesture between consecutive frames are estimated by extracting stable features. Consequently, this information can be used to facilitate the 3D manipulation of virtual objects in various mobile applications.

  • 6. Kondori, F. A.
    et al.
    Yousefi, Shahrouz
    Umeå University, Sweden.
    Li, Haibo
    Umeå University, Sweden.
    Sonning, S.
    3D head pose estimation using the Kinect (2011). Conference paper (Refereed)
    Abstract [en]

    Head pose estimation plays an essential role in bridging the information gap between humans and computers. Conventional head pose estimation methods mostly work on images captured by ordinary cameras, where accurate and robust pose estimation is often problematic. In this paper we present an algorithm for recovering the six degrees of freedom (DOF) of motion of a head from a sequence of range images taken by the Microsoft Kinect for Xbox 360. The proposed algorithm utilizes a least-squares minimization of the difference between the measured rate of change of depth at a point and the rate predicted by the depth rate constraint equation. We segment the human head from its surroundings and background, and then estimate the head motion. Our system can recover the six DOF of head motion for multiple people in one image. The proposed system is evaluated in our lab and shows strong results.
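
    A minimal numpy sketch of the least-squares step described above, under simplifying assumptions the paper may not share: orthographic projection, unit time step, the head region already segmented, and no robust weighting. The function name and signature are illustrative, not from the paper.

```python
import numpy as np

def estimate_head_motion(Z0, Z1, X, Y):
    """Least-squares 6-DOF rigid motion from two consecutive range images.

    Z0, Z1 : depth maps of the (segmented) head region at t and t+1.
    X, Y   : per-pixel metric coordinates of the surface points.
    Returns (tx, ty, tz, wx, wy, wz).
    """
    Zy, Zx = np.gradient(Z0)              # spatial depth gradients
    Zt = Z1 - Z0                          # temporal derivative (unit time step)
    zf, xf, yf = Z0.ravel(), X.ravel(), Y.ravel()
    zx, zy, zt = Zx.ravel(), Zy.ravel(), Zt.ravel()
    # Depth rate constraint  Zx*Vx + Zy*Vy - Vz + Zt = 0  with the rigid
    # model V = t + w x P expanded: one linear equation per pixel.
    A = np.stack([zx,                      # coefficient of tx
                  zy,                      # coefficient of ty
                  -np.ones_like(zx),       # coefficient of tz
                  -zy * zf - yf,           # coefficient of wx
                  zx * zf + xf,            # coefficient of wy
                  -zx * yf + zy * xf],     # coefficient of wz
                 axis=1)
    b = -zt
    motion, *_ = np.linalg.lstsq(A, b, rcond=None)
    return motion
```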

  • 7. Kondori, Farid Abedan
    et al.
    Yousefi, Shahrouz
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Kouma, Jean-Paul
    Liu, Li
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Direct hand pose estimation for immersive gestural interaction (2015). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 66, p. 91-99. Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel approach for performing intuitive gesture-based interaction using depth data acquired by Kinect. The main challenge in enabling immersive gestural interaction is dynamic gesture recognition. This problem can be formulated as a combination of two tasks, gesture recognition and gesture pose estimation, and incorporating a fast and robust pose estimation method lessens the burden considerably. In this paper we propose a direct method for real-time hand pose estimation. Based on the range images, a new version of the optical flow constraint equation is derived, which can be utilized to estimate 3D hand motion directly, without the need to impose other constraints. Extensive experiments illustrate that the proposed approach performs well in real time with high accuracy. As a proof of concept, we demonstrate the system performance in 3D object manipulation on two different setups: a desktop computer and a mobile platform. This demonstrates the system's capability to accommodate different interaction procedures. In addition, a user study is conducted to evaluate learnability, user experience and interaction quality of 3D gestural interaction in comparison to 2D touchscreen interaction.

  • 8.
    Kondori, Farid Abedan
    et al.
    Umeå University, SE-90187 Umeå, Sweden.
    Yousefi, Shahrouz
    KTH.
    Ostovar, Ahmad
    Umeå University, SE-90187 Umeå, Sweden.
    Liu, Li
    Umeå University, SE-90187 Umeå, Sweden.
    Li, Haibo
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    A Direct Method for 3D Hand Pose Recovery (2014). In: 2014 22nd International Conference on Pattern Recognition (ICPR), IEEE Computer Society, 2014, p. 345-350. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel approach for performing intuitive 3D gesture-based interaction using depth data acquired by Kinect. Unlike current depth-based systems that focus only on the classical gesture recognition problem, we also consider 3D gesture pose estimation for creating immersive gestural interaction. In this paper, we formulate the gesture-based interaction system as a combination of two separate problems, gesture recognition and gesture pose estimation. We focus on the second problem and propose a direct method for recovering hand motion parameters. Based on the range images, a new version of the optical flow constraint equation is derived, which can be utilized to estimate 3D hand motion directly, without the need to impose other constraints. Our experiments illustrate that the proposed approach performs well in real time with high accuracy. As a proof of concept, we demonstrate the system performance in 3D object manipulation. This application is intended to explore the system's capabilities in real-time biomedical applications. Finally, a system usability test is conducted to evaluate the learnability, user experience and interaction quality of 3D interaction in comparison to 2D touch-screen interaction.

  • 9.
    Yousefi, Shahrouz
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    3D Gesture Recognition and Tracking for Next Generation of Smart Devices: Theories, Concepts, and Implementations (2014). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The rapid development of mobile devices during the recent decade has been greatly driven by interaction and visualization technologies. Although touchscreens have significantly enhanced interaction technology, it is predictable that with future mobile devices, e.g., augmented-reality glasses and smart watches, users will demand more intuitive inputs such as free-hand interaction in 3D space. Specifically, for manipulation of the digital content in augmented environments, 3D hand/body gestures will be essential. Therefore, 3D gesture recognition and tracking are highly desired features for interaction design in future smart environments. Due to the complexity of hand/body motions, and the limited capacity of mobile devices for expensive computations, 3D gesture analysis is still an extremely difficult problem to solve.

    This thesis aims to introduce new concepts, theories and technologies for natural and intuitive interaction in future augmented environments. The contributions of this thesis support the concept of bare-hand 3D gestural interaction and interactive visualization on future smart devices. The introduced technical solutions enable effective interaction in the 3D space around the smart device. Highly accurate and robust 3D motion analysis of hand/body gestures is performed to facilitate 3D interaction in various application scenarios. The proposed technologies enable users to control, manipulate, and organize digital content in 3D space.

  • 10.
    Yousefi, Shahrouz
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    3D Photo Browsing for Future Mobile Devices (2012). In: Proceedings of the 20th ACM International Conference on Multimedia, 2012, p. 1401-1404. Conference paper (Refereed)
    Abstract [en]

    By introducing an interactive 3D photo/video browsing and exploration system, we propose novel approaches for handling the limitations of current 2D mobile technology from two aspects: interaction design and visualization. Our contributions feature an effective interaction that happens in the 3D space behind the mobile device's camera. 3D motion analysis of the user's gestures, captured by the device's camera, is performed to facilitate the interaction between users and multimedia collections in various applications. This approach addresses a wide range of problems with current input facilities such as miniature keyboards, tiny joysticks and 2D touch screens. The suggested interactive technology enables users to control, manipulate, organize, and re-arrange their photo/video collections in 3D space using bare-hand, marker-less gestures. Moreover, with the proposed techniques we aim to visualize 2D photo collections in 3D on normal 2D displays. This is done automatically by retrieving the 3D structure from single images, finding stereo/multiple views of a scene, or using geo-tagged metadata from huge photo collections. Through the design and implementation of the contributions of this work, we aim to achieve the following goals: solving the limitations of current 2D interaction facilities through 3D gestural interaction; increasing the usability of multimedia applications on mobile devices; and enhancing the quality of the user experience with digital collections.

  • 11.
    Yousefi, Shahrouz
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Abedan Kondori, Farid
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Bare-hand Gesture Recognition and Tracking through the Large-scale Image Retrieval (2014). In: VISAPP 2014 - Proceedings of the 9th International Conference on Computer Vision Theory and Applications, 2014. Conference paper (Refereed)
  • 12.
    Yousefi, Shahrouz
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Abedan Kondori, Farid
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC).
    Interactive 3D Visualization on a 4K Wall-Sized Display (2014). Conference paper (Refereed)
    Abstract [en]

    This paper introduces a novel vision-based approach for realistic interaction between the user and the display's content. A highly accurate motion capture system is proposed to measure and track the user's head motion in 3D space. Video frames captured by a low-cost head-mounted camera are processed to retrieve the 3D motion parameters, and the retrieved information facilitates real-time 3D interaction. This technology turns any 2D screen into an interactive 3D display, enabling users to control and manipulate the content as through a digital window. The proposed system is tested and verified on a wall-sized 4K screen.

  • 13.
    Yousefi, Shahrouz
    et al.
    Umeå University, Sweden.
    Abedan Kondori, Farid
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Tracking Fingers in 3D Space for Mobile Interaction (2010). Conference paper (Refereed)
  • 14.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kondori, Farid Abedan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    3D Visualization of Single Images using Patch Level Depth (2011). In: SIGMAP 2011, 2011, p. 61-66. Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the task of 3D photo visualization using a single monocular image. The main idea is to take single photos captured by devices such as ordinary cameras, mobile phones, tablet PCs, etc., and visualize them in 3D on normal displays. A supervised learning approach is used to retrieve depth information from single images. The algorithm is based on a hierarchical multi-scale Markov Random Field (MRF), which models depth from multi-scale global and local features, and the relations between them, in a monocular image. The estimated depth image is then used to allocate depth parameters to each pixel in the 3D map, after which multi-level depth adjustment and coding for color anaglyphs is performed. Our system receives a single 2D image as input and produces an anaglyph-coded 3D image as output. Depending on the coding technology, viewers use matching low-cost anaglyph glasses.

  • 15.
    Yousefi, Shahrouz
    et al.
    Umeå University, Sweden.
    Kondori, Farid Abedan
    Li, Haibo
    Umeå University, Sweden.
    Camera-based gesture tracking for 3D interaction behind mobile devices (2012). In: International Journal of Pattern Recognition and Artificial Intelligence, ISSN 0218-0014, Vol. 26, no 8, article id 1260008. Article in journal (Refereed)
    Abstract [en]

    The number of mobile devices such as smartphones and tablet PCs has increased dramatically over recent years. New mobile devices are equipped with integrated cameras and large displays that make interaction with the device easier and more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate the 3D space behind the device, in the camera's field of view. In this paper, our gestural interaction relies on particular patterns from the local orientation of the image called rotational symmetries. This approach is based on finding the most suitable pattern from a large set of rotational symmetries of different orders, which ensures a reliable detector for fingertips and the user's gesture. Consequently, gesture detection and tracking can be used as an efficient tool for 3D manipulation in various virtual/augmented reality applications.
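
    As a rough illustration of the idea, the sketch below implements one standard double-angle formulation of rotational-symmetry detection: the squared complex gradient is correlated with complex basis filters exp(i·n·φ). The symmetry orders, radial windows and fingertip-selection logic of the paper are not shown here, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel
from scipy.signal import fftconvolve

def rotational_symmetry_response(gray, order=2, radius=12):
    """Magnitude response of an order-n rotational-symmetry detector
    (double-angle formulation). Peaks mark candidate symmetry centres."""
    dx = sobel(gray.astype(float), axis=1)
    dy = sobel(gray.astype(float), axis=0)
    z = (dx + 1j * dy) ** 2                    # double-angle orientation image
    # Complex basis filter exp(i*order*phi) under a soft radial window.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    w = np.exp(-(r / radius) ** 2) * (r > 0)   # zero out the singular centre
    b = w * np.exp(1j * order * phi)
    # Correlate (up to a phase) the orientation image with the filter;
    # the magnitude is what a detector would threshold.
    return np.abs(fftconvolve(z, np.conj(b), mode='same'))
```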

  • 16.
    Yousefi, Shahrouz
    et al.
    Digital Media Lab., Department of Applied Physics and Electronics, Umeå University.
    Kondori, Farid Abedan
    Digital Media Lab., Department of Applied Physics and Electronics, Umeå University.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC). Digital Media Lab., Department of Applied Physics and Electronics, Umeå University.
    Experiencing real 3D gestural interaction with mobile devices (2013). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 34, no 8, p. 912-921. Article in journal (Refereed)
    Abstract [en]

    The number of mobile devices such as smartphones and tablet PCs has increased dramatically over recent years. New mobile devices are equipped with integrated cameras and large displays which make interaction with the device more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate the 3D space behind the device, in the camera's field of view. In this paper, our gestural interaction relies heavily on particular patterns from the local orientation of the image called Rotational Symmetries. This approach is based on finding the most suitable pattern from a large set of rotational symmetries of different orders, which ensures a reliable detector for hand gestures. Consequently, gesture detection and tracking can be used as an efficient tool for 3D manipulation in various computer vision and augmented reality applications. The final output is rendered into color anaglyphs for 3D visualization. Depending on the coding technology, different low-cost 3D glasses can be used by viewers.

  • 17.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kondori, Farid Abedan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Robust correction of 3D geo-metadata in photo collections by forming a photo grid (2011). In: Wireless Communications and Signal Processing (WCSP), 2011 International Conference on, IEEE, 2011, p. 1-5. Conference paper (Refereed)
    Abstract [en]

    In this work, we present a technique for efficient and robust estimation of the exact location and orientation of a photo capture device in a large data set. The data set includes a set of photos and the associated information from GPS and orientation sensors; this attached metadata is noisy and lacks precision. Our strategy for correcting this uncertain data is based on fusing a measurement model, derived from the sensor data, with a signal model given by computer vision algorithms. Based on information retrieved from multiple views of a scene, we form a grid of images. Robust feature detection and matching between images yields a reliable transformation, so the relative locations and orientations across the data set constitute the signal model. On the other hand, information extracted from the single images, combined with the measurement data, forms the measurement model. Finally, a Kalman filter is used to fuse these two models iteratively and improve the estimate of the ground-truth (GT) location and orientation. In practice, this approach can help us design a photo browsing system for a huge collection of photos, enabling 3D navigation and exploration of the data set.
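
    A minimal sketch of the fusion step, with a static pose state and an identity observation model; the abstract does not give the actual measurement/signal models or the state layout, so the four-dimensional pose and the covariance numbers below are purely illustrative.

```python
import numpy as np

def fuse_step(x, P, z, R):
    """One Kalman update of pose estimate x (covariance P) with an
    observation z (covariance R); static state, identity observation."""
    S = P + R                      # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (z - x)
    P_new = (np.eye(len(x)) - K) @ P
    return x_new, P_new

# Hypothetical 4-D pose state: (lat, lon, alt, heading) -- illustrative only.
x = np.array([59.35, 18.07, 30.0, 90.0])            # noisy GPS/compass prior
P = np.diag([1e-4, 1e-4, 25.0, 100.0])
z_vision = np.array([59.3501, 18.0702, 28.0, 84.0]) # vision-derived estimate
R_vision = np.diag([1e-6, 1e-6, 4.0, 9.0])
x, P = fuse_step(x, P, z_vision, R_vision)          # fused, lower-variance pose
```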

  • 18.
    Yousefi, Shahrouz
    et al.
    Umeå University, Sweden.
    Kondori, Farid Abedan
    Umeå University, Sweden.
    Li, Haibo
    Umeå University, Sweden.
    Stereoscopic visualization of monocular images in photo collections (2011). In: Wireless Communications and Signal Processing (WCSP), 2011 International Conference on, IEEE, 2011, p. 1-5. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a novel approach for 3D video/photo visualization using an ordinary digital camera. The idea is to turn any 2D camera into a 3D one based on data derived from a collection of captured photos or a recorded video. For a given monocular input, the information retrieved from overlapping photos provides what is needed to produce 3D output. Robust feature detection and matching between images is used to find the transformation between overlapping frames. The transformation matrix maps the images onto the same horizontal baseline, after which the projected images are adjusted to the stereoscopic model. Finally, the stereo views are coded into 3D channels for visualization. This approach enables making 3D output from randomly taken photos of a scene or from a recorded video. Our system receives 2D monocular input and provides double-layer-coded 3D output. Depending on the coding technology, different low-cost 3D glasses are used by viewers.
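
    An illustrative OpenCV/numpy analogue of the pipeline outlined above: ORB feature matching estimates a homography, one frame is warped onto the other's baseline, and the pair is coded as a red-cyan anaglyph. The paper's actual feature detector, baseline adjustment and stereoscopic model are not specified in the abstract.

```python
import cv2
import numpy as np

def align_to_frame(img_src, img_dst):
    """Warp img_src into img_dst's frame using a feature-based homography."""
    g1 = cv2.cvtColor(img_src, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img_dst, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(img_src, H, (img_dst.shape[1], img_dst.shape[0]))

def red_cyan_anaglyph(left_bgr, right_bgr):
    """Code two horizontally offset views into a red-cyan anaglyph (BGR)."""
    out = right_bgr.copy()           # blue and green channels from the right view
    out[..., 2] = left_bgr[..., 2]   # red channel from the left view
    return out

# Usage with two overlapping frames of a scene (file names hypothetical):
# left = cv2.imread("frame_000.jpg")
# right = align_to_frame(cv2.imread("frame_010.jpg"), left)
# cv2.imwrite("anaglyph.jpg", red_cyan_anaglyph(left, right))
```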

  • 19.
    Yousefi, Shahrouz
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Kondori, Farid Abedan
    Li, Haibo
    3D Gestural Interaction for Stereoscopic Visualization on Mobile Devices (2011). In: Computer Analysis of Images and Patterns / [ed] Real, Pedro; Diaz-Pernil, Daniel; Molina-Abril, Helena; Berciano, Ainhoa; Kropatsch, Walter, Springer Berlin Heidelberg, 2011, Vol. 6855, p. 555-562. Chapter in book (Other academic)
    Abstract [en]

    The number of mobile devices such as smartphones and tablet PCs has increased dramatically over recent years. New mobile devices are equipped with integrated cameras and large displays which make interaction with the device more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate the 3D space behind the device, in the camera's field of view. In this paper, our gestural interaction relies heavily on particular patterns from the local orientation of the image called Rotational Symmetries. This approach is based on finding the most suitable pattern from a large set of rotational symmetries of different orders, which ensures a reliable detector for hand gestures. Consequently, gesture detection and tracking can be used as an efficient tool for 3D manipulation in various computer vision and augmented reality applications. The final output is rendered into color anaglyphs for 3D visualization. Depending on the coding technology, different low-cost 3D glasses are used by viewers.

  • 20.
    Yousefi, Shahrouz
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    3D Hand Gesture Analysis Through a Real-time Gesture Search Engine (2015). In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 12, article id 67. Article in journal (Refereed)
    Abstract [en]

    3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with annotated 3D position/orientation parameters of the hand joints. Our matching algorithm is based on hierarchical scoring of low-level edge-orientation features between the query frames and the database, retrieving the best match. Once the best match is found from the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.
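
    A minimal numpy sketch in the spirit of the low-level edge-orientation matching described above: a coarse per-cell edge-orientation histogram serves as the descriptor, and retrieval is a nearest-neighbour search over the database. The paper's hierarchical scoring scheme is not reproduced, and all names are illustrative.

```python
import numpy as np

def edge_orientation_descriptor(gray, bins=8, grid=8):
    """Coarse descriptor: per-cell histograms of gradient orientation,
    weighted by gradient magnitude and L1-normalised per cell."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)    # undirected edges in [0, pi)
    h, w = gray.shape
    desc = np.zeros((grid, grid, bins))
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h // grid, (i + 1) * h // grid),
                    slice(j * w // grid, (j + 1) * w // grid))
            hist, _ = np.histogram(ang[cell], bins=bins,
                                   range=(0, np.pi), weights=mag[cell])
            desc[i, j] = hist / (hist.sum() + 1e-9)
    return desc.ravel()

def best_match(query_desc, database_descs):
    """Index of the nearest database entry; its annotated 3D joint
    parameters would then drive the interaction, as the abstract describes."""
    return int(np.argmin(np.linalg.norm(database_descs - query_desc, axis=1)))
```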

  • 21.
    Yousefi, Shahrouz
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    3D Interaction Through a Real-Time Gesture Search Engine (2015). In: Computer Vision - ACCV 2014 Workshops, Pt II, 2015, p. 199-213. Conference paper (Refereed)
    Abstract [en]

    3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes the images of various articulated hand gestures with annotated 3D position/orientation parameters of the hand joints. Our matching algorithm is based on hierarchical scoring of low-level edge-orientation features between the query frames and the database, retrieving the best match. Once the best match is found from the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.

  • 22.
    Yousefi, Shahrouz
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Abedan Kondori, Farid
    Real-time 3D Gesture Recognition and Tracking System for Mobile Devices (2014). Patent (Other (popular science, discussion, etc.))
  • 23.
    Yousefi, Shahrouz
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Liu, L.
    3D gesture analysis using a large-scale gesture database (2014). In: Advances in Visual Computing (ISVC 2014), Pt 1, Springer, 2014, Vol. 8887, p. 206-217. Conference paper (Refereed)
    Abstract [en]

    3D gesture analysis is a highly desired feature of future interaction design. Specifically, in augmented environments, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities. This paper introduces a novel solution for real-time 3D gesture analysis using an extremely large gesture database. This database includes the images of various articulated hand gestures with annotated 3D position/orientation parameters of the hand joints. Our search algorithm is based on hierarchical scoring of low-level edge-orientation features between the query input and the database, retrieving the best match. Once the best match is found from the database in real time, the pre-calculated 3D parameters can instantly be used for gesture-based interaction.
