1 - 20 of 20
  • 1.
    Bretzner, Lars
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Hand-gesture recognition using multi-scale colour features, hierarchical features and particle filtering (2002). In: Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002. Proceedings, IEEE conference proceedings, 2002, p. 63-74. Conference paper (Refereed)
    Abstract [en]

    This paper presents algorithms and a prototype system for hand tracking and hand posture recognition. Hand postures are represented in terms of hierarchies of multi-scale colour image features at different scales, with qualitative inter-relations in terms of scale, position and orientation. In each image, detection of multi-scale colour features is performed. Hand states are then simultaneously detected and tracked using particle filtering, with an extension of layered sampling referred to as hierarchical layered sampling. Experiments are presented showing that the performance of the system is substantially improved by performing feature detection in colour space and including a prior with respect to skin colour. These components have been integrated into a real-time prototype system, applied to a test problem of controlling consumer electronics using hand gestures. In a simplified demo scenario, this system has been successfully tested by participants at two fairs during 2001.

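The abstract above leans on particle filtering without spelling out the mechanics. As a rough sketch only, the generic predict/weight/resample loop can be written as follows; the one-dimensional toy state and Gaussian likelihood here are illustrative assumptions, not the authors' hand model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observe, motion_noise=0.1):
    """One predict/weight/resample cycle of a generic particle filter.

    particles : (N, d) array of state hypotheses
    observe   : callable mapping one state to a likelihood value
    """
    # Predict: diffuse each hypothesis with random-walk dynamics.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Weight: evaluate the (image) likelihood of each hypothesis.
    w = np.array([observe(p) for p in particles])
    w /= w.sum()
    # Resample: draw hypotheses in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy example: track a 1-D state whose true value is 2.0.
observe = lambda p: np.exp(-0.5 * (p[0] - 2.0) ** 2 / 0.25)
particles = rng.uniform(-5, 5, (500, 1))
for _ in range(30):
    particles = particle_filter_step(particles, observe)
estimate = float(particles.mean())
```

In the papers the state is a parameterised hand model (position, orientation, scale, posture) and the likelihood comes from multi-scale colour features, but the loop has this same shape.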
  • 2.
    Bretzner, Lars
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lenman, S.
    Sundblad, Y.
    A Prototype System for Computer Vision Based Human Computer Interaction (2001). Report (Other academic)
  • 3.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Local spatio-temporal image features for motion interpretation (2004). Doctoral thesis, monograph (Other scientific)
    Abstract [en]

    Visual motion carries information about the dynamics of a scene. Automatic interpretation of this information is important when designing computer systems for visual navigation, surveillance, human-computer interaction, browsing of video databases and other growing applications.

    In this thesis, we address the issue of motion representation for the purpose of detecting and recognizing motion patterns in video sequences. We localize the motion in space and time and propose to use local spatio-temporal image features as primitives when representing and recognizing motions. To detect such features, we propose to maximize a measure of local variation of the image function over space and time and show that such a method detects meaningful events in image sequences. Due to its local nature, the proposed method avoids the influence of global variations in the scene and overcomes the need for spatial segmentation and tracking prior to motion recognition. These properties are shown to be highly useful when recognizing human actions in complex scenes.

    Variations in scale and in relative motions of the camera may strongly influence the structure of image sequences and therefore the performance of recognition schemes. To address this problem, we develop a theory of local spatio-temporal adaptation and show that this approach provides invariance when analyzing image sequences under scaling and velocity transformations. To obtain discriminative representations of motion patterns, we also develop several types of motion descriptors and use them for classifying and matching local features in image sequences. An extensive evaluation of this approach is performed and results in the context of the problem of human action recognition are presented.

    In summary, this thesis provides the following contributions: (i) it introduces the notion of local features in space-time and demonstrates the successful application of such features for motion interpretation; (ii) it presents a theory and an evaluation of methods for local adaptation with respect to scale and velocity transformations in image sequences and (iii) it presents and evaluates a set of local motion descriptors, which in combination with methods for feature detection and feature adaptation allow for robust recognition of human actions in complex scenes with cluttered and non-stationary backgrounds as well as camera motion.

  • 4.
    Laptev, Ivan
    IRISA/INRIA.
    Caputo, Barbara
    Schüldt, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Local velocity-adapted motion events for spatio-temporal recognition (2007). In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, Vol. 108, no 3, p. 207-229. Article in journal (Refereed)
    Abstract [en]

    In this paper, we address the problem of motion recognition using event-based local motion representations. We assume that similar patterns of motion contain similar events with consistent motion across image sequences. Using this assumption, we formulate the problem of motion recognition as a matching of corresponding events in image sequences. To enable the matching, we present and evaluate a set of motion descriptors that exploit the spatial and the temporal coherence of motion measurements between corresponding events in image sequences. As the motion measurements may depend on the relative motion of the camera, we also present a mechanism for local velocity adaptation of events and evaluate its influence when recognizing image sequences subjected to different camera motions. When recognizing motion patterns, we compare the performance of a nearest neighbor (NN) classifier with the performance of a support vector machine (SVM). We also compare event-based motion representations to motion representations in terms of global histograms. A systematic experimental evaluation on a large video database with human actions demonstrates that (i) local spatio-temporal image descriptors can be defined to carry important information of space-time events for subsequent recognition, and that (ii) local velocity adaptation is an important mechanism in situations when the relative motion between the camera and the interesting events in the scene is unknown. The particular advantage of event-based representations and velocity adaptation is further emphasized when recognizing human actions in unconstrained scenes with complex and non-stationary backgrounds.

  • 5.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    A Distance Measure and a Feature Likelihood Map Concept for Scale-Invariant Model Matching (2003). In: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 52, no 2, p. 97-120. Article in journal (Refereed)
    Abstract [en]

    This paper presents two approaches for evaluating multi-scale feature-based object models. Within the first approach, a scale-invariant distance measure is proposed for comparing two image representations in terms of multi-scale features. Based on this measure, the maximisation of the likelihood of parameterised feature models allows for simultaneous model selection and parameter estimation.

    The idea of the second approach is to avoid an explicit feature extraction step and to evaluate models using a function defined directly from the image data. For this purpose, we propose the concept of a feature likelihood map, which is a function normalised to the interval [0, 1], and that approximates the likelihood of image features at all points in scale-space.

    To illustrate the applicability of both methods, we consider the area of hand gesture analysis and show how the proposed evaluation schemes can be integrated within a particle filtering approach for performing simultaneous tracking and recognition of hand models under variations in the position, orientation, size and posture of the hand. The experiments demonstrate the feasibility of the approach, and that real time performance can be obtained by pyramid implementations of the proposed concepts.

  • 6.
    Laptev, Ivan
    KTH, Superseded Departments (pre-2005), Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, Superseded Departments (pre-2005), Numerical Analysis and Computer Science, NADA.
    A multi-scale feature likelihood map for direct evaluation of object hypotheses (2001). In: Proc. Scale-Space and Morphology in Computer Vision, Springer Berlin/Heidelberg, 2001, Vol. 2106, p. 98-110. Conference paper (Refereed)
    Abstract [en]

    This paper develops and investigates a new approach for evaluating feature based object hypotheses in a direct way. The idea is to compute a feature likelihood map (FLM), which is a function normalized to the interval [0, 1], and which approximates the likelihood of image features at all points in scale-space. In our case, the FLM is defined from Gaussian derivative operators and in such a way that it assumes its strongest responses near the centers of symmetric blob-like or elongated ridge-like structures and at scales that reflect the size of these structures in the image domain. While the FLM inherits several advantages of feature based image representations, it also (i) avoids the need for explicit search when matching features in object models to image data, and (ii) eliminates the need for thresholds present in most traditional feature based approaches. In an application presented in this paper, the FLM is applied to simultaneous tracking and recognition of hand models based on particle filtering. The experiments demonstrate the feasibility of the approach, and that real time performance can be obtained by a pyramid implementation of the proposed concept.

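Records 5 and 6 both build on a feature likelihood map: a function normalised to [0, 1] that responds strongly near blob- or ridge-like structures at scales matching their size. The snippet below is only a loose illustration of that idea, using a scale-normalised Laplacian over a few scales and a simple rescaling to [0, 1]; it is not the papers' actual FLM construction:

```python
import numpy as np

def gauss1d(sigma):
    """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
    r = int(3 * sigma + 0.5)
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def smooth(img, sigma):
    """Separable Gaussian smoothing via 1-D convolutions."""
    g = gauss1d(sigma)
    tmp = np.apply_along_axis(lambda v: np.convolve(v, g, "same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, g, "same"), 1, tmp)

def blob_likelihood_map(img, sigmas):
    """sigma^2 * |Laplacian of Gaussian|, maximised over scales and
    rescaled to [0, 1]: a loose stand-in for a feature likelihood map."""
    resp = []
    for s in sigmas:
        L = smooth(img.astype(float), s)
        # 5-point discrete Laplacian of the smoothed image.
        lap = (np.roll(L, 1, 0) + np.roll(L, -1, 0)
               + np.roll(L, 1, 1) + np.roll(L, -1, 1) - 4 * L)
        resp.append((s ** 2) * np.abs(lap))
    m = np.max(resp, axis=0)
    return m / m.max()

# The map responds strongly around a blob-like structure.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
fmap = blob_likelihood_map(img, sigmas=[1.0, 2.0, 4.0])
```

The published concept additionally calibrates the map so its values approximate feature likelihoods directly, which is what removes the thresholds mentioned in the abstract.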
  • 7.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Interest point detection and scale selection in space-time (2003). In: Scale Space Methods in Computer Vision: 4th International Conference, Scale Space 2003, Isle of Skye, UK, June 10–12, 2003, Proceedings, Springer Berlin/Heidelberg, 2003, Vol. 2695, p. 372-387. Conference paper (Refereed)
    Abstract [en]

    Several types of interest point detectors have been proposed for spatial images. This paper investigates how this notion can be generalised to the detection of interesting events in space-time data. Moreover, we develop a mechanism for spatio-temporal scale selection and detect events at scales corresponding to their extent in both space and time. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect regions in space-time where the image structures have significant local variations in both space and time. In this way, events that correspond to curved space-time structures are emphasised, while structures with locally constant motion are disregarded. To construct this operator, we start from a multi-scale windowed second moment matrix in space-time, and combine the determinant and the trace in a similar way as for the spatial Harris operator. All space-time maxima of this operator are then adapted to characteristic scales by maximising a scale-normalised space-time Laplacian operator over both spatial scales and temporal scales. The motivation for performing temporal scale selection as a complement to previous approaches of spatial scale selection is to be able to robustly capture spatio-temporal events of different temporal extent. It is shown that the resulting approach is truly scale invariant with respect to both spatial scales and temporal scales. The proposed concept is tested on synthetic and real image sequences. It is shown that the operator responds to distinct and stable points in space-time that often correspond to interesting events. The potential applications of the method are discussed.

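The determinant-and-trace combination described above can be sketched directly. This is an illustrative stand-in that uses finite differences and box smoothing in place of the paper's Gaussian spatio-temporal scale-space, and omits the scale-selection step; the k value is in the range typically quoted for spatio-temporal Harris operators:

```python
import numpy as np

def space_time_harris(seq, k=0.005, smooth=2):
    """Illustrative space-time interest operator H = det(mu) - k * trace(mu)^3
    on a (T, Y, X) sequence, where mu is the windowed second-moment matrix."""
    Lt, Ly, Lx = np.gradient(seq.astype(float))
    win = np.ones(2 * smooth + 1) / (2 * smooth + 1)

    def w(a):
        # Crude separable box filter as the integration window.
        for ax in range(3):
            a = np.apply_along_axis(lambda v: np.convolve(v, win, "same"), ax, a)
        return a

    # Components of the symmetric 3x3 second-moment matrix.
    mxx, myy, mtt = w(Lx * Lx), w(Ly * Ly), w(Lt * Lt)
    mxy, mxt, myt = w(Lx * Ly), w(Lx * Lt), w(Ly * Lt)
    det = (mxx * (myy * mtt - myt ** 2)
           - mxy * (mxy * mtt - myt * mxt)
           + mxt * (mxy * myt - myy * mxt))
    trace = mxx + myy + mtt
    return det - k * trace ** 3

# A translating bright square: significant variation in both space and time.
seq = np.zeros((10, 16, 16))
for t in range(10):
    seq[t, 6:10, 2 + t:6 + t] = 1.0
H = space_time_harris(seq)
```

Points with locally constant motion make the second-moment matrix rank-deficient, so the determinant (and hence H) stays small there, which is exactly why the operator favours curved space-time structures.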
  • 8.
    Laptev, Ivan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Local descriptors for spatio-temporal recognition (2006). In: Spatial Coherence for Visual Motion Analysis: First International Workshop, SCVMA 2004, Prague, Czech Republic, May 15, 2004, Revised Papers / [ed] MacLean, WJ, Springer Berlin/Heidelberg, 2006, Vol. 3667, p. 91-103. Conference paper (Refereed)
    Abstract [en]

    This paper presents and investigates a set of local space-time descriptors for representing and recognizing motion patterns in video. Following the idea of local features in the spatial domain, we use the notion of space-time interest points and represent video data in terms of local space-time events. To describe such events, we define several types of image descriptors over local spatio-temporal neighborhoods and evaluate these descriptors in the context of recognizing human activities. In particular, we compare motion representations in terms of spatio-temporal jets, position dependent histograms, position independent histograms, and principal component analysis computed for either spatio-temporal gradients or optic flow. An experimental evaluation on a video database with human actions shows that high classification performance can be achieved, and that there is a clear advantage of using local position dependent histograms, consistent with previously reported findings regarding spatial recognition.

  • 9.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    On Space-Time Interest Points (2003). Report (Other academic)
    Abstract [en]

    Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features capture interesting events in video and can be used for a compact representation and for interpretation of video data.

    To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.

  • 10.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Space-time interest points (2003). In: Proceedings of Ninth IEEE International Conference on Computer Vision, 2003: ICCV'03, IEEE conference proceedings, 2003, p. 432-439. Conference paper (Refereed)
    Abstract [en]

    Local image features or interest points provide compact and abstract representations of patterns in an image. We propose to extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for its interpretation. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We then estimate the spatio-temporal extents of the detected events and compute their scale-invariant spatio-temporal descriptors. Using such descriptors, we classify events and construct video representation in terms of labeled space-time points. For the problem of human motion analysis, we illustrate how the proposed method allows for detection of walking people in scenes with occlusions and dynamic backgrounds.

  • 11.
    Laptev, Ivan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Tracking of multi-state hand models using particle filtering and a hierarchy of multi-scale image features (2001). Report (Refereed)
    Abstract [en]

    This paper presents an approach for simultaneous tracking and recognition of hierarchical object representations in terms of multiscale image features. A scale-invariant dissimilarity measure is proposed for comparing scale-space features at different positions and scales. Based on this measure, the likelihood of hierarchical, parameterized models can be evaluated in such a way that maximization of the measure over different models and their parameters allows for both model selection and parameter estimation. Then, within the framework of particle filtering, we consider the area of hand gesture analysis, and present a method for simultaneous tracking and recognition of hand models under variations in the position, orientation, size and posture of the hand. In this way, qualitative hand states and quantitative hand motions can be captured, and be used for controlling different types of computerised equipment.

  • 12.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Tracking of multi-state hand models using particle filtering and a hierarchy of multi-scale image features (2001). In: Scale-Space and Morphology in Computer Vision: Third International Conference, Scale-Space 2001, Vancouver, Canada, July 7–8, 2001, Proceedings, Springer Berlin/Heidelberg, 2001, Vol. 2106, p. 63-74. Conference paper (Refereed)
    Abstract [en]

    This paper presents an approach for simultaneous tracking and recognition of hierarchical object representations in terms of multiscale image features. A scale-invariant dissimilarity measure is proposed for comparing scale-space features at different positions and scales. Based on this measure, the likelihood of hierarchical, parameterized models can be evaluated in such a way that maximization of the measure over different models and their parameters allows for both model selection and parameter estimation. Then, within the framework of particle filtering, we consider the area of hand gesture analysis, and present a method for simultaneous tracking and recognition of hand models under variations in the position, orientation, size and posture of the hand. In this way, qualitative hand states and quantitative hand motions can be captured, and be used for controlling different types of computerised equipment.

  • 13.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Velocity adaptation of space-time interest points (2004). In: Proceedings of the 17th International Conference on Pattern Recognition, 2004, ICPR 2004 / [ed] Kittler, J; Petrou, M; Nixon, M, IEEE conference proceedings, 2004, p. 52-56. Conference paper (Refereed)
    Abstract [en]

    The notion of local features in space-time has recently been proposed to capture and describe local events in video. When computing space-time descriptors, however, the result may strongly depend on the relative motion between the object and the camera. To compensate for this variation, we present a method that automatically adapts the features to the local velocity of the image pattern and, hence, results in a video representation that is stable with respect to different amounts of camera motion. Experimentally we show that the use of velocity adaptation substantially increases the repeatability of interest points as well as the stability of their associated descriptors. Moreover for an application to human action recognition we demonstrate how velocity adapted features enable recognition of human actions in situations with unknown camera motion and complex, nonstationary backgrounds.

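Velocity adaptation, as used in records 13–15 and 17–18, compensates for constant relative motion via a Galilean transformation x' = x - vx·t, y' = y - vy·t. A crude illustration is to resample the sequence itself with integer-pixel shifts; note that the papers instead adapt the filter kernels (the shapes of the receptive fields) to the local image velocity:

```python
import numpy as np

def galilean_warp(seq, vx, vy):
    """Stabilize a (T, Y, X) sequence under x' = x - vx*t, y' = y - vy*t.

    Integer-pixel shifts (np.roll wraps at the border), for illustration only.
    """
    out = np.zeros_like(seq)
    for t in range(seq.shape[0]):
        out[t] = np.roll(np.roll(seq[t], -int(round(vy * t)), axis=0),
                         -int(round(vx * t)), axis=1)
    return out

# A pattern translating at 1 px/frame becomes stationary after warping
# with the matching velocity.
seq = np.zeros((5, 8, 8))
for t in range(5):
    seq[t, 3, 1 + t] = 1.0
stab = galilean_warp(seq, vx=1.0, vy=0.0)
```

After the correct warp, every frame is identical, so descriptors computed in the warped frame no longer depend on the camera's constant motion; estimating that velocity locally is the hard part the papers address.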
  • 14.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Velocity adaptation of spatio-temporal receptive fields for direct recognition of activities: an experimental study (2004). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 22, no 2, p. 105-116. Article in journal (Refereed)
    Abstract [en]

    This article presents an experimental study of the influence of velocity adaptation when recognizing spatio-temporal patterns using a histogram-based statistical framework. The basic idea consists of adapting the shapes of the filter kernels to the local direction of motion, so as to allow the computation of image descriptors that are invariant to the relative motion in the image plane between the camera and the objects or events that are studied. Based on a framework of recursive spatio-temporal scale-space, we first outline how a straightforward mechanism for local velocity adaptation can be expressed. Then, for a test problem of recognizing activities, we present an experimental evaluation, which shows the advantages of using velocity-adapted spatio-temporal receptive fields, compared to directional derivatives or regular partial derivatives for which the filter kernels have not been adapted to the local image motion.

  • 15.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lindeberg, Tony
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Velocity-adapted spatio-temporal receptive fields for direct recognition of activities (2002). In: Proc. ECCV'02 Workshop on Statistical Methods in Video Processing, 2002, p. 61-66. Conference paper (Refereed)
    Abstract [en]

    This article presents an experimental study of the influence of velocity adaptation when recognizing spatio-temporal patterns using a histogram-based statistical framework. The basic idea consists of adapting the shapes of the filter kernels to the local direction of motion, so as to allow the computation of image descriptors that are invariant to the relative motion in the image plane between the camera and the objects or events that are studied. Based on a framework of recursive spatio-temporal scale-space, we first outline how a straightforward mechanism for local velocity adaptation can be expressed. Then, for a test problem of recognizing activities, we present an experimental evaluation, which shows the advantages of using velocity-adapted spatio-temporal receptive fields, compared to directional derivatives or regular partial derivatives for which the filter kernels have not been adapted to the local image motion.

  • 16.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Mayer, H.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Eckstein, W.
    Steger, C.
    Baumgartner, A.
    Automatic extraction of roads from aerial images based on scale space and snakes (2000). In: Machine Vision and Applications, ISSN 0932-8092, E-ISSN 1432-1769, Vol. 12, no 1, p. 23-31. Article in journal (Refereed)
    Abstract [en]

    We propose a new approach for automatic road extraction from aerial imagery, with a model and a strategy mainly based on the multi-scale detection of roads in combination with geometry-constrained edge extraction using snakes. A main advantage of our approach is that it allows, for the first time, bridging of shadows and partially occluded areas using the heavily disturbed evidence in the image. Additionally, it has only a few parameters to be adjusted. The road network is constructed after extracting crossings with varying shape and topology. We show the feasibility of the approach not only by presenting reasonable results but also by evaluating them quantitatively based on ground truth.

  • 17.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Akbarzadeh, A.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Galilean-corrected spatio-temporal interest operators (2004). Report (Other academic)
    Abstract [en]

    This paper presents a set of image operators for detecting regions in space-time where interesting events occur. To define such regions of interest, we compute a spatio-temporal second-moment matrix from a spatio-temporal scale-space representation, and diagonalize this matrix locally, using a local Galilean transformation in space-time, optionally combined with a spatial rotation, so as to make the Galilean invariant degrees of freedom explicit. From the Galilean-diagonalized descriptor so obtained, we then formulate different types of space-time interest operators, and illustrate their properties on different types of image sequences.

  • 18.
    Lindeberg, Tony
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Akbarzadeh, Amir
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Galilean-diagonalized spatio-temporal interest operators (2004). In: Proc. 17th International Conference on Pattern Recognition (ICPR), 2004, p. 57-62. Conference paper (Refereed)
    Abstract [en]

    This paper presents a set of image operators for detecting regions in space-time where interesting events occur. To define such regions of interest, we compute a spatio-temporal second-moment matrix from a spatio-temporal scale-space representation, and diagonalize this matrix locally, using a local Galilean transformation in space-time, optionally combined with a spatial rotation, so as to make the Galilean invariant degrees of freedom explicit. From the Galilean-diagonalized descriptor so obtained, we then formulate different types of space-time interest operators, and illustrate their properties on different types of image sequences.

  • 19.
    Parizi, Sobhan Naderi
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Laptev, Ivan
    Targhi, Alireza Tavakoli
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Modeling Image Context using Object Centered Grid (2009). In: 2009 Digital Image Computing: Techniques and Applications (DICTA 2009), New York: IEEE, 2009, p. 476-483. Conference paper (Refereed)
    Abstract [en]

    Context plays a valuable role in any image understanding task, as confirmed by numerous studies that have shown the importance of contextual information in computer vision tasks such as object detection, scene classification and image retrieval. Studies of human perception on the tasks of scene classification and visual search have shown that the human visual system makes extensive use of contextual information as post-processing in order to index objects. Several recent computer vision approaches use contextual information to improve object recognition performance. They mainly use global information of the whole image by dividing the image into several pre-defined subregions, a so-called fixed grid. In this paper we propose an alternative approach to the retrieval of contextual information, by customizing the location of the grid based on salient objects in the image. We claim that this approach results in more informative contextual features compared to the fixed-grid strategy. To compare our results with the most relevant and recent work, we use the PASCAL 2007 data set. Our experimental results show an improvement in terms of Mean Average Precision.

  • 20.
    Schüldt, Christian
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Laptev, Ivan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Caputo, Barbara
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Recognizing human actions: A local SVM approach (2004). In: Proceedings of the 17th International Conference on Pattern Recognition, Vol. 3 / [ed] Kittler, J; Petrou, M; Nixon, M, 2004, p. 32-36. Conference paper (Refereed)
    Abstract [en]

    Local space-time features capture local events in video and can be adapted to the size, the frequency and the velocity of moving patterns. In this paper we demonstrate how such features can be used for recognizing complex motion patterns. We construct video representations in terms of local space-time features and integrate such representations with SVM classification schemes for recognition. For the purpose of evaluation we introduce a new video database containing 2391 sequences of six human actions performed by 25 people in four different scenarios. The presented results of action recognition justify the proposed method and demonstrate its advantage compared to other related approaches for action recognition.

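The local-feature-plus-classifier pipeline of record 20 can be caricatured in a few lines. The sketch below substitutes a chi-squared nearest-neighbour classifier for the paper's SVM, and the codebook and the two synthetic "actions" are purely illustrative; only the bag-of-local-features histogram step mirrors the representation described:

```python
import numpy as np

rng = np.random.default_rng(1)

def bag_of_features(descriptors, codebook):
    """Histogram of nearest-codeword assignments (bag of local features).
    The codebook stands in for one learned by clustering training descriptors."""
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def chi2_nn_classify(query, train_hists, train_labels):
    """Nearest neighbour under the chi-squared distance, a common choice
    for comparing histograms; an NN stand-in for the paper's SVM."""
    d = ((train_hists - query) ** 2 / (train_hists + query + 1e-10)).sum(1)
    return train_labels[d.argmin()]

codebook = rng.normal(size=(8, 4))  # hypothetical visual vocabulary
# Two synthetic "actions" whose descriptors cluster near different codewords.
a = bag_of_features(codebook[0] + 0.1 * rng.normal(size=(50, 4)), codebook)
b = bag_of_features(codebook[5] + 0.1 * rng.normal(size=(50, 4)), codebook)
query = bag_of_features(codebook[0] + 0.1 * rng.normal(size=(50, 4)), codebook)
label = chi2_nn_classify(query, np.stack([a, b]), np.array(["walk", "wave"]))
```

In the actual paper the descriptors are space-time jet responses at detected interest points and the classifier is an SVM with kernels suited to such local-feature representations; the histogram-then-compare structure is the part this sketch preserves.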