1 - 11 of 11
  • 1.
    Bergholm, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Adler, Jeremy
    Parmryd, Ingela
    Analysis of Bias in the Apparent Correlation Coefficient Between Image Pairs Corrupted by Severe Noise (2010). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 37, no 3, p. 204-219. Article in journal (Refereed)
    Abstract [en]

    The correlation coefficient r is a measure of similarity used to compare regions of interest in image pairs. In fluorescence microscopy there is a basic tradeoff between the degree of image noise and the frequency with which images can be acquired, and therefore the ability to follow dynamic events. The correlation coefficient r is commonly used in fluorescence microscopy for colocalization measurements, when the relative distributions of two fluorophores are of interest. Unfortunately, r is known to be biased, understating the true correlation when noise is present. A better measure of correlation is needed. This article analyses the expected value of r and derives expected-value formulas that provide a procedure for evaluating the bias of r. A Taylor series of so-called invariant factors is analyzed in detail. These formulas indicate ways to correct r and thereby obtain a corrected value that is free from the influence of noise and is on average accurate (unbiased). One possible correction is the attenuated corrected correlation coefficient R, introduced heuristically by Spearman (in Am. J. Psychol. 15:72-101, 1904). An ideal correction formula in terms of expected values is derived. For large samples, R tends towards the ideal correction formula and the true noise-free correlation. Correlation measurements using simulation based on the types of noise found in fluorescence microscopy images illustrate both the power of the method and the variance of R. We conclude that the correction formula is valid and is particularly useful for making correct analyses from very noisy datasets.

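    As an illustration of the attenuation-corrected correlation coefficient R discussed in the abstract above, the following sketch applies the classical Spearman correction under the assumption that two independently noisy replicate images are available per channel, so that the channel reliabilities can be estimated from replicate-to-replicate correlations. This is a minimal stand-in, not the authors' expected-value-based estimator; all function and variable names are illustrative.

```python
import numpy as np

def spearman_corrected_correlation(a1, a2, b1, b2):
    """Attenuation-corrected correlation between two noisy image channels.

    a1, a2 : two replicate (independently noisy) images of channel A.
    b1, b2 : two replicate images of channel B.
    Classical Spearman (1904) correction: divide the observed cross-channel
    correlation by the geometric mean of the replicate reliabilities.
    """
    a1, a2, b1, b2 = (np.ravel(x).astype(float) for x in (a1, a2, b1, b2))

    def corr(x, y):
        return np.corrcoef(x, y)[0, 1]

    r_ab = 0.5 * (corr(a1, b1) + corr(a2, b2))  # observed, noise-attenuated r
    r_aa = corr(a1, a2)                          # reliability of channel A
    r_bb = corr(b1, b2)                          # reliability of channel B
    return r_ab / np.sqrt(r_aa * r_bb)


# Toy example: a shared underlying signal plus independent noise per replicate.
rng = np.random.default_rng(0)
signal = rng.normal(size=10_000)

def noisy(scale):
    return signal + rng.normal(scale=scale, size=signal.size)

R = spearman_corrected_correlation(noisy(2.0), noisy(2.0), noisy(2.0), noisy(2.0))
print(R)  # close to 1, whereas the raw correlation r would be strongly attenuated
```
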
  • 2.
    Jansson, Ylva
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Lindeberg, Tony
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    Dynamic texture recognition using time-causal and time-recursive spatio-temporal receptive fields (2018). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 60, no 9, p. 1369-1398. Article in journal (Refereed)
    Abstract [en]

    This work presents a first evaluation of using spatio-temporal receptive fields from a recently proposed time-causal spatio-temporal scale-space framework as primitives for video analysis. We propose a new family of video descriptors based on regional statistics of spatio-temporal receptive field responses and evaluate this approach on the problem of dynamic texture recognition. Our approach generalises a previously used method, based on joint histograms of receptive field responses, from the spatial to the spatio-temporal domain and from object recognition to dynamic texture recognition. The time-recursive formulation enables computationally efficient time-causal recognition. The experimental evaluation demonstrates competitive performance compared to the state of the art. In particular, it is shown that binary versions of our dynamic texture descriptors achieve improved performance compared to a large range of similar methods using different primitives, either handcrafted or learned from data. Further, our qualitative and quantitative investigation into parameter choices and the use of different sets of receptive fields highlights the robustness and flexibility of our approach. Together, these results support the descriptive power of this family of time-causal spatio-temporal receptive fields, validate our approach for dynamic texture recognition and point towards the possibility of designing a range of video analysis methods based on these new time-causal spatio-temporal primitives.

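    The descriptor construction described above, regional statistics of receptive field responses generalized to the spatio-temporal domain, can be sketched roughly as follows. For simplicity the sketch uses non-causal Gaussian spatio-temporal derivatives from scipy as stand-ins for the paper's time-causal, time-recursive receptive fields, and binarizes responses by sign before forming a joint histogram; the filter bank and parameter choices are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dynamic_texture_descriptor(video, sigma_s=2.0, sigma_t=2.0):
    """Joint histogram of binarized spatio-temporal derivative responses.

    video : ndarray of shape (T, H, W), grayscale frames.
    Returns a normalized histogram over the 2**n_filters joint sign patterns.
    NOTE: non-causal Gaussian derivatives are used here for simplicity; the
    paper uses time-causal, time-recursive receptive fields.
    """
    sig = (sigma_t, sigma_s, sigma_s)
    # A small bank of first- and second-order derivative responses (t, y, x)
    orders = [
        (1, 0, 0),  # d/dt
        (0, 1, 0),  # d/dy
        (0, 0, 1),  # d/dx
        (2, 0, 0),  # d2/dt2
        (0, 2, 0),  # d2/dy2
        (0, 0, 2),  # d2/dx2
    ]
    responses = [gaussian_filter(video.astype(float), sig, order=o) for o in orders]

    # Binarize each response by its sign and pack the bits into a joint code
    codes = np.zeros(video.shape, dtype=np.int64)
    for bit, resp in enumerate(responses):
        codes |= (resp > 0).astype(np.int64) << bit

    hist = np.bincount(codes.ravel(), minlength=2 ** len(orders)).astype(float)
    return hist / hist.sum()

# Usage: descriptors of two clips can be compared with e.g. a chi-squared distance.
```
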
  • 3.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Discrete Derivative Approximations with Scale-Space Properties: A Basis for Low-Level Feature Extraction (1993). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 3, no 4, p. 349-376. Article in journal (Refereed)
    Abstract [en]

    This article shows how discrete derivative approximations can be defined so that scale-space properties hold exactly also in the discrete domain. Starting from a set of natural requirements on the first processing stages of a visual system, the visual front end, it gives an axiomatic derivation of how a multiscale representation of derivative approximations can be constructed from a discrete signal, so that it possesses an algebraic structure similar to that possessed by the derivatives of the traditional scale-space representation in the continuous domain. A family of kernels is derived that constitute discrete analogues to the continuous Gaussian derivatives. The representation has theoretical advantages over other discretizations of the scale-space theory in the sense that operators that commute before discretization commute after discretization. Some computational implications of this are that derivative approximations can be computed directly from smoothed data and that this will give exactly the same result as convolution with the corresponding derivative approximation kernel. Moreover, a number of normalization conditions are automatically satisfied. The proposed methodology leads to a scheme of computations of multiscale low-level feature extraction that is conceptually very simple and consists of four basic steps: (i) large support convolution smoothing, (ii) small support difference computations, (iii) point operations for computing differential geometric entities, and (iv) nearest-neighbour operations for feature detection. Applications demonstrate how the proposed scheme can be used for edge detection and junction detection based on derivatives up to order three.

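    A minimal one-dimensional sketch of the scheme outlined in the abstract: smoothing with the discrete analogue of the Gaussian kernel, T(n; t) = e^{-t} I_n(t) with I_n the modified Bessel functions of integer order, followed by small-support difference operators applied to the smoothed data. The truncation radius and the renormalization of the truncated kernel are illustrative choices, not prescribed by the article.

```python
import numpy as np
from scipy.special import ive  # ive(n, t) = exp(-|t|) * I_n(t)

def discrete_gaussian_kernel(t, radius=None):
    """Discrete analogue of the Gaussian kernel, T(n; t) = exp(-t) * I_n(t)."""
    if radius is None:
        radius = int(np.ceil(4 * np.sqrt(t))) + 1
    n = np.arange(-radius, radius + 1)
    kernel = ive(np.abs(n), t)
    return kernel / kernel.sum()   # renormalize the truncated kernel

def smoothed_derivatives(signal, t):
    """Smooth with the discrete Gaussian, then apply small-support differences."""
    L = np.convolve(signal, discrete_gaussian_kernel(t), mode="same")
    Lx = np.convolve(L, [0.5, 0.0, -0.5], mode="same")   # central difference
    Lxx = np.convolve(L, [1.0, -2.0, 1.0], mode="same")  # second difference
    return L, Lx, Lxx

# Example: derivative approximations of a noisy step edge at scale t = 4
x = np.linspace(-10, 10, 201)
f = (x > 0).astype(float) + 0.05 * np.random.default_rng(1).normal(size=x.size)
L, Lx, Lxx = smoothed_derivatives(f, t=4.0)
```
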
  • 4.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Generalized Gaussian Scale-Space Axiomatics Comprising Linear Scale-Space, Affine Scale-Space and Spatio-Temporal Scale-Space (2011). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 40, no 1, p. 36-81. Article in journal (Refereed)
    Abstract [en]

    This paper describes a generalized axiomatic scale-space theory that makes it possible to derive the notions of linear scale-space, affine Gaussian scale-space and linear spatio-temporal scale-space using a similar set of assumptions (scale-space axioms). The notion of non-enhancement of local extrema is generalized from previous application over discrete and rotationally symmetric kernels to continuous and more general non-isotropic kernels over both spatial and spatio-temporal image domains. It is shown how a complete classification can be given of the linear (Gaussian) scale-space concepts that satisfy these conditions on isotropic spatial, non-isotropic spatial and spatio-temporal domains, which results in a general taxonomy of Gaussian scale-spaces for continuous image data. The resulting theory allows filter shapes to be tuned from specific context information and provides a theoretical foundation for the recently exploited mechanisms of shape adaptation and velocity adaptation, with highly useful applications in computer vision. It is also shown how time-causal spatio-temporal scale-spaces can be derived from similar assumptions. The mathematical structure of these scale-spaces is analyzed in detail concerning transformation properties over space and time, the temporal cascade structure they satisfy over time as well as properties of the resulting multi-scale spatio-temporal derivative operators. It is also shown how temporal derivatives with respect to transformed time can be defined, leading to the formulation of a novel analogue of scale normalized derivatives for time-causal scale-spaces. The kernels generated from these two types of theories have interesting relations to biological vision. We show how filter kernels generated from the Gaussian spatio-temporal scale-space as well as the time-causal spatio-temporal scale-space relate to spatio-temporal receptive field profiles registered from mammalian vision. Specifically, we show that there are close analogies to space-time separable cells in the LGN as well as to both space-time separable and non-separable cells in the striate cortex. We do also present a set of plausible models for complex cells using extended quasi-quadrature measures expressed in terms of scale normalized spatio-temporal derivatives. The theories presented as well as their relations to biological vision show that it is possible to describe a general set of Gaussian and/or time-causal scale-spaces using a unified framework, which generalizes and complements previously presented scale-space formulations in this area.

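    To make the notion of affine Gaussian scale-space concrete, here is a small sketch of convolution with an anisotropic Gaussian kernel parameterized by a spatial covariance matrix, which is the basic operation underlying the shape adaptation mechanisms mentioned above. The truncation radius and the example covariance matrix are illustrative choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def affine_gaussian_kernel(Sigma, radius):
    """2-D anisotropic (affine) Gaussian kernel with covariance matrix Sigma."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    pts = np.stack([x.ravel(), y.ravel()])            # 2 x N coordinate matrix
    inv = np.linalg.inv(Sigma)
    quad = np.einsum("in,ij,jn->n", pts, inv, pts)    # x^T Sigma^{-1} x per point
    kernel = np.exp(-0.5 * quad).reshape(x.shape)
    return kernel / kernel.sum()

def affine_scale_space(image, Sigma):
    """Affine Gaussian smoothing of a 2-D image with covariance Sigma."""
    radius = int(np.ceil(3 * np.sqrt(np.max(np.linalg.eigvalsh(Sigma)))))
    return fftconvolve(image, affine_gaussian_kernel(Sigma, radius), mode="same")

# Example: elongated smoothing along the x-direction (shape-adapted kernel)
Sigma = np.array([[16.0, 0.0],
                  [0.0,  4.0]])
```
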
  • 5.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Image matching using generalized scale-space interest points (2015). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 52, no 1, p. 3-36. Article in journal (Refereed)
    Abstract [en]

    The performance of matching and object recognition methods based on interest points depends on both the properties of the underlying interest points and the choice of associated image descriptors. This paper demonstrates advantages of using generalized scale-space interest point detectors in this context for selecting a sparse set of points for computing image descriptors for image-based matching.

    For detecting interest points at any given scale, we make use of the Laplacian, the determinant of the Hessian and four new unsigned or signed Hessian feature strength measures, which are defined by generalizing the definitions of the Harris and Shi-and-Tomasi operators from the second moment matrix to the Hessian matrix. Then, feature selection over different scales is performed either by scale selection from local extrema over scale of scale-normalized derivatives or by linking features over scale into feature trajectories and computing a significance measure from an integrated measure of normalized feature strength over scale.

    A theoretical analysis is presented of the robustness of the differential entities underlying these interest points under image deformations, in terms of invariance properties under affine image deformations or approximations thereof. Disregarding the effect of the rotationally symmetric scale-space smoothing operation, the determinant of the Hessian is a truly affine covariant differential entity and two of the new Hessian feature strength measures have a major contribution from the affine covariant determinant of the Hessian, implying that local extrema of these differential entities will be more robust under affine image deformations than local extrema of the Laplacian operator or the two other new Hessian feature strength measures.

    It is shown how these generalized scale-space interest points allow for a higher ratio of correct matches and a lower ratio of false matches compared to previously known interest point detectors within the same class. The best results are obtained using interest points computed with scale linking, with the new Hessian feature strength measures and the determinant of the Hessian being the differential entities that lead to the best matching performance under perspective image transformations with significant foreshortening, outperforming the more commonly used Laplacian operator, its difference-of-Gaussians approximation and the Harris-Laplace operator.

    We propose that these generalized scale-space interest points, when accompanied by associated local scale-invariant image descriptors, should allow for better performance of interest point based methods for image-based matching, object recognition and related visual tasks.

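    A minimal sketch of one of the detectors discussed above: interest points as local extrema over space and scale of the scale-normalized determinant of the Hessian (the Laplacian case is analogous; the Hessian feature strength measures and the scale-linking mechanism are not shown). The scale sampling and the threshold are illustrative, and the image is assumed to have values in [0, 1].

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_scale_space_interest_points(image, scales=(1, 2, 4, 8, 16), threshold=0.01):
    """Interest points from local extrema over space and scale of the
    scale-normalized determinant of the Hessian."""
    image = image.astype(float)
    responses = []
    for t in scales:                       # t is the scale-space variance
        s = np.sqrt(t)
        Lxx = gaussian_filter(image, s, order=(0, 2))
        Lyy = gaussian_filter(image, s, order=(2, 0))
        Lxy = gaussian_filter(image, s, order=(1, 1))
        responses.append(t ** 2 * (Lxx * Lyy - Lxy ** 2))   # det H, scale-normalized
    stack = np.stack(responses)            # shape: (n_scales, H, W)

    # Local maxima over the joint (scale, y, x) neighbourhood, above a threshold
    is_max = (stack == maximum_filter(stack, size=3)) & (stack > threshold)
    idx_scale, idx_y, idx_x = np.nonzero(is_max)
    return [(scales[k], y, x) for k, y, x in zip(idx_scale, idx_y, idx_x)]
```
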
  • 6.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Scale Selection Properties of Generalized Scale-Space Interest Point Detectors (2013). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 46, no 2, p. 177-210. Article in journal (Refereed)
    Abstract [en]

    Scale-invariant interest points have found several highly successful applications in computer vision, in particular for image-based matching and recognition. This paper presents a theoretical analysis of the scale selection properties of a generalized framework for detecting interest points from scale-space features presented in Lindeberg (Int. J. Comput. Vis. 2010, under revision) and comprising: an enriched set of differential interest operators at a fixed scale including the Laplacian operator, the determinant of the Hessian, the new Hessian feature strength measures I and II and the rescaled level curve curvature operator, as well as an enriched set of scale selection mechanisms including scale selection based on local extrema over scale, complementary post-smoothing after the computation of non-linear differential invariants and scale selection based on weighted averaging of scale values along feature trajectories over scale. A theoretical analysis of the sensitivity to affine image deformations is presented, and it is shown that the scale estimates obtained from the determinant of the Hessian operator are affine covariant for an anisotropic Gaussian blob model. Among the other purely second-order operators, the Hessian feature strength measure I has the lowest sensitivity to non-uniform scaling transformations, followed by the Laplacian operator and the Hessian feature strength measure II. The predictions from this theoretical analysis agree with experimental results of the repeatability properties of the different interest point detectors under affine and perspective transformations of real image data. A number of less complete results are derived for the level curve curvature operator.

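    For reference, the standard scale-normalized forms of two of the operators in the enriched operator set are written out below. The Hessian feature strength measures I and II generalize the Harris and Shi-and-Tomasi measures from the second moment matrix to the Hessian matrix; their exact scale-normalized definitions should be taken from the article itself.

```latex
% Scale-normalized differential interest operators at spatial scale t
% (L denotes the Gaussian scale-space representation of the image):
\nabla^2_{\mathrm{norm}} L = t \, (L_{xx} + L_{yy}),
\qquad
\det \mathcal{H}_{\mathrm{norm}} L = t^2 \, (L_{xx} L_{yy} - L_{xy}^2).
% The Hessian feature strength measures I and II are obtained by replacing the
% second moment matrix in the Harris and Shi-and-Tomasi measures by the Hessian
% matrix; see the article above for their exact scale-normalized forms.
```
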
  • 7.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Spatio-temporal scale selection in video data (2018). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 60, no 4, p. 525-562. Article in journal (Refereed)
    Abstract [en]

    This work presents a theory and methodology for simultaneous detection of local spatial and temporal scales in video data. The underlying idea is that if we process video data by spatio-temporal receptive fields at multiple spatial and temporal scales, we would like to generate hypotheses about the spatial extent and the temporal duration of the underlying spatio-temporal image structures that gave rise to the feature responses.

    For two types of spatio-temporal scale-space representations, (i) a non-causal Gaussian spatio-temporal scale space for offline analysis of pre-recorded video sequences and (ii) a time-causal and time-recursive spatio-temporal scale space for online analysis of real-time video streams, we express sufficient conditions for spatio-temporal feature detectors in terms of spatio-temporal receptive fields to deliver scale covariant and scale invariant feature responses.

    We present an in-depth theoretical analysis of the scale selection properties of eight types of spatio-temporal interest point detectors in terms of either: (i)-(ii) the spatial Laplacian applied to the first- and second-order temporal derivatives, (iii)-(iv) the determinant of the spatial Hessian applied to the first- and second-order temporal derivatives, (v) the determinant of the spatio-temporal Hessian matrix, (vi) the spatio-temporal Laplacian and (vii)-(viii) the first- and second-order temporal derivatives of the determinant of the spatial Hessian matrix. It is shown that seven of these spatio-temporal feature detectors allow for provable scale covariance and scale invariance. Then, we describe a time-causal and time-recursive algorithm for detecting sparse spatio-temporal interest points from video streams and show that it leads to intuitively reasonable results.

    An experimental quantification of the accuracy of the spatio-temporal scale estimates and the amount of temporal delay obtained from these spatio-temporal interest point detectors is given, showing that: (i) the spatial and temporal scale selection properties predicted by the continuous theory are well preserved in the discrete implementation and (ii) the spatial Laplacian or the determinant of the spatial Hessian applied to the first- and second-order temporal derivatives lead to much shorter temporal delays in a time-causal implementation compared to the determinant of the spatio-temporal Hessian or the first- and second-order temporal derivatives of the determinant of the spatial Hessian matrix.

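    A rough sketch of one detector family from the list above, the spatial Laplacian applied to the first-order temporal derivative, computed here with non-causal Gaussian derivatives over a pre-recorded video volume. The scale-normalization factor used in the sketch is a simplified placeholder; the article derives the exact normalization powers as well as the time-causal and time-recursive variant, neither of which is reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_of_temporal_derivative(video, s, tau):
    """Detector (i) from the list above: spatial Laplacian of the first-order
    temporal derivative, computed with non-causal Gaussian derivatives.

    video : (T, H, W) array; s, tau : spatial and temporal scale-space variances.
    NOTE: the normalization factor s * sqrt(tau) is a simplified placeholder;
    the article derives the exact scale-normalization powers.
    """
    video = video.astype(float)
    sig = (np.sqrt(tau), np.sqrt(s), np.sqrt(s))       # (t, y, x) standard deviations
    Lxxt = gaussian_filter(video, sig, order=(1, 0, 2))
    Lyyt = gaussian_filter(video, sig, order=(1, 2, 0))
    return s * np.sqrt(tau) * (Lxxt + Lyyt)

# Interest points would then be taken as local extrema of this response over
# (time, space, spatial scale, temporal scale), as in the purely spatial case.
```
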
  • 8.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Temporal scale selection in time-causal scale space (2017). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 58, no 1, p. 57-101. Article in journal (Refereed)
    Abstract [en]

    When designing and developing scale selection mechanisms for generating hypotheses about characteristic scales in signals, it is essential that the selected scale levels reflect the extent of the underlying structures in the signal.

    This paper presents a theory and in-depth theoretical analysis about the scale selection properties of methods for automatically selecting local temporal scales in time-dependent signals based on local extrema over temporal scales of scale-normalized temporal derivative responses. Specifically, this paper develops a novel theoretical framework for performing such temporal scale selection over a time-causal and time-recursive temporal domain as is necessary when processing continuous video or audio streams in real time or when modelling biological perception.

    For a recently developed time-causal and time-recursive scale-space concept defined by convolution with a scale-invariant limit kernel, we show that it is possible to transfer a large number of the desirable scale selection properties that hold for the Gaussian scale-space concept over a non-causal temporal domain to this temporal scale-space concept over a truly time-causal domain. Specifically, we show that for this temporal scale-space concept, it is possible to achieve true temporal scale invariance although the temporal scale levels have to be discrete, which is a novel theoretical construction.

    The analysis starts from a detailed comparison of different temporal scale-space concepts and their relative advantages and disadvantages, leading the focus to a class of recently extended time-causal and time-recursive temporal scale-space concepts based on first-order integrators or, equivalently, truncated exponential kernels coupled in cascade. Specifically, due to the discrete nature of the temporal scale levels in this class of time-causal scale-space concepts, we study two special cases of distributing the intermediate temporal scale levels, by using either a uniform distribution in terms of the variance of the composed temporal scale-space kernel or a logarithmic distribution.

    In the case of a uniform distribution of the temporal scale levels, we show that scale selection based on local extrema of scale-normalized derivatives over temporal scales makes it possible to estimate the temporal duration of sparse local features defined in terms of temporal extrema of first- or second-order temporal derivative responses. For dense features modelled as a sine wave, the lack of temporal scale invariance does, however, constitute a major limitation for handling dense temporal structures of different temporal duration in a uniform manner.

    In the case of a logarithmic distribution of the temporal scale levels, specifically taken to the limit of a time-causal limit kernel with an infinitely dense distribution of the temporal scale levels towards zero temporal scale, we show that it is possible to achieve true temporal scale invariance to handle dense features modelled as a sine wave in a uniform manner over different temporal durations of the temporal structures, as well as to achieve more general temporal scale invariance for any signal over any temporal scaling transformation with a temporal scaling factor that is an integer power of the distribution parameter of the time-causal limit kernel.

    It is shown how these temporal scale selection properties developed for a pure temporal domain carry over to feature detectors defined over time-causal spatio-temporal and spectro-temporal domains.

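    The class of time-causal temporal scale-space concepts referred to above, first-order integrators (truncated exponential kernels) coupled in cascade with a logarithmic distribution of the temporal scale levels, can be sketched as a time-recursive filter bank. The sketch assumes that each recursive stage with time constant mu adds temporal variance mu * (1 + mu); the distribution parameter c and the number of levels are illustrative, and the article should be consulted for the exact parameterization and the limit-kernel construction.

```python
import numpy as np

def time_constants(tau_max, n_levels, c=2.0):
    """Per-stage time constants mu_k for a logarithmic distribution of the
    temporal scale levels tau_k = c**(2 * (k - n_levels)) * tau_max.
    Assumes each first-order recursive stage adds variance mu * (1 + mu)."""
    taus = tau_max * c ** (2.0 * (np.arange(1, n_levels + 1) - n_levels))
    dtaus = np.diff(np.concatenate([[0.0], taus]))
    return (np.sqrt(1.0 + 4.0 * dtaus) - 1.0) / 2.0

class TimeCausalScaleSpace:
    """Online, time-recursive temporal smoothing by first-order integrators
    coupled in cascade (a sketch of the time-causal scale-space concept)."""

    def __init__(self, tau_max, n_levels, c=2.0):
        self.mus = time_constants(tau_max, n_levels, c)
        self.state = None            # one smoothed value per cascade level

    def update(self, sample):
        """Feed one new signal sample; returns the smoothed values at all levels."""
        if self.state is None:
            self.state = np.full(len(self.mus), float(sample))
        x = float(sample)
        for k, mu in enumerate(self.mus):
            self.state[k] += (x - self.state[k]) / (1.0 + mu)
            x = self.state[k]        # output of level k feeds level k + 1
        return self.state.copy()
```
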
  • 9.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Time-causal and time-recursive spatio-temporal receptive fields (2016). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 55, no 1, p. 50-88. Article in journal (Refereed)
    Abstract [en]

    We present an improved model and theory for time-causal and time-recursive spatio-temporal receptive fields, obtained by a combination of Gaussian receptive fields over the spatial domain and first-order integrators or equivalently truncated exponential filters coupled in cascade over the temporal domain. 

    Compared to previous spatio-temporal scale-space formulations in terms of non-enhancement of local extrema or scale invariance, these receptive fields are based on different scale-space axiomatics over time by ensuring non-creation of new local extrema or zero-crossings with increasing temporal scale. Specifically, extensions are presented about (i) parameterizing the intermediate temporal scale levels, (ii) analysing the resulting temporal dynamics, (iii) transferring the theory to a discrete implementation in terms of recursive filters over time, (iv) computing scale-normalized spatio-temporal derivative expressions for spatio-temporal feature detection and (v) computational modelling of receptive fields in the lateral geniculate nucleus (LGN) and the primary visual cortex (V1) in biological vision. 

    We show that by distributing the intermediate temporal scale levels according to a logarithmic distribution, we obtain a new family of temporal scale-space kernels with better temporal characteristics compared to a more traditional approach of using a uniform distribution of the intermediate temporal scale levels. Specifically, the new family of time-causal kernels has much faster temporal response properties (shorter temporal delays) compared to the kernels obtained from a uniform distribution. When increasing the number of temporal scale levels, the temporal scale-space kernels in the new family do also converge very rapidly to a limit kernel possessing true self-similar scale-invariant properties over temporal scales. Thereby, the new representation allows for true scale invariance over variations in the temporal scale, although the underlying temporal scale-space representation is based on a discretized temporal scale parameter. 

    We show how scale-normalized temporal derivatives can be defined for these time-causal scale-space kernels and how the composed theory can be used for computing basic types of scale-normalized spatio-temporal derivative expressions in a computationally efficient manner.

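    Combining spatial Gaussian smoothing of each incoming frame with the temporal cascade of first-order integrators gives an online, time-recursive spatio-temporal smoothing stage, from which scale-normalized spatio-temporal derivatives could subsequently be computed. The sketch below is illustrative only; parameter choices are hypothetical and the derivative computations and receptive field modelling described in the abstract are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

class SpatioTemporalReceptiveField:
    """Online spatio-temporal smoothing: a Gaussian over space for each frame,
    first-order integrators coupled in cascade over time (a sketch)."""

    def __init__(self, sigma_space, mus):
        self.sigma_space = sigma_space
        self.mus = list(mus)          # temporal time constants, coarsest last
        self.levels = None            # one smoothed frame per temporal level

    def update(self, frame):
        """Feed one new frame; returns the spatio-temporally smoothed frames."""
        f = gaussian_filter(frame.astype(float), self.sigma_space)
        if self.levels is None:
            self.levels = [f.copy() for _ in self.mus]
        x = f
        for k, mu in enumerate(self.mus):
            self.levels[k] += (x - self.levels[k]) / (1.0 + mu)
            x = self.levels[k]
        return self.levels

# Temporal derivatives can then be approximated by differences between successive
# updates, and spatial derivatives by Gaussian derivative filtering of each level.
```
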
  • 10.
    Shao, Wen-Ze
    et al.
    Ge, Qi
    Deng, Hai-Song
    Wei, Zhi-Hui
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Motion Deblurring Using Non-stationary Image Modeling (2015). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 52, no 2, p. 234-248. Article in journal (Refereed)
    Abstract [en]

    It is well known that cameras or mobile phones shaken during exposure usually produce motion-blurred photographs. Therefore, camera shake deblurring, or motion deblurring, is required in many practical scenarios. The contribution of this paper is a simple yet effective approach for motion blur kernel estimation, i.e., blind motion deblurring. Although several methods for motion blur kernel estimation have been proposed in the literature, we impose a type of non-stationary Gaussian prior on the gradient fields of sharp images, in order to automatically detect and pursue the salient edges of images as the important clues to blur kernel estimation. On one hand, the prior is able to promote sparsity inherent in the non-stationarity of the precision parameters (inverses of variances). On the other hand, since the prior is in a Gaussian form, a conceptually simple and computationally tractable inference scheme can be deduced. Specifically, the well-known expectation-maximization algorithm is used to alternatingly estimate the motion blur kernels, the salient edges of images as well as the precision parameters in the image prior. In contrast to many existing methods, no hyperpriors are imposed on any parameters in this paper; nor are any pre-processing steps involved in the proposed method, such as explicit suppression of random noise or prediction of salient edge structures. With the estimated motion blur kernels, the deblurred images are finally generated using an off-the-shelf non-blind deconvolution method proposed by Krishnan and Fergus (Adv Neural Inf Process Syst 22:1033-1041, 2009). The rationality and effectiveness of the proposed method are demonstrated by experimental results on both synthetic and real motion-blurred images, showing state-of-the-art blind motion deblurring performance in terms of quantitative metrics as well as visual perception.

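    The final non-blind deconvolution step in the pipeline above uses the method of Krishnan and Fergus. As a simpler, self-contained stand-in (plainly not the method used in the paper), the sketch below performs non-blind deconvolution by Wiener filtering in the frequency domain, given an estimated blur kernel.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_to_signal=1e-2):
    """Non-blind deconvolution by Wiener filtering (a simple stand-in for the
    Krishnan-Fergus non-blind step used in the paper).

    blurred : 2-D blurry image;  kernel : estimated motion blur kernel (2-D).
    """
    H = np.fft.fft2(kernel, s=blurred.shape)          # kernel spectrum, zero-padded
    G = np.fft.fft2(blurred)
    wiener = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    restored = np.real(np.fft.ifft2(wiener * G))
    # Undo the circular shift introduced by placing the kernel at the origin
    return np.roll(restored, (-(kernel.shape[0] // 2), -(kernel.shape[1] // 2)),
                   axis=(0, 1))

# Usage: sharp = wiener_deconvolve(blurry_image, estimated_kernel)
```
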
  • 11.
    Vynnycky, Michael
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering, Casting of Metals. University of Limerick, Ireland.
    Kanev, K.
    Mathematical Analysis of the Multisolution Phenomenon in the P3P Problem (2015). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 51, no 2, p. 326-337. Article in journal (Refereed)
    Abstract [en]

    The perspective 3-point problem, also known as pose estimation, has its origins in camera calibration and is of importance in many fields: for example, computer animation, automation, image analysis and robotics. One line of activity involves formulating it mathematically in terms of finding the solution to a quartic equation. However, in general, the equation does not have a unique solution, and in some situations there are no solutions at all. Here, we present a new approach to the solution of the problem; this involves closer scrutiny of the coefficients of the polynomial, in order to understand how many solutions there will be for a given set of problem parameters. We find that, if the control points are equally spaced, there are four positive solutions to the problem at 25 % of all available spatial locations for the control-point combinations, and two positive solutions at the remaining 75 %.

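    The multisolution phenomenon comes down to how many positive real roots the P3P quartic has for a given set of problem parameters. The sketch below assumes that the quartic coefficients have already been derived from the control-point distances and viewing angles (that derivation is the substance of the paper and is not reproduced here) and simply counts the admissible solutions.

```python
import numpy as np

def count_positive_real_roots(coeffs, tol=1e-9):
    """Count positive real roots of a polynomial (coefficients, highest degree first).

    For the P3P problem, `coeffs` would be the five coefficients of the quartic
    obtained from the control-point distances and viewing angles; each positive
    real root corresponds to one admissible camera pose.
    """
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real
    return int(np.sum(real > tol))

# Toy example: (x - 1)(x - 2)(x + 3)(x + 4) has two positive real roots
coeffs = np.poly([1.0, 2.0, -3.0, -4.0])
print(count_positive_real_roots(coeffs))  # -> 2
```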