Link to result list (permanent link):
http://kth.diva-portal.org/smash/resultList.jsf?query=&language=en&searchType=SIMPLE&noOfRows=50&sortOrder=author_sort_asc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all&aq=%5B%5B%7B%22journalId%22%3A%227837%22%7D%5D%5D&aqe=%5B%5D&aq2=%5B%5B%5D%5D&af=%5B%5D


1. Bergholm, Fredrik (KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA); Adler, Jeremy; Parmryd, Ingela
Analysis of Bias in the Apparent Correlation Coefficient Between Image Pairs Corrupted by Severe Noise
2010 In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 37, no 3, p. 204-219. Article in journal (Refereed)

Abstract [en]: The correlation coefficient r is a measure of similarity used to compare regions of interest in image pairs. In fluorescence microscopy there is a basic tradeoff between the degree of image noise and the frequency with which images can be acquired, and therefore the ability to follow dynamic events. The correlation coefficient r is commonly used in fluorescence microscopy for colocalization measurements, when the relative distributions of two fluorophores are of interest. Unfortunately, r is known to be biased, understating the true correlation when noise is present, so a better measure of correlation is needed. This article analyses the expected value of r and derives a procedure, based on expected-value formulas, for evaluating the bias of r. A Taylor series of so-called invariant factors is analyzed in detail. These formulas indicate ways to correct r and thereby obtain a corrected value that is free from the influence of noise and on average accurate (unbiased). One possible correction is the attenuation-corrected correlation coefficient R, introduced heuristically by Spearman (in Am. J. Psychol. 15:72-101, 1904). An ideal correction formula in terms of expected values is derived. For large samples, R tends towards the ideal correction formula and the true noise-free correlation. Correlation measurements using simulations based on the types of noise found in fluorescence microscopy images illustrate both the power of the method and the variance of R. We conclude that the correction formula is valid and is particularly useful for making correct analyses from very noisy datasets.
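The Spearman attenuation correction mentioned in this abstract can be sketched numerically. The sketch below is not the paper's expected-value analysis; it assumes the classical setting where two independent replicate acquisitions per channel are available, so the "reliability" of each channel can be estimated from replicate correlations. The values rho_true and sigma are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # many "pixels" so that sample correlations are stable

# Hypothetical ground truth: two fluorophore signals with known correlation.
rho_true = 0.8
z = rng.standard_normal(n)
x_true = z
y_true = rho_true * z + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)

# Severe additive noise; two independent replicate acquisitions per channel.
sigma = 1.5
x1 = x_true + sigma * rng.standard_normal(n)
x2 = x_true + sigma * rng.standard_normal(n)
y1 = y_true + sigma * rng.standard_normal(n)
y2 = y_true + sigma * rng.standard_normal(n)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_obs = corr(x1, y1)  # biased towards zero by the noise

# Spearman's correction for attenuation: divide the observed correlation by
# the geometric mean of the replicate (reliability) correlations per channel.
R = r_obs / np.sqrt(corr(x1, x2) * corr(y1, y2))
```

With this amount of noise, r_obs comes out far below the true correlation, while R recovers it up to sampling error.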

2. Jansson, Ylva; Lindeberg, Tony (both KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST))
Dynamic texture recognition using time-causal and time-recursive spatio-temporal receptive fields
2018 In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 60, no 9, p. 1369-1398. Article in journal (Refereed)

Abstract [en]: This work presents a first evaluation of using spatio-temporal receptive fields from a recently proposed time-causal spatio-temporal scale-space framework as primitives for video analysis. We propose a new family of video descriptors based on regional statistics of spatio-temporal receptive field responses and evaluate this approach on the problem of dynamic texture recognition. Our approach generalises a previously used method, based on joint histograms of receptive field responses, from the spatial to the spatio-temporal domain and from object recognition to dynamic texture recognition. The time-recursive formulation enables computationally efficient time-causal recognition. The experimental evaluation demonstrates competitive performance compared to the state of the art. In particular, it is shown that binary versions of our dynamic texture descriptors achieve improved performance compared to a large range of similar methods using different primitives, either handcrafted or learned from data. Further, our qualitative and quantitative investigation into parameter choices and the use of different sets of receptive fields highlights the robustness and flexibility of our approach. Together, these results support the descriptive power of this family of time-causal spatio-temporal receptive fields, validate our approach for dynamic texture recognition and point towards the possibility of designing a range of video analysis methods based on these new time-causal spatio-temporal primitives.
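The "joint histograms of receptive field responses" idea can be illustrated with a loose sketch. Note the simplifications: non-causal Gaussian smoothing over time (the paper uses time-causal, time-recursive temporal smoothing), only three first-order derivative responses, and illustrative bin counts; the function names are my own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def receptive_field_responses(video, s_spat=1.5, s_temp=1.5):
    """First-order derivative responses of a smoothed video volume (t, y, x).
    Illustrative stand-in for the paper's spatio-temporal receptive fields."""
    L = gaussian_filter(video, sigma=(s_temp, s_spat, s_spat))
    Lt, Ly, Lx = np.gradient(L)
    return [Lx, Ly, Lt]

def joint_histogram_descriptor(video, bins=5):
    """Quantize each response channel into `bins` levels, then histogram the
    joint symbols over the whole volume, giving a bins**3 descriptor."""
    resp = receptive_field_responses(video)
    edges = [np.quantile(r, np.linspace(0, 1, bins + 1)[1:-1]) for r in resp]
    digits = [np.digitize(r, e) for r, e in zip(resp, edges)]
    symbols = digits[0] + bins * digits[1] + bins * bins * digits[2]
    h = np.bincount(symbols.ravel(), minlength=bins ** 3).astype(float)
    return h / h.sum()

rng = np.random.default_rng(0)
video = rng.standard_normal((20, 32, 32))  # toy "dynamic texture" clip
h = joint_histogram_descriptor(video)
```

Two clips would then be compared by a histogram distance (e.g. chi-squared) between their descriptors.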

3. Lindeberg, Tony (KTH, School of Computer Science and Communication (CSC), Computational Biology, CB)
Discrete Derivative Approximations with Scale-Space Properties: A Basis for Low-Level Feature Extraction
1993 In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 3, no 4, p. 349-376. Article in journal (Refereed)

Abstract [en]: This article shows how discrete derivative approximations can be defined so that *scale-space properties hold exactly also in the discrete domain*. Starting from a set of natural requirements on the first processing stages of a visual system, *the visual front end*, it gives an axiomatic derivation of how a multiscale representation of derivative approximations can be constructed from a discrete signal, so that it possesses an *algebraic structure similar* to that possessed by the derivatives of the traditional scale-space representation in the continuous domain. A family of kernels is derived that constitute *discrete analogues* to the continuous Gaussian derivatives.

The representation has theoretical advantages over other discretizations of scale-space theory in the sense that operators that commute before discretization *commute after discretization*. Some computational implications of this are that derivative approximations can be computed *directly* from smoothed data and that this gives *exactly* the same result as convolution with the corresponding derivative approximation kernel. Moreover, a number of *normalization* conditions are automatically satisfied.

The proposed methodology leads to a scheme of computations for multiscale low-level feature extraction that is conceptually very simple and consists of four basic steps: (i) *large support* convolution smoothing, (ii) *small support* difference computations, (iii) *point operations* for computing differential geometric entities, and (iv) *nearest-neighbour operations* for feature detection.

Applications demonstrate how the proposed scheme can be used for edge detection and junction detection based on derivatives up to order three.
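Steps (i)-(ii) and the commutativity property can be sketched concretely. The discrete analogue of the Gaussian is T(n; t) = e^(-t) I_n(t), with I_n the modified Bessel function, which scipy exposes directly as the exponentially scaled ive. Kernel radius and scale t below are illustrative.

```python
import numpy as np
from scipy.special import ive  # ive(n, t) = exp(-t) * I_n(t) for t > 0

def discrete_gaussian(t, radius):
    """Discrete analogue of the Gaussian kernel, T(n; t) = e^{-t} I_n(t)."""
    n = np.arange(-radius, radius + 1)
    return ive(n, t)

t = 4.0
T = discrete_gaussian(t, radius=30)  # sums to ~1 (normalization is automatic)

# Step (i): large-support convolution smoothing of a test signal.
x = np.zeros(101)
x[50] = 1.0
L = np.convolve(x, T, mode="same")

# Step (ii): small-support central difference, applied two equivalent ways.
d = np.array([0.5, 0.0, -0.5])  # central difference kernel
Lx_a = np.convolve(L, d, mode="same")  # difference of the smoothed data
Lx_b = np.convolve(x, np.convolve(T, d, mode="same"), mode="same")  # derivative kernel
# Smoothing and differencing commute: Lx_a and Lx_b agree (up to kernel truncation).
```

The agreement between Lx_a and Lx_b is the discrete counterpart of computing derivative approximations directly from smoothed data.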
4. Lindeberg, Tony (KTH, School of Computer Science and Communication (CSC), Computational Biology, CB)
Generalized Gaussian Scale-Space Axiomatics Comprising Linear Scale-Space, Affine Scale-Space and Spatio-Temporal Scale-Space
2011 In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 40, no 1, p. 36-81. Article in journal (Refereed)

Abstract [en]: This paper describes a generalized axiomatic scale-space theory that makes it possible to derive the notions of linear scale-space, affine Gaussian scale-space and linear spatio-temporal scale-space using a similar set of assumptions (scale-space axioms). The notion of non-enhancement of local extrema is generalized from its previous application to discrete and rotationally symmetric kernels to continuous and more general non-isotropic kernels, over both spatial and spatio-temporal image domains. It is shown how a complete classification can be given of the linear (Gaussian) scale-space concepts that satisfy these conditions on isotropic spatial, non-isotropic spatial and spatio-temporal domains, which results in a general taxonomy of Gaussian scale-spaces for continuous image data. The resulting theory allows filter shapes to be tuned from specific context information and provides a theoretical foundation for the recently exploited mechanisms of shape adaptation and velocity adaptation, with highly useful applications in computer vision. It is also shown how time-causal spatio-temporal scale-spaces can be derived from similar assumptions. The mathematical structure of these scale-spaces is analyzed in detail concerning their transformation properties over space and time, the temporal cascade structure they satisfy over time, as well as properties of the resulting multi-scale spatio-temporal derivative operators. It is also shown how temporal derivatives with respect to transformed time can be defined, leading to the formulation of a novel analogue of scale-normalized derivatives for time-causal scale-spaces.

The kernels generated from these two types of theories have interesting relations to biological vision. We show how filter kernels generated from the Gaussian spatio-temporal scale-space as well as the time-causal spatio-temporal scale-space relate to spatio-temporal receptive field profiles registered from mammalian vision. Specifically, we show that there are close analogies to space-time separable cells in the LGN as well as to both space-time separable and non-separable cells in the striate cortex. We also present a set of plausible models for complex cells using extended quasi-quadrature measures expressed in terms of scale-normalized spatio-temporal derivatives.

The theories presented, as well as their relations to biological vision, show that it is possible to describe a general set of Gaussian and/or time-causal scale-spaces using a unified framework, which generalizes and complements previously presented scale-space formulations in this area.

5. Lindeberg, Tony (KTH, School of Computer Science and Communication (CSC), Computational Biology, CB)
Image matching using generalized scale-space interest points
2015 In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 52, no 1, p. 3-36. Article in journal (Refereed)

Abstract [en]: The performance of matching and object recognition methods based on interest points depends on both the properties of the underlying interest points and the choice of associated image descriptors. This paper demonstrates the advantages of using generalized scale-space interest point detectors in this context for selecting a sparse set of points for computing image descriptors for image-based matching.

For detecting interest points at any given scale, we make use of the Laplacian, the determinant of the Hessian and four new unsigned or signed Hessian feature strength measures, which are defined by generalizing the definitions of the Harris and Shi-and-Tomasi operators from the second moment matrix to the Hessian matrix. Then, feature selection over different scales is performed either by scale selection from local extrema over scale of scale-normalized derivatives, or by linking features over scale into feature trajectories and computing a significance measure from an integrated measure of normalized feature strength over scale.

A theoretical analysis is presented of the robustness of the differential entities underlying these interest points under image deformations, in terms of invariance properties under affine image deformations or approximations thereof. Disregarding the effect of the rotationally symmetric scale-space smoothing operation, the determinant of the Hessian is a truly affine covariant differential entity, and two of the new Hessian feature strength measures have a major contribution from the affine covariant determinant of the Hessian, implying that local extrema of these differential entities will be more robust under affine image deformations than local extrema of the Laplacian operator or the two other new Hessian feature strength measures.

It is shown how these generalized scale-space interest points allow for a higher ratio of correct matches and a lower ratio of false matches compared to previously known interest point detectors within the same class. The best results are obtained using interest points computed with scale linking, with the new Hessian feature strength measures and the determinant of the Hessian being the differential entities that lead to the best matching performance under perspective image transformations with significant foreshortening, better than the more commonly used Laplacian operator, its difference-of-Gaussians approximation or the Harris-Laplace operator.

We propose that these generalized scale-space interest points, when accompanied by associated local scale-invariant image descriptors, should allow for better performance of interest point based methods for image-based matching, object recognition and related visual tasks.
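The determinant-of-the-Hessian interest operator that recurs in this abstract is easy to sketch at a single, fixed scale. The sketch below omits the scale linking and scale selection that the paper's detectors build on top of this; the threshold and scales are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def det_hessian_norm(image, sigma):
    """Scale-normalized determinant of the Hessian: t^2 (Lxx Lyy - Lxy^2), t = sigma^2."""
    Lxx = gaussian_filter(image, sigma, order=(0, 2))
    Lyy = gaussian_filter(image, sigma, order=(2, 0))
    Lxy = gaussian_filter(image, sigma, order=(1, 1))
    return sigma ** 4 * (Lxx * Lyy - Lxy ** 2)

def interest_points(image, sigma=3.0, threshold=1e-3):
    """Local maxima of det H at one scale, above a response threshold."""
    D = det_hessian_norm(image, sigma)
    local_max = (D == maximum_filter(D, size=5)) & (D > threshold)
    return np.argwhere(local_max)

# Hypothetical test image: one bright Gaussian blob at row 40, column 60.
y, x = np.mgrid[0:100, 0:100]
img = np.exp(-((x - 60) ** 2 + (y - 40) ** 2) / (2 * 4.0 ** 2))
pts = interest_points(img)
```

On this test image the detector fires at (or next to) the blob centre and nowhere else, since det H is positive only inside the blob's inflection radius.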

6. Lindeberg, Tony (KTH, School of Computer Science and Communication (CSC), Computational Biology, CB)
Scale Selection Properties of Generalized Scale-Space Interest Point Detectors
2013 In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 46, no 2, p. 177-210. Article in journal (Refereed)

Abstract [en]: Scale-invariant interest points have found several highly successful applications in computer vision, in particular for image-based matching and recognition. This paper presents a theoretical analysis of the scale selection properties of a generalized framework for detecting interest points from scale-space features, presented in Lindeberg (Int. J. Comput. Vis. 2010, under revision) and comprising: an enriched set of differential interest operators at a fixed scale, including the Laplacian operator, the determinant of the Hessian, the new Hessian feature strength measures I and II and the rescaled level curve curvature operator, as well as an enriched set of scale selection mechanisms, including scale selection based on local extrema over scale, complementary post-smoothing after the computation of non-linear differential invariants, and scale selection based on weighted averaging of scale values along feature trajectories over scale. A theoretical analysis of the sensitivity to affine image deformations is presented, and it is shown that the scale estimates obtained from the determinant of the Hessian operator are affine covariant for an anisotropic Gaussian blob model. Among the other purely second-order operators, the Hessian feature strength measure I has the lowest sensitivity to non-uniform scaling transformations, followed by the Laplacian operator and the Hessian feature strength measure II. The predictions from this theoretical analysis agree with experimental results on the repeatability properties of the different interest point detectors under affine and perspective transformations of real image data. A number of less complete results are derived for the level curve curvature operator.
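The basic scale selection mechanism analyzed here, local extrema over scale of scale-normalized responses, can be demonstrated with the scale-normalized Laplacian on a Gaussian blob of known size: the extremum over scale is attained when the detection scale matches the blob's own variance. The blob size and scale grid below are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# A Gaussian blob of known size (variance t0 = sigma0^2 = 16).
sigma0 = 4.0
y, x = np.mgrid[0:129, 0:129]
blob = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * sigma0 ** 2))

# Magnitude of the scale-normalized Laplacian |t * Laplace(L)| at the blob
# centre, over a grid of scales (t = s^2).
sigmas = np.linspace(1.0, 10.0, 91)
resp = [abs(s ** 2 * gaussian_laplace(blob, s)[64, 64]) for s in sigmas]

# Theory: the extremum over scale is attained at t = t0, i.e. s_hat ~ sigma0.
s_hat = sigmas[int(np.argmax(resp))]
```

The selected scale s_hat thus reflects the spatial extent of the underlying structure, which is precisely the property whose sensitivity to affine deformations the paper analyzes for the richer family of operators.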

7. Lindeberg, Tony (KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST))
Spatio-temporal scale selection in video data
2018 In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 60, no 4, p. 525-562. Article in journal (Refereed)

Abstract [en]: This work presents a theory and methodology for simultaneous detection of local spatial and temporal scales in video data.
The underlying idea is that if we process video data by spatio-temporal receptive fields at multiple spatial and temporal scales, we would like to generate hypotheses about the spatial extent and the temporal duration of the underlying spatio-temporal image structures that gave rise to the feature responses.

For two types of spatio-temporal scale-space representations, (i) a non-causal Gaussian spatio-temporal scale space for offline analysis of pre-recorded video sequences and (ii) a time-causal and time-recursive spatio-temporal scale space for online analysis of real-time video streams, we express sufficient conditions for spatio-temporal feature detectors in terms of spatio-temporal receptive fields to deliver scale covariant and scale invariant feature responses.

We present an in-depth theoretical analysis of the scale selection properties of eight types of spatio-temporal interest point detectors in terms of either: (i)-(ii) the spatial Laplacian applied to the first- and second-order temporal derivatives, (iii)-(iv) the determinant of the spatial Hessian applied to the first- and second-order temporal derivatives, (v) the determinant of the spatio-temporal Hessian matrix, (vi) the spatio-temporal Laplacian and (vii)-(viii) the first- and second-order temporal derivatives of the determinant of the spatial Hessian matrix. It is shown that seven of these spatio-temporal feature detectors allow for provable scale covariance and scale invariance. Then, we describe a time-causal and time-recursive algorithm for detecting sparse spatio-temporal interest points from video streams and show that it leads to intuitively reasonable results.

An experimental quantification of the accuracy of the spatio-temporal scale estimates and the amount of temporal delay obtained with these spatio-temporal interest point detectors is given, showing that: (i) the spatial and temporal scale selection properties predicted by the continuous theory are well preserved in the discrete implementation, and (ii) the spatial Laplacian or the determinant of the spatial Hessian applied to the first- and second-order temporal derivatives leads to much shorter temporal delays in a time-causal implementation compared to the determinant of the spatio-temporal Hessian or the first- and second-order temporal derivatives of the determinant of the spatial Hessian matrix.

8. Lindeberg, Tony (KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST))
Temporal scale selection in time-causal scale space
2017 In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 58, no 1, p. 57-101. Article in journal (Refereed)

Abstract [en]: When designing and developing scale selection mechanisms for generating hypotheses about characteristic scales in signals, it is essential that the selected scale levels reflect the extent of the underlying structures in the signal.

This paper presents a theory and in-depth theoretical analysis about the scale selection properties of methods for automatically selecting local temporal scales in time-dependent signals based on local extrema over temporal scales of scale-normalized temporal derivative responses. Specifically, this paper develops a novel theoretical framework for performing such temporal scale selection over a time-causal and time-recursive temporal domain as is necessary when processing continuous video or audio streams in real time or when modelling biological perception.

For a recently developed time-causal and time-recursive scale-space concept defined by convolution with a scale-invariant limit kernel, we show that it is possible to transfer a large number of the desirable scale selection properties that hold for the Gaussian scale-space concept over a non-causal temporal domain to this temporal scale-space concept over a truly time-causal domain. Specifically, we show that for this temporal scale-space concept, it is possible to achieve true temporal scale invariance although the temporal scale levels have to be discrete, which is a novel theoretical construction.

The analysis starts from a detailed comparison of different temporal scale-space concepts and their relative advantages and disadvantages, leading the focus to a class of recently extended time-causal and time-recursive temporal scale-space concepts based on first-order integrators, or equivalently truncated exponential kernels, coupled in cascade. Specifically, given the discrete nature of the temporal scale levels in this class of time-causal scale-space concepts, we study two special cases of distributing the intermediate temporal scale levels: a uniform distribution in terms of the variance of the composed temporal scale-space kernel, or a logarithmic distribution.

In the case of a uniform distribution of the temporal scale levels, we show that scale selection based on local extrema of scale-normalized derivatives over temporal scales makes it possible to estimate the temporal duration of sparse local features defined in terms of temporal extrema of first- or second-order temporal derivative responses. For dense features modelled as a sine wave, the lack of temporal scale invariance does, however, constitute a major limitation for handling dense temporal structures of different temporal duration in a uniform manner.

In the case of a logarithmic distribution of the temporal scale levels, specifically taken to the limit of a time-causal limit kernel with an infinitely dense distribution of the temporal scale levels towards zero temporal scale, we show that it is possible to achieve true temporal scale invariance: dense features modelled as a sine wave can be handled in a uniform manner over different temporal durations of the temporal structures, and more general temporal scale invariance holds for any signal under any temporal scaling transformation whose scaling factor is an integer power of the distribution parameter of the time-causal limit kernel.

It is shown how these temporal scale selection properties developed for a pure temporal domain carry over to feature detectors defined over time-causal spatio-temporal and spectro-temporal domains.
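The selection mechanism described in this abstract, picking local extrema over temporal scales of scale-normalized temporal derivative responses, can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the uniform cascades, the candidate time constants and the choice gamma = 3/4 are assumptions made here for concreteness.

```python
import numpy as np

def integrator_cascade(x, mus):
    """Smooth a 1-D signal with first-order integrators (truncated
    exponential kernels) coupled in cascade, one per time constant mu."""
    y = np.asarray(x, dtype=float).copy()
    for mu in mus:
        z = np.empty_like(y)
        z[0] = y[0]
        a = 1.0 / (1.0 + mu)  # recursive-filter update gain
        for t in range(1, len(y)):
            z[t] = z[t - 1] + a * (y[t] - z[t - 1])
        y = z
    return y

def selected_scale(x, t0, mu_lists, gamma=0.75):
    """Return the composed temporal scale tau at which the magnitude of
    the scale-normalised second temporal derivative, tau**gamma * L_tt,
    is maximal over the candidate scales at sample t0."""
    best_tau, best_resp = None, -np.inf
    for mus in mu_lists:
        # variance of one discrete first-order integrator: mu * (1 + mu)
        tau = sum(mu * (1.0 + mu) for mu in mus)
        L = integrator_cascade(x, mus)
        L_tt = np.gradient(np.gradient(L))
        resp = tau ** gamma * abs(L_tt[t0])
        if resp > best_resp:
            best_tau, best_resp = tau, resp
    return best_tau
```

Each cascade is time-causal and time-recursive: every output sample depends only on the previous output and the current input, so the scheme runs on a streaming signal.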

9. Lindeberg, Tony
KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
Time-causal and time-recursive spatio-temporal receptive fields
2016. In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 55, no 1, p. 50-88. Article in journal (Refereed).

Abstract [en]

We present an improved model and theory for time-causal and time-recursive spatio-temporal receptive fields, obtained by a combination of Gaussian receptive fields over the spatial domain and first-order integrators, or equivalently truncated exponential filters, coupled in cascade over the temporal domain.

Compared to previous spatio-temporal scale-space formulations in terms of non-enhancement of local extrema or scale invariance, these receptive fields are based on different scale-space axiomatics over time by ensuring non-creation of new local extrema or zero-crossings with increasing temporal scale. Specifically, extensions are presented about (i) parameterizing the intermediate temporal scale levels, (ii) analysing the resulting temporal dynamics, (iii) transferring the theory to a discrete implementation in terms of recursive filters over time, (iv) computing scale-normalized spatio-temporal derivative expressions for spatio-temporal feature detection and (v) computational modelling of receptive fields in the lateral geniculate nucleus (LGN) and the primary visual cortex (V1) in biological vision.

We show that by distributing the intermediate temporal scale levels according to a logarithmic distribution, we obtain a new family of temporal scale-space kernels with better temporal characteristics compared to the more traditional approach of using a uniform distribution of the intermediate temporal scale levels. Specifically, the new family of time-causal kernels has much faster temporal response properties (shorter temporal delays) than the kernels obtained from a uniform distribution. When the number of temporal scale levels is increased, the temporal scale-space kernels in the new family also converge very rapidly to a limit kernel possessing true self-similar scale-invariant properties over temporal scales. Thereby, the new representation allows for true scale invariance over variations in the temporal scale, even though the underlying temporal scale-space representation is based on a discretized temporal scale parameter.

We show how scale-normalized temporal derivatives can be defined for these time-causal scale-space kernels and how the composed theory can be used for computing basic types of scale-normalized spatio-temporal derivative expressions in a computationally efficient manner.
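The two ways of distributing the intermediate temporal scale levels contrasted in this abstract can be sketched as follows. This is a minimal illustration, not the paper's code; the distribution ratio c = 2 and the discrete variance relation mu*(1+mu) per integrator stage are assumptions here.

```python
import numpy as np

def uniform_taus(tau_max, K):
    """K intermediate scale levels with equal increments in variance."""
    return [tau_max * (k + 1) / K for k in range(K)]

def logarithmic_taus(tau_max, K, c=2.0):
    """K levels spaced by a constant factor c in variance, densest towards
    zero scale; as K grows this approaches the self-similar time-causal
    limit kernel construction."""
    return [tau_max * c ** (k + 1 - K) for k in range(K)]

def mus_from_taus(taus):
    """Time constants of the integrators bridging consecutive variance
    levels, solving mu * (1 + mu) = variance increment for each stage."""
    mus, prev = [], 0.0
    for tau in taus:
        d = tau - prev
        mus.append((np.sqrt(1.0 + 4.0 * d) - 1.0) / 2.0)  # mu**2 + mu = d
        prev = tau
    return mus
```

The logarithmic spacing concentrates scale levels towards zero scale, which is what yields the shorter temporal delays discussed above: the early cascade stages have small time constants, so fine temporal structure is smoothed away only gradually.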

10. Shao, Wen-Ze; Ge, Qi; Deng, Hai-Song; Wei, Zhi-Hui; Li, Haibo
KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
Motion Deblurring Using Non-stationary Image Modeling
2015. In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 52, no 2, p. 234-248. Article in journal (Refereed).

Abstract [en]

It is well known that camera or mobile-phone shake during exposure usually leads to motion-blurred photographs; camera shake deblurring, or motion deblurring, is therefore required in many practical scenarios.
The contribution of this paper is a simple yet effective approach to motion blur kernel estimation, i.e., blind motion deblurring. While several methods for motion blur kernel estimation have been proposed in the literature, we impose a non-stationary Gaussian prior on the gradient fields of sharp images, in order to automatically detect and pursue the salient edges of images as the important clues to blur kernel estimation. On the one hand, the prior promotes the sparsity inherent in the non-stationarity of the precision parameters (inverses of the variances). On the other hand, since the prior is in Gaussian form, a conceptually simple and computationally tractable inference scheme can be deduced. Specifically, the well-known expectation-maximization algorithm is used to alternately estimate the motion blur kernels, the salient edges of images, and the precision parameters of the image prior. Unlike many existing methods, no hyperpriors are imposed on any parameters, nor does the proposed method involve any pre-processing steps such as explicit suppression of random noise or prediction of salient edge structures. With the estimated motion blur kernels, the deblurred images are finally generated using an off-the-shelf non-blind deconvolution method proposed by Krishnan and Fergus (Adv Neural Inf Process Syst 22:1033-1041, 2009). The rationality and effectiveness of the proposed method are demonstrated by experimental results on both synthetic and realistic motion-blurred images, showing state-of-the-art blind motion deblurring performance in terms of quantitative metrics as well as visual perception.

11. Shao, Wen-Ze; Ge, Qi; Wang, Li-Qian; Lin, Yun-Zhi; Deng, Hai-Song; Li, Haibo
NUPT, Coll Telecommun & Informat Engn, Nanjing, Jiangsu, Peoples R China; NUPT, Natl Engn Res Ctr Commun & Networking, Nanjing, Jiangsu, Peoples R China; Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA; Southeast Univ, Sch Automat, Nanjing, Jiangsu, Peoples R China; Nanjing Audit Univ, Sch Sci, Nanjing, Jiangsu, Peoples R China; KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
Nonparametric Blind Super-Resolution Using Adaptive Heavy-Tailed Priors
2019. In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 61, no 6, p. 885-917. Article in journal (Refereed).

Abstract [en]

Single-image nonparametric blind super-resolution is a fundamental image restoration problem, yet one largely ignored in the past decades by the computational photography and computer vision communities. Interestingly, learning-based single-image super-resolution (SR) has experienced rapid development since the boom of sparse representation around 2005 and especially representation learning in the 2010s, wherein the high-resolution image is generally assumed to be blurred by a bicubic or Gaussian kernel. However, this parametric assumption on the form of the blur kernel does not hold in most practical applications, because in real low-resolution imaging a high-resolution image can undergo complex blur processes, e.g., Gaussian-shaped kernels of varying sizes, ellipse-shaped kernels of varying orientations, or curvilinear kernels of varying trajectories. This paper is mainly motivated by one of our previous works: Shao and Elad (in: Zhang (ed) ICIG 2015, Part III, Lecture Notes in Computer Science, Springer, Cham, 2015).
Specifically, we take one step further in this paper and present a type of adaptive heavy-tailed image priors, which result in a new regularized formulation for nonparametric blind super-resolution. The new image priors can be expressed and understood as a generalized integration of the normalized sparsity measure and relative total variation. Although the proposed priors appear simple, their core merit is their practical capability for the challenging task of nonparametric blur kernel estimation, for both super-resolution and deblurring. Harnessing the priors, a higher-quality intermediate high-resolution image becomes possible, and therefore more accurate blur kernel estimation can be accomplished. Extensive experiments on both synthetic and real-world blurred low-resolution images convincingly demonstrate the competitive or even superior performance of the proposed algorithm. Meanwhile, the proposed priors are shown to be quite applicable to blind image deblurring, which is a degenerate case of nonparametric blind SR.
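The normalized sparsity measure that the abstract says these priors generalize can be illustrated in isolation. The sketch below is a toy version under stated assumptions: the combination with relative total variation and the adaptive weighting of the actual priors are omitted.

```python
import numpy as np

def normalized_sparsity(img):
    """L1/L2 ratio of the image gradient magnitudes; sharp images with
    sparse, salient edges yield smaller values than their blurred
    counterparts, which is why the measure favours deblurred solutions."""
    gx = np.diff(img, axis=1)          # horizontal finite differences
    gy = np.diff(img, axis=0)          # vertical finite differences
    g = np.concatenate([gx.ravel(), gy.ravel()])
    l2 = np.linalg.norm(g)
    return np.abs(g).sum() / l2 if l2 > 0 else 0.0
```

Blur spreads a single large gradient into several small ones of the same total L1 mass but smaller L2 norm, so the ratio grows; a plain L1 penalty would not separate the two cases.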

12. Vynnycky, Michael; Kanev, K.
KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering, Casting of Metals; University of Limerick, Ireland.
Mathematical Analysis of the Multisolution Phenomenon in the P3P Problem
2015. In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 51, no 2, p. 326-337. Article in journal (Refereed).

Abstract [en]

The perspective 3-point (P3P) problem, also known as pose estimation, has its origins in camera calibration and is of importance in many fields, for example computer animation, automation, image analysis and robotics. One line of activity involves formulating it mathematically as finding the solution to a quartic equation. However, in general, the equation does not have a unique solution, and in some situations there are no solutions at all. Here, we present a new approach to the solution of the problem, involving closer scrutiny of the coefficients of the polynomial in order to understand how many solutions there will be for a given set of problem parameters. We find that, if the control points are equally spaced, there are four positive solutions to the problem at 25 % of all available spatial locations for the control-point combinations, and two positive solutions at the remaining 75 %.
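The root-counting question at the heart of this analysis can be probed numerically. This is a minimal sketch with placeholder coefficients, not the paper's P3P parameterization: only positive real roots of the quartic correspond to geometrically valid poses.

```python
import numpy as np

def positive_real_roots(coeffs, tol=1e-9):
    """Count the real, strictly positive roots of a polynomial whose
    coefficients are given in descending order (numpy.roots convention)."""
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real
    return int(np.sum(real > tol))
```

For example, the quartic x^4 - 2x^3 - 13x^2 + 38x - 24 factors as (x-1)(x-2)(x-3)(x+4), so it has three positive roots; in the P3P setting this count, swept over problem parameters, is what yields the 25 %/75 % split reported above.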

