
2001 (English) In: Encyclopaedia of Mathematics / [ed] Michiel Hazewinkel, Springer, 2001. Chapter in book (Refereed)
##### Abstract [en]

# Scale-space theory

##### Place, publisher, year, edition, pages

Springer, 2001.
##### National Category

Mathematics; Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
##### Identifiers

URN: urn:nbn:se:kth:diva-40393
ISBN: 1402006098 (print)
OAI: oai:DiVA.org:kth-40393
DiVA, id: diva2:440969
##### Note

A theory of multi-scale representation of sensory data developed by the image processing and computer vision communities. The purpose is to represent signals at multiple scales in such a way that fine scale structures are successively suppressed, and a scale parameter is associated with each level in the multi-scale representation.

For a given signal $f \colon \mathbb{R}^N \to \mathbb{R}$, a linear scale-space representation is a family of derived signals $L \colon \mathbb{R}^N \times \mathbb{R}_+ \to \mathbb{R}$, defined by $L(\cdot;\,0) = f$ and

$$L(\cdot;\,t) = g(\cdot;\,t) * f$$

for some family $g \colon \mathbb{R}^N \times \mathbb{R}_+ \to \mathbb{R}$ of convolution kernels [a1], [a2] (cf. also Integral equation of convolution type). An essential requirement on the scale-space family is that the representation at a coarse scale constitutes a simplification of the representations at finer scales. Several different ways of formalizing this requirement about non-creation of new structures with increasing scales show that the Gaussian kernel

$$g(x;\,t) = \frac{1}{(2\pi t)^{N/2}} \exp\!\left(-\frac{x_1^2 + \cdots + x_N^2}{2t}\right)$$

constitutes a canonical choice for generating a scale-space representation [a3], [a4], [a5], [a6]. Equivalently, the scale-space family satisfies the diffusion equation

$$\partial_t L = \frac{1}{2} \nabla^2 L.$$
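To make the construction concrete in the one-dimensional case, here is a minimal pure-Python sketch (the function names `gaussian_kernel` and `scale_space` are my own, not from the article): the signal is convolved with a sampled, renormalized Gaussian, and larger values of the scale parameter $t$ successively suppress finer-scale structure.

```python
import math

def gaussian_kernel(t, radius=None):
    """Sampled 1-D Gaussian g(x; t) = exp(-x^2/(2t)) / sqrt(2*pi*t),
    renormalized so the discrete weights sum to one."""
    if radius is None:
        radius = max(1, int(4 * math.sqrt(t)))  # truncate at ~4 std. deviations
    k = [math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def scale_space(f, t):
    """Scale-space representation L(.; t) = g(.; t) * f for a 1-D signal,
    with border samples replicated at the boundaries."""
    k = gaussian_kernel(t)
    r = len(k) // 2
    n = len(f)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), n - 1)  # replicate border samples
            acc += w * f[idx]
        out.append(acc)
    return out
```

For example, smoothing the fastest-oscillating signal $f(i) = (-1)^i$ at $t = 1$ sharply reduces its amplitude, and smoothing at $t = 4$ reduces it further: the fine-scale oscillation is successively suppressed, as the theory requires.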

The motivation for generating a scale-space representation of a given data set originates from the basic fact that real-world objects are composed of different structures at different scales and may appear in different ways depending on the scale of observation. For example, the concept of a "tree" is appropriate at the scale of meters, while concepts such as leaves and molecules are more appropriate at finer scales. For a machine vision system analyzing an unknown scene, there is no way to know what scales are appropriate for describing the data. Thus, the only reasonable approach is to consider descriptions at all scales simultaneously [a1], [a2].

From the scale-space representation, at any level of scale $t$ one can define scale-space derivatives by

$$L_{x^{\alpha}}(\cdot;\,t) = \partial_{x^{\alpha}} L(\cdot;\,t),$$

where $\alpha = (\alpha_1, \ldots, \alpha_N)$ and $\partial_{x^{\alpha}} = \partial_{x_1^{\alpha_1}} \cdots \partial_{x_N^{\alpha_N}}$ constitute multi-index notation for the derivative operator. Such Gaussian derivative operators provide a compact way to characterize the local image structure around a certain image point at any scale. Specifically, the output from scale-space derivatives can be combined into multi-scale differential invariants, to serve as feature detectors (see Edge detection and Corner detection for two examples).
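As an illustrative sketch of such a feature detector (pure Python, with names of my own choosing): the first-order Gaussian derivative kernel $\partial_x g(x;\,t) = -(x/t)\,g(x;\,t)$, applied to a step signal, gives a response whose magnitude peaks at the edge location, which is the essence of Gaussian-derivative edge detection.

```python
import math

def gaussian_deriv_kernel(t, radius=None):
    """Sampled first-order derivative of the 1-D Gaussian:
    d/dx g(x; t) = -(x / t) * exp(-x^2 / (2t)) / sqrt(2*pi*t)."""
    if radius is None:
        radius = max(1, int(4 * math.sqrt(t)))
    return [-(x / t) * math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)
            for x in range(-radius, radius + 1)]

def scale_space_derivative(f, t):
    """Scale-space derivative L_x(.; t): correlate f with the sampled
    derivative kernel, replicating border samples."""
    k = gaussian_deriv_kernel(t)
    r = len(k) // 2
    n = len(f)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), n - 1)
            acc += w * f[idx]
        out.append(acc)
    return out
```

On a step signal that jumps from 0 to 1, the response magnitude is maximal right at the discontinuity and vanishes in the flat regions, so thresholding or locating extrema of such responses yields an edge detector.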

More generally, a scale-space representation with its Gaussian derivative operators can serve as a basis for expressing a large number of early visual operations, including feature detection, stereo matching, computation of motion descriptors and the computation of cues to surface shape [a3], [a4]. Neuro-physiological studies have shown that there are receptive field profiles in the mammalian retina and visual cortex, which can be well modeled by the scale-space framework [a7].

Pyramid representation [a8] is a predecessor to scale-space representation, constructed by simultaneously smoothing and subsampling a given signal. In this way, computationally highly efficient algorithms can be obtained. A problem noted with pyramid representations, however, is that it is usually algorithmically hard to relate structures at different scales, due to the discrete nature of the scale levels. In a scale-space representation, the existence of a continuous scale parameter makes it conceptually much easier to express this deep structure [a2]. For features defined as zero-crossings of differential invariants, the implicit function theorem (cf. Implicit function) directly defines trajectories across scales, and at those scales where a bifurcation occurs, the local behaviour can be modeled by singularity theory [a3], [a5].
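The smooth-and-subsample step of a pyramid can be sketched as follows (a minimal pure-Python illustration with hypothetical names; the $[1/4, 1/2, 1/4]$ binomial filter used here is one common choice of smoothing kernel, not prescribed by the article):

```python
def reduce_level(f):
    """One pyramid REDUCE step: binomial smoothing with weights
    [1/4, 1/2, 1/4] (borders replicated), then subsampling by two."""
    n = len(f)
    smoothed = [0.25 * f[max(i - 1, 0)] + 0.5 * f[i] + 0.25 * f[min(i + 1, n - 1)]
                for i in range(n)]
    return smoothed[::2]

def pyramid(f, levels):
    """Multi-scale pyramid: the original signal followed by successively
    smoothed and subsampled copies, one per level."""
    out = [list(f)]
    for _ in range(levels):
        out.append(reduce_level(out[-1]))
    return out
```

Each level halves the number of samples, which is what makes pyramids computationally efficient; the price, as noted above, is that the scale levels are discrete, so relating structures across levels is harder than in a continuous scale-space.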

Extensions of linear scale-space theory concern the formulation of non-linear scale-space concepts more committed to specific purposes [a9]. There are strong relations between scale-space theory and wavelet theory (cf. also Wavelet analysis), although these two notions of multi-scale representation have been developed from slightly different premises.

References

[a1] A.P. Witkin, "Scale-space filtering", *Proc. 8th Internat. Joint Conf. Art. Intell., Karlsruhe, West Germany, Aug. 1983* (1983) pp. 1019–1022

[a2] J.J. Koenderink, "The structure of images", *Biological Cybernetics*, **50** (1984) pp. 363–370

[a3] T. Lindeberg, "Scale-space theory in computer vision", Kluwer Acad. Publ. (1994)

[a4] L.M.J. Florack, "Image structure", Kluwer Acad. Publ. (1997)

[a5] J. Sporring, et al., "Gaussian scale-space theory", Kluwer Acad. Publ. (1997)

[a6] B.M. ter Haar Romeny, et al., "Proc. First Internat. Conf. Scale-Space", *Lecture Notes Computer Science*, **1252**, Springer (1997)

[a7] R.A. Young, "The Gaussian derivative model for spatial vision: Retinal mechanisms", *Spatial Vision*, **2** (1987) pp. 273–293

[a8] P.J. Burt, E.H. Adelson, "The Laplacian pyramid as a compact image code", *IEEE Trans. Commun.*, **31**: 4 (1983) pp. 532–540

[a9] "Geometry-driven diffusion in computer vision", B.M. ter Haar Romeny (ed.), Kluwer Acad. Publ. (1994)


Available from: 2011-09-14. Created: 2011-09-14. Last updated: 2018-01-12. Bibliographically approved.
