1 - 8 of 8
  • 1. Almansa, A.
    et al.
    Lindeberg, Tony
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Fingerprint enhancement by shape adaptation of scale-space operators with automatic scale selection (2000). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 9, no. 12, pp. 2027-2042. Article in journal (Refereed)
    Abstract [en]

    This work presents two mechanisms for processing fingerprint images: shape-adapted smoothing based on second-moment descriptors, and automatic scale selection based on normalized derivatives. The shape adaptation procedure adapts the smoothing operation to the local ridge structures, which allows interrupted ridges to be joined without destroying essential singularities such as branching points, and enforces continuity of their directional fields. The scale selection procedure estimates the local ridge width and adapts the amount of smoothing to the local amount of noise. In addition, a ridgeness measure is defined, which reflects how well the local image structure agrees with a qualitative ridge model, and is used for spreading the results of shape adaptation into noisy areas. The combined approach makes it possible to resolve fine-scale structures in clear areas while reducing the risk of enhancing noise in blurred or fragmented areas. The result is a reliable and adaptively detailed estimate of the ridge orientation field and ridge width, as well as a smoothed grey-level version of the input image. We propose that these general techniques should be of interest to developers of automatic fingerprint identification systems as well as for other applications that process related types of imagery.
    (A minimal sketch of scale selection with the scale-normalized Laplacian appears after this result list.)

  • 2. Averbuch, A. Z.
    et al.
    Meyer, F.
    Strömberg, Jan-Olov
    KTH, Tidigare Institutioner, Matematik.
    Coifman, R.
    Vassiliou, A.
    Low bit-rate efficient compression for seismic data (2001). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 10, no. 12, pp. 1801-1814. Article in journal (Refereed)
    Abstract [en]

    Compression is a relatively recently introduced technique for seismic data operations. The main driver behind the use of data compression for seismic data is the very large size of the acquired data. Some of the most recently acquired marine seismic data sets exceed 10 Tbytes, and in fact there are currently seismic surveys planned with a volume of around 120 Tbytes. Thus, the need to compress these very large seismic data files is imperative. Nevertheless, seismic data are quite different from the typical images used in image processing and multimedia applications. Some of the major differences are a dynamic range that in theory exceeds 100 dB, an often extensively oscillatory nature, x and y directions that represent different physical quantities, and a significant amount of coherent noise that is often present in seismic data. Up to now, some of the algorithms used for seismic data compression have been based on some form of wavelet or local cosine transform, combined with a uniform or quasi-uniform quantization scheme and finally a Huffman coding scheme. With this family of compression algorithms, compression results acceptable to geophysicists are achieved only at low to moderate compression ratios. For higher compression ratios or higher decibel quality, significant compression artifacts are introduced in the reconstructed images, even with high-dimensional transforms. The objective of this paper is to achieve a higher compression ratio than that of the wavelet/uniform quantization/Huffman coding family of compression schemes, with a comparable level of residual noise. The goal is to achieve above 40 dB in the decompressed seismic data sets. Several established compression algorithms are reviewed, and some new compression algorithms are introduced. All of these compression techniques are applied to a representative collection of seismic data sets, and their results are documented in this paper. One of the conclusions is that the adaptive multiscale local cosine transform with different window sizes performs well on all the seismic data sets and outperforms the other methods from the SNR point of view. The described methods cover a wide range of different data sets, and for each data set the best-performing method can be chosen from this collection. The experiments were performed on four different seismic data sets. Special emphasis was given to achieving faster processing speed, which is another critical issue examined in the paper. Some of these algorithms are also suitable for multimedia-type compression.
    (A minimal sketch of the transform-plus-uniform-quantization stage appears after this result list.)

  • 3. Ge, Qi
    et al.
    Jing, Xiao-Yuan
    Wu, Fei
    Wei, Zhi-Hui
    Xiao, Liang
    Shao, Wen-Ze
    Yue, Dong
    Li, Hai-Bo
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal (2017). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 26, no. 7, pp. 3098-3112. Article in journal (Refereed)
    Abstract [en]

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown strong performance in low-level vision tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, introduces disturbances and inaccuracy into the estimation of the true image. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by a distance metric on that manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
    (A minimal sketch of the weighted singular-value thresholding step appears after this result list.)

  • 4.
    Liu, Du
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Teknisk informationsvetenskap.
    Flierl, Markus
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Teknisk informationsvetenskap.
    Fractional-Pel Accurate Motion-Adaptive Transforms (2019). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 28, no. 6, pp. 2731-2742, article id 8590746. Article in journal (Refereed)
    Abstract [en]

    Fractional-pel accurate motion is widely used in video coding. For subband coding, fractional-pel accuracy is challenging since it is difficult to handle the complex motion field with temporal transforms. In our previous work, we designed integer-accurate motion-adaptive transforms (MAT) which can transform integer-accurate motion-connected coefficients. In this paper, we extend the integer MAT to fractional-pel accuracy. The integer MAT allows only one reference coefficient to be the low-band coefficient. In this paper, we design the transform such that it permits multiple references and generates multiple low-band coefficients. In addition, our fractional-pel MAT can incorporate a general interpolation filter into the basis vector, such that the high-band coefficient produced by the transform is the same as the prediction error from the interpolation filter. The fractional-pel MAT is always orthonormal; thus, the energy is preserved by the transform. We compare the proposed fractional-pel MAT, the integer MAT, and the half-pel motion-compensated orthogonal transform (MCOT), while HEVC intra coding is used to encode the temporal subbands. The experimental results show that the proposed fractional-pel MAT outperforms the integer MAT and the half-pel MCOT. The gain achieved by the proposed MAT over the integer MAT can reach up to 1 dB in PSNR.

  • 5. Meyer, F. G.
    et al.
    Averbuch, A. Z.
    Strömberg, Jan-Olov
    KTH, Tidigare Institutioner, Matematik.
    Fast adaptive wavelet packet image compression (2000). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 9, no. 5, pp. 792-800. Article in journal (Refereed)
    Abstract [en]

    Wavelets are ill-suited to represent oscillatory patterns: rapid variations of intensity can only be described by the small-scale wavelet coefficients, which are often quantized to zero, even at high bit rates. Our goal in this paper is to provide a fast numerical implementation of the best wavelet packet algorithm [1] in order to demonstrate that an advantage can be gained by constructing a basis adapted to a target image. Emphasis in this paper has been placed on developing algorithms that are computationally efficient. We developed a new fast two-dimensional (2-D) convolution-decimation algorithm with factorized nonseparable 2-D filters. The algorithm is four times faster than a standard convolution-decimation. An extensive evaluation of the algorithm was performed on a large class of textured images. Because of its ability to reproduce textures so well, the wavelet packet coder significantly outperforms one of the best wavelet coders [2] on images such as Barbara and fingerprints, both visually and in terms of PSNR.
    (A minimal sketch of cost-based best-basis selection over a wavelet packet tree appears after this result list.)

  • 6.
    Siadat, Medya
    et al.
    Azarbaijan Shahid Madani Univ, Dept Appl Math, Tabriz 5375171379, Iran.
    Aghazadeh, Nasser
    Azarbaijan Shahid Madani Univ, Dept Appl Math, Tabriz 5375171379, Iran.
    Akbarifard, Farideh
    Azarbaijan Shahid Madani Univ, Dept Appl Math, Tabriz 5375171379, Iran.
    Brismar, Hjalmar
    KTH, Skolan för teknikvetenskap (SCI), Tillämpad fysik, Biofysik. KTH, Centra, Science for Life Laboratory, SciLifeLab. SciLifeLab, Adv Light Microscopy Facil, S-17165 Solna, Sweden.
    Öktem, Ozan
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Matematik (Avd.). KTH, Centra, Science for Life Laboratory, SciLifeLab.
    Joint Image Deconvolution and Separation Using Mixed Dictionaries (2019). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 28, no. 8, pp. 3936-3945. Article in journal (Refereed)
    Abstract [en]

    The task of separating an image into distinct components that represent different features plays an important role in many applications. Traditionally, such separation techniques are applied once the image in question has been reconstructed from measured data. We propose an efficient iterative algorithm, where reconstruction is performed jointly with the task of separation. A key assumption is that the image components have different sparse representations. The algorithm is based on a scheme that minimizes a functional composed of a data discrepancy term and the l1-norm of the coefficients of the different components with respect to their corresponding dictionaries. The performance is demonstrated for joint 2D deconvolution and separation into curve- and point-like components, and tests are performed on synthetic data as well as experimental stimulated emission depletion and confocal microscopy data. Experiments show that such a joint approach outperforms a sequential approach, where one first deconvolves the data and then applies image separation.
    (A minimal ISTA-style sketch of such a data-discrepancy-plus-l1 scheme appears after this result list.)

  • 7.
    Su, Rong
    et al.
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion, Mätteknik och optik.
    Ekberg, Peter
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion, Mätteknik och optik.
    Leitner, Michael
    Mattsson, Lars
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion, Mätteknik och optik.
    Accurate and automated image segmentation of 3D optical coherence tomography data suffering from low signal to noise levels. In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042. Article in journal (Other academic)
    Abstract [en]

    Optical coherence tomography (OCT) has proven to be a useful tool for investigating internal structures in ceramic tapes, and the technique is expected to be important for roll-to-roll manufacturing. However, because of high scattering in ceramic materials, noise and speckle degrade the image quality, which makes automated quantitative measurement of internal interfaces difficult. To overcome this difficulty, we present a new image analysis approach based on volumetric OCT data. The engine of the analysis is a 3D image processing and analysis algorithm dedicated to boundary segmentation and dimensional measurement in volumetric OCT images, offering high accuracy, efficiency, robustness, sub-pixel resolution, and fully automated operation. The method relies on the correlation property of a physical interface and effectively eliminates pixels caused by noise and speckle. The remaining stored pixels are those confirmed to be related to the target interfaces. Segmentation of tilted and curved internal interfaces separated by ~10 μm in the z-direction is demonstrated. The algorithm also extracts full-field top-view intensity maps of the target interfaces for high-accuracy measurements in the x- and y-directions. The methodology developed here may also be adopted in other similar 3D imaging and measurement technologies, e.g. ultrasound imaging, and for various materials.

  • 8. Wong, Kwan-Yee Kenneth
    et al.
    Zhang, Guoqiang
    Chen, Zhihu
    KTH, Skolan för elektro- och systemteknik (EES), Ljud- och bildbehandling (Stängd 130101).
    A Stratified Approach for Camera Calibration Using Spheres (2011). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 20, no. 2, pp. 305-316. Article in journal (Refereed)
    Abstract [en]

    This paper proposes a stratified approach for camera calibration using spheres. Previous works have exploited epipolar tangents to locate frontier points on spheres for estimating the epipolar geometry. It is shown in this paper that, in addition to the frontier points, two further point features can be obtained by considering the bitangent envelopes of a pair of spheres. A simple method for locating the images of such point features and the sphere centers is presented. An algorithm for recovering the fundamental matrix in a plane-plus-parallax representation using these recovered image points and the epipolar tangents from three spheres is developed. A new formulation of the absolute dual quadric as a cone tangent to a dual sphere, with the plane at infinity as its vertex, is derived. This allows the recovery of the absolute dual quadric, which is used to upgrade the weak calibration to a full calibration. Experimental results on both synthetic and real data are presented, which demonstrate the feasibility and high precision of the proposed algorithm.

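Sketch for entry 1 (Almansa and Lindeberg). The abstract describes automatic scale selection based on normalized derivatives. The snippet below is a minimal illustrative sketch of that general idea using the scale-normalized Laplacian; it is not the paper's ridge-specific estimator, and the function name, the scale list, and the use of scipy.ndimage.gaussian_laplace are assumptions made for illustration.

```python
# Minimal sketch: per-pixel scale selection with the scale-normalized
# Laplacian. Names, scales, and the toy image are illustrative.
import numpy as np
from scipy import ndimage

def select_scales(image, sigmas=(1, 2, 4, 8, 16)):
    """Per pixel, pick the sigma maximizing |sigma^2 * Laplacian(G_sigma * I)|."""
    image = np.asarray(image, dtype=float)
    responses = np.stack([
        (s ** 2) * np.abs(ndimage.gaussian_laplace(image, sigma=s))
        for s in sigmas
    ])                                      # shape: (n_scales, H, W)
    best = np.argmax(responses, axis=0)     # index of the strongest normalized response
    return np.asarray(sigmas)[best]         # per-pixel selected scale (sigma)

# Toy usage: the selected scale map could then steer how much smoothing is
# applied locally, with larger scales where ridges are wide or noise dominates.
scale_map = select_scales(np.random.rand(64, 64))
```
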
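Sketch for entry 2 (Averbuch et al.). The abstract reviews the transform / uniform quantization / Huffman coding family of seismic compression schemes. Below is a minimal sketch of the transform-plus-uniform-quantization stage only, with a plain blockwise 2-D DCT standing in for the adaptive local cosine transform; the block size, quantization step, and function names are illustrative assumptions, and the entropy-coding stage is omitted.

```python
# Minimal sketch: blockwise 2-D DCT + uniform quantization of a 2-D seismic
# section. Block size and step are illustrative; edge blocks smaller than
# `block` are simply skipped in this sketch.
import numpy as np
from scipy.fft import dctn, idctn

def encode_blocks(section, block=32, step=0.05):
    """Return integer quantization symbols for each full block of the section."""
    H, W = section.shape
    symbols = np.zeros((H, W), dtype=np.int32)
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            coeffs = dctn(section[i:i+block, j:j+block], norm='ortho')
            symbols[i:i+block, j:j+block] = np.round(coeffs / step)  # uniform quantizer
    return symbols  # an entropy coder (e.g. Huffman) would be applied to these

def decode_blocks(symbols, block=32, step=0.05):
    """Invert the quantization and the blockwise DCT."""
    H, W = symbols.shape
    rec = np.zeros((H, W), dtype=float)
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            rec[i:i+block, j:j+block] = idctn(symbols[i:i+block, j:j+block] * step,
                                              norm='ortho')
    return rec
```
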
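Sketch for entry 3 (Ge et al.). The abstract states that the graph nuclear norm regularization is equivalent to a weighted nuclear norm and that the model is solved by a weighted singular-value thresholding algorithm. The following is a minimal sketch of that thresholding step; the weights shown are a hypothetical choice, whereas the paper derives them from the graph/manifold structure of each patch group.

```python
# Minimal sketch: weighted singular-value thresholding (WSVT).
# The closed-form soft-thresholding solution holds for non-decreasing
# weights; the weights below are a hypothetical, illustrative choice.
import numpy as np

def weighted_svt(Y, weights):
    """Solve argmin_X 0.5*||X - Y||_F^2 + sum_i w_i * sigma_i(X) for non-decreasing w."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - np.asarray(weights, dtype=float), 0.0)  # soft-threshold
    return (U * s_shrunk) @ Vt

# Toy usage: denoise a group of similar patches stacked as columns of Y,
# with smaller weights on the leading singular values so the dominant
# structure is kept while weak (noisy) components are suppressed.
Y = np.random.randn(64, 20)
w = 0.5 * np.arange(1, 21)   # non-decreasing weights (hypothetical)
X_hat = weighted_svt(Y, w)
```
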
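Sketch for entry 5 (Meyer, Averbuch, and Strömberg). The abstract concerns a fast implementation of the best wavelet packet algorithm [1]. Below is a minimal sketch of cost-based best-basis selection over a 2-D wavelet packet tree using PyWavelets; the wavelet, depth, and cost function are illustrative assumptions, and the sketch does not include the factorized convolution-decimation speed-up that is the paper's focus.

```python
# Minimal sketch: cost-based best-basis selection on a 2-D wavelet packet
# tree. Wavelet, depth, and the cost function are illustrative choices.
import numpy as np
import pywt

def cost(coeffs, eps=1e-12):
    """Additive log-energy ('Shannon-like') cost used to compare bases."""
    c2 = np.abs(coeffs.ravel()) ** 2
    return -np.sum(c2 * np.log(c2 + eps))

def best_basis(node, level, maxlevel):
    """Return (best cost, list of subband paths) for the subtree rooted at `node`."""
    own = cost(node.data)
    if level == maxlevel:
        return own, [node.path]
    child_cost, child_paths = 0.0, []
    for key in ('a', 'h', 'v', 'd'):          # the four 2-D packet children
        c, paths = best_basis(node[key], level + 1, maxlevel)
        child_cost += c
        child_paths += paths
    if child_cost < own:                       # keep the children if they are cheaper
        return child_cost, child_paths
    return own, [node.path]

img = np.random.randn(128, 128)                # stand-in for a textured image
wp = pywt.WaveletPacket2D(img, wavelet='db4', mode='symmetric', maxlevel=3)
total_cost, basis = best_basis(wp, level=0, maxlevel=wp.maxlevel)
print(len(basis), "subbands selected for the adapted basis")
```
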
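Sketch for entry 6 (Siadat et al.). The abstract describes minimizing a functional composed of a data discrepancy term and l1-norms of the component coefficients in their respective dictionaries. Below is a minimal ISTA-style sketch of such a scheme on small dense matrices; the operator A, the dictionaries D1 and D2, the penalties, and all names are synthetic stand-ins rather than the paper's algorithm or setup.

```python
# Minimal sketch: proximal-gradient (ISTA-style) minimization of
#   0.5*||A(D1 x1 + D2 x2) - b||^2 + lam1*||x1||_1 + lam2*||x2||_1
# on small synthetic matrices. All operators and sizes are illustrative.
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_ista(A, D1, D2, b, lam1, lam2, n_iter=200):
    x1 = np.zeros(D1.shape[1])
    x2 = np.zeros(D2.shape[1])
    AD1, AD2 = A @ D1, A @ D2
    L = np.linalg.norm(np.hstack([AD1, AD2]), 2) ** 2   # Lipschitz constant of the gradient
    for _ in range(n_iter):
        r = AD1 @ x1 + AD2 @ x2 - b                      # current residual
        x1 = soft(x1 - (AD1.T @ r) / L, lam1 / L)        # gradient step + l1 prox
        x2 = soft(x2 - (AD2.T @ r) / L, lam2 / L)
    return D1 @ x1, D2 @ x2                              # the two separated components

# Toy usage with a random 'blur' operator and random dictionaries.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 100))
D1 = rng.standard_normal((100, 120))
D2 = rng.standard_normal((100, 120))
b = A @ (D1 @ (0.1 * rng.standard_normal(120))) + 0.01 * rng.standard_normal(80)
u1, u2 = joint_ista(A, D1, D2, b, lam1=0.1, lam2=0.1)
```
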