KTH Publications (kth.se)
Petrosyan, Vahan
Publications (4 of 4)
Jeon, S. & Petrosyan, V. (2023). Regularity of almost minimizers for the parabolic thin obstacle problem. Nonlinear Analysis, 237, Article ID 113386.
Regularity of almost minimizers for the parabolic thin obstacle problem
2023 (English). In: Nonlinear Analysis, ISSN 0362-546X, E-ISSN 1873-5215, Vol. 237, article id 113386. Article in journal (Refereed), Published
Abstract [en]

In this paper, we study almost minimizers for the parabolic thin obstacle (or Signorini) problem with zero obstacle. We establish their H^{σ,σ/2}-regularity for every 0 < σ < 1, as well as H^{β,β/2}-regularity of their spatial gradients on either side of the thin space for some 0 < β < 1. A similar result is also obtained for almost minimizers for the Signorini problem with variable Hölder coefficients.
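For reference, H^{σ,σ/2} above denotes the standard parabolic Hölder space, in which space and time are weighted anisotropically; one common (equivalent) form of its seminorm is:

```latex
% Parabolic Hölder seminorm on a space-time domain Q:
% one power of a spatial increment "costs" two powers of a time increment.
[u]_{H^{\sigma,\sigma/2}(Q)}
  = \sup_{\substack{(x,t),(y,s)\in Q \\ (x,t)\neq(y,s)}}
    \frac{|u(x,t)-u(y,s)|}{\left(|x-y|^{2}+|t-s|\right)^{\sigma/2}}
```

The paper itself may use an equivalent variant of this definition.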

Place, publisher, year, edition, pages
Elsevier BV, 2023
Keywords
Almost minimizers, Parabolic A-Signorini problem, Parabolic thin obstacle (or Signorini) problem, Regularity of solutions
National Category
Mathematical Analysis
Identifiers
urn:nbn:se:kth:diva-337421 (URN)
10.1016/j.na.2023.113386 (DOI)
001084962500001 (ISI)
2-s2.0-85171848097 (Scopus ID)
Note

QC 20231003

Available from: 2023-10-03 Created: 2023-10-03 Last updated: 2023-11-07. Bibliographically approved
Liu, Y., Jiang, P.-T., Petrosyan, V., Li, S.-J., Bian, J., Zhang, L. & Cheng, M.-M. (2018). DEL: Deep embedding learning for efficient image segmentation. In: Lang, J. (Ed.), Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. Paper presented at 27th International Joint Conference on Artificial Intelligence, IJCAI 2018, 13 July 2018 through 19 July 2018 (pp. 864-870). International Joint Conferences on Artificial Intelligence
DEL: Deep embedding learning for efficient image segmentation
2018 (English). In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence / [ed] Lang, J. International Joint Conferences on Artificial Intelligence, 2018, p. 864-870. Conference paper, Published paper (Refereed)
Abstract [en]

Image segmentation has been explored for many years and still remains a crucial vision problem. Some efficient or accurate segmentation algorithms have been widely used in many vision applications. However, it is difficult to design an image segmenter that is both efficient and accurate. In this paper, we propose a novel method called DEL (deep embedding learning) which can efficiently transform superpixels into an image segmentation. Starting with SLIC superpixels, we train a fully convolutional network to learn a feature embedding space for each superpixel. The learned feature embedding corresponds to a similarity measure between two adjacent superpixels. With these deep similarities, we can directly merge the superpixels into large segments. Evaluation results on BSDS500 and PASCAL Context demonstrate that our approach achieves a good trade-off between efficiency and effectiveness. Specifically, our DEL algorithm achieves segments comparable to MCG while being much faster, i.e. 11.4 fps vs. 0.07 fps.
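The merging stage the abstract describes — fusing adjacent superpixels whose learned embeddings are sufficiently similar — can be sketched as a single union-find pass. This is a minimal illustration, not the authors' implementation: `emb` stands in for the FCN-learned per-superpixel features, and cosine similarity with threshold `tau` is an assumed form of the similarity measure.

```python
import numpy as np

def merge_superpixels(emb, adjacency, tau=0.9):
    """Merge adjacent superpixels whose embeddings are similar.

    emb       : (n, d) array of per-superpixel embeddings (would come from
                the trained FCN; synthetic in this sketch)
    adjacency : iterable of (i, j) index pairs of adjacent superpixels
    tau       : cosine-similarity threshold for merging (assumed criterion)
    Returns a list of segment labels, one per superpixel.
    """
    n = len(emb)
    parent = list(range(n))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Normalize rows so dot products are cosine similarities
    E = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    for i, j in adjacency:
        if E[i] @ E[j] > tau:          # similar enough -> same segment
            parent[find(i)] = find(j)

    roots = [find(i) for i in range(n)]
    remap = {}                         # relabel segments 0..m-1
    return [remap.setdefault(r, len(remap)) for r in roots]
```

For example, four superpixels in a chain where only the first and last pairs are similar end up in two segments.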

Place, publisher, year, edition, pages
International Joint Conferences on Artificial Intelligence, 2018
Keywords
Artificial intelligence, Deep learning, Pixels, Superpixels, Convolutional networks, Evaluation results, Feature embedding, Segmentation algorithms, Segmenter, Similarity measure, Vision applications, Vision problems, Image segmentation
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-246576 (URN)
10.24963/ijcai.2018/120 (DOI)
000764175400120 (ISI)
2-s2.0-85054182458 (Scopus ID)
Conference
27th International Joint Conference on Artificial Intelligence, IJCAI 2018, 13 July 2018 through 19 July 2018
Note

Part of proceedings. ISBN 978-0-9992411-2-7

QC 20190611

Available from: 2019-06-11 Created: 2019-06-11 Last updated: 2024-03-18. Bibliographically approved
Petrosyan, V. & Proutiere, A. (2017). Viral initialization for spectral clustering. In: ESANN 2017 - Proceedings, 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Paper presented at 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2017, 26 April 2017 through 28 April 2017, Bruges, Belgium (pp. 275-280). i6doc.com publication, Article ID ES2017-49.
Viral initialization for spectral clustering
2017 (English). In: ESANN 2017 - Proceedings, 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, i6doc.com publication, 2017, p. 275-280, article id ES2017-49. Conference paper, Published paper (Refereed)
Abstract [en]

Spectral Clustering is one of the most widely used clustering algorithms. To find k clusters, it runs the K-means algorithm on the top k eigenvectors of a Laplacian matrix constructed from the data. As a consequence, it inherits the initialization issues of K-means. In this paper, we propose Viral Initialization (VI), a novel initialization procedure implemented in the Spectral Clustering algorithm before K-means is applied. VI is designed so that the resulting clusterings exhibit low normalized cut (Ncuts) values. This design principle is aligned with the recent observation that "good" clusterings have low Ncuts values. We show, through extensive numerical experiments, that the Spectral Clustering algorithm with VI consistently outperforms other state-of-the-art clustering techniques.
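For context, the standard spectral clustering pipeline the abstract modifies — K-means run on the top k eigenvectors of a normalized graph Laplacian — can be sketched in NumPy as follows. This is a generic sketch, not the authors' code: the RBF affinity and the deterministic farthest-point initialization are stand-ins, the latter occupying the slot where the paper's Viral Initialization would plug in.

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0, n_iter=50):
    """K-means on the top-k eigenvectors of a normalized graph Laplacian."""
    # RBF affinity matrix W from pairwise squared distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    dinv = 1.0 / np.sqrt(np.maximum(W.sum(1), 1e-12))
    L = np.eye(len(X)) - dinv[:, None] * W * dinv[None, :]
    # "Top k eigenvectors" = eigenvectors of the k smallest eigenvalues of L
    _, vecs = np.linalg.eigh(L)          # eigh sorts eigenvalues ascending
    U = vecs[:, :k]
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)  # row-normalize
    # Deterministic farthest-point initialization (stand-in for VI)
    idx = [0]
    for _ in range(1, k):
        dmin = ((U[:, None, :] - U[idx][None, :, :]) ** 2).sum(-1).min(1)
        idx.append(int(dmin.argmax()))
    C = U[idx]
    # Lloyd iterations (K-means) in the embedded space
    for _ in range(n_iter):
        labels = ((U[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        C = np.stack([U[labels == j].mean(0) if (labels == j).any() else C[j]
                      for j in range(k)])
    return labels
```

Because the initialization determines which rows of the spectral embedding seed K-means, a poor choice can split one true cluster and merge two others — the failure mode VI is designed to avoid.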

Place, publisher, year, edition, pages
i6doc.com publication, 2017
Keywords
Machine learning, Matrix algebra, Neural networks, Clustering techniques, Design Principles, Initialization procedures, Normalized cuts, Numerical experiments, Spectral clustering, Spectral clustering algorithms, State of the art, K-means clustering
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-262459 (URN)
2-s2.0-85069455171 (Scopus ID)
9782875870391 (ISBN)
Conference
25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2017, 26 April 2017 through 28 April 2017, Bruges, Belgium
Note

QC 20191017

Available from: 2019-10-17 Created: 2019-10-17 Last updated: 2022-09-13. Bibliographically approved
Petrosyan, V. & Proutiere, A. (2016). Viral Clustering: A Robust Method to Extract Structures in Heterogeneous Datasets. Paper presented at The Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), February 12-17, Phoenix, USA (pp. 1986-1992). AAAI Press
Viral Clustering: A Robust Method to Extract Structures in Heterogeneous Datasets
2016 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Cluster validation constitutes one of the most challenging problems in unsupervised cluster analysis. For example, identifying the true number of clusters present in a dataset has been investigated for decades, and is still puzzling researchers today. The difficulty stems from the high variety of dataset characteristics. Some datasets exhibit a strong structure with a few well-separated and normally distributed clusters, but most often real-world datasets contain possibly many overlapping non-Gaussian clusters with heterogeneous variances and shapes. This calls for the design of robust clustering algorithms that can adapt to the structure of the data and, in particular, accurately guess the true number of clusters. There have recently been interesting attempts to design such algorithms, e.g. based on involved non-parametric statistical inference techniques. In this paper, we develop Viral Clustering (VC), a simple algorithm that jointly estimates the number of clusters and outputs the clusters. The VC algorithm relies on two antagonistic and interacting components. The first component tends to regroup neighbouring samples, while the second tends to spread samples across clusters. This spreading is performed using an analogy with the way viruses spread over networks. We present extensive numerical experiments illustrating the robustness of the VC algorithm and its superiority over existing algorithms.

Place, publisher, year, edition, pages
AAAI Press, 2016
Keywords
Clustering, K-means, Cluster Validation, Number of Clusters
National Category
Computer Sciences
Research subject
Mathematics
Identifiers
urn:nbn:se:kth:diva-181109 (URN)
000485474202004 (ISI)
2-s2.0-85007251785 (Scopus ID)
Conference
The Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), February 12-17, Phoenix, USA
Note

QC 20211018

Available from: 2016-01-29 Created: 2016-01-29 Last updated: 2024-03-18. Bibliographically approved