KTH Publications (DiVA)
Razavian, Ali Sharif
Publications (10 of 11)
Razavian, A. S., Sullivan, J., Carlsson, S. & Maki, A. (2019). Visual Instance Retrieval with Deep Convolutional Networks. Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers, 73(5), 956-964
Visual Instance Retrieval with Deep Convolutional Networks
2019 (English). In: Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers, ISSN 1342-6907, Vol. 73, no. 5, p. 956-964. Article in journal (Refereed). Published.
Abstract [en]

This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately.
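The multi-scale extraction and pooling scheme described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the authors' pipeline: `toy_descriptor` is a hypothetical stand-in for a ConvNet activation on an image crop, and only the aggregation logic (per-scale max-pooling over positions, concatenation across scales, L2 normalization) follows the idea in the text.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit Euclidean length (zero vectors pass through)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else v

def toy_descriptor(image, top, left, size):
    """Hypothetical stand-in for a ConvNet activation on a square crop:
    just the mean intensity and a crude left-to-right gradient."""
    rows = [row[left:left + size] for row in image[top:top + size]]
    mean = sum(p for row in rows for p in row) / (size * size)
    grad = sum(row[-1] - row[0] for row in rows) / size
    return [mean, grad]

def multi_scale_descriptor(image, scales=(1, 2)):
    """Extract local descriptors on a regular grid at several scales,
    max-pool each descriptor dimension over positions within a scale,
    then concatenate the per-scale pools and L2-normalize the result."""
    side = len(image)
    pooled = []
    for s in scales:
        size = side // s
        descs = [toy_descriptor(image, r * size, c * size, size)
                 for r in range(s) for c in range(s)]
        for i in range(len(descs[0])):
            pooled.append(max(d[i] for d in descs))
    return l2_normalize(pooled)
```

A query image and each database image would be mapped through `multi_scale_descriptor`, with retrieval by Euclidean or cosine distance between the resulting unit vectors.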

Place, publisher, year, edition, pages
Institute of Image Information and Television Engineers, 2019
Keywords
Convolutional network, Learning representation, Multi-resolution search, Visual instance retrieval, Convolution, Image retrieval, Convolutional networks, Image representations, Instance retrieval, Local feature, Multi-scales, Spatial consistency, Standard images, Image representation
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-328992 (URN), 10.3169/ITEJ.73.956 (DOI), 2-s2.0-85142357643 (Scopus ID)
Note

QC 20230614

Available from: 2023-06-14. Created: 2023-06-14. Last updated: 2025-02-07. Bibliographically approved.
Olczak, J., Fahlberg, N., Maki, A., Razavian, A. S., Jilert, A., Stark, A., . . . Gordon, M. (2017). Artificial intelligence for analyzing orthopedic trauma radiographs: Deep learning algorithms - are they on par with humans for diagnosing fractures? Acta Orthopaedica, 88(6), 581-586
Artificial intelligence for analyzing orthopedic trauma radiographs: Deep learning algorithms - are they on par with humans for diagnosing fractures?
2017 (English). In: Acta Orthopaedica, ISSN 1745-3674, E-ISSN 1745-3682, Vol. 88, no. 6, p. 581-586. Article in journal (Refereed). Published.
Abstract [en]

Background and purpose - Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs. Methods - We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd's Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We furthermore compared the network's performance with 2 senior orthopedic surgeons who reviewed images at the same resolution as the network. Results - All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best performing network. The network performed similarly to senior orthopedic surgeons when presented with images at the same resolution as the network. Cohen's kappa between the 2 reviewers under these conditions was 0.76. Interpretation - This study supports the use of artificial intelligence for orthopedic radiographs, where it can perform at a human level. While the current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics.

National Category
Orthopaedics
Identifiers
urn:nbn:se:kth:diva-220304 (URN), 10.1080/17453674.2017.1344459 (DOI), 000416605900005 (ISI), 28681679 (PubMedID), 2-s2.0-85021907834 (Scopus ID)
Note

QC 20171221

Available from: 2017-12-21. Created: 2017-12-21. Last updated: 2022-06-26. Bibliographically approved.
Carlsson, S., Azizpour, H., Razavian, A. S., Sullivan, J. & Smith, K. (2017). The Preimage of Rectifier Network Activities. In: International Conference on Learning Representations (ICLR). Paper presented at 5th International Conference on Learning Representations, ICLR 2017, 24-26 April 2017, Toulon, France. International Conference on Learning Representations, ICLR
The Preimage of Rectifier Network Activities
2017 (English). In: International Conference on Learning Representations (ICLR), ICLR, 2017. Conference paper, Published paper (Refereed).
Abstract [en]

The preimage of the activity at a certain level of a deep network is the set of inputs that result in the same node activity. For fully connected multi-layer rectifier networks we demonstrate how to compute the preimages of activities at arbitrary levels from knowledge of the parameters in a deep rectifying network. If the preimage set of a certain activity in the network contains elements from more than one class, it means that these classes are irreversibly mixed. This implies that preimage sets, which are piecewise linear manifolds, are building blocks for describing the input manifolds of specific classes, i.e. all preimages should ideally be from the same class. We believe that the knowledge of how to compute preimages will be valuable in understanding the efficiency displayed by deep learning networks and could potentially be used in designing more efficient training algorithms.
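The core notion above, that a whole set of inputs maps to one activity, is already visible for a single rectifier layer: each inactive unit contributes a half-space to the preimage. A minimal, hypothetical demonstration (not the paper's construction for arbitrary levels):

```python
def relu_layer(W, bias, x):
    """Fully connected layer followed by a rectifier: max(0, Wx + b) per unit."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(W, bias)]

# A single unit in R^2 with weight (1, 0) and bias -1.
# Every input whose first coordinate is <= 1 satisfies w.x + b <= 0,
# so the entire half-plane x[0] <= 1 is the preimage of activity 0:
# these inputs are irreversibly mixed by the layer.
W, bias = [[1.0, 0.0]], [-1.0]
print(relu_layer(W, bias, [0.5, 7.0]))   # -> [0.0]  (in the preimage of 0)
print(relu_layer(W, bias, [-3.0, -2.0])) # -> [0.0]  (also in the preimage of 0)
print(relu_layer(W, bias, [2.0, 0.0]))   # -> [1.0]  (active: preimage of 1 is the line x[0] = 2)
```

For an active unit the preimage of a given activity is an affine subspace (here a line), while inactive units collapse half-spaces, which is what makes preimage sets piecewise linear.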

Place, publisher, year, edition, pages
International Conference on Learning Representations, ICLR, 2017
Keywords
Heuristic algorithms, Piecewise linear techniques, General structures, Input space, Network activities, Optimisations, Piecewise linear, Preimages, Regularization algorithms, Rectifying circuits
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-259164 (URN), 2-s2.0-85093029593 (Scopus ID)
Conference
5th International Conference on Learning Representations, ICLR 2017, 24-26 April 2017, Toulon, France
Note

QC 20230609

Available from: 2019-09-11. Created: 2019-09-11. Last updated: 2025-02-07. Bibliographically approved.
Azizpour, H., Sharif Razavian, A., Sullivan, J., Maki, A. & Carlsson, S. (2016). Factors of Transferability for a Generic ConvNet Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9), 1790-1802, Article ID 7328311.
Factors of Transferability for a Generic ConvNet Representation
2016 (English). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 38, no. 9, p. 1790-1802, article id 7328311. Article in journal (Refereed). Published.
Abstract [en]

Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the activation of the trained network's feed-forward units at a certain layer is used as a generic representation of an input image for a task with a relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. They include parameters for training the source ConvNet, such as its architecture and the distribution of the training data, as well as parameters of feature extraction, such as the layer of the trained ConvNet and dimensionality reduction. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks. We further show that these visual recognition tasks can be categorically ordered based on their similarity to the source task, such that a correlation between the performance of tasks and their similarity to the source task w.r.t. the proposed factors is observed.

Place, publisher, year, edition, pages
IEEE Computer Society Digital Library, 2016
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-177033 (URN), 10.1109/TPAMI.2015.2500224 (DOI), 000381432700006 (ISI), 26584488 (PubMedID), 2-s2.0-84981266620 (Scopus ID)
Note

QC 20161208

Available from: 2015-11-13. Created: 2015-11-13. Last updated: 2025-02-07. Bibliographically approved.
Razavian, A. S., Sullivan, J., Carlsson, S. & Maki, A. (2016). Visual instance retrieval with deep convolutional networks. ITE Transactions on Media Technology and Applications, 4(3), 251-258
Visual instance retrieval with deep convolutional networks
2016 (English). In: ITE Transactions on Media Technology and Applications, ISSN 2186-7364, Vol. 4, no. 3, p. 251-258. Article in journal (Refereed). Published.
Abstract [en]

This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately.

Place, publisher, year, edition, pages
Institute of Image Information and Television Engineers, 2016
Keywords
Convolutional network, Learning representation, Multi-resolution search, Visual instance retrieval
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-195472 (URN), 10.3169/mta.4.251 (DOI), 2-s2.0-84979503481 (Scopus ID)
Note

QC 20211129

Available from: 2016-11-25. Created: 2016-11-03. Last updated: 2025-02-07. Bibliographically approved.
Sharif Razavian, A., Sullivan, J., Maki, A. & Carlsson, S. (2015). A Baseline for Visual Instance Retrieval with Deep Convolutional Networks. Paper presented at the International Conference on Learning Representations, May 7-9, 2015, San Diego, CA. San Diego, US: ICLR
A Baseline for Visual Instance Retrieval with Deep Convolutional Networks
2015 (English). Conference paper, Poster (with or without abstract) (Refereed).
Place, publisher, year, edition, pages
San Diego, US: ICLR, 2015
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-165765 (URN)
Conference
International Conference on Learning Representations, May 7-9, 2015, San Diego, CA
Note

QC 20150522

Available from: 2015-04-29. Created: 2015-04-29. Last updated: 2024-03-15. Bibliographically approved.
Azizpour, H., Razavian, A. S., Sullivan, J., Maki, A. & Carlsson, S. (2015). From Generic to Specific Deep Representations for Visual Recognition. In: Proceedings of CVPR 2015. Paper presented at CVPRW DeepVision Workshop, 7-12 June 2015, Boston, MA, USA. IEEE conference proceedings
From Generic to Specific Deep Representations for Visual Recognition
2015 (English). In: Proceedings of CVPR 2015, IEEE conference proceedings, 2015. Conference paper, Published paper (Refereed).
Abstract [en]

Evidence is mounting that ConvNets are the best representation learning method for recognition. In the common scenario, a ConvNet is trained on a large labeled dataset and the feed-forward unit activations at a certain layer of the network are used as a generic representation of an input image. Recent studies have shown this form of representation to be astoundingly effective for a wide range of recognition tasks. This paper thoroughly investigates the transferability of such representations w.r.t. several factors. It includes parameters for training the network, such as its architecture, and parameters of feature extraction. We further show that different visual recognition tasks can be categorically ordered based on their distance from the source task. We then show interesting results indicating a clear correlation between the performance of tasks and their distance from the source task conditioned on the proposed factors. Furthermore, by optimizing these factors, we achieve state-of-the-art performances on 16 visual recognition tasks.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015
Series
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, ISSN 2160-7508
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-164527 (URN), 10.1109/CVPRW.2015.7301270 (DOI), 000378887900005 (ISI), 2-s2.0-84951960494 (Scopus ID), 978-1-4673-6759-2 (ISBN)
Conference
CVPRW DeepVision Workshop, 7-12 June 2015, Boston, MA, USA
Note

QC 20150507. QC 20200701

Available from: 2015-04-17. Created: 2015-04-17. Last updated: 2025-02-07. Bibliographically approved.
Sharif Razavian, A., Azizpour, H., Maki, A., Sullivan, J., Ek, C. H. & Carlsson, S. (2015). Persistent Evidence of Local Image Properties in Generic ConvNets. In: Paulsen, Rasmus R., Pedersen, Kim S. (Ed.), Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings. Paper presented at Scandinavian Conference on Image Analysis, Copenhagen, Denmark, 15-17 June, 2015 (pp. 249-262). Springer Publishing Company
Persistent Evidence of Local Image Properties in Generic ConvNets
2015 (English). In: Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Paulsen, Rasmus R., Pedersen, Kim S., Springer Publishing Company, 2015, p. 249-262. Conference paper, Published paper (Refereed).
Abstract [en]

Supervised training of a convolutional network for object classification should make explicit any information related to the class of objects and disregard any auxiliary information associated with the capture of the image or the variation within the object class. Does this happen in practice? Although this seems to pertain to the very final layers in the network, if we look at earlier layers we find that this is not the case. Surprisingly, strong spatial information is implicit. This paper addresses this, in particular, exploiting the image representation at the first fully connected layer, i.e. the global image descriptor which has recently been shown to be most effective in a range of visual recognition tasks. We empirically demonstrate evidence for this finding in the context of four different tasks: 2d landmark detection, 2d object keypoint prediction, estimation of the RGB values of the input image, and recovery of the semantic label of each pixel. We base our investigation on a simple framework with ridge regression common across these tasks, and show results which all support our insight. Such spatial information can be used for computing the correspondence of landmarks to a good accuracy, and should potentially be useful for improving the training of convolutional nets for classification purposes.

Place, publisher, year, edition, pages
Springer Publishing Company, 2015
Series
Image Processing, Computer Vision, Pattern Recognition, and Graphics ; 9127
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-172140 (URN), 10.1007/978-3-319-19665-7_21 (DOI), 2-s2.0-84947982864 (Scopus ID)
Conference
Scandinavian Conference on Image Analysis, Copenhagen, Denmark, 15-17 June, 2015
Note

QC 20150828

Available from: 2015-08-13. Created: 2015-08-13. Last updated: 2024-03-15. Bibliographically approved.
Razavian, A. S., Sullivan, J., Carlsson, S. & Maki, A. (2015). Visual instance retrieval with deep convolutional networks. In: 3rd International Conference on Learning Representations, ICLR 2015 - Workshop Track Proceedings. Paper presented at 3rd International Conference on Learning Representations, ICLR 2015, 7 May 2015 through 9 May 2015. International Conference on Learning Representations, ICLR
Visual instance retrieval with deep convolutional networks
2015 (English). In: 3rd International Conference on Learning Representations, ICLR 2015 - Workshop Track Proceedings, ICLR, 2015. Conference paper, Published paper (Refereed).
Abstract [en]

This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. 

Place, publisher, year, edition, pages
International Conference on Learning Representations, ICLR, 2015
Keywords
Convolutional network, Learning representation, Multi-resolution search, Visual instance retrieval, Image retrieval, Convolutional networks, Geometric invariance, Image representations, Instance retrieval, Spatial consistency, State-of-the-art methods, Convolution
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-276537 (URN), 2-s2.0-85083954456 (Scopus ID)
Conference
3rd International Conference on Learning Representations, ICLR 2015, 7 May 2015 through 9 May 2015
Note

QC 20200616

Available from: 2020-06-16. Created: 2020-06-16. Last updated: 2023-11-24. Bibliographically approved.
Sharif Razavian, A., Azizpour, H., Sullivan, J. & Carlsson, S. (2014). CNN features off-the-shelf: An Astounding Baseline for Recognition. In: Proceedings of CVPR 2014. Paper presented at the Computer Vision and Pattern Recognition (CVPR) 2014 DeepVision workshop, June 28, 2014, Columbus, Ohio.
CNN features off-the-shelf: An Astounding Baseline for Recognition
2014 (English). In: Proceedings of CVPR 2014, 2014. Conference paper, Published paper (Refereed).
Abstract [en]

Recent results indicate that the generic descriptors extracted from convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network, which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine-grained recognition, attribute detection and image retrieval, applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistently superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low-memory-footprint methods except for the sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in the case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques, e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.
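The retrieval rule mentioned in the abstract, plain L2 distance between fixed feature vectors, can be sketched as below. The 3-dimensional vectors are toy stand-ins for the 4096-dimensional ConvNet representations, and the function names are illustrative rather than from the paper's code:

```python
import math

def l2_distance(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rank_by_distance(query, database):
    """Indices of database features ordered by increasing L2 distance
    to the query feature: nearest first."""
    return sorted(range(len(database)),
                  key=lambda i: l2_distance(query, database[i]))

# Toy stand-ins for image descriptors (real ones would be 4096-d activations).
database = [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.9, 0.1, 0.0]]
query = [1.0, 0.0, 0.0]
print(rank_by_distance(query, database))  # -> [1, 2, 0]
```

For the classification tasks, the abstract substitutes a linear SVM trained on the same fixed features for this nearest-neighbor rule; the representation itself is unchanged between the two uses.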

National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-149178 (URN), 10.1109/CVPRW.2014.131 (DOI), 000349552300079 (ISI), 2-s2.0-84908537903 (Scopus ID)
Conference
Computer Vision and Pattern Recognition (CVPR) 2014, DeepVision workshop, June 28, 2014, Columbus, Ohio
Note

Best Paper Runner-up Award.

QC 20140825

Available from: 2014-08-16. Created: 2014-08-16. Last updated: 2024-03-15. Bibliographically approved.