Interpreting Video Features: A Comparison of 3D Convolutional Networks and Convolutional LSTM Networks
Mänttäri, Joonatan. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-2171-1429
Broomé, Sofia. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-5458-3473
Folkesson, John. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-7796-1438
Kjellström, Hedvig. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-5750-9655
2021 (English). In: 15th Asian Conference on Computer Vision, ACCV 2020, Springer Science and Business Media Deutschland GmbH, 2021, p. 411-426. Conference paper, published paper (refereed).
Abstract [en]

A number of techniques for interpretability have been presented for deep learning in computer vision, typically with the goal of understanding what the networks have based their classification on. However, interpretability for deep video architectures is still in its infancy and we do not yet have a clear concept of how to decode spatiotemporal features. In this paper, we present a study comparing how 3D convolutional networks and convolutional LSTM networks learn features across temporally dependent frames. This is the first comparison of two video models that both convolve to learn spatial features but have principally different methods of modeling time. Additionally, we extend the concept of meaningful perturbation introduced by [1] to the temporal dimension, to identify the temporal part of a sequence most meaningful to the network for a classification decision. Our findings indicate that the 3D convolutional model concentrates on shorter events in the input sequence, and places its spatial focus on fewer, contiguous areas. 
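The two ways of modelling time that the abstract contrasts can be made concrete with a small sketch. The following PyTorch code is illustrative only and does not come from the paper: `Tiny3DConvNet` and `ConvLSTMCell` are hypothetical stand-ins for the networks the authors actually evaluate. A 3D CNN treats time as just another convolution axis, so each layer mixes a fixed local window of frames; a convolutional LSTM instead carries a hidden state across frames, integrating the clip sequentially.

```python
# Minimal sketch (assumed PyTorch; not the authors' code) of the two
# temporal-modelling styles compared in the paper. torch.nn has no built-in
# ConvLSTM, so a single-layer cell is written out by hand.
import torch
import torch.nn as nn

class Tiny3DConvNet(nn.Module):
    """Time as a convolution axis: one 3D kernel sees a fixed window of frames."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 3, 3), padding=1),  # conv over (T, H, W)
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):                  # x: (B, C, T, H, W)
        return self.classifier(self.features(x).flatten(1))

class ConvLSTMCell(nn.Module):
    """Time as recurrence: spatial convolutions inside the LSTM gates,
    with hidden and cell states carried from frame to frame."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):            # x: (B, C, H, W)
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Both consume the same clip; only the treatment of the time axis differs.
clip = torch.randn(2, 3, 8, 32, 32)        # (batch, channels, frames, H, W)
print(Tiny3DConvNet()(clip).shape)         # torch.Size([2, 10])

cell = ConvLSTMCell(3, 16)
h = c = torch.zeros(2, 16, 32, 32)
for t in range(clip.shape[2]):             # frames fed one at a time
    h, c = cell(clip[:, :, t], h, c)
print(h.shape)                             # torch.Size([2, 16, 32, 32])
```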

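The temporal extension of meaningful perturbation can be sketched in the same spirit. The code below is one plausible reading of the idea, applying the deletion formulation of [1] along the time axis; the sigmoid mask parameterisation, the frozen-first-frame perturbation, and all hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of meaningful perturbation over time, in the spirit of [1].
# Not the paper's implementation: a per-frame mask m in [0, 1]^T is optimised
# so that replacing the masked frames destroys the target class score, while
# an L1 term keeps the deleted span as short as possible.
import torch

def temporal_mask(model, clip, target, steps=300, lam=0.05, lr=0.1):
    """clip: (1, C, T, H, W). Returns a (T,) mask in [0, 1]; frames driven
    towards 0 are those whose removal hurts the class score most, i.e. the
    temporally "meaningful" part of the sequence."""
    T = clip.shape[2]
    theta = torch.full((T,), 2.0, requires_grad=True)  # sigmoid(2) ~ 0.88: start near "keep all"
    frozen = clip[:, :, :1].expand_as(clip)            # perturbation: first frame frozen in time
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(theta).view(1, 1, T, 1, 1)
        perturbed = m * clip + (1 - m) * frozen        # blend original and frozen frames
        score = torch.softmax(model(perturbed), dim=1)[0, target]
        # Deletion game along time: push the class score down while the L1
        # term resists deleting (m -> 0) more frames than necessary.
        loss = score + lam * (1 - torch.sigmoid(theta)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(theta).detach()

# Usage with any classifier mapping (B, C, T, H, W) to logits, e.g. the
# hypothetical Tiny3DConvNet sketched above:
# mask = temporal_mask(Tiny3DConvNet().eval(), torch.randn(1, 3, 8, 32, 32), target=0)
```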
Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2021. p. 411-426
Keywords [en]
3D modeling, Computer vision, Convolution, Convolutional neural networks, Deep learning, Classification decision, Convolutional model, Convolutional networks, Interpretability, Spatial features, Spatio-temporal features, Temporal dimensions, Video architecture, Long short-term memory
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-308515
DOI: 10.1007/978-3-030-69541-5_25
Scopus ID: 2-s2.0-85103359402
OAI: oai:DiVA.org:kth-308515
DiVA, id: diva2:1636262
Conference
15th Asian Conference on Computer Vision, ACCV 2020, 30 November 2020 through 4 December 2020
Note

Part of proceedings: ISBN 9783030695408, QC 20220209

Available from: 2022-02-09. Created: 2022-02-09. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Mänttäri, Joonatan; Broomé, Sofia; Folkesson, John; Kjellström, Hedvig
