Time-varying Normalizing Flow for Generative Modeling of Dynamical Signals
Ghosh, Anubhab: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering (Digital Futures). ORCID iD: 0000-0001-6612-6923
Fontcuberta, Aleix Espuna: KTH, Centres, Nordic Institute for Theoretical Physics NORDITA; KTH, School of Electrical Engineering and Computer Science (EECS) (Digital Futures)
Uppsala University, Department of Information Technology, Division of Systems and Control, Uppsala, Sweden
Chatterjee, Saikat: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering (Digital Futures). ORCID iD: 0000-0003-2638-6047
2022 (English). In: 2022 30th European Signal Processing Conference (EUSIPCO 2022), IEEE, 2022, p. 1492-1496. Conference paper, published paper (refereed).
Abstract [en]

We develop a time-varying normalizing flow (TVNF) for explicit generative modeling of dynamical signals. Being explicit, the model can generate samples of dynamical signals and compute the likelihood of a given dynamical signal sample. In the proposed model, the signal flow through the layers of the normalizing flow is a function of time, realized using an encoded representation that is the output of a recurrent neural network (RNN). Given a set of dynamical signals, the parameters of the TVNF are learned with the maximum-likelihood approach in conjunction with gradient descent (backpropagation). Use of the proposed model is illustrated on a toy application scenario: a maximum-likelihood-based speech-phone classification task.
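The following is a minimal, hypothetical PyTorch sketch of the idea described in the abstract: an affine-coupling normalizing flow whose scale and shift parameters at each time step are conditioned on the state of an RNN, trained by maximizing log-likelihood with gradient descent. It is not the authors' implementation; the layer sizes, the GRU conditioning on past samples, and all names are assumptions made for illustration.

```python
# Hypothetical sketch only: a time-varying normalizing flow (TVNF) built from
# affine coupling layers whose parameters depend on an RNN state per time step.
import torch
import torch.nn as nn


class TimeVaryingAffineCoupling(nn.Module):
    """One affine coupling layer conditioned on a time-dependent state h_t."""

    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        # Maps (first half of x, conditioning state) to scales and shifts.
        self.net = nn.Sequential(
            nn.Linear(self.d + cond_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x, h):
        # x: (batch, dim), h: (batch, cond_dim)
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(torch.cat([x1, h], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)  # keep scales bounded for numerical stability
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)
        return torch.cat([x1, z2], dim=-1), log_det


class TVNF(nn.Module):
    """Stack of coupling layers; conditioning comes from a GRU over past samples."""

    def __init__(self, dim, cond_dim=32, n_layers=4):
        super().__init__()
        self.rnn = nn.GRU(input_size=dim, hidden_size=cond_dim, batch_first=True)
        self.layers = nn.ModuleList(
            [TimeVaryingAffineCoupling(dim, cond_dim) for _ in range(n_layers)]
        )
        self.base = torch.distributions.Normal(0.0, 1.0)  # standard normal base

    def log_likelihood(self, x_seq):
        # x_seq: (batch, T, dim). Step t is conditioned on the RNN summary of x_{<t}.
        B, T, D = x_seq.shape
        h_seq, _ = self.rnn(
            torch.cat([torch.zeros(B, 1, D), x_seq[:, :-1]], dim=1)
        )
        ll = torch.zeros(B)
        for t in range(T):
            z, log_det = x_seq[:, t], torch.zeros(B)
            for layer in self.layers:
                z, ld = layer(z, h_seq[:, t])
                log_det = log_det + ld
                # Reverse feature order so both halves get transformed
                # (a permutation with unit Jacobian determinant).
                z = torch.flip(z, dims=[-1])
            ll = ll + self.base.log_prob(z).sum(dim=-1) + log_det
        return ll  # per-sequence log-likelihood, shape (batch,)


if __name__ == "__main__":
    # Maximum-likelihood training by gradient descent, as described in the abstract.
    model = TVNF(dim=8)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(16, 20, 8)  # toy batch: 16 signals, 20 time steps, 8 features
    opt.zero_grad()
    loss = -model.log_likelihood(x).mean()
    loss.backward()
    opt.step()
```

Sampling would invert the coupling layers step by step while feeding the generated samples back into the RNN; that direction is omitted here for brevity.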

Place, publisher, year, edition, pages
IEEE, 2022, p. 1492-1496.
Series
European Signal Processing Conference, ISSN 2076-1465
Keywords [en]
Generative learning, recurrent neural networks, neural networks, normalizing flows
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:kth:diva-324330
ISI: 000918827600293
Scopus ID: 2-s2.0-85141010789
OAI: oai:DiVA.org:kth-324330
DiVA id: diva2:1740015
Conference
30th European Signal Processing Conference (EUSIPCO), August 29 to September 2, 2022, Belgrade, Serbia
Note

QC 20230228

Available from: 2023-02-28. Created: 2023-02-28. Last updated: 2023-06-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Ghosh, Anubhab; Fontcuberta, Aleix Espuna; Chatterjee, Saikat

