Publications (10 of 16)
Liang, X., Cumlin, F., Ungureanu, V., Reddy, C. K. A., Schuldt, C. & Chatterjee, S. (2024). DeePMOS-B: Deep Posterior Mean-Opinion-Score using Beta Distribution. In: 32nd European Signal Processing Conference, EUSIPCO 2024. Paper presented at the 32nd European Signal Processing Conference (EUSIPCO), Aug 26-30, 2024, Lyon, France (pp. 416-420). Institute of Electrical and Electronics Engineers (IEEE)
DeePMOS-B: Deep Posterior Mean-Opinion-Score using Beta Distribution
2024 (English). In: 32nd European Signal Processing Conference, EUSIPCO 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 416-420. Conference paper, Published paper (Refereed)
Abstract [en]

Mean opinion score (MOS) is a bounded speech quality measure, ranging between 1 and 5. We propose using a Beta distribution to model the posterior of the bounded MOS for a given speech clip. A deep neural network (DNN), trained using a maximum-likelihood principle, provides the parameters of the posterior Beta distribution. A self-teacher learning setup is used to achieve robustness against the inherent challenge of training on a noisy dataset; the noise stems from the subjective nature of MOS labels, since only a handful of quality ratings are provided for each speech clip. To compare with existing state-of-the-art methods, we use the mean of the Beta posterior as a point estimate of the MOS. The proposed method shows competitive performance vis-a-vis several existing DNN-based methods that provide MOS point estimates, and an ablation study shows the importance of the various components of the proposed method.
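The Beta-posterior idea above can be made concrete with a short sketch. This is a minimal illustration under stated assumptions (the function names and the rating-to-(0,1) mapping are not the paper's code): the network head would emit two positive shape parameters, the training loss is the Beta negative log-likelihood of each rating mapped into the open unit interval, and the MOS point estimate is the posterior mean mapped back to [1, 5].

```python
import math

def beta_nll(alpha, beta, rating):
    """Negative log-likelihood of one MOS rating under a Beta posterior.
    The rating in [1, 5] is mapped to the open unit interval; alpha and
    beta stand in for the two positive outputs of the DNN head."""
    x = (rating - 1.0) / 4.0
    x = min(max(x, 1e-4), 1.0 - 1e-4)  # keep strictly inside (0, 1)
    # log of the Beta function B(alpha, beta)
    log_b = math.lgamma(alpha) + math.lgamma(beta) - math.lgamma(alpha + beta)
    return log_b - (alpha - 1.0) * math.log(x) - (beta - 1.0) * math.log(1.0 - x)

def mos_point_estimate(alpha, beta):
    """Mean of the Beta posterior, mapped back to the MOS range [1, 5]."""
    return 1.0 + 4.0 * alpha / (alpha + beta)
```

A symmetric posterior such as Beta(2, 2) yields the mid-scale estimate 3.0, while skewed parameters move the estimate toward either end of the bounded scale.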

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Series
European Signal Processing Conference, ISSN 2076-1465
Keywords
speech quality assessment, deep neural network, maximum-likelihood, Bayesian estimation
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-358710 (URN), 10.23919/EUSIPCO63174.2024.10715351 (DOI), 001349787000083 (ISI)
Conference
32nd European Signal Processing Conference (EUSIPCO), AUG 26-30, 2024, Lyon, FRANCE
Note

Part of ISBN 9789464593617, 9798331519773

QC 20250923

Available from: 2025-01-21. Created: 2025-01-21. Last updated: 2025-09-23. Bibliographically approved
Cumlin, F., Liang, X. & Chatterjee, S. (2024). Generalization Ability of End-to-End Non-Intrusive Speech Quality Models. In: 2024 IEEE 21st India Council International Conference, INDICON 2024. Paper presented at the 21st IEEE India Council International Conference, INDICON 2024, Kharagpur, India, Dec 19-21, 2024. Institute of Electrical and Electronics Engineers (IEEE)
Generalization Ability of End-to-End Non-Intrusive Speech Quality Models
2024 (English). In: 2024 IEEE 21st India Council International Conference, INDICON 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024. Conference paper, Published paper (Refereed)
Abstract [en]

This study investigates the generalization ability of non-intrusive speech quality assessment models on unseen datasets, focusing on architectural and training strategies. We evaluate two neural network designs: convolutional neural networks (CNNs) with global pooling layers, and CNNs combined with more complex recurrent neural networks (RNNs). Our findings reveal three key insights: (1) CNNs with global pooling layers adapt better to unseen data than the CNN-RNN architecture, demonstrating stronger generalization ability; (2) student-teacher learning, where a student model learns from a teacher model, enhances generalization performance; and (3) smaller models are both efficient and effective, showing robust behavior across diverse datasets and test runs. These results underscore the potential of lightweight architectures and advanced training frameworks for improving the reliability of mean-opinion-score (MOS) prediction models.
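One intuition behind finding (1) fits in a few lines: global average pooling maps a time-by-channel feature map of any duration to a fixed-size embedding, so downstream layers never see the clip length. This is a toy illustration of the pooling mechanism, not the evaluated models:

```python
def global_avg_pool(feature_map):
    """Collapse a [T x C] feature map (a list of T rows with C channels)
    to a C-dimensional embedding by averaging over time, so clips of any
    length produce vectors of the same size."""
    t = len(feature_map)
    c = len(feature_map[0])
    return [sum(row[i] for row in feature_map) / t for i in range(c)]
```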

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
deep neural network, mean opinion score, non-intrusive speech quality assessment, speech quality assessment
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-363753 (URN), 10.1109/INDICON63790.2024.10958371 (DOI), 2-s2.0-105004550561 (Scopus ID)
Conference
21st IEEE India Council International Conference, INDICON 2024, Kharagpur, India, Dec 19 2024 - Dec 21 2024
Note

Part of ISBN 9798350391282

QC 20250528

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-28. Bibliographically approved
Liang, X. & Ma, X. (2023). AVIATOR: fAst Visual Perception and Analytics for Drone-Based Traffic Operations. In: 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023. Paper presented at the 26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023, Bilbao, Spain, Sep 24-28, 2023 (pp. 2959-2964). Institute of Electrical and Electronics Engineers (IEEE)
AVIATOR: fAst Visual Perception and Analytics for Drone-Based Traffic Operations
2023 (English). In: 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2959-2964. Conference paper, Published paper (Refereed)
Abstract [en]

Drone-based systems are an emerging technology for advanced applications in Intelligent Transport Systems (ITS). This paper presents the latest developments of our visual perception and analysis system, called AVIATOR, for drone-based road traffic management. The system advances the previous SeeFar system in several respects. For visual perception, deep-learning-based computer vision models still play the central role, but the current development focuses on fast and efficient detection and tracking during real-time image processing. To achieve that, YOLOv7 and ByteTrack models have replaced the previous perception modules to gain better computational performance. Meanwhile, a lane-based traffic stream detection module has been added to recognize traffic flow per lane, enabling more detailed estimation of traffic flow patterns. The traffic analytics module has been modified to estimate traffic states using lane-based data collection, including detailed lane-based traffic flow counting as well as traffic density estimation according to vehicle arrival patterns per lane.
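As a rough sketch of the lane-based counting step (the lane geometry and the (track id, lateral position) interface here are illustrative stand-ins for the actual YOLOv7 + ByteTrack outputs), each tracked vehicle can be binned into the lane whose lateral bounds contain it and counted once per lane:

```python
def count_per_lane(track_points, lane_bounds):
    """Toy lane-based flow counter: bin each (vehicle_id, lateral position)
    observation into the lane whose [lo, hi) bounds contain it, counting
    each tracked vehicle at most once per lane."""
    counts = {lane: set() for lane in lane_bounds}
    for vehicle_id, x in track_points:
        for lane, (lo, hi) in lane_bounds.items():
            if lo <= x < hi:
                counts[lane].add(vehicle_id)
                break
    return {lane: len(ids) for lane, ids in counts.items()}
```

Tracking IDs (rather than raw detections) are what make the per-lane counts robust to a vehicle being observed in many consecutive frames.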

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, ISSN 2153-0009
National Category
Transport Systems and Logistics
Identifiers
urn:nbn:se:kth:diva-344359 (URN), 10.1109/ITSC57777.2023.10422260 (DOI), 001178996702144 (ISI), 2-s2.0-85186513153 (Scopus ID)
Conference
26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023, Bilbao, Spain, Sep 24 2023 - Sep 28 2023
Note

QC 20240314

Part of ISBN 979-835039946-2

Available from: 2024-03-13. Created: 2024-03-13. Last updated: 2025-12-05. Bibliographically approved
Liang, X., Cumlin, F., Schüldt, C. & Chatterjee, S. (2023). DeePMOS: Deep Posterior Mean-Opinion-Score of Speech. In: Interspeech 2023. Paper presented at the 24th International Speech Communication Association conference, Interspeech 2023, Dublin, Ireland, August 20-24, 2023 (pp. 526-530). International Speech Communication Association
DeePMOS: Deep Posterior Mean-Opinion-Score of Speech
2023 (English). In: Interspeech 2023, International Speech Communication Association, 2023, p. 526-530. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a deep neural network (DNN) based method that provides a posterior distribution of the mean-opinion-score (MOS) for an input speech signal. The DNN outputs the parameters of the posterior, namely its mean and variance. The proposed method is referred to as deep posterior MOS (DeePMOS). The relevant training data is inherently limited in size (a limited number of labeled samples) and noisy due to the subjective nature of human listeners. For robust training of DeePMOS, we use a combination of maximum-likelihood learning, stochastic gradient noise, and a student-teacher learning setup. Using the mean of the posterior as a point estimate, we evaluate standard performance measures of DeePMOS. The results show performance comparable with existing DNN-based methods that only provide point estimates of the MOS. We then provide an ablation study showing the importance of the various components of DeePMOS.
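A minimal sketch of two of the ingredients named above, with assumed variable names rather than the authors' code: a Gaussian negative log-likelihood for a head that outputs (mean, variance), and an exponential-moving-average teacher update as one common form of a student-teacher setup (the decay value is an assumption):

```python
import math

def gaussian_nll(mean, var, rating):
    """Negative log-likelihood of a MOS rating under the predicted
    Gaussian posterior N(mean, var); minimizing this over the labeled
    ratings is the maximum-likelihood training criterion."""
    return 0.5 * (math.log(2.0 * math.pi * var) + (rating - mean) ** 2 / var)

def ema_update(teacher, student, decay=0.99):
    """Teacher weights track an exponential moving average of the
    student's weights, smoothing the effect of noisy labels."""
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher, student)]
```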

Place, publisher, year, edition, pages
International Speech Communication Association, 2023
Keywords
deep neural network, maximum-likelihood, speech quality assessment, voice conversion challenge
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-337876 (URN), 10.21437/Interspeech.2023-1436 (DOI), 001186650300107 (ISI), 2-s2.0-85171537160 (Scopus ID)
Conference
24th International Speech Communication Association, Interspeech 2023, Dublin, Ireland, August 20-24, 2023
Note

QC 20250923

Available from: 2023-10-10. Created: 2023-10-10. Last updated: 2025-09-23. Bibliographically approved
Liang, X., Javid, A. M., Skoglund, M. & Chatterjee, S. (2022). Decentralized learning of randomization-based neural networks with centralized equivalence. Applied Soft Computing, 115, Article ID 108030.
Decentralized learning of randomization-based neural networks with centralized equivalence
2022 (English). In: Applied Soft Computing, ISSN 1568-4946, E-ISSN 1872-9681, Vol. 115, article id 108030. Article in journal (Refereed), Published
Abstract [en]

We consider a decentralized learning problem where training data samples are distributed over the agents (processing nodes) of an underlying communication network topology without any central (master) node. Due to information privacy and security concerns in a decentralized setup, nodes are not allowed to share their training data; only the parameters of the neural network may be shared. This article investigates decentralized learning of randomization-based neural networks that provides centralized-equivalent performance, as if the full training data were available at a single node. We consider five randomization-based neural networks that use convex optimization for learning. Two of the five are shallow, and the others are deep. The use of convex optimization is the key to applying the alternating-direction-method-of-multipliers (ADMM) with decentralized average consensus, which allows us to establish decentralized learning with centralized equivalence. For the underlying communication network topology, we use a doubly-stochastic network policy matrix and synchronous communication. Experiments with nine benchmark datasets show that the five neural networks provide good performance while requiring low computational and communication complexity for decentralized learning. The performance ranking of the five networks by Friedman rank, also included in the results, is ELM < RVFL < dRVFL < edRVFL < SSFN.
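The decentralized average consensus building block mentioned above is easy to sketch. Under a doubly-stochastic mixing matrix W, repeating x <- W x drives every node's value to the network average; the three-node matrix in the test below is an illustrative example, not from the paper:

```python
def consensus_step(w, x):
    """One synchronous gossip step x <- W x; W must be doubly stochastic
    (rows and columns each sum to 1) so that the average is preserved."""
    n = len(x)
    return [sum(w[i][j] * x[j] for j in range(n)) for i in range(n)]

def run_consensus(w, x, steps):
    """Iterate the gossip step; each node's value converges to the
    network-wide average of the initial values."""
    for _ in range(steps):
        x = consensus_step(w, x)
    return x
```

Because W is doubly stochastic, the vector's mean is invariant at every step, which is what lets the ADMM iterations reach the centralized solution.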

Place, publisher, year, edition, pages
Elsevier BV, 2022
Keywords
Randomized neural network, Distributed learning, Multi-layer feedforward neural network, Alternating direction method of multipliers
National Category
Telecommunications
Identifiers
urn:nbn:se:kth:diva-307316 (URN), 10.1016/j.asoc.2021.108030 (DOI), 000736977500005 (ISI), 2-s2.0-85120883070 (Scopus ID)
Note

QC 20250923

Available from: 2022-01-20. Created: 2022-01-20. Last updated: 2025-09-23. Bibliographically approved
Ma, X., Liang, X., Ning, M. & Radu, A. (2022). METRIC: Toward a Drone-based Cyber-Physical Traffic Management System. In: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics. Paper presented at the 2022 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2022, Prague, Czech Republic, 9-12 October 2022 (pp. 3324-3329). Institute of Electrical and Electronics Engineers (IEEE), Vol. 2022-October
METRIC: Toward a Drone-based Cyber-Physical Traffic Management System
2022 (English). In: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, Institute of Electrical and Electronics Engineers (IEEE), 2022, Vol. 2022-October, p. 3324-3329. Conference paper, Published paper (Refereed)
Abstract [en]

Drone-based systems have great potential for traffic monitoring and other advanced applications in Intelligent Transport Systems (ITS). This paper introduces our latest efforts to digitalise road traffic with various types of sensing systems, among which visual detection by drones provides a promising technical solution. A platform, called METRIC, is under development to carry out real-time traffic measurement and prediction using drone-based data collection. The current system is designed as a cyber-physical system (CPS) with essential functions for visual traffic detection and analysis, real-time traffic estimation and prediction, and simulation-based decision support. In addition to the computer vision functions developed at an earlier stage, this paper also presents the CPS system architecture and the current implementation of the drone front-end system, together with a simulation-based system used for further drone operations.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
National Category
Robotics and Automation; Transport Systems and Logistics
Identifiers
urn:nbn:se:kth:diva-329627 (URN), 10.1109/SMC53654.2022.9945433 (DOI), 2-s2.0-85142738221 (Scopus ID)
Conference
2022 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2022, Prague, Czech Republic, 9-12 October 2022
Note

QC 20230622

Available from: 2023-06-22. Created: 2023-06-22. Last updated: 2025-02-05. Bibliographically approved
Liang, X., Javid, A. M., Skoglund, M. & Chatterjee, S. (2021). Asynchronous Decentralized Learning of Randomization-based Neural Networks. Paper presented at the International Joint Conference on Neural Networks (IJCNN).
Asynchronous Decentralized Learning of Randomization-based Neural Networks
2021 (English). Conference paper, Published paper (Refereed)
Abstract [en]

In a communication network, decentralized learning refers to knowledge collaboration among local agents (processing nodes) to improve local estimation performance without sharing private data. Ideally, the decentralized solution approximates the centralized solution, as if all the data were available at a single node, while requiring low computational power and communication overhead. In this work, we propose decentralized learning of randomization-based neural networks with asynchronous communication that achieves centralized-equivalent performance. We propose an ARock-based alternating-direction-method-of-multipliers (ADMM) algorithm that enables individual node activation and one-sided communication in an undirected connected network, characterized by a doubly-stochastic network policy matrix. Moreover, the proposed algorithm reduces computational cost and communication overhead due to its asynchronous nature. We study the proposed algorithm on different randomization-based neural networks, including ELM, SSFN, RVFL, and their variants, and achieve centralized-equivalent performance at efficient computation and communication costs. We also show that the proposed asynchronous decentralized learning algorithm can outperform a synchronous one in computational complexity, especially when the network connections are sparse.
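The single-node-activation idea can be caricatured in a few lines. In this toy model (a plain averaging update, not the paper's ARock-based ADMM iteration), one randomly activated node refreshes only its own coordinate per tick, yet the network is still driven toward agreement:

```python
import random

def async_consensus(w, x, steps, rng):
    """Asynchronous sketch: at each tick a single random node i recomputes
    its own value from its mixing-matrix row w[i], instead of all nodes
    updating in lockstep; repeated activations drive the nodes toward a
    common value."""
    n = len(x)
    for _ in range(steps):
        i = rng.randrange(n)
        x[i] = sum(w[i][j] * x[j] for j in range(n))
    return x
```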

Keywords
decentralized learning, neural networks, asynchronous communication, ADMM
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-295431 (URN), 10.1109/IJCNN52387.2021.9533574 (DOI), 000722581702035 (ISI), 2-s2.0-85116479449 (Scopus ID)
Conference
International Joint Conference on Neural Networks (IJCNN)
Note

QC 20210520

Available from: 2021-05-20. Created: 2021-05-20. Last updated: 2022-09-23. Bibliographically approved
Liang, X. (2021). Decentralized Learning of Randomization-based Neural Networks. (Doctoral dissertation). Stockholm, Sweden: KTH Royal Institute of Technology
Decentralized Learning of Randomization-based Neural Networks
2021 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Machine learning and artificial intelligence have been widely explored and have developed rapidly to meet expanding needs in almost every aspect of human activity. In the big data era, siloed data localization has become a major challenge for machine learning. Restricted by scattered data locations and privacy regulations on information sharing, recent studies aim to develop collaborative machine learning techniques in which local models approximate centralized performance without sharing real data. Privacy preservation is as important as model performance and model complexity. This thesis investigates the scope of a low-computational-complexity learning model, the randomization-based feed-forward neural network (RFN). As a class of artificial neural networks (ANNs), RFNs enjoy a favorable balance between low computational complexity and satisfactory performance, especially for non-image data. Driven by the advantages of RFNs and the need for distributed learning solutions, we study the potential and applicability of RFNs and of distributed optimization methods that may lead to the design of decentralized RFN variants delivering the desired results.

Firstly, we provide decentralized learning algorithms based on RFN architectures for an undirected network topology using synchronous communication. We investigate decentralized learning of five RFNs that provides centralized-equivalent performance, as if the total training data samples were available at a single node. Two of the five neural networks are shallow, and the others are deep. Experiments with nine benchmark datasets show that the five neural networks provide good performance while requiring low computational and communication complexity for decentralized learning.

We are then motivated to design an asynchronous decentralized learning scheme that achieves centralized-equivalent performance with low computational complexity and communication overhead. We propose an asynchronous decentralized learning algorithm using ARock-based ADMM to realize decentralized variants of a variety of RFNs. The proposed algorithm enables single-node activation and one-sided communication in an undirected communication network, characterized by a doubly-stochastic network policy matrix. Moreover, it obtains the centralized solution with reduced computational cost and improved communication efficiency.

Finally, we consider the problem of training a neural net over a decentralized scenario with a high sparsity level in the connections. The issue is addressed by adapting a recently proposed incremental learning approach called 'learning without forgetting.' While an incremental learning approach assumes data availability in a sequence, nodes of the decentralized scenario cannot share data between them, and there is no master node. Nodes can, however, communicate information about model parameters among neighbors; this communication of model parameters is the key to adapting 'learning without forgetting' to the decentralized scenario.


Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2021. p. xv, 69
Series
TRITA-EECS-AVL ; 2021:40
National Category
Communication Systems; Telecommunications
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-295433 (URN), 978-91-7873-904-2 (ISBN)
Public defence
2021-06-11, https://kth-se.zoom.us/j/64005034683, U1, Brinellvägen 28A, Undervisningshuset, våningsplan 6, KTH Campus, Stockholm, 13:00 (English)
Note

QC 20210520

Available from: 2021-05-20. Created: 2021-05-20. Last updated: 2026-01-08. Bibliographically approved
Liang, X., Skoglund, M. & Chatterjee, S. (2021). Feature Reuse For A Randomization Based Neural Network. In: 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021). Paper presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Jun 6-11, 2021, held online (pp. 2805-2809). Institute of Electrical and Electronics Engineers (IEEE)
Feature Reuse For A Randomization Based Neural Network
2021 (English). In: 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 2805-2809. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a feature reuse approach for an existing multi-layer randomization-based feedforward neural network. The feature representation is directly linked among all the necessary hidden layers. For feature reuse at a particular layer, we concatenate the features from the previous layers to construct a large-dimensional feature for that layer. The large-dimensional concatenated feature is then used to efficiently learn a limited number of parameters by solving a convex optimization problem. Experiments show that the proposed model improves performance compared with the original neural network, without a significant increase in computational complexity.
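The concatenation step described above is simple to write down; this helper is illustrative rather than the authors' implementation (in the paper, the concatenated feature then feeds a convex-optimization solve for the layer's limited set of parameters):

```python
def reuse_input(layer_features):
    """Build a layer's input by concatenating the feature vectors of all
    previous layers into one large-dimensional feature."""
    concatenated = []
    for feature in layer_features:
        concatenated.extend(feature)
    return concatenated
```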

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Keywords
Multi-layer neural network, randomization based neural network, convex optimization, feature reuse
National Category
Telecommunications
Identifiers
urn:nbn:se:kth:diva-305415 (URN), 10.1109/ICASSP39728.2021.9413424 (DOI), 000704288403012 (ISI), 2-s2.0-85114863008 (Scopus ID)
Conference
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), JUN 06-11, 2021, ELECTR NETWORK
Note

Part of proceedings: ISBN 978-1-7281-7605-5, QC 20230118

Available from: 2021-12-01. Created: 2021-12-01. Last updated: 2023-01-18. Bibliographically approved
Liang, X., Javid, A. M., Skoglund, M. & Chatterjee, S. (2021). Learning without Forgetting for Decentralized Neural Nets with Low Communication Overhead. In: 2020 28th European Signal Processing Conference (EUSIPCO). Paper presented at the 28th European Signal Processing Conference (EUSIPCO), Amsterdam (pp. 2185-2189). Institute of Electrical and Electronics Engineers (IEEE)
Learning without Forgetting for Decentralized Neural Nets with Low Communication Overhead
2021 (English). In: 2020 28th European Signal Processing Conference (EUSIPCO), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 2185-2189. Conference paper, Published paper (Refereed)
Abstract [en]

We consider the problem of training a neural net over a decentralized scenario with a low communication overhead. The problem is addressed by adapting a recently proposed incremental learning approach called 'learning without forgetting'. While an incremental learning approach assumes data availability in a sequence, nodes of the decentralized scenario cannot share data between them, and there is no master node. Nodes can communicate information about model parameters among neighbors; communication of model parameters is the key to adapting the 'learning without forgetting' approach to the decentralized scenario. We use random-walk-based communication to handle a highly limited communication resource.
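The random-walk communication pattern can be sketched as follows; the graph and the elided local training step are illustrative (in the paper, each visited node would update the model on its local data under the 'learning without forgetting' regularizer):

```python
import random

def random_walk_schedule(neighbors, start, steps, rng):
    """Return the sequence of nodes a single model would visit: at each
    hop the model moves to a uniformly chosen neighbor, so only one link
    is used per round, keeping communication overhead minimal."""
    path = [start]
    node = start
    for _ in range(steps):
        node = rng.choice(neighbors[node])
        path.append(node)
    return path
```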

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Keywords
Decentralized learning, feedforward neural net, learning without forgetting, low communication overhead
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-295432 (URN), 10.23919/Eusipco47968.2020.9287777 (DOI), 000632622300440 (ISI), 2-s2.0-85099303579 (Scopus ID)
Conference
28th European Signal Processing Conference (EUSIPCO), Amsterdam
Note

QC 20210621

Available from: 2021-05-20. Created: 2021-05-20. Last updated: 2022-06-25. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-4406-536X
