KTH Publications (kth.se)
Maguire Jr., Gerald Q., professor emeritus (ORCID: orcid.org/0000-0002-6066-746X)
Publications (10 of 335)
Verardo, G., Boman, M., Bruchfeld, S., Chiesa, M., Koch, S., Maguire Jr., G. Q. & Kostic, D. (2025). FMM-Head: Enhancing Autoencoder-Based ECG Anomaly Detection with Prior Knowledge. In: Pattern Recognition and Artificial Intelligence - 4th International Conference, ICPRAI 2024, Proceedings. Paper presented at the 4th International Conference on Pattern Recognition and Artificial Intelligence, ICPRAI 2024, Jeju Island, Korea, July 3-6, 2024 (pp. 18-32). Springer Nature
FMM-Head: Enhancing Autoencoder-Based ECG Anomaly Detection with Prior Knowledge
2025 (English) In: Pattern Recognition and Artificial Intelligence - 4th International Conference, ICPRAI 2024, Proceedings, Springer Nature, 2025, p. 18-32. Conference paper, Published paper (Refereed)
Abstract [en]

Detecting anomalies in electrocardiogram (ECG) data is crucial to identify deviations from normal heartbeat patterns and provide timely intervention to at-risk patients. Various AutoEncoder (AE) models have been proposed to tackle the anomaly detection task with machine learning (ML). However, these models do not explicitly consider the specific patterns of ECG leads, thus compromising learning efficiency. In contrast, we replace the decoding part of the AE with a reconstruction head (namely, FMM-Head) based on prior knowledge of the ECG shape. Our model consistently achieves higher anomaly detection capability than state-of-the-art models, with up to a 0.31 increase in area under the ROC curve (AUROC), with as little as half the original model size and explainable extracted features. The processing time of our model is four orders of magnitude lower than solving an optimization problem to obtain the same parameters, making it suitable for real-time ECG parameter extraction and anomaly detection. The code is available at: https://github.com/giacomoverardo/FMM-Head.
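The "prior knowledge of the ECG shape" refers to the FMM waveform model from prior ECG literature, in which each wave of a heartbeat is commonly written as A·cos(β + 2·arctan(ω·tan((t−α)/2))). A minimal sketch of synthesizing a beat from such waves (parameter values below are illustrative, not fitted to real ECG data, and this is not the paper's trained model):

```python
import numpy as np

def fmm_wave(t, A, alpha, beta, omega):
    """Single FMM wave: A*cos(phi(t)) with a Moebius-style phase warp.

    t ranges over one heartbeat in radians; A is the amplitude, alpha
    the wave location, beta its skewness, and omega in (0, 1] its width.
    """
    phase = beta + 2.0 * np.arctan(omega * np.tan((t - alpha) / 2.0))
    return A * np.cos(phase)

# A toy heartbeat as a sum of five waves (P, Q, R, S, T).
t = np.linspace(-np.pi, np.pi, 512, endpoint=False)
waves = [
    (0.2, -2.0, 3.0, 0.2),    # P
    (-0.4, -0.3, 2.5, 0.05),  # Q
    (1.0, 0.0, 0.0, 0.05),    # R
    (-0.3, 0.3, -2.5, 0.05),  # S
    (0.3, 2.0, 3.0, 0.3),     # T
]
ecg = sum(fmm_wave(t, *w) for w in waves)
```

With omega = 1 the phase warp disappears and the wave reduces to a plain cosine, which is a quick sanity check on the formula.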

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
AutoEncoders, ECG anomaly detection, Machine Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-361152 (URN), 10.1007/978-981-97-8702-9_2 (DOI), 2-s2.0-85219192392 (Scopus ID)
Conference
4th International Conference on Pattern Recognition and Artificial Intelligence, ICPRAI 2024, Jeju Island, Korea, July 3-6, 2024
Note

Part of ISBN 9789819787012

QC 20250313

Available from: 2025-03-12. Created: 2025-03-12. Last updated: 2025-03-13. Bibliographically approved.
Verardo, G., Perez-Ramirez, D. F., Bruchfeld, S., Boman, M., Chiesa, M., Koch, S., . . . Kostic, D. (2025). Reducing the Number of Leads for ECG Imaging with Graph Neural Networks and Meaningful Latent Space. In: Statistical Atlases and Computational Models of the Heart. Workshop, CMRxRecon and MBAS Challenge Papers. 15th International Workshop, STACOM 2024, Held in Conjunction with MICCAI 2024, Revised Selected Papers. Paper presented at the 15th International Workshop on Statistical Atlases and Computational Models of the Heart, STACOM 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024 (pp. 301-312). Springer Nature
Reducing the Number of Leads for ECG Imaging with Graph Neural Networks and Meaningful Latent Space
2025 (English) In: Statistical Atlases and Computational Models of the Heart. Workshop, CMRxRecon and MBAS Challenge Papers. 15th International Workshop, STACOM 2024, Held in Conjunction with MICCAI 2024, Revised Selected Papers, Springer Nature, 2025, p. 301-312. Conference paper, Published paper (Refereed)
Abstract [en]

ECG Imaging (ECGI) is a technique for cardiac electrophysiology that reconstructs the electrical propagation through different parts of the heart using electrodes on the body surface. Although ECGI is non-invasive, it has not become clinically routine due to the large number of leads required to produce a fine-grained estimate of the cardiac activation map. Using fewer leads could make ECGI practical for clinical patient care. We propose to tackle the lead-reduction problem by enhancing Neural Network (NN) models with Graph Neural Network (GNN)-based gating. Our approach encodes the leads into a meaningful representation and then gates the latent space with a GNN. In our evaluation on a state-of-the-art dataset, we show that keeping only the most important leads does not increase the cardiac reconstruction and onset detection error: despite dropping almost 140 of the 260 leads, our model achieves the same performance as an NN baseline that uses all leads. Our code is available at github.com/giacomoverardo/ecg-imaging.
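The gating idea above can be sketched in a few lines of numpy: score each lead by one mean-aggregation graph layer over its neighbours, then apply a hard top-k gate to the latent space so only the most important leads survive. The sizes, adjacency, and weights below are arbitrary placeholders, not the paper's trained model (which uses 260 leads):

```python
import numpy as np

rng = np.random.default_rng(0)

n_leads, d = 12, 8                  # illustrative sizes
x = rng.normal(size=(n_leads, d))   # per-lead embeddings from an encoder
adj = (rng.random((n_leads, n_leads)) < 0.3).astype(float)
np.fill_diagonal(adj, 1.0)          # each lead is its own neighbour

def gnn_gate_scores(x, adj, w):
    """One mean-aggregation GNN layer followed by a scalar score per lead."""
    h = (adj @ x) / adj.sum(axis=1, keepdims=True)  # average neighbour features
    return (h @ w).ravel()                          # one importance score per lead

w = rng.normal(size=(d, 1))
scores = gnn_gate_scores(x, adj, w)

k = 5                               # number of leads to keep
keep = np.argsort(scores)[-k:]      # hard top-k gate over the latent space
gate = np.zeros(n_leads)
gate[keep] = 1.0
gated_latent = x * gate[:, None]    # embeddings of dropped leads are zeroed
```

A decoder trained on `gated_latent` then only ever sees the k retained leads, which is the sense in which the gate "reduces the number of leads".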

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Deep Learning, ECG Imaging, Graph Neural Networks
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-363463 (URN), 10.1007/978-3-031-87756-8_30 (DOI), 2-s2.0-105004252914 (Scopus ID)
Conference
15th International Workshop on Statistical Atlases and Computational Models of the Heart, STACOM 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024
Note

Part of ISBN 9783031877551

QC 20250516

Available from: 2025-05-15. Created: 2025-05-15. Last updated: 2025-05-16. Bibliographically approved.
Ghasemirahni, H., Farshin, A., Scazzariello, M., Maguire Jr., G. Q., Kostic, D. & Chiesa, M. (2024). FAJITA: Stateful Packet Processing at 100 Million pps. Proceedings of the ACM on Networking, 2(CoNEXT3), 1-22
FAJITA: Stateful Packet Processing at 100 Million pps
2024 (English) In: Proceedings of the ACM on Networking, E-ISSN 2834-5509, Vol. 2, no. CoNEXT3, p. 1-22. Article in journal (Refereed), Published
Abstract [en]

Data centers increasingly utilize commodity servers to deploy low-latency Network Functions (NFs). However, the emergence of multi-hundred-gigabit-per-second network interface cards (NICs) has drastically increased the performance expected from commodity servers. Additionally, recently introduced systems that store packet payloads in temporary off-CPU locations (e.g., programmable switches, NICs, and RDMA servers) further increase the load on NF servers, making packet processing even more challenging. This paper demonstrates the bottlenecks and challenges of state-of-the-art stateful packet processing frameworks and proposes a system, called FAJITA, to tackle these challenges and accelerate stateful packet processing on commodity hardware. FAJITA introduces an optimized processing pipeline for stateful network functions that minimizes memory accesses and overcomes the overheads of accessing shared data structures, while ensuring efficient batch processing at every stage of the pipeline. Furthermore, FAJITA provides a performant architecture for deploying high-performance service chains containing stateful network functions with different state granularities. FAJITA improves the throughput and latency of high-speed stateful network functions by ~2.43x compared to the most performant state-of-the-art solutions, enabling commodity hardware to process up to ~178 million 64-B packets per second (pps) using 16 cores.
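The core batching idea, fewer touches of shared state per batch, can be illustrated with a toy stateful NF. This is a deliberately simplified sketch, not FAJITA's actual pipeline: packets in a batch are grouped by flow first, so each shared-state entry is read and written once per batch rather than once per packet:

```python
from collections import defaultdict

# Toy stateful NF: a per-flow packet counter kept in a shared table.
state = defaultdict(int)

def process_batch(batch):
    """Process a batch of (flow_id, payload) packets.

    Grouping by flow means the shared state entry for each flow is
    updated once per batch instead of once per packet, amortizing the
    cost of accessing the shared data structure across the batch.
    """
    per_flow = defaultdict(list)
    for flow_id, payload in batch:
        per_flow[flow_id].append(payload)
    for flow_id, payloads in per_flow.items():
        state[flow_id] += len(payloads)   # single state update per flow

batch = [("a", b"p1"), ("b", b"p2"), ("a", b"p3"), ("a", b"p4")]
process_batch(batch)
```

In a real pipeline the grouping step would also preserve per-flow packet order and bound batch latency; both are omitted here for brevity.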

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
packet processing frameworks, stateful network functions
National Category
Communication Systems; Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-357087 (URN), 10.1145/3676861 (DOI)
Projects
ULTRA
Funder
EU, Horizon 2020, 770889; Swedish Research Council, 2021-04212; Vinnova, 2023-03003
Note

QC 20241206

Available from: 2024-12-04. Created: 2024-12-04. Last updated: 2024-12-06. Bibliographically approved.
Girondi, M., Scazzariello, M., Maguire Jr., G. Q. & Kostic, D. (2024). Toward GPU-centric Networking on Commodity Hardware. In: 7th International Workshop on Edge Systems, Analytics and Networking (EdgeSys 2024), April 22, 2024, Athens, Greece. Paper presented at the 7th International Workshop on Edge Systems, Analytics and Networking (EdgeSys 2024), April 22, 2024, Athens, Greece. New York: ACM Digital Library
Toward GPU-centric Networking on Commodity Hardware
2024 (English) In: 7th International Workshop on Edge Systems, Analytics and Networking (EdgeSys 2024), April 22, 2024, Athens, Greece, New York: ACM Digital Library, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

GPUs are emerging as the most popular accelerator for many applications, powering the core of machine learning workloads. In networked GPU-accelerated applications, input and output data typically traverse the CPU and the OS network stack multiple times, getting copied across the system's main memory. These transfers increase application latency and consume expensive CPU cycles, reducing the system's efficiency and increasing overall response times. These inefficiencies matter most in latency-bound deployments or at high throughput, where copy times can quickly inflate the response time of modern GPUs. We leverage the efficiency and kernel-bypass benefits of RDMA to transfer data in and out of GPUs without using any CPU cycles or synchronization. We demonstrate the ability of modern GPUs to saturate a 100-Gbps link, and evaluate the network processing time in the context of an inference-serving application.
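A back-of-the-envelope model makes the copy-overhead argument concrete. The function below adds the time spent bouncing a payload through main memory to the GPU compute time; all numbers (payload size, compute time, effective bandwidth) are illustrative assumptions, not measurements from the paper:

```python
def response_time_us(payload_bytes, gpu_compute_us, mem_bw_gbps, n_copies):
    """Crude latency model: GPU compute time plus the time spent copying
    the payload across main memory n_copies times (0 models RDMA straight
    into GPU memory)."""
    copy_us = n_copies * payload_bytes * 8 / (mem_bw_gbps * 1e3)
    return gpu_compute_us + copy_us

# 1 MB inference request, 50 us of GPU work, 200 Gbps effective memory
# bandwidth: two bounce copies add 80 us, more than the compute itself.
with_copies = response_time_us(1_000_000, 50.0, 200.0, n_copies=2)
gpu_direct = response_time_us(1_000_000, 50.0, 200.0, n_copies=0)
```

Even this crude model shows why the copies dominate as NICs get faster: the compute term stays fixed while the copy term scales with payload size and copy count.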

Place, publisher, year, edition, pages
New York: ACM Digital Library, 2024
Keywords
GPUs, Commodity Hardware, Inference Serving, RDMA
National Category
Communication Systems; Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-345624 (URN), 10.1145/3642968.3654820 (DOI), 001234771200008 (), 2-s2.0-85192024363 (Scopus ID)
Conference
7th International Workshop on Edge Systems, Analytics and Networking (EdgeSys 2024), April 22, 2024, Athens, Greece 
Note

QC 20240415

Part of ISBN 979-8-4007-0539-7

Available from: 2024-04-15. Created: 2024-04-15. Last updated: 2024-08-28. Bibliographically approved.
Verardo, G., Barreira, D., Chiesa, M., Kostic, D. & Maguire Jr., G. Q. (2023). Fast Server Learning Rate Tuning for Coded Federated Dropout. In: Goebel, R., Yu, H., Faltings, B., Fan, L. & Xiong, Z. (Eds.), FL 2022: Trustworthy Federated Learning. Paper presented at the 1st International Workshop on Trustworthy Federated Learning (FL), July 23, 2022, Vienna, Austria (pp. 84-99). Springer Nature, Vol. 13448
Fast Server Learning Rate Tuning for Coded Federated Dropout
2023 (English) In: FL 2022: Trustworthy Federated Learning / [ed] Goebel, R., Yu, H., Faltings, B., Fan, L. & Xiong, Z., Springer Nature, 2023, Vol. 13448, p. 84-99. Conference paper, Published paper (Refereed)
Abstract [en]

In Federated Learning (FL), clients with low computational power train a common machine learning model by exchanging parameter updates instead of transmitting potentially private data. Federated Dropout (FD) is a technique that improves the communication efficiency of an FL session by selecting a subset of model parameters to be updated in each training round. However, compared to standard FL, FD achieves considerably lower accuracy and suffers from a longer convergence time. In this chapter, we leverage coding theory to enhance FD by allowing a different sub-model to be used at each client. We also show that, by carefully tuning the server learning-rate hyper-parameter, we can achieve higher training speed while still reaching the same final accuracy as the no-dropout case. Evaluations on the EMNIST dataset show that our mechanism achieves 99.6% of the final accuracy of the no-dropout case while requiring 2.43x less bandwidth to reach this accuracy level.
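The "different sub-model per client" idea can be sketched with a toy assignment: parameters are dealt out with a per-client offset so that, when enough clients participate, every parameter is trained by at least one client each round. This round-robin scheme is a stand-in for the paper's actual coding-theoretic construction:

```python
def coded_dropout_masks(n_params, n_clients, keep_fraction):
    """Assign each client a distinct parameter subset (a 'sub-model').

    Each client keeps keep_fraction of the parameters, starting at a
    per-client offset, so different clients train different coordinates.
    When keep_fraction * n_clients >= 1, the offsets tile the parameter
    vector and every parameter is covered by at least one client.
    """
    keep = int(n_params * keep_fraction)
    masks = []
    for c in range(n_clients):
        start = (c * keep) % n_params
        masks.append(sorted((start + i) % n_params for i in range(keep)))
    return masks

# 10 parameters, 5 clients, each training 40% of the model.
masks = coded_dropout_masks(n_params=10, n_clients=5, keep_fraction=0.4)
covered = set().union(*(set(m) for m in masks))
```

Plain FD would instead sample each client's mask independently, which can leave some parameters untrained in a round; the offset scheme guarantees coverage at the same per-client bandwidth.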

Place, publisher, year, edition, pages
Springer Nature, 2023
Series
Lecture Notes in Artificial Intelligence, ISSN 2945-9133
Keywords
Federated Learning, Hyper-parameters tuning, Coding Theory
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-330513 (URN), 10.1007/978-3-031-28996-5_7 (DOI), 000999818400007 (), 2-s2.0-85152560522 (Scopus ID)
Conference
1st International Workshop on Trustworthy Federated Learning (FL), July 23, 2022, Vienna, Austria
Note

QC 20230630

Available from: 2023-06-30. Created: 2023-06-30. Last updated: 2023-06-30. Bibliographically approved.
Barbette, T., Wu, E., Kostic, D., Maguire Jr., G. Q., Papadimitratos, P. & Chiesa, M. (2022). Cheetah: A High-Speed Programmable Load-Balancer Framework with Guaranteed Per-Connection-Consistency. IEEE/ACM Transactions on Networking, 30(1), 354-367
Cheetah: A High-Speed Programmable Load-Balancer Framework with Guaranteed Per-Connection-Consistency
2022 (English) In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 30, no. 1, p. 354-367. Article in journal (Refereed), Published
Abstract [en]

Large service providers use load balancers to dispatch millions of incoming connections per second towards thousands of servers. There are two basic yet critical requirements for a load balancer: uniform load distribution of the incoming connections across the servers, which requires support for advanced load-balancing mechanisms, and per-connection-consistency (PCC), i.e., the ability to map packets belonging to the same connection to the same server even in the presence of changes in the number of active servers and load balancers. Yet, simultaneously meeting these requirements has been an elusive goal. Today's load balancers minimize PCC violations at the price of non-uniform load distribution. This paper presents Cheetah, a load balancer that supports advanced load-balancing mechanisms and PCC while being scalable, memory efficient, and fast at processing packets, and that offers resilience to clogging attacks comparable to today's load balancers. The Cheetah LB design guarantees PCC for any realizable server-selection load-balancing mechanism and can be deployed in both stateless and stateful manners, depending on operational needs. We implemented Cheetah on both a software switch and a Tofino-based hardware switch. Our evaluation shows that a stateless version of Cheetah guarantees PCC, has negligible packet processing overheads, and can support load-balancing mechanisms that reduce the flow completion time by a factor of 2-3x.
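How a cookie decouples PCC from the load-balancing logic can be shown with a toy scheme: the balancer picks any server for a connection's first packet (here, least-loaded) and encodes the choice in a cookie that later packets carry, so routing them needs no per-connection table and survives pool changes. This is an illustrative sketch, not Cheetah's actual obfuscated-cookie encoding:

```python
# Toy stateless per-connection-consistency scheme.
servers = ["s0", "s1", "s2"]

def first_packet(conn_id, load):
    """Pick the least-loaded server for a new connection and return
    (server, cookie); the cookie is simply the server's index here."""
    idx = min(range(len(servers)), key=lambda i: load[i])
    return servers[idx], idx

def later_packet(cookie):
    """Route by cookie alone: no per-connection state at the balancer,
    and any server-selection policy can be used for the first packet."""
    return servers[cookie]

load = [3, 1, 2]
srv, cookie = first_packet("conn-42", load)

servers.append("s3")   # the server pool grows mid-connection
still = later_packet(cookie)
```

Because `later_packet` never consults the selection policy or the pool size, the first-packet policy can be arbitrarily sophisticated (weighted, power-of-two-choices, etc.) without risking PCC violations.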

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
Cloud networks, Layer 4 load balancing, P4, Per-connection-consistency, Programmable networks, QUIC, Stateful classification, Stateless load balancing, TCP, Electric power plant loads, Network layers, Servers, Load modeling, Resilience, Hash functions
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-312304 (URN), 10.1109/TNET.2021.3113370 (DOI), 000732385800001 (), 2-s2.0-85116873307 (Scopus ID)
Note

QC 20220530

Available from: 2022-05-30. Created: 2022-05-30. Last updated: 2022-06-25. Bibliographically approved.
Ghasemirahni, H., Barbette, T., Katsikas, G. P., Farshin, A., Roozbeh, A., Girondi, M., . . . Kostic, D. (2022). Packet Order Matters! Improving Application Performance by Deliberately Delaying Packets. In: Proceedings of the 19th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2022. Paper presented at the 19th USENIX Symposium on Networked Systems Design and Implementation (NSDI), April 4-6, 2022, Renton, WA (pp. 807-827). USENIX - The Advanced Computing Systems Association
Packet Order Matters! Improving Application Performance by Deliberately Delaying Packets
2022 (English) In: Proceedings of the 19th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2022, USENIX - The Advanced Computing Systems Association, 2022, p. 807-827. Conference paper, Published paper (Refereed)
Abstract [en]

Data centers increasingly deploy commodity servers with high-speed network interfaces to enable low-latency communication. However, achieving low latency at high data rates crucially depends on how the incoming traffic interacts with the system's caches. When packets that need to be processed in the same way are consecutive, i.e., exhibit high temporal and spatial locality, caches deliver great benefits.

In this paper, we systematically study the impact of temporal and spatial traffic locality on the performance of commodity servers equipped with high-speed network interfaces. Our results show that (i) the performance of a variety of widely deployed applications degrades substantially with even the slightest lack of traffic locality, and (ii) a traffic trace from our organization reveals poor traffic locality as networking protocols, drivers, and the underlying switching/routing fabric spread packets out in time (reducing locality). To address these issues, we built Reframer, a software solution that deliberately delays packets and reorders them to increase traffic locality. Despite introducing μs-scale delays of some packets, we show that Reframer increases the throughput of a network service chain by up to 84% and reduces the flow completion time of a web server by 11% while improving its throughput by 20%.
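Reframer's reordering can be sketched as an offline function over a buffered burst: packets of the same flow are made consecutive while preserving arrival order both across flows (by first appearance) and within each flow. This simplified sketch omits the real system's bound on how long any packet may be delayed:

```python
def reframe(packets):
    """Reorder a buffered burst of (flow_id, seq) packets so packets of
    the same flow become consecutive, improving temporal and spatial
    locality for the downstream processing code and its caches."""
    flows = {}
    for flow_id, seq in packets:
        # dicts preserve insertion order, so flows are emitted in order
        # of first appearance and each flow's internal order is kept
        flows.setdefault(flow_id, []).append((flow_id, seq))
    return [pkt for pkts in flows.values() for pkt in pkts]

# Interleaved arrivals from three flows, as a switch fabric might
# deliver them after spreading packets out in time.
burst = [("a", 1), ("b", 1), ("a", 2), ("c", 1), ("b", 2), ("a", 3)]
ordered = reframe(burst)
```

After reordering, a stateful NF touches flow "a"'s state three times in a row instead of alternating between flows, which is exactly the cache-locality effect the paper exploits.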

Place, publisher, year, edition, pages
USENIX - The Advanced Computing Systems Association, 2022
Keywords
packet ordering, spatial and temporal locality, packet scheduling, batch processing, high-speed networking
National Category
Communication Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-304656 (URN), 000876762200046 (), 2-s2.0-85140983450 (Scopus ID)
Conference
19th USENIX Symposium on Networked Systems Design and Implementation (NSDI), April 4-6, 2022, Renton, WA
Projects
ULTRA; WASP; Time-Critical Clouds
Funder
Swedish Foundation for Strategic Research; Knut and Alice Wallenberg Foundation; EU, European Research Council
Note

QC 20230619

Available from: 2021-11-09. Created: 2021-11-09. Last updated: 2023-06-19. Bibliographically approved.
Katsikas, G. P., Barbette, T., Kostic, D., Maguire Jr., G. Q. & Steinert, R. (2021). Metron: High-Performance NFV Service Chaining Even in the Presence of Blackboxes. ACM Transactions on Computer Systems, 38(1-2), 1-45, Article ID 3.
Metron: High-Performance NFV Service Chaining Even in the Presence of Blackboxes
2021 (English) In: ACM Transactions on Computer Systems, ISSN 0734-2071, E-ISSN 1557-7333, Vol. 38, no. 1-2, p. 1-45, article id 3. Article in journal (Refereed), Published
Abstract [en]

Deployment of 100 Gigabit Ethernet (GbE) links challenges the packet processing limits of commodity hardware used for Network Functions Virtualization (NFV). Moreover, realizing chained network functions (i.e., service chains) necessitates the use of multiple CPU cores, or even multiple servers, to process packets from such high speed links.

Our system Metron jointly exploits the underlying network and commodity servers' resources: (i) it offloads part of the packet-processing logic to the network, (ii) it uses smart tagging to set up and exploit the affinity of traffic classes, and (iii) it uses tag-based hardware dispatching to carry out the remaining packet processing at the speed of the servers' cores, with zero inter-core communication. Moreover, Metron transparently integrates, manages, and load balances proprietary "blackboxes" together with Metron service chains.

Metron realizes stateful network functions at the speed of 100 GbE network cards on a single server, while elastically and rapidly adapting to changing workload volumes. Our experiments demonstrate that Metron service chains can coexist with heterogeneous blackboxes, while still leveraging Metron's accurate dispatching and load balancing. In summary, Metron has (i) 2.75-8× better efficiency, up to (ii) 4.7× lower latency, and (iii) 7.8× higher throughput than OpenBox, a state-of-the-art NFV system.
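Tag-based dispatching can be sketched as two static tables: an upstream element (standing in for the programmable network) classifies each packet into a traffic class and stamps a tag, and the server maps tags to cores, so all packets of a class land on the same core and never need an inter-core hand-off. The class definitions and tables below are illustrative, not Metron's actual rules:

```python
def classify(pkt):
    """Map a packet to a traffic class (here: simply by destination port)."""
    return "web" if pkt["dport"] in (80, 443) else "other"

# In the real system the tag is stamped by the network and the tag->core
# table is programmed into the NIC's hardware dispatcher.
TAG_OF_CLASS = {"web": 1, "other": 2}
CORE_OF_TAG = {1: 0, 2: 1}

def tag_and_dispatch(pkt):
    """Classify, tag, and return the core that will process the packet."""
    tag = TAG_OF_CLASS[classify(pkt)]
    return CORE_OF_TAG[tag]

cores = [tag_and_dispatch({"dport": p}) for p in (80, 443, 53, 80)]
```

Because the tag fully determines the core, per-class state (e.g., a firewall's connection table for web traffic) is only ever touched by one core, which is what makes zero inter-core communication possible.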

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2021
Keywords
elasticity, service chains, hardware offloading, accurate dispatching, 100 GbE, load balancing, tagging, blackboxes, NFV
National Category
Communication Systems Computer Sciences
Identifiers
urn:nbn:se:kth:diva-298691 (URN), 10.1145/3465628 (DOI), 000679809300003 (), 2-s2.0-85111657554 (Scopus ID)
Projects
European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 770889); Swedish Foundation for Strategic Research (SSF)
Note

QC 20210712

Available from: 2021-07-11. Created: 2021-07-11. Last updated: 2024-03-15.
Farshin, A., Barbette, T., Roozbeh, A., Maguire Jr., G. Q. & Kostic, D. (2021). PacketMill: Toward Per-Core 100-Gbps Networking. In: Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). Paper presented at the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '21), April 19-23, 2021, Virtual/Online. ACM Digital Library
PacketMill: Toward Per-Core 100-Gbps Networking
2021 (English) In: Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), ACM Digital Library, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

We present PacketMill, a system for optimizing software packet processing, which (i) introduces a new model to efficiently manage packet metadata and (ii) employs code-optimization techniques to better utilize commodity hardware. PacketMill grinds the whole packet-processing stack, from the high-level network function configuration file to the low-level userspace network drivers (specifically DPDK), to mitigate inefficiencies and produce a customized binary for a given network function. Our evaluation results show that PacketMill increases throughput (up to 36.4 Gbps, i.e., 70%) and reduces latency (up to 101 µs, i.e., 28%), enabling nontrivial packet processing (e.g., a router) at ~100 Gbps, when new packets arrive >10x faster than main-memory access times, while using only one processing core.

Place, publisher, year, edition, pages
ACM Digital Library, 2021
Keywords
PacketMill, X-Change, Packet Processing, Metadata Management, 100-Gbps Networking, Middleboxes, Commodity Hardware, LLVM, Compiler Optimizations, Full-Stack Optimization, FastClick, DPDK
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-289665 (URN), 10.1145/3445814.3446724 (DOI), 000829871000001 (), 2-s2.0-85104694209 (Scopus ID)
Conference
26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS ’21), 19–23 April, 2021, Virtual/Online
Projects
Time-Critical Clouds; ULTRA; WASP
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research; EU, Horizon 2020, 770889
Note

Part of proceedings: ISBN 978-1-4503-8317-2

QC 20210210

Available from: 2021-02-10. Created: 2021-02-10. Last updated: 2024-03-15. Bibliographically approved.
Katsikas, G. P., Barbette, T., Chiesa, M., Kostic, D. & Maguire Jr., G. Q. (2021). What you need to know about (Smart) Network Interface Cards. In: Springer International Publishing (Ed.), Proceedings Passive and Active Measurement - 22nd International Conference, PAM 2021. Paper presented at Passive and Active Measurement - 22nd International Conference, PAM 2021, Virtual Event, March 29 - April 1, 2021. Springer Nature
What you need to know about (Smart) Network Interface Cards
2021 (English) In: Proceedings Passive and Active Measurement - 22nd International Conference, PAM 2021 / [ed] Springer International Publishing, Springer Nature, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

Network interface cards (NICs) are fundamental components of modern high-speed networked systems, supporting multi-100 Gbps speeds and increasing programmability. Offloading computation from a server's CPU to a NIC frees a substantial amount of the server's CPU resources, making NICs key to offering competitive cloud services.

Therefore, understanding the performance benefits and limitations of offloading a networking application to a NIC is of paramount importance. In this paper, we measure the performance of four different NICs from one of the largest NIC vendors worldwide, supporting 100 Gbps and 200 Gbps. We show that while today's NICs can easily support multi-hundred-gigabit throughputs, performing frequent update operations of a NIC's packet classifier, as network address translators (NATs) and load balancers would do for each incoming connection, results in a dramatic throughput reduction of up to 70 Gbps or complete denial of service. Our conclusion is that none of the tested NICs can support high-speed networking applications that require keeping track of a large number of frequently arriving incoming connections. Furthermore, we show a variety of counter-intuitive performance artefacts, including the performance impact of using multiple tables to classify flows of packets.

Place, publisher, year, edition, pages
Springer Nature, 2021
Series
Lecture Notes in Computer Science ; 12671
Keywords
Network interface cards, hardware classifier, offloading, rule operations, performance, benchmarking, 100 GbE
National Category
Computer Systems; Communication Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-292353 (URN), 10.1007/978-3-030-72582-2_19 (DOI), 000788003900019 (), 2-s2.0-85107297942 (Scopus ID)
Conference
Passive and Active Measurement - 22nd International Conference, PAM 2021, Virtual Event, March 29 - April 1, 2021
Funder
European Commission, 770889; Swedish Foundation for Strategic Research, TCC
Note

QC 20220524

Available from: 2021-03-30. Created: 2021-03-30. Last updated: 2022-06-25.