Publications (10 of 12)
Ghasemirahni, H., Farshin, A., Scazzariello, M., Maguire Jr., G. Q., Kostic, D. & Chiesa, M. (2024). FAJITA: Stateful Packet Processing at 100 Million pps. Proceedings of the ACM on Networking, 2(CoNEXT3), 1-22
FAJITA: Stateful Packet Processing at 100 Million pps
2024 (English). In: Proceedings of the ACM on Networking, E-ISSN 2834-5509, Vol. 2, no. CoNEXT3, p. 1-22. Article in journal (Refereed). Published.
Abstract [en]

Data centers increasingly utilize commodity servers to deploy low-latency Network Functions (NFs). However, the emergence of multi-hundred-gigabit-per-second network interface cards (NICs) has drastically increased the performance expected from commodity servers. Additionally, recently introduced systems that store packet payloads in temporary off-CPU locations (e.g., programmable switches, NICs, and RDMA servers) further increase the load on NF servers, making packet processing even more challenging. This paper demonstrates the existing bottlenecks and challenges of state-of-the-art stateful packet processing frameworks and proposes a system, called FAJITA, to tackle these challenges and accelerate stateful packet processing on commodity hardware. FAJITA introduces an optimized processing pipeline for stateful network functions that minimizes memory accesses and overcomes the overheads of accessing shared data structures, while ensuring efficient batch processing at every stage of the pipeline. Furthermore, FAJITA provides a performant architecture for deploying high-performance network function service chains containing stateful elements with different state granularities. FAJITA improves the throughput and latency of high-speed stateful network functions by ~2.43x compared to the most performant state-of-the-art solutions, enabling commodity hardware to process up to ~178 million 64-B packets per second (pps) using 16 cores.
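
The pipeline idea lends itself to a compact illustration. Below is a minimal, hypothetical C sketch of batch-oriented stateful processing in the spirit of FAJITA: state lookups for the whole batch are issued (and prefetched) before any per-packet work, so memory latency is overlapped across the batch. The types and the nf_state_lookup()/nf_process() functions are illustrative assumptions, not FAJITA's actual API.

```c
/* Hypothetical sketch of batch-oriented stateful NF processing: hash
 * lookups for a whole batch are issued first so state fetches overlap,
 * then packets are processed against warm cache lines. */
#include <stdint.h>
#include <stddef.h>

#define BATCH_SIZE 32

struct pkt { uint8_t *data; uint32_t flow_hash; };
struct nf_state; /* per-flow state, e.g., a NAT entry (assumed) */

struct nf_state *nf_state_lookup(uint32_t flow_hash);  /* assumed */
void nf_process(struct pkt *p, struct nf_state *s);    /* assumed */

static void process_batch(struct pkt *batch[], size_t n)
{
    struct nf_state *state[BATCH_SIZE];

    /* Stage 1: issue all state lookups up front; prefetching the
     * entries hides DRAM latency behind the rest of the batch. */
    for (size_t i = 0; i < n; i++) {
        state[i] = nf_state_lookup(batch[i]->flow_hash);
        __builtin_prefetch(state[i], 1 /* write */, 3 /* keep in cache */);
    }

    /* Stage 2: per-packet processing now hits prefetched state. */
    for (size_t i = 0; i < n; i++)
        nf_process(batch[i], state[i]);
}
```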

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
packet processing frameworks, stateful network functions
National Category
Communication Systems; Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-357087 (URN); 10.1145/3676861 (DOI)
Projects
ULTRA
Funder
EU, Horizon 2020, 770889; Swedish Research Council, 2021-04212; Vinnova, 2023-03003
Note

QC 20241206

Available from: 2024-12-04. Created: 2024-12-04. Last updated: 2024-12-06. Bibliographically approved.
Farshin, A., Rizzo, L., Elmeleegy, K. & Kostic, D. (2023). Overcoming the IOTLB wall for multi-100-Gbps Linux-based networking. PeerJ Computer Science, 9, e1385, Article ID cs-1385.
Overcoming the IOTLB wall for multi-100-Gbps Linux-based networking
2023 (English). In: PeerJ Computer Science, E-ISSN 2376-5992, Vol. 9, p. e1385, article id cs-1385. Article in journal (Refereed). Published.
Abstract [en]

This article explores opportunities to mitigate the performance impact of the IOMMU, as used in the Linux kernel, on high-speed network traffic. We first characterize IOTLB behavior and its effects on recent Intel Xeon Scalable and AMD EPYC processors at 200 Gbps, analyzing the different factors that contribute to IOTLB misses and cause throughput drops (up to 20% compared to the no-IOMMU case in our experiments). Second, we discuss and analyze possible mitigations, including the proposal and evaluation of a practical hugepage-aware memory allocator that enables network device drivers to employ hugepage IOTLB entries in the Linux kernel. Our evaluation shows that using hugepage-backed buffers can completely recover the throughput drop introduced by the IOMMU. Moreover, we formulate a set of guidelines that enable network developers to tune their systems to avoid the “IOTLB wall”, i.e., the point where excessive IOTLB misses cause throughput to drop. Our takeaways signify the importance of a call to arms to rethink Linux-based I/O management at higher data rates.
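
As a concrete illustration of the mitigation the article evaluates, the following C sketch allocates a hugepage-backed buffer pool with mmap(MAP_HUGETLB); a driver carving DMA buffers out of such a pool lets one 2-MiB IOTLB entry cover what would otherwise require 512 4-KiB entries. The pool size and usage are assumptions for illustration; MAP_HUGETLB requires hugepages to be reserved beforehand (e.g., via vm.nr_hugepages).

```c
/* Minimal sketch of a hugepage-backed DMA buffer pool. Error handling
 * beyond the mmap check is elided for brevity. */
#include <sys/mman.h>
#include <stdio.h>

#define POOL_SIZE (64UL << 21) /* 64 x 2-MiB hugepages = 128 MiB (assumed) */

int main(void)
{
    void *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (pool == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)"); /* likely: no hugepages reserved */
        return 1;
    }
    /* A driver would now carve packet buffers out of `pool` and hand
     * their IOVAs to the NIC; each 2-MiB region shares one IOTLB entry. */
    munmap(pool, POOL_SIZE);
    return 0;
}
```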

Place, publisher, year, edition, pages
PeerJ, 2023
Keywords
200 Gbps, Hugepages, iPerf, IOMMU, IOTLB, Linux kernel, Packet processing
National Category
Computer Systems; Communication Systems
Research subject
Computer Science; Information and Communication Technology
Identifiers
urn:nbn:se:kth:diva-326986 (URN); 10.7717/peerj-cs.1385 (DOI); 000996372500001 (ISI); 37346709 (PubMedID); 2-s2.0-85160438174 (Scopus ID)
Projects
Time-Critical Clouds (TCC); ULTRA
Funder
Swedish Foundation for Strategic Research; EU, Horizon 2020, 770889; Google
Note

QC 20230620

Available from: 2023-05-16. Created: 2023-05-16. Last updated: 2023-09-21. Bibliographically approved.
Farshin, A. (2023). Realizing Low-Latency Packet Processing on Multi-Hundred-Gigabit-Per-Second Commodity Hardware: Exploit Caching to Improve Performance. (Doctoral dissertation). Stockholm, Sweden: KTH Royal Institute of Technology
Realizing Low-Latency Packet Processing on Multi-Hundred-Gigabit-Per-Second Commodity Hardware: Exploit Caching to Improve Performance
2023 (English). Doctoral thesis, monograph (Other academic).
Alternative title [sv]
Realisering av Pakethantering med Låg Fördröjning på Tillgänglig Hårdvara med Stöd för Flera Hundra Gigabit Per Sekund: Utnyttjande av Cacheteknik för att Förbättra Prestanda
Abstract [en]

By virtue of recent technological developments in cloud computing, more and more applications are deployed in the cloud. Among these modern cloud-based applications, many societal applications require bounded and predictable low-latency responses. However, the current cloud infrastructure is unsuitable for such applications, as limitations in both its hardware and software prevent it from satisfying these requirements.

This doctoral dissertation describes our attempts to reduce the latency of Internet services by carefully studying the multi-hundred-gigabit-per-second commodity hardware, optimizing it, and improving its performance. The main focus is to improve the performance of packet processing done by the network functions deployed on commodity hardware, known as network functions virtualization (NFV), which is one of the significant sources of latency for Internet services.

The first contribution of this dissertation takes a step toward optimizing the cache performance of time-critical NFV service chains. By doing so, we reduce the tail latencies of such systems running at 100 Gbps. This is an important achievement as it increases the probability of realizing bounded and predictable latency for Internet services.

The second contribution of this dissertation performs whole-stack optimizations on software-based network functions deployed on top of modular packet processing frameworks to further enhance the effectiveness of cache memories. We build a system to efficiently handle metadata and produce a customized binary of NFV service chains. Our system improves both the throughput and latency of per-core hundred-gigabit-per-second packet processing on commodity hardware.

The third contribution of this dissertation studies the efficiency of I/O security solutions provided by commodity hardware at multi-hundred-gigabit-per-second rates. We characterize the performance of the IOMMU and IOTLB (i.e., the I/O virtual address translation cache) at 200 Gbps and explore possible opportunities to mitigate their performance overheads in the Linux kernel.

Abstract [sv]

Tack vare den senaste tekniska utvecklingen inom molntjänster används allt fler tillämpningar i molnet. Bland dessa moderna molnbaserade tillämpningar kräver många samhällsorienterade tillämpningar svarstider med låg latens, som är förutsägbara och ligger inom givna gränser. Den nuvarande molninfrastrukturen är dock otillräcklig för sådana tillämpningar eftersom den inte kan uppfylla dessa krav på grund av olika begränsningar i både hårdvara och mjukvara.

I denna doktorsavhandling beskrivs våra försök att minska latenstiden för Internettjänster genom att noggrant studera tillgänglig hårdvara med stöd för flera hundra gigabit per sekund, optimera denna och förbättra dess prestanda. Huvudfokus ligger på att förbättra prestandan för den paketbearbetning som utförs av nätverksfunktioner som installeras på allmänt tillgänglig hårdvara, så kallad nätverksfunktionsvirtualisering (NFV), som är en av de betydande källorna till latens för Internettjänster.

Det första bidraget i den här avhandlingen tar ett steg mot att optimera cache-prestanda för tidskritiska kedjor av NFV-tjänster. Genom att göra detta minskar vi de långa latenstiderna för sådana system som körs vid 100 Gbps. Detta är ett viktigt resultat eftersom det ökar sannolikheten för att uppnå en begränsad och förutsägbar fördröjning hos internettjänster. 

Det andra bidraget i den här avhandlingen är optimeringar av hela stacken av mjukvarubaserade nätverksfunktioner som används ovanpå modulära ramverk för paketbearbetning för att ytterligare förbättra effektiviteten hos cacheminnen. Vi bygger ett system för att effektivt hantera metadata och producera anpassade binärversioner av NFV-tjänstekedjor. Vårt system förbättrar både genomströmning och latens för tillgänglig hårdvara där varje CPU-kärna har kapacitet för paketbearbetning i storleksordningen 100 Gbps.

I det tredje bidraget i denna avhandling studeras effektiviteten hos I/O-säkerhetslösningar som tillhandahålls av allmänt tillgänglig hårdvara i hastigheter på flera hundra gigabit per sekund. Vi karakteriserar prestandan hos IOMMU och IOTLB (dvs. “I/O memory management unit” och “I/O virtual address translation cache”) vid 200 Gbps och undersöker möjligheterna att minska dess prestanda-overhead i kärnan av operativsystemet Linux.

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2023. p. xxi, 178
Series
TRITA-EECS-AVL ; 2023:9
Keywords
Low-Latency Internet Services, Packet Processing, Network Functions Virtualization, Middle Boxes, Commodity Hardware, Multi-Hundred-Gigabit-Per-Second, Low-Level Optimization, Internettjänster med Låg Fördröjning, Paketbearbetning, Virtualisering av Nätverksfunktioner, Mellanutrustning, Tillgänglig Datorhårdvara, Flera-Hundra-Gigabit-Per-Sekund, Lågnivå-Optimering
National Category
Communication Systems; Computer Systems
Research subject
Computer Science; Information and Communication Technology
Identifiers
urn:nbn:se:kth:diva-323599 (URN); 978-91-8040-464-8 (ISBN)
Public defence
2023-03-06, Sal C (Sven-Olof Öhrvik), Zoom seminar: https://kth-se.zoom.us/j/66604578251, Electrum, Kistagången 16, Kista, 17:00 (English)
Opponent
Supervisors
Projects
Time-Critical Clouds (TCC); ULTRA
Funder
Swedish Foundation for Strategic Research; Google; EU, Horizon 2020, 770889
Note

QC 20230206

Available from: 2023-02-06. Created: 2023-02-06. Last updated: 2023-02-08. Bibliographically approved.
Ghasemirahni, H., Barbette, T., Katsikas, G. P., Farshin, A., Roozbeh, A., Girondi, M., . . . Kostic, D. (2022). Packet Order Matters! Improving Application Performance by Deliberately Delaying Packets. In: Proceedings of the 19th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2022. Paper presented at 19th USENIX Symposium on Networked Systems Design and Implementation (NSDI), APR 04-06, 2022, Renton, WA (pp. 807-827). USENIX - The Advanced Computing Systems Association
Packet Order Matters! Improving Application Performance by Deliberately Delaying Packets
2022 (English). In: Proceedings of the 19th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2022, USENIX - The Advanced Computing Systems Association, 2022, p. 807-827. Conference paper, Published paper (Refereed).
Abstract [en]

Data centers increasingly deploy commodity servers with high-speed network interfaces to enable low-latency communication. However, achieving low latency at high data rates crucially depends on how the incoming traffic interacts with the system's caches. When packets that need to be processed in the same way are consecutive, i.e., exhibit high temporal and spatial locality, caches deliver great benefits.

In this paper, we systematically study the impact of temporal and spatial traffic locality on the performance of commodity servers equipped with high-speed network interfaces. Our results show that (i) the performance of a variety of widely deployed applications degrades substantially with even the slightest lack of traffic locality, and (ii) a traffic trace from our organization reveals poor traffic locality as networking protocols, drivers, and the underlying switching/routing fabric spread packets out in time (reducing locality). To address these issues, we built Reframer, a software solution that deliberately delays packets and reorders them to increase traffic locality. Despite introducing μs-scale delays of some packets, we show that Reframer increases the throughput of a network service chain by up to 84% and reduces the flow completion time of a web server by 11% while improving its throughput by 20%.
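
The core mechanism is easy to sketch. The following hypothetical C fragment buffers incoming packets for a short window, groups them by flow, and releases each flow's packets back-to-back, so downstream stages see high temporal locality. The fixed-bucket table and function names are illustrative assumptions, not Reframer's actual implementation.

```c
/* Illustrative sketch of deliberately delaying and reordering packets:
 * hold packets briefly, group them by flow, then flush flow by flow. */
#include <stdint.h>
#include <stddef.h>

#define NUM_FLOWS    1024
#define MAX_PER_FLOW   64

struct pkt { uint32_t flow_hash; /* ... headers and payload ... */ };

static struct pkt *bucket[NUM_FLOWS][MAX_PER_FLOW];
static size_t      fill[NUM_FLOWS];

/* Called for every arriving packet during the buffering window. */
void reframer_enqueue(struct pkt *p)
{
    uint32_t f = p->flow_hash % NUM_FLOWS;
    if (fill[f] < MAX_PER_FLOW)
        bucket[f][fill[f]++] = p;
}

/* Called when the µs-scale window timer expires: flush flows one at a
 * time so same-flow packets reach the next stage consecutively. */
void reframer_flush(void (*deliver)(struct pkt *))
{
    for (uint32_t f = 0; f < NUM_FLOWS; f++) {
        for (size_t i = 0; i < fill[f]; i++)
            deliver(bucket[f][i]);
        fill[f] = 0;
    }
}
```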

Place, publisher, year, edition, pages
USENIX - The Advanced Computing Systems Association, 2022
Keywords
packet ordering, spatial and temporal locality, packet scheduling, batch processing, high-speed networking
National Category
Communication Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-304656 (URN); 000876762200046 (ISI); 2-s2.0-85140983450 (Scopus ID)
Conference
19th USENIX Symposium on Networked Systems Design and Implementation (NSDI), APR 04-06, 2022, Renton, WA
Projects
ULTRA; WASP; Time-Critical Clouds
Funder
Swedish Foundation for Strategic Research; Knut and Alice Wallenberg Foundation; EU, European Research Council
Note

QC 20230619

Available from: 2021-11-09. Created: 2021-11-09. Last updated: 2023-06-19. Bibliographically approved.
Farshin, A., Barbette, T., Roozbeh, A., Maguire Jr., G. Q. & Kostic, D. (2021). PacketMill: Toward Per-Core 100-Gbps Networking. In: Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). Paper presented at 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS ’21), 19–23 April, 2021, Virtual/Online. ACM Digital Library
PacketMill: Toward Per-Core 100-Gbps Networking
2021 (English). In: Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), ACM Digital Library, 2021. Conference paper, Published paper (Refereed).
Abstract [en]

We present PacketMill, a system for optimizing software packet processing, which (i) introduces a new model to efficiently manage packet metadata and (ii) employs code-optimization techniques to better utilize commodity hardware. PacketMill grinds the whole packet-processing stack, from the high-level network function configuration file to the low-level userspace network (specifically DPDK) drivers, to mitigate inefficiencies and produce a customized binary for a given network function. Our evaluation results show that PacketMill increases throughput (by up to 36.4 Gbps, i.e., 70%) and reduces latency (by up to 101 µs, i.e., 28%), enabling nontrivial packet processing (e.g., a router) at ~100 Gbps while using only one processing core, even when new packets arrive >10× faster than main-memory access times.
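
A hedged sketch of the metadata idea: instead of the driver filling a generic, oversized metadata structure that the network function later copies from, an X-Change-style conversion function provided by the application writes the NF's own minimal structure directly at receive time. The structures and callback below are illustrative assumptions, not PacketMill's actual interface.

```c
/* Sketch of application-defined metadata filled directly by the driver,
 * skipping the intermediate generic-metadata copy. */
#include <stdint.h>

/* Application-defined metadata: only what this service chain needs. */
struct my_meta {
    uint8_t *payload;  /* pointer into the receive buffer */
    uint16_t length;   /* packet length                   */
    uint16_t vlan;     /* VLAN tag, if present            */
};

/* Application-provided conversion callback, invoked by the driver at
 * receive time (hypothetical hook): each field is written once,
 * directly, rather than staged through a generic mbuf-like struct. */
void rx_convert(struct my_meta *m, uint8_t *buf, uint16_t len, uint16_t vlan)
{
    m->payload = buf;
    m->length  = len;
    m->vlan    = vlan;
}
```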

Place, publisher, year, edition, pages
ACM Digital Library, 2021
Keywords
PacketMill, X-Change, Packet Processing, Metadata Management, 100-Gbps Networking, Middleboxes, Commodity Hardware, LLVM, Compiler Optimizations, Full-Stack Optimization, FastClick, DPDK.
National Category
Communication Systems
Identifiers
urn:nbn:se:kth:diva-289665 (URN); 10.1145/3445814.3446724 (DOI); 000829871000001 (ISI); 2-s2.0-85104694209 (Scopus ID)
Conference
26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS ’21), 19–23 April, 2021, Virtual/Online
Projects
Time-Critical Clouds; ULTRA; WASP
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research; EU, Horizon 2020, 770889
Note

Part of proceedings: ISBN 978-1-4503-8317-2

QC 20210210

Available from: 2021-02-10. Created: 2021-02-10. Last updated: 2024-03-15. Bibliographically approved.
Roozbeh, A., Farshin, A., Kostic, D. & Maguire Jr., G. Q. (2020). Methods and devices for controlling memory handling. US Patent US12111768B2.
Methods and devices for controlling memory handling
2020 (English). Patent (Other (popular science, discussion, etc.)).
Abstract [en]

A method and device are provided for controlling memory handling in a processing system comprising a cache shared between a plurality of processing units, wherein the cache comprises a plurality of cache portions. The method comprises obtaining first information pertaining to an allocation of a first memory portion of a memory to a first application, an allocation of a first processing unit of the plurality of processing units to the first application, and an association between a first cache portion of the plurality of cache portions and the first processing unit. The method further comprises reconfiguring a mapping configuration based on the obtained first information, and controlling a providing of first data associated with the first application to the first cache portion from the first memory portion using the reconfigured mapping configuration.
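
Purely as a speculative illustration of the claimed mapping configuration, the C sketch below models a table entry associating an application's memory portion with the cache portion tied to its processing unit. All types and the update function are invented for illustration and are not taken from the patent.

```c
/* Speculative model of the claimed mapping configuration: one entry
 * ties an application's memory portion to the cache portion (e.g., an
 * LLC slice) associated with its processing unit. */
#include <stdint.h>

struct mapping_entry {
    uintptr_t mem_base;   /* first memory portion allocated to the app */
    uintptr_t mem_len;
    unsigned  cpu_id;     /* processing unit allocated to the app      */
    unsigned  cache_part; /* cache portion associated with that unit   */
};

/* Reconfigure the mapping so data for this application is provided to
 * the cache portion nearest its processing unit. */
void reconfigure(struct mapping_entry *e, uintptr_t base, uintptr_t len,
                 unsigned cpu, unsigned part)
{
    e->mem_base   = base;
    e->mem_len    = len;
    e->cpu_id     = cpu;
    e->cache_part = part;
}
```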

Keywords
memory handling, shared cache
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-358308 (URN)
Patent
US12111768B2 (2024-10-08)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

QC 20250120

Available from: 2025-01-10. Created: 2025-01-10. Last updated: 2025-01-20. Bibliographically approved.
Farshin, A., Roozbeh, A., Maguire Jr., G. Q. & Kostic, D. (2020). Optimizing Intel Data Direct I/O Technology for Multi-hundred-gigabit Networks. In: Proceedings of the Fifteenth EuroSys Conference (EuroSys'20), Heraklion, Crete, Greece, April 27-30, 2020. Paper presented at the Fifteenth EuroSys Conference (EuroSys'20), Heraklion, Crete, Greece, April 27-30, 2020.
Optimizing Intel Data Direct I/O Technology for Multi-hundred-gigabit Networks
2020 (English). In: Proceedings of the Fifteenth EuroSys Conference (EuroSys'20), Heraklion, Crete, Greece, April 27-30, 2020. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

Digitalization across society is expected to produce a massive amount of data, leading to the introduction of faster network interconnects. In addition, many Internet services require high throughput and low latency. However, faster links alone guarantee neither high throughput nor low latency. Therefore, it is essential to perform holistic system optimization to take full advantage of the faster links and provide high-performance services. Intel Data Direct I/O (DDIO) is a recent technology introduced to facilitate the deployment of high-performance services based on fast interconnects. We evaluated the effectiveness of DDIO for multi-hundred-gigabit networks. This paper briefly discusses our findings on DDIO, which show the necessity of optimizing/adapting it to address the challenges of multi-hundred-gigabit-per-second links.

Keywords
Data Direct I/O technology, DDIO, Optimizing, Characteristic, Multi-hundred-gigabit networks.
National Category
Communication Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-272720 (URN)
Conference
Fifteenth EuroSys Conference (EuroSys'20), Heraklion, Crete, Greece, April 27-30, 2020.
Projects
Time-Critical Clouds; ULTRA; WASP
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research; EU, Horizon 2020, 770889
Note

QC 20200626

Available from: 2020-04-27. Created: 2020-04-27. Last updated: 2022-06-26. Bibliographically approved.
Farshin, A., Roozbeh, A., Maguire Jr., G. Q. & Kostic, D. (2020). Reexamining Direct Cache Access to Optimize I/O Intensive Applications for Multi-hundred-gigabit Networks. In: 2020 USENIX Annual Technical Conference (USENIX ATC 20). Paper presented at USENIX ATC'20 (pp. 673-689).
Reexamining Direct Cache Access to Optimize I/O Intensive Applications for Multi-hundred-gigabit Networks
2020 (English). In: 2020 USENIX Annual Technical Conference (USENIX ATC 20), 2020, p. 673-689. Conference paper, Published paper (Refereed).
Abstract [en]

Memory access is the major bottleneck in realizing multi-hundred-gigabit networks with commodity hardware; hence, it is essential to make good use of cache memory, a faster but smaller memory that sits closer to the processor. Our goal is to study the impact of cache management on the performance of I/O-intensive applications. Specifically, this paper looks at one of the bottlenecks in packet processing, i.e., direct cache access (DCA). We systematically studied the current implementation of DCA in Intel processors, particularly Data Direct I/O technology (DDIO), which directly transfers data between I/O devices and the processor's cache. Our empirical study enables system designers/developers to optimize DDIO-enabled systems for I/O-intensive applications. We demonstrate that optimizing DDIO could reduce the latency of I/O-intensive network functions running at 100 Gbps by up to ~30%. Moreover, we show that DDIO causes a 30% increase in tail latencies when processing packets at 200 Gbps; hence, it is crucial to selectively inject data into the cache or to explicitly bypass it.
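
One practical knob the paper's keywords point to is the IIO LLC WAYS register, which controls how many LLC ways DDIO may inject into. The sketch below reads and rewrites it through Linux's msr interface. The MSR address (0xC8B) and the way masks are assumptions matching Skylake-era server parts reported in the literature; the register is undocumented, so verify values for your CPU before writing it.

```c
/* Hedged sketch: tuning DDIO's LLC footprint via the IIO LLC WAYS MSR.
 * Requires root and the `msr` kernel module (modprobe msr). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define IIO_LLC_WAYS_MSR 0xC8B /* assumed address; not architectural */

int main(void)
{
    uint64_t mask;
    int fd = open("/dev/cpu/0/msr", O_RDWR);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

    /* The MSR device uses the register address as the file offset. */
    if (pread(fd, &mask, sizeof(mask), IIO_LLC_WAYS_MSR) != sizeof(mask)) {
        perror("pread");
        return 1;
    }
    printf("IIO LLC WAYS = 0x%llx\n", (unsigned long long)mask);

    /* Example: widen DDIO from the default 2 ways (mask 0x600) to
     * 4 ways (mask 0x780) -- values assumed for an 11-way LLC. */
    mask = 0x780;
    if (pwrite(fd, &mask, sizeof(mask), IIO_LLC_WAYS_MSR) != sizeof(mask))
        perror("pwrite");

    close(fd);
    return 0;
}
```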

Keywords
Direct Cache Access (DCA), Data Direct I/O Technology (DDIO), Cache Injection, Tuning, IIO LLC WAYS Register, Bypassing Cache, Characteristic, Multi-hundred-gigabit networks.
National Category
Communication Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-278513 (URN); 000696712200046 (ISI); 2-s2.0-85091923908 (Scopus ID)
Conference
USENIX ATC'20
Projects
Time-Critical Clouds; ULTRA; WASP
Funder
Swedish Foundation for Strategic Research; Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020, 770889
Note

QC 20200714

Available from: 2020-07-11. Created: 2020-07-11. Last updated: 2024-03-15. Bibliographically approved.
Roozbeh, A., Kostic, D., Maguire Jr., G. Q. & Farshin, A. (2019). Entities, system and methods performed therein for handling memory operations of an application in a computer environment. US Patent US12111766B2.
Entities, system and methods performed therein for handling memory operations of an application in a computer environment
2019 (English). Patent (Other (popular science, discussion, etc.)).
Abstract [en]

Embodiments herein relate, e.g., to a method performed by a first entity for handling memory operations of an application in a computer environment. The first entity obtains position data associated with data of the application being fragmented into a number of positions in a physical memory. The position data indicates one or more positions of the number of positions in the physical memory. The first entity then provides, to a second entity, one or more indications of the one or more positions indicated by the position data, for prefetching data from the second entity using the one or more indications.
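
As a speculative illustration of the claimed mechanism, the C sketch below shows a second entity using the indicated fragment positions to warm the cache ahead of use. The structure and loop are invented for illustration and are not taken from the patent text.

```c
/* Speculative sketch: a first entity shares the positions of an
 * application's fragmented data; the second entity prefetches each
 * indicated fragment before the application touches it. */
#include <stddef.h>

struct position_data {
    void  *pos[16];  /* indicated positions of the fragments in memory */
    size_t count;    /* how many positions were indicated              */
};

/* Second entity: use the indicated positions to warm the cache. */
static void prefetch_fragments(const struct position_data *pd)
{
    for (size_t i = 0; i < pd->count; i++)
        __builtin_prefetch(pd->pos[i], 0 /* read */, 3 /* high locality */);
}
```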

Keywords
memory operations, prefetching
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-358307 (URN)
Patent
US12111766B2 (2024-10-08)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

QC 20250120

Available from: 2025-01-10. Created: 2025-01-10. Last updated: 2025-01-20. Bibliographically approved.
Farshin, A., Roozbeh, A., Maguire Jr., G. Q. & Kostic, D. (2019). Make the Most out of Last Level Cache in Intel Processors. In: Proceedings of the Fourteenth EuroSys Conference (EuroSys'19), Dresden, Germany, 25-28 March 2019. Paper presented at EuroSys'19. ACM Digital Library
Make the Most out of Last Level Cache in Intel Processors
2019 (English). In: Proceedings of the Fourteenth EuroSys Conference (EuroSys'19), Dresden, Germany, 25-28 March 2019, ACM Digital Library, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

In modern (Intel) processors, the Last Level Cache (LLC) is divided into multiple slices, and an undocumented hashing algorithm (aka Complex Addressing) maps different parts of the memory address space among these slices to increase the effective memory bandwidth. After a careful study of Intel's Complex Addressing, we introduce a slice-aware memory management scheme, wherein frequently used data can be accessed faster via the LLC. Using our proposed scheme, we show that a key-value store can potentially improve its average performance by ∼12.2% and ∼11.4% for 100% and 95% GET workloads, respectively. Furthermore, we propose CacheDirector, a network I/O solution that extends Data Direct I/O (DDIO) and places the packet's header in the slice of the LLC that is closest to the relevant processing core. We implemented CacheDirector as an extension to DPDK and evaluated our proposed solution for latency-critical applications in Network Function Virtualization (NFV) systems. Evaluation results show that CacheDirector makes packet processing faster by reducing tail latencies (90th-99th percentiles) by up to 119 µs (∼21.5%) for optimized NFV service chains running at 100 Gbps. Finally, we analyze the effectiveness of slice-aware memory management in realizing cache isolation.
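
The slice-aware placement idea can be sketched compactly. Assuming a (reverse-engineered) hash from physical-address bits to an LLC slice, the C fragment below slides a packet header within its buffer until the header's cache line maps to the slice closest to the processing core, which is the essence of CacheDirector's placement. The hash shown is a placeholder; the real Complex Addressing masks are per-CPU and undocumented.

```c
/* Hedged sketch of slice-aware header placement. */
#include <stdint.h>

/* Placeholder for Intel's undocumented Complex Addressing hash:
 * XOR-folds selected physical-address bits into a slice index. */
static unsigned slice_of(uintptr_t paddr, unsigned n_slices)
{
    return (unsigned)((paddr >> 6) ^ (paddr >> 12) ^ (paddr >> 18)) % n_slices;
}

/* Choose a 64-B-aligned offset inside the buffer whose cache line maps
 * to `target_slice`, so the NIC fills the header into the LLC slice
 * closest to the processing core. */
static uintptr_t place_header(uintptr_t buf, unsigned target_slice,
                              unsigned n_slices)
{
    for (uintptr_t off = 0; off < 4096; off += 64)
        if (slice_of(buf + off, n_slices) == target_slice)
            return buf + off;
    return buf; /* fallback: no matching cache line in this page */
}
```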

Place, publisher, year, edition, pages
ACM Digital Library, 2019
Keywords
Slice-aware Memory Management, Last Level Cache, Non-Uniform Cache Architecture, CacheDirector, DDIO, DPDK, Network Function Virtualization, Cache Partitioning, Cache Allocation Technology, Key-Value Store.
National Category
Communication Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-244750 (URN); 10.1145/3302424.3303977 (DOI); 000470898700008 (ISI); 2-s2.0-85063919722 (Scopus ID)
Conference
EuroSys'19
Projects
Time-Critical Clouds; ULTRA; WASP
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research; EU, Horizon 2020, 770889
Note

QC 20190226

Part of ISBN 9781450362818

Available from: 2019-02-24. Created: 2019-02-24. Last updated: 2024-10-24. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-5083-4052
