Publications (10 of 10)
Narasimhamurthy, S., Danilov, N., Wu, S., Umanesan, G., Markidis, S., Rivas-Gomez, S., . . . de Witt, S. (2019). SAGE: Percipient Storage for Exascale Data Centric Computing. Parallel Computing, 83, 22-33
SAGE: Percipient Storage for Exascale Data Centric Computing
2019 (English). In: Parallel Computing, ISSN 0167-8191, E-ISSN 1872-7336, Vol. 83, p. 22-33. Article in journal (Refereed). Published.
Abstract [en]

We aim to implement a Big Data/Extreme Computing (BDEC) capable system infrastructure, termed SAGE (Percipient StorAGe for Exascale Data Centric Computing), as we head towards the era of Exascale computing. The SAGE system will be capable of storing and processing immense volumes of data in the Exascale regime, and it will provide the capability for Exascale-class applications to use such a storage infrastructure. SAGE addresses the increasing overlap between Big Data analysis and HPC in an era of next-generation data-centric computing. This overlap has developed due to the proliferation of massive data sources, such as large, dispersed scientific instruments and sensors, whose data need to be processed, analysed and integrated into simulations to derive scientific and innovative insights. Indeed, the SAGE platform directly addresses Exascale I/O, a problem that has not yet been sufficiently solved for simulation codes. The objective of this paper is to discuss the software architecture of the SAGE system and to look at early results obtained with some of its key methodologies as the system continues to evolve.

Place, publisher, year, edition, pages
Elsevier, 2019
Keywords
SAGE architecture, Object storage, Mero, Clovis, PGAS I/O, MPI I/O, MPI streams
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-254119 (URN), 10.1016/j.parco.2018.03.002 (DOI), 000469898400003 (), 2-s2.0-85044917976 (Scopus ID)
Note

QC 20190624

Available from: 2019-06-24. Created: 2019-06-24. Last updated: 2019-06-24. Bibliographically approved.
Chien, S. W., Sishtla, C. P., Markidis, S., Jun, Z., Peng, I. B. & Laure, E. (2018). An Evaluation of the TensorFlow Programming Model for Solving Traditional HPC Problems. In: Proceedings of the 5th International Conference on Exascale Applications and Software: . Paper presented at International Conference on Exascale Applications and Software (pp. 34). The University of Edinburgh
An Evaluation of the TensorFlow Programming Model for Solving Traditional HPC Problems
2018 (English). In: Proceedings of the 5th International Conference on Exascale Applications and Software, The University of Edinburgh, 2018, p. 34. Conference paper, Published paper (Refereed).
Abstract [en]

Computationally intensive applications, such as pattern recognition and natural language processing, are increasingly popular on HPC systems. Many of these applications use deep learning, a branch of machine learning, to determine the weights of artificial neural network nodes by minimizing a loss function. Such applications depend heavily on dense matrix multiplications, also called tensorial operations. The use of Graphics Processing Units (GPUs) has considerably sped up deep-learning computations, leading to a renaissance of artificial neural networks. Recently, the NVIDIA Volta GPU and the Google Tensor Processing Unit (TPU) have been specially designed to support deep-learning workloads. New programming models have also emerged for the convenient expression of tensorial operations and deep-learning computational paradigms. An example of such new programming frameworks is TensorFlow, an open-source deep-learning library released by Google in 2015. TensorFlow expresses algorithms as a computational graph where nodes represent operations and edges between nodes represent data flow. Multi-dimensional data, such as vectors and matrices, that flow between operations are called tensors. Computational problems therefore need to be expressed as a computational graph. In particular, TensorFlow supports distributed computation with flexible assignment of operations and data to devices such as GPUs and CPUs on different computing nodes. Computation on devices is based on optimized kernels such as MKL, Eigen and cuBLAS. Inter-node communication can be through TCP or RDMA. This work attempts to evaluate the usability and expressiveness of the TensorFlow programming model for traditional HPC problems. As an illustration, we prototyped a distributed block matrix multiplication for large dense matrices that cannot be co-located on a single device, as well as a Conjugate Gradient (CG) solver. We evaluate the difficulty of expressing traditional HPC algorithms using computational graphs and study the scalability of distributed TensorFlow on accelerated systems. Our preliminary results with distributed matrix multiplication show that distributed computation on TensorFlow is extremely scalable. This study provides an initial investigation of new emerging programming models for HPC.
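
As an informal illustration of the graph-based programming model described in the abstract, the Python sketch below builds a small TensorFlow computational graph in which a block matrix product is pinned to a GPU and a reduction to a CPU. It assumes the TensorFlow 1.x graph/session API of that era; the matrix sizes and device strings are illustrative only, not those used in the study.

    import numpy as np
    import tensorflow as tf  # assumes the TensorFlow 1.x graph/session API

    # Build a computational graph: nodes are operations, edges carry tensors.
    graph = tf.Graph()
    with graph.as_default():
        # Two blocks of a larger matrix, fed in at run time (sizes are illustrative).
        a_block = tf.placeholder(tf.float32, shape=(512, 512), name="a_block")
        b_block = tf.placeholder(tf.float32, shape=(512, 512), name="b_block")

        # Operations can be assigned to specific devices.
        with tf.device("/gpu:0"):
            partial = tf.matmul(a_block, b_block)   # one block product
        with tf.device("/cpu:0"):
            checksum = tf.reduce_sum(partial)       # stand-in for accumulating partials

    # The graph is executed only when a session runs it.
    config = tf.ConfigProto(allow_soft_placement=True)  # fall back to CPU if no GPU
    with tf.Session(graph=graph, config=config) as sess:
        value = sess.run(checksum, feed_dict={
            a_block: np.random.rand(512, 512).astype(np.float32),
            b_block: np.random.rand(512, 512).astype(np.float32),
        })
        print("checksum of the block product:", value)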

Place, publisher, year, edition, pages
The University of Edinburgh, 2018
Keywords
TensorFlow, HPC, GPU
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-232985 (URN), 978-0-9926615-3-3 (ISBN)
Conference
International Conference on Exascale Applications and Software
Note

Published in Proceedings of the 5th International Conference on Exascale Applications and Software. Edinburgh: The University of Edinburgh (2018), ISBN 978-0-9926615-3-3, p. 34. Published under license CC BY-ND 4.0.

Available from: 2018-08-07. Created: 2018-08-07. Last updated: 2019-05-10. Bibliographically approved.
Yu, Y., Delzanno, G. L., Jordanova, V., Peng, I. B. & Markidis, S. (2018). PIC simulations of wave-particle interactions with an initial electron velocity distribution from a kinetic ring current model. Paper presented at 3rd International Symposium on Recent Observations and Simulations of the Sun-Earth System (ISROSES), SEP 11-16, 2016, Golden Sands, BULGARIA. Journal of Atmospheric and Solar-Terrestrial Physics, 177, 169-178
PIC simulations of wave-particle interactions with an initial electron velocity distribution from a kinetic ring current model
2018 (English). In: Journal of Atmospheric and Solar-Terrestrial Physics, ISSN 1364-6826, E-ISSN 1879-1824, Vol. 177, p. 169-178. Article in journal (Refereed). Published.
Abstract [en]

Whistler wave-particle interactions play an important role in the dynamics of Earth's inner magnetosphere and have been the subject of numerous investigations. By running a global kinetic ring current model (RAM-SCB) for a storm event that occurred on October 23-24, 2002, we obtain the ring current electron distribution at a selected location (MLT of 9 and L of 6), where the electron distribution is composed of a warm population in the form of a partial ring in velocity space (with energy around 15 keV) in addition to a cool population with a Maxwellian-like distribution. The warm population likely comes from plasma sheet electrons injected during substorms, which supply a fresh source of particles to the inner magnetosphere. These electron distributions are then used as input to an implicit particle-in-cell code (iPIC3D) to study whistler-wave generation and the subsequent wave-particle interactions. We find that whistler waves are excited and propagate in the quasi-parallel direction along the background magnetic field. Several different wave modes are generated simultaneously with different growth rates and frequencies. The wave mode with the maximum growth rate has a frequency around 0.62 ω_ce, which corresponds to a parallel resonant energy of 2.5 keV. Linear theory analysis of the wave growth is in excellent agreement with the simulation results. These waves grow initially due to the injected warm electrons and are later damped by cyclotron absorption by electrons whose energy is close to the resonant energy and which can effectively attenuate the waves. The warm electron population overall experiences a net energy loss and a drop in anisotropy while moving along the diffusion surfaces towards regions of lower phase space density, while the cool electron population undergoes heating as the waves grow, suggesting cross-population interactions.
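
For context on the quoted numbers, the link between the wave frequency and the parallel resonant energy follows from the standard first-order cyclotron resonance condition for electrons interacting with parallel-propagating whistler waves; the relations below are a sketch of that textbook condition, not a reproduction of the paper's derivation:

    \omega - k_\parallel v_\parallel = \omega_{ce}
    \quad\Longrightarrow\quad
    v_{\parallel,\mathrm{res}} = \frac{\omega - \omega_{ce}}{k_\parallel},
    \qquad
    E_{\parallel,\mathrm{res}} = \frac{1}{2} m_e v_{\parallel,\mathrm{res}}^{2},

with k_\parallel obtained from the whistler dispersion relation at \omega \approx 0.62\,\omega_{ce}; for the plasma parameters of the simulated event this corresponds to the quoted E_{\parallel,\mathrm{res}} \approx 2.5 keV.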

Place, publisher, year, edition, pages
PERGAMON-ELSEVIER SCIENCE LTD, 2018
Keywords
Wave-particle interactions, Realistic non-Maxwellian electron distribution, Whistler wave generation
National Category
Astronomy, Astrophysics and Cosmology
Identifiers
urn:nbn:se:kth:diva-238145 (URN), 10.1016/j.jastp.2017.07.004 (DOI), 000447110300019 (), 2-s2.0-85025101910 (Scopus ID)
Conference
3rd International Symposium on Recent Observations and Simulations of the Sun-Earth System (ISROSES), SEP 11-16, 2016, Golden Sands, BULGARIA
Note

QC 20181108

Available from: 2018-11-08. Created: 2018-11-08. Last updated: 2018-11-08. Bibliographically approved.
Ma, Y., Russell, C. T., Toth, G., Chen, Y., Nagy, A. F., Harada, Y., . . . Jakosky, B. M. (2018). Reconnection in the Martian Magnetotail: Hall-MHD With Embedded Particle-in-Cell Simulations. Journal of Geophysical Research - Space Physics, 123(5), 3742-3763
Reconnection in the Martian Magnetotail: Hall-MHD With Embedded Particle-in-Cell Simulations
2018 (English). In: Journal of Geophysical Research - Space Physics, ISSN 2169-9380, E-ISSN 2169-9402, Vol. 123, no. 5, p. 3742-3763. Article in journal (Refereed). Published.
Abstract [en]

Mars Atmosphere and Volatile EvolutioN (MAVEN) mission observations show clear evidence of magnetic reconnection occurring in the Martian plasma tail. In this study, we use sophisticated numerical models to help us understand the effects of magnetic reconnection in the plasma tail. The numerical models used in this study are (a) a multispecies global Hall-magnetohydrodynamic (HMHD) model and (b) a global HMHD model two-way coupled to an embedded fully kinetic particle-in-cell code. Comparison with MAVEN observations clearly shows that the general interaction pattern is well reproduced by the global HMHD model. The coupled model takes advantage of both the efficiency of the MHD model and the ability of the particle-in-cell model to incorporate kinetic processes, making it feasible to conduct kinetic simulations for Mars under realistic solar wind conditions for the first time. Results from the coupled model show that the Martian magnetotail is highly dynamic due to magnetic reconnection, and the resulting Mars-ward plasma flow velocities are significantly higher for the lighter ion fluid, which is quantitatively consistent with MAVEN observations. The HMHD with Embedded Particle-in-Cell model predicts that the ion loss rates are more variable but have similar mean values compared with the HMHD model results.

Place, publisher, year, edition, pages
AMER GEOPHYSICAL UNION, 2018
Keywords
magnetic reconnection, MHD EPIC, Martian plasma tail
National Category
Fusion, Plasma and Space Physics
Identifiers
urn:nbn:se:kth:diva-232276 (URN), 10.1029/2017JA024729 (DOI), 000435943300031 (), 2-s2.0-85048864241 (Scopus ID)
Note

QC 20180719

Available from: 2018-07-19. Created: 2018-07-19. Last updated: 2018-07-19. Bibliographically approved.
Narasimhamurthy, S., Danilov, N., Wu, S., Umanesan, G., Chien, S. W., Rivas-Gomez, S., . . . Markidis, S. (2018). The SAGE project: A storage centric approach for exascale computing. In: 2018 ACM International Conference on Computing Frontiers, CF 2018 - Proceedings: . Paper presented at 15th ACM International Conference on Computing Frontiers, CF 2018, Ischia, Italy, 8 May 2018 through 10 May 2018 (pp. 287-292). Association for Computing Machinery (ACM)
The SAGE project: A storage centric approach for exascale computing
2018 (English). In: 2018 ACM International Conference on Computing Frontiers, CF 2018 - Proceedings, Association for Computing Machinery (ACM), 2018, p. 287-292. Conference paper, Published paper (Refereed).
Abstract [en]

SAGE (Percipient StorAGe for Exascale Data Centric Computing) is a European Commission funded project aimed at the era of Exascale computing. Its goal is to design and implement a Big Data/Extreme Computing (BDEC) capable infrastructure with an associated software stack. The SAGE system follows a storage-centric approach, as it is capable of storing and processing large data volumes in the Exascale regime. SAGE addresses the convergence of Big Data analysis and HPC in an era of next-generation data-centric computing. This convergence is driven by the proliferation of massive data sources, such as large, dispersed scientific instruments and sensors, whose data need to be processed, analyzed and integrated into simulations to derive scientific and innovative insights. A first prototype of the SAGE system has been implemented and installed at the Jülich Supercomputing Center. The SAGE storage system consists of multiple types of storage device technologies in a multi-tier I/O hierarchy, including flash, disk, and non-volatile memory technologies. The main SAGE software component is the Seagate Mero Object Storage, which is accessible via the Clovis API and higher-level interfaces. The SAGE project also includes scientific applications for the validation of the SAGE concepts. The objective of this paper is to present the SAGE project concepts and the prototype of the SAGE platform, and to discuss the software architecture of the SAGE system.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2018
Keywords
Exascale Computing, SAGE Project, Storage Centric Computing
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-234121 (URN), 10.1145/3203217.3205341 (DOI), 2-s2.0-85052213758 (Scopus ID), 9781450357616 (ISBN)
Conference
15th ACM International Conference on Computing Frontiers, CF 2018, Ischia, Italy, 8 May 2018 through 10 May 2018
Note

QC 20180903

Available from: 2018-09-03. Created: 2018-09-03. Last updated: 2018-09-03. Bibliographically approved.
Peng, I. B. (2017). Data Movement on Emerging Large-Scale Parallel Systems. (Doctoral dissertation). KTH Royal Institute of Technology
Data Movement on Emerging Large-Scale Parallel Systems
2017 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Large-scale HPC systems are an important driver for solving computational problems in scientific communities. Next-generation HPC systems will not only grow in scale but also in heterogeneity. This increased system complexity entails more challenges to data movement in HPC applications. Data movement on emerging HPC systems requires asynchronous fine-grained communication and efficient data placement in the main memory. This thesis proposes innovative programming models and algorithms to prepare HPC applications for the next computing era: (1) a data streaming model that supports emerging data-intensive applications on supercomputers, (2) a decoupling model that improves parallelism and mitigates the impact of imbalance in applications, (3) a new framework and methodology for predicting the impact of large-scale heterogeneous memory systems on HPC applications, and (4) a data placement algorithm that uses a set of rules and a decision tree to determine the data-to-memory mapping in heterogeneous main memory.

The proposed approaches in this thesis are evaluated on multiple supercomputers with different processors and interconnect networks. The evaluation uses a diverse set of applications that represent conventional scientific applications and emerging data-analytic workloads on HPC systems. The experimental results on the petascale testbed show that the approaches obtain increasing performance improvements as system scale increases and this trend supports the approaches as a valuable contribution towards future HPC systems.
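
As a purely illustrative sketch of the rule-based, decision-tree-style data-to-memory mapping mentioned in point (4) of the abstract above, the Python fragment below walks a small set of hypothetical placement rules; the memory tiers, object attributes, and thresholds are invented for illustration and are not the rules used in the thesis.

    # Hypothetical rule-based placement of data objects across memory tiers.
    # Tiers, attributes, and thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class DataObject:
        size_bytes: int
        accesses_per_second: float
        read_ratio: float  # fraction of accesses that are reads

    def place(obj: DataObject) -> str:
        """Walk a small decision tree and return a target memory tier."""
        if obj.size_bytes <= 64 * 1024:          # small, latency-sensitive objects
            return "HBM"
        if obj.accesses_per_second > 1e3:        # hot data
            return "HBM" if obj.read_ratio > 0.5 else "DRAM"
        if obj.accesses_per_second > 1.0:        # warm data
            return "DRAM"
        return "NVM"                             # cold, capacity-bound data

    print(place(DataObject(size_bytes=1 << 20, accesses_per_second=5e3, read_ratio=0.9)))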

Abstract [sv]

Large-scale HPC systems are an important driving force for solving computational problems in scientific communities. Next-generation HPC systems will grow not only in scale but also in heterogeneity. This increased system complexity brings several challenges for data movement in HPC applications. Data movement on new HPC systems requires asynchronous, fine-grained communication and efficient data placement in main memory.

This thesis proposes an innovative programming model and algorithm to prepare HPC applications for the next generation: (1) a data streaming model that supports new data-intensive applications on supercomputers, (2) a decoupling model that improves parallelism and reduces imbalance in applications, (3) a new methodology and framework for predicting the impact of large-scale, heterogeneous memory systems on HPC applications, and (4) a data placement algorithm that uses a set of rules and a decision tree to determine the data-to-memory mapping in heterogeneous main memory.

The programming model proposed in this thesis is evaluated on several supercomputers with different processors and interconnect networks. The evaluation uses a variety of applications that represent conventional scientific applications and emerging data analytics on HPC systems. Experimental results on the petascale testbed show that the programming model improves performance as the system scale increases. This trend indicates that the model is a valuable contribution to future HPC systems.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2017. p. 116
Series
TRITA-CSC-A, ISSN 1653-5723 ; 2017:25
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-218338 (URN), 978-91-7729-592-1 (ISBN)
Public defence
2017-12-18, F3, Lindstedtsvägen 26, Stockholm, 10:00 (English)
Note

QC 20171128

Available from: 2017-11-28. Created: 2017-11-27. Last updated: 2018-01-13. Bibliographically approved.
Rivas-Gomez, S., Markidis, S., Peng, I. B., Laure, E., Kestor, G. & Gioiosa, R. (2017). Extending message passing interface windows to storage. In: Proceedings - 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGRID 2017: . Paper presented at 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGRID 2017, 14 May 2017 through 17 May 2017 (pp. 728-730). Institute of Electrical and Electronics Engineers Inc.
Extending message passing interface windows to storage
2017 (English). In: Proceedings - 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGRID 2017, Institute of Electrical and Electronics Engineers Inc., 2017, p. 728-730. Conference paper, Published paper (Refereed).
Abstract [en]

This paper presents an extension to MPI that supports the one-sided communication model and window allocations in storage. Our design integrates transparently with current MPI implementations, enabling applications to target MPI windows in storage, in memory, or in both simultaneously, without major modifications. Initial performance results demonstrate that the presented MPI window extension could potentially be helpful for a wide range of use cases with low overhead.
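
A minimal sketch of the idea in mpi4py, assuming (hypothetically) that a storage-backed window is requested through MPI_Info hints; the info keys "alloc_type" and "storage_path" are invented for illustration and are not necessarily those defined by the extension described in the paper. The window creation and one-sided calls themselves are standard MPI.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    # Hypothetical hints asking the library to back this window with storage;
    # the key names are illustrative and not part of the MPI standard.
    info = MPI.Info.Create()
    info.Set("alloc_type", "storage")
    info.Set("storage_path", "/tmp/mpi_window.dat")

    nbytes = 1 << 20  # 1 MiB window per process
    win = MPI.Win.Allocate(nbytes, disp_unit=1, info=info, comm=comm)

    # One-sided access looks exactly like an ordinary in-memory window.
    win.Lock(rank=0)
    if comm.Get_rank() != 0:
        win.Put([bytearray(b"hello"), MPI.BYTE], target_rank=0)
    win.Unlock(rank=0)

    win.Free()
    info.Free()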

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2017
Keywords
MPI, One-sided communication, Parallel computing, Storage, Cluster computing, Distributed computer systems, Energy storage, Grid computing, Parallel processing systems, Low overhead, Message passing interface, Mpi implementations, One sided communication, Message passing
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-216291 (URN), 10.1109/CCGRID.2017.44 (DOI), 000426912900085 (), 2-s2.0-85027443529 (Scopus ID), 9781509066100 (ISBN)
Conference
17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGRID 2017, 14 May 2017 through 17 May 2017
Note

QC 20171211

Available from: 2017-12-11. Created: 2017-12-11. Last updated: 2018-04-03. Bibliographically approved.
Chen, Y., Toth, G., Cassak, P., Jia, X., Gombosi, T. I., Slavin, J. A., . . . Henderson, M. G. (2017). Global Three-Dimensional Simulation of Earth's Dayside Reconnection Using a Two-Way Coupled Magnetohydrodynamics With Embedded Particle-in-Cell Model: Initial Results. Journal of Geophysical Research - Space Physics, 122(10), 10318-10335
Global Three-Dimensional Simulation of Earth's Dayside Reconnection Using a Two-Way Coupled Magnetohydrodynamics With Embedded Particle-in-Cell Model: Initial Results
2017 (English). In: Journal of Geophysical Research - Space Physics, ISSN 2169-9380, E-ISSN 2169-9402, Vol. 122, no. 10, p. 10318-10335. Article in journal (Refereed). Published.
Abstract [en]

We perform a three-dimensional (3-D) global simulation of Earth's magnetosphere with kinetic reconnection physics to study flux transfer events (FTEs) and dayside magnetic reconnection with the recently developed magnetohydrodynamics with embedded particle-in-cell model. During the one-hour-long simulation, FTEs are generated quasi-periodically near the subsolar point and move toward the poles. We find that the magnetic field signature of FTEs at their early formation stage is similar to that of a "crater FTE," which is characterized by a dip in the magnetic field strength at the FTE center. After the FTE core field grows to a significant value, it becomes an FTE with a typical flux rope structure. When an FTE moves across the cusp, reconnection between the FTE field lines and the cusp field lines can dissipate the FTE. The kinetic features are also captured by our model. A crescent electron phase space distribution is found near the reconnection site. A similar distribution is found for ions at the location where the Larmor electric field appears. The lower hybrid drift instability (LHDI) along the current sheet direction also arises at the interface of the magnetosheath and magnetosphere plasma. The LHDI electric field is about 8 mV/m, and its dominant wavelength relative to the electron gyroradius agrees reasonably well with Magnetospheric Multiscale (MMS) observations.

Place, publisher, year, edition, pages
AMER GEOPHYSICAL UNION, 2017
National Category
Fusion, Plasma and Space Physics
Identifiers
urn:nbn:se:kth:diva-222211 (URN), 10.1002/2017JA024186 (DOI), 000419937800039 (), 2-s2.0-85031747497 (Scopus ID)
Note

QC 20180205

Available from: 2018-02-05. Created: 2018-02-05. Last updated: 2018-02-05. Bibliographically approved.
Peng, I. B., Markidis, S., Gioiosa, R., Kestor, G. & Laure, E. (2017). MPI Streams for HPC Applications. In: Geoffrey Fox, Vladimir Getov, Lucio Grandinetti, Gerhard Joubert, Thomas Sterling (Ed.), New Frontiers in High Performance Computing and Big Data: . Paper presented at International Research Workshop on Advanced High Performance Computing Systems, JUL, 2016, Cetraro, ITALY (pp. 75-92). IOS Press
MPI Streams for HPC Applications
2017 (English). In: New Frontiers in High Performance Computing and Big Data / [ed] Geoffrey Fox, Vladimir Getov, Lucio Grandinetti, Gerhard Joubert, Thomas Sterling, IOS Press, 2017, p. 75-92. Conference paper, Published paper (Refereed).
Abstract [en]

A data stream is a sequence of data flowing between source and destination processes. Streaming is widely used in signal, image and video processing for its efficiency in pipelining and its effectiveness in reducing memory demand. The goal of this work is to extend the use of data streams to support both conventional scientific applications and emerging data analytics applications running on HPC platforms. We introduce MPIStream, an extension to MPI, the de facto programming standard in HPC. MPIStream supports data streams either within a single application or among multiple applications. We present three use cases of MPI streams in HPC applications together with their parallel performance. We show the convenience of using MPI streams to support the needs of both traditional HPC and emerging data analytics applications running on supercomputers.
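
The MPIStream API itself is not reproduced here; as a rough conceptual illustration of a data stream between MPI processes, the mpi4py sketch below has a producer rank emit a sequence of data elements that a consumer rank receives and processes in arrival order. Plain point-to-point messages stand in for the stream operations the paper defines, and the element size and count are illustrative. Run with at least two ranks, e.g. mpiexec -n 2.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    NUM_ELEMENTS = 8   # length of the stream (illustrative)
    STREAM_TAG = 42

    if rank == 0:
        # Producer: emit a stream of fixed-size data elements.
        for i in range(NUM_ELEMENTS):
            element = np.full(4, i, dtype=np.float64)
            comm.Send([element, MPI.DOUBLE], dest=1, tag=STREAM_TAG)
    elif rank == 1:
        # Consumer: receive and process the elements in arrival order.
        buf = np.empty(4, dtype=np.float64)
        for _ in range(NUM_ELEMENTS):
            comm.Recv([buf, MPI.DOUBLE], source=0, tag=STREAM_TAG)
            print("consumed element with value", buf[0])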

Place, publisher, year, edition, pages
IOS Press, 2017
Series
Advances in Parallel Computing, ISSN 0927-5452 ; 30
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-218334 (URN), 10.3233/978-1-61499-816-7-75 (DOI), 000450329200004 (), 2-s2.0-85046361827 (Scopus ID), 978-1-61499-815-0 (ISBN), 978-1-61499-816-7 (ISBN)
Conference
International Research Workshop on Advanced High Performance Computing Systems, JUL, 2016, Cetraro, ITALY
Note

QCR 2017. QC 20191106

Available from: 2017-11-27. Created: 2017-11-27. Last updated: 2019-11-06. Bibliographically approved.
Toth, G., Chen, Y., Gombosi, T. I., Cassak, P., Markidis, S. & Peng, I. B. (2017). Scaling the Ion Inertial Length and Its Implications for Modeling Reconnection in Global Simulations. Journal of Geophysical Research - Space Physics, 122(10), 10336-10355
Scaling the Ion Inertial Length and Its Implications for Modeling Reconnection in Global Simulations
2017 (English). In: Journal of Geophysical Research - Space Physics, ISSN 2169-9380, E-ISSN 2169-9402, Vol. 122, no. 10, p. 10336-10355. Article in journal (Refereed). Published.
Abstract [en]

We investigate the use of artificially increased ion and electron kinetic scales in global plasma simulations. We argue that, as long as the global and ion inertial scales remain well separated, (1) the overall global solution is not strongly sensitive to the value of the ion inertial scale, (2) the ion-inertial-scale dynamics remain similar to the original system but occur at a larger spatial scale, and (3) structures at intermediate scales, such as magnetic islands, grow in a self-similar manner. To investigate the validity and limitations of our scaling hypotheses, we carry out many simulations of a two-dimensional magnetosphere with the magnetohydrodynamics with embedded particle-in-cell (MHD-EPIC) model. The PIC model covers the dayside reconnection site. The simulation results confirm that the hypotheses hold as long as the increased ion inertial length remains less than about 5% of the magnetopause standoff distance. Since the theoretical arguments are general, we expect these results to carry over to three dimensions. The computational cost is reduced by the third and fourth powers of the scaling factor in two- and three-dimensional simulations, respectively, which can amount to many orders of magnitude. The present results suggest that global simulations that resolve kinetic scales for reconnection are feasible. This is a crucial step for applications to the magnetospheres of Earth, Saturn, and Jupiter and to the solar corona.
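
The cost reduction quoted at the end of the abstract follows from a simple counting argument; a sketch, assuming the PIC grid spacing and the time step both track the artificially increased ion inertial length (d_i → f d_i):

    \text{number of cells} \propto f^{-D}, \qquad
    \text{number of time steps} \propto f^{-1}
    \;\;\Longrightarrow\;\;
    \text{cost} \propto f^{-(D+1)} =
    \begin{cases}
      f^{-3}, & D = 2,\\
      f^{-4}, & D = 3,
    \end{cases}

so increasing the kinetic scales by a factor f reduces the cost of the kinetic region by f^3 in two dimensions and f^4 in three dimensions, consistent with the statement above.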

Place, publisher, year, edition, pages
AMER GEOPHYSICAL UNION, 2017
National Category
Fusion, Plasma and Space Physics
Identifiers
urn:nbn:se:kth:diva-222212 (URN), 10.1002/2017JA024189 (DOI), 000419937800040 (), 2-s2.0-85031735145 (Scopus ID)
Note

QC 20180205

Available from: 2018-02-05. Created: 2018-02-05. Last updated: 2018-02-05. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-4158-3583
