1 - 6 of 6
  • 1. Narasimhamurthy, S.; Danilov, N.; Wu, S.; Umanesan, G.; Chien, Steven Wei Der (KTH); Rivas-Gomez, Sergio (KTH); Peng, Ivy Bo (KTH); Laure, Erwin (KTH); De Witt, S.; Pleiter, D.; Markidis, Stefano (KTH).
    The SAGE project: A storage centric approach for exascale computing. 2018. In: 2018 ACM International Conference on Computing Frontiers, CF 2018 - Proceedings, Association for Computing Machinery (ACM), 2018, p. 287-292. Conference paper (Refereed).
    Abstract [en]

    SAGE (Percipient StorAGe for Exascale Data Centric Computing) is a European Commission funded project towards the era of Exascale computing. Its goal is to design and implement a Big Data/Extreme Computing (BDEC) capable infrastructure with an associated software stack. The SAGE system follows a storage centric approach as it is capable of storing and processing large data volumes at the Exascale regime. SAGE addresses the convergence of Big Data Analysis and HPC in an era of next-generation data centric computing. This convergence is driven by the proliferation of massive data sources, such as large, dispersed scientific instruments and sensors, whose data needs to be processed, analyzed and integrated into simulations to derive scientific and innovative insights. A first prototype of the SAGE system has been implemented and installed at the Jülich Supercomputing Center. The SAGE storage system consists of multiple types of storage device technologies in a multi-tier I/O hierarchy, including flash, disk, and non-volatile memory technologies. The main SAGE software component is the Seagate Mero Object Storage, which is accessible via the Clovis API and higher-level interfaces. The SAGE project also includes scientific applications for the validation of the SAGE concepts. The objective of this paper is to present the SAGE project concepts and the prototype of the SAGE platform, and to discuss the software architecture of the SAGE system.

  • 2. Narasimhamurthy, Sai (Seagate Syst UK, London, England); Danilov, Nikita (Seagate Syst UK, London, England); Wu, Sining (Seagate Syst UK, London, England); Umanesan, Ganesan (Seagate Syst UK, London, England); Markidis, Stefano (KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST)); Rivas-Gomez, Sergio (KTH, EECS, CST); Peng, Ivy Bo (KTH, EECS, CST); Laure, Erwin (KTH, EECS, Centre for High Performance Computing, PDC); Pleiter, Dirk (Julich Supercomp Ctr, Julich, Germany); de Witt, Shaun (Culham Ctr Fus Energy, Abingdon, Oxon, England).
    SAGE: Percipient Storage for Exascale Data Centric Computing. 2019. In: Parallel Computing, ISSN 0167-8191, E-ISSN 1872-7336, Vol. 83, p. 22-33. Article in journal (Refereed).
    Abstract [en]

    We aim to implement a Big Data/Extreme Computing (BDEC) capable system infrastructure, termed SAGE (Percipient StorAGe for Exascale Data Centric Computing), as we head towards the era of Exascale computing. The SAGE system will be capable of storing and processing immense volumes of data at the Exascale regime, and will provide the capability for Exascale-class applications to use such a storage infrastructure. SAGE addresses the increasing overlap between Big Data Analysis and HPC in an era of next-generation data centric computing, a convergence driven by the proliferation of massive data sources, such as large, dispersed scientific instruments and sensors, whose data needs to be processed, analysed and integrated into simulations to derive scientific and innovative insights. Exascale I/O, a problem that has not been sufficiently addressed for simulation codes, is addressed by the SAGE platform. The objective of this paper is to discuss the software architecture of the SAGE system and to look at early results we have obtained employing some of its key methodologies, as the system continues to evolve.

  • 3. Rivas Gomez, Sergio (KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST)); Markidis, Stefano (KTH, EECS, CST); Laure, Erwin (KTH, EECS, Centre for High Performance Computing, PDC); Brabazon, K.; Perks, O.; Narasimhamurthy, S.
    Decoupled Strategy for Imbalanced Workloads in MapReduce Frameworks. 2019. In: Proceedings - 20th International Conference on High Performance Computing and Communications, 16th International Conference on Smart City and 4th International Conference on Data Science and Systems, HPCC/SmartCity/DSS 2018, Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 921-927. Conference paper (Refereed).
    Abstract [en]

    In this work, we consider the integration of MPI one-sided communication and non-blocking I/O in HPC-centric MapReduce frameworks. Using a decoupled strategy, we aim to overlap the Map and Reduce phases of the algorithm by allowing processes to communicate and synchronize using solely one-sided operations. Hence, we effectively improve performance in situations where the workload per process becomes unexpectedly imbalanced. Using a Word-Count implementation and a large dataset from the Purdue MapReduce Benchmarks Suite (PUMA), we demonstrate that our approach can provide up to 23% performance improvement on average compared to a reference MapReduce implementation that uses state-of-the-art MPI collective communication and I/O.
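A minimal sketch of the decoupled, one-sided pattern this abstract describes, using only standard MPI-3 RMA calls: each rank exposes a window of word counters, and mappers add their partial counts into the owning rank's window with MPI_Accumulate, so no rank has to wait in a collective while others are still mapping. The fixed toy vocabulary and the owner() placement function are illustrative assumptions, not the paper's actual implementation.

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define VOCAB 4  /* toy fixed vocabulary so each word has a fixed slot */
static const char *words[VOCAB] = { "exascale", "storage", "mpi", "data" };

/* Illustrative placement: word w is reduced by rank (w mod nranks). */
static int owner(int w, int nranks) { return w % nranks; }

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each rank owns one counter per vocabulary word. */
    long *counts;
    MPI_Win win;
    MPI_Win_allocate(VOCAB * sizeof(long), sizeof(long), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &counts, &win);
    memset(counts, 0, VOCAB * sizeof(long));
    MPI_Barrier(MPI_COMM_WORLD);          /* counters initialized everywhere */

    /* Map phase: push partial counts with passive-target one-sided ops,
       so the target never has to post a matching receive or collective. */
    long partial = rank + 1;              /* pretend local count per word */
    MPI_Win_lock_all(0, win);
    for (int w = 0; w < VOCAB; w++)
        MPI_Accumulate(&partial, 1, MPI_LONG, owner(w, nranks),
                       (MPI_Aint)w, 1, MPI_LONG, MPI_SUM, win);
    MPI_Win_unlock_all(win);              /* completes all my accumulates */

    MPI_Barrier(MPI_COMM_WORLD);          /* everyone's updates have landed */

    /* Reduce phase: each owner reads its totals under a local lock. */
    MPI_Win_lock(MPI_LOCK_SHARED, rank, 0, win);
    for (int w = 0; w < VOCAB; w++)
        if (owner(w, nranks) == rank)
            printf("rank %d: %-8s -> %ld\n", rank, words[w], counts[w]);
    MPI_Win_unlock(rank, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Because the targets never enter a matching call, a rank that finishes its Map work early can move on to Reduce work without waiting on stragglers, which is the imbalance scenario the paper targets.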

  • 4. Rivas-Gomez, Sergio (KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST)); Gioiosa, Roberto (Oak Ridge Natl Lab, Oak Ridge, TN 37830 USA); Peng, Ivy Bo (Oak Ridge Natl Lab, Oak Ridge, TN 37830 USA); Kestor, Gokcen (Oak Ridge Natl Lab, Oak Ridge, TN 37830 USA); Narasimhamurthy, Sai (Seagate Syst UK, Havant PO9 1SA, England); Laure, Erwin (KTH, EECS, CST); Markidis, Stefano (KTH, EECS, CST).
    MPI windows on storage for HPC applications. 2018. In: Parallel Computing, ISSN 0167-8191, E-ISSN 1872-7336, Vol. 77, p. 38-56. Article in journal (Refereed).
    Abstract [en]

    Upcoming HPC clusters will feature hybrid memories and storage devices per compute node. In this work, we propose to use the MPI one-sided communication model and MPI windows as a single interface for programming both memory and storage. We describe the design and implementation of MPI storage windows, and present their benefits for out-of-core execution, parallel I/O and fault-tolerance. In addition, we explore the integration of heterogeneous window allocations, where memory and storage share a unified virtual address space. When performing large, irregular memory operations, we verify that MPI windows on local storage incur a 55% performance penalty on average. When using a Lustre parallel file system, "asymmetric" performance is observed, with over 90% degradation in write operations. Nonetheless, experimental results of a Distributed Hash Table, the HACC I/O kernel mini-application, and a novel MapReduce implementation based on the use of MPI one-sided communication indicate that the overall penalty of MPI windows on storage can be negligible in most cases in real-world applications.
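As a rough illustration of the interface the abstract describes, the sketch below allocates an MPI window while passing allocation hints through an MPI_Info object. The hint keys used here ("alloc_type", "storage_alloc_filename") are assumptions for illustration only; the actual keys are defined by the paper's implementation, and a stock MPI library will simply ignore unknown keys and return an ordinary in-memory window.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Hypothetical hints asking for a storage-backed window. */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "alloc_type", "storage");
    MPI_Info_set(info, "storage_alloc_filename", "/tmp/win");

    /* Same window interface as for memory: the point of the approach is
       that the target of put/get can transparently live on a storage tier. */
    double *base;
    MPI_Win win;
    const MPI_Aint n = 1 << 20;                    /* 8 MB of doubles */
    MPI_Win_allocate(n * sizeof(double), sizeof(double), info,
                     MPI_COMM_WORLD, &base, &win);

    /* Out-of-core style local update, done inside a self-lock epoch. */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, 0, win);
    for (MPI_Aint i = 0; i < n; i++)
        base[i] = (double)rank;
    MPI_Win_unlock(rank, win);

    if (rank == 0)
        printf("allocated a %ld-byte window (memory or storage, per hints)\n",
               (long)(n * sizeof(double)));

    MPI_Win_free(&win);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```

Per the abstract, the design goal is that such a window behaves like an ordinary memory window from the application's perspective, so out-of-core data structures, parallel I/O and checkpointing can reuse existing MPI_Put/MPI_Get code paths.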

  • 5. Rivas-Gomez, Sergio (KTH, School of Computer Science and Communication (CSC)); Markidis, Stefano (KTH, CSC, Computational Science and Technology (CST)); Peng, Ivy Bo (KTH, CSC, CST); Laure, E.; Kestor, G.; Gioiosa, R.
    Extending message passing interface windows to storage. 2017. In: Proceedings - 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGRID 2017, Institute of Electrical and Electronics Engineers Inc., 2017, p. 728-730. Conference paper (Refereed).
    Abstract [en]

    This paper presents an extension to MPI supporting the one-sided communication model and window allocations in storage. Our design transparently integrates with current MPI implementations, enabling applications to target MPI windows in storage, memory, or both simultaneously, without major modifications. Initial performance results demonstrate that the presented MPI window extension could potentially be helpful for a wide range of use cases, with low overhead.

  • 6. Rivas-Gomez, Sergio (KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST)); Pena, A. J.; Moloney, D.; Laure, Erwin (KTH, EECS, CST); Markidis, Stefano (KTH, EECS, CST).
    Exploring the vision processing unit as co-processor for inference. 2018. In: Proceedings - 2018 IEEE 32nd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2018, Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 589-598, article id 8425465. Conference paper (Refereed).
    Abstract [en]

    The success of the exascale supercomputer is widely argued to depend on novel technological breakthroughs that effectively reduce power consumption and thermal dissipation requirements. In this work, we consider the integration of co-processors in high-performance computing (HPC) to enable low-power, seamless computation offloading of certain operations. In particular, we explore the so-called Vision Processing Unit (VPU), a highly parallel vector processor with a power envelope of less than 1 W. We evaluate this chip during inference using a pre-trained GoogLeNet convolutional network model and a large image dataset from the ImageNet ILSVRC challenge. Preliminary results indicate that a multi-VPU configuration provides performance similar to that of reference CPU and GPU implementations, while reducing the thermal design power (TDP) by up to 8x.
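For context on the offload flow this abstract evaluates, here is a hedged sketch of running a pre-compiled network on a Movidius-class VPU through the (since-deprecated) NCSDK v1 C API. The function names and signatures are quoted from memory and should be treated as assumptions, the graph filename is a placeholder, and real use also requires converting the preprocessed input image to the fp16 layout the device expects.

```c
#include <stdio.h>
#include <stdlib.h>
#include <mvnc.h>   /* Movidius NCSDK v1 header; assumed to be installed */

int main(void)
{
    /* 1. Find and open the first attached VPU device. */
    char devname[100];
    if (mvncGetDeviceName(0, devname, sizeof(devname)) != MVNC_OK) {
        fprintf(stderr, "no VPU device found\n");
        return 1;
    }
    void *dev = NULL;
    mvncOpenDevice(devname, &dev);

    /* 2. Load a graph pre-compiled offline by the SDK from the trained
          GoogLeNet model (placeholder filename). */
    FILE *f = fopen("googlenet.graph", "rb");
    if (!f) { fprintf(stderr, "missing graph file\n"); return 1; }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);
    void *blob = malloc(len);
    fread(blob, 1, len, f);
    fclose(f);

    void *graph = NULL;
    mvncAllocateGraph(dev, &graph, blob, (unsigned int)len);

    /* 3. Offload one inference: send an fp16 input tensor (224x224x3 for
          GoogLeNet), then block until the result comes back from the VPU. */
    size_t n = 224 * 224 * 3;
    unsigned short *input = calloc(n, sizeof(unsigned short)); /* fp16 image */
    mvncLoadTensor(graph, input, (unsigned int)(n * sizeof(unsigned short)), NULL);

    void *output = NULL;           /* fp16 class probabilities */
    unsigned int output_len = 0;
    void *userparam = NULL;
    mvncGetResult(graph, &output, &output_len, &userparam);
    printf("received %u bytes of class scores from the VPU\n", output_len);

    /* 4. Tear down. */
    mvncDeallocateGraph(graph);
    mvncCloseDevice(dev);
    free(input);
    free(blob);
    return 0;
}
```

Each such request runs on a device with a sub-1 W power envelope, which is why the abstract evaluates a multi-VPU configuration to approach the throughput of the reference CPU and GPU implementations.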
