A new coordination model is constructed for distributed shared memory parallel programs. It exploits typing of shared resources and formal specification of synchronization constraints that are known a priori.
Principles for coordination and composition of parallel/distributed programs are discussed. We advocate a synchronizing shared memory model (EDA) for coordination and an algebraic approach to building programs using a linking language (LL) based on module composition, restriction and renaming. A prototype system, ErlEda, illustrating these principles is described. The system is built on the concurrent programming language Erlang and its distributed environment. We illustrate the approach using the Dirichlet problem.
This article presents mEDA-2, an extension to PVM which provides Virtual Shared Memory (VSM) for inter-task communication and synchronization. mEDA-2 consists of functions to access VSM and a daemon to manage parallel program termination. Access to VSM is based on the semantics of the EDA model. The aim of developing mEDA-2 was to facilitate construction of parallel programs in PVM by providing a unified approach to message passing and shared memory models.
Simulation is a powerful tool for studying the behavior of novel architectures and improving their performance. However, the time, effort and resources invested in developing a reliable simulator with the required level of detail may become prohibitively large. We present a simulation platform specifically designed to simulate the class of multithreaded architectures. The most important features of this simulator are its flexibility and ease of use. The simulation model provides the user with a wide range of design criteria, architectural parameters and workload characteristics. The simulation platform includes several tools, such as an experiment planner, an interface to Matlab for processing and displaying results, and an interface to PVM for the execution of independent experiments in parallel. The simulation model is validated by comparison of analytical and experimental results.
Multithreaded architectures are widely used for, among other things, hiding long memory latency. In such an architecture, a number of threads are allocated to each Processing Element (PE), and whenever a running thread becomes suspended, the PE switches to the next ready thread. We have developed a simulation platform, MTASim, that can be used to test and evaluate various policies and parameters of a multithreaded computer. The most important features of MTASim are its flexibility and its ease of use. The MTASim model is based on finite state machines and can be easily modified and expanded. The simulation platform includes an experiment planner, an interface to PVM for the execution of independent experiments in parallel, and an interface to Matlab for processing and displaying results. MTASim has been used, for example, to determine the optimal number of threads and to evaluate various prefetching strategies and thread replacement algorithms.
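The switch-on-suspension mechanism described above can be illustrated with a minimal discrete-event sketch. This is not MTASim itself; the parameters (run length between suspensions, memory latency, switch cost) and the round-robin ready queue are simplifying assumptions chosen for illustration:

```python
import heapq
from collections import deque

def simulate_pe(n_threads, run_len, latency, switch_cost, total_cycles=100_000):
    """Simulate one PE: the running thread computes for `run_len` cycles,
    then suspends on a long memory access for `latency` cycles; the PE
    pays `switch_cost` cycles to switch to the next ready thread.
    Returns the fraction of time spent on useful work (utilization)."""
    ready = deque(range(n_threads))   # threads ready to run
    waking = []                       # min-heap of (wake_time, thread)
    t, busy = 0, 0
    while t < total_cycles:
        # move threads whose memory access has completed back to the ready queue
        while waking and waking[0][0] <= t:
            ready.append(heapq.heappop(waking)[1])
        if not ready:
            t = waking[0][0]          # PE idles until some thread wakes
            continue
        th = ready.popleft()
        t += switch_cost              # context-switch overhead
        t += run_len                  # useful work until the next suspension
        busy += run_len
        heapq.heappush(waking, (t + latency, th))
    return busy / t

# With enough threads the latency is hidden and utilization approaches
# run_len / (run_len + switch_cost); with one thread the PE mostly idles.
print(simulate_pe(1, 20, 100, 2), simulate_pe(10, 20, 100, 2))
```

With these numbers a single thread yields roughly 20/122 ≈ 0.16 utilization, while ten threads saturate the PE at 20/22 ≈ 0.91, which is the kind of trade-off (optimal thread count vs. switch overhead) such a simulator explores.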
In this article we concentrate on implementation issues of a new shared-memory programming model, mEDA, on top of PVM. The main goal of this research is to provide a flexible, effective and relatively simple model of inter-task communication and synchronization via synchronizing shared memory.
A combination of multithreading and prefetching can increase the efficiency of large-scale multiprocessors. In this paper, we evaluate two prefetching techniques in multithreaded architectures: switch-on-prefetch and run-on-prefetch. We present two basic analytical models of multithreading with prefetching, which allow rough performance prediction in the early stages of top-down system design. The first model is a first-order approximation for the efficiency of multithreaded architectures with prefetching. The second model is a queuing network of the architecture.
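The abstract does not reproduce the models themselves. As a point of reference, a commonly used first-order approximation for a multithreaded processor *without* prefetching (not necessarily the exact model of this paper) expresses utilization $U$ in terms of the run length $R$ between long-latency accesses, the memory latency $L$, the context-switch cost $C$, and the number of threads $N$:

$$
U(N) \approx
\begin{cases}
\dfrac{N\,R}{R + C + L}, & N < N_{\mathrm{sat}},\\[8pt]
\dfrac{R}{R + C}, & N \ge N_{\mathrm{sat}},
\end{cases}
\qquad
N_{\mathrm{sat}} = \left\lceil \frac{R + C + L}{R + C} \right\rceil .
$$

Below saturation each additional thread adds its share of useful work; beyond $N_{\mathrm{sat}}$ the latency is fully hidden and only the switch overhead $C$ limits efficiency. A prefetching model refines this by removing or shortening the $L$ term for accesses that are prefetched in time.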
In this article we concentrate on semantics of operations on synchronizing shared memory that provide useful primitives for shared memory access and update. The operations are a part of a new shared-memory programming model called mEDA. The main goal of this research is to provide a flexible, effective and relatively simple model of inter-process communication and synchronization via synchronizing shared memory.