Global Address Space Programming Interface

GPI
Developer(s): Fraunhofer ITWM
Stable release: GPI-2 v1.2.0 / May 15, 2015
Operating system: Linux
Type: Partitioned global address space API
Website: www.itwm.fraunhofer.de, gpi-site.com

Global Address Space Programming Interface (GPI) is an API for the development of scalable, asynchronous and fault-tolerant parallel applications.[1] It is an implementation of the partitioned global address space (PGAS) programming model.[2]

History

GPI has been developed by the Fraunhofer Institute for Industrial Mathematics (ITWM) since 2005 and was initially known as FVM (Fraunhofer Virtual Machine).

In 2009 the name was changed to Global Address Space Programming Interface (GPI).

In 2011, Fraunhofer ITWM and partners including Fraunhofer SCAI, TUD, T-Systems SfR, DLR, KIT, FZJ, DWD and Scapos initiated the GASPI[3] project to define a new API specification (GASPI, based on GPI) and to establish it as a reliable, scalable and universal tool for the HPC community. GPI-2 is the first open-source implementation of this standard.

The software is freely available to application developers and researchers; licenses for commercial use are available through Scapos AG.[2]

GPI has completely replaced MPI at Fraunhofer ITWM, where all products and research are now based on GPI-2.

Concepts

Segments

"GPI Architecture"
GPI Architecture

Modern hardware typically involves a hierarchy of memory with respect to the bandwidth and latency of read and write accesses. Within that hierarchy are non-uniform memory access (NUMA) partitions, solid state devices (SSDs), graphics processing unit (GPU) memory and many integrated core (MIC) memory. GPI memory segments map this variety of hardware layers onto the software layer. In the spirit of the PGAS approach, these GPI segments may be globally accessible from every thread of every GPI process. Segments can also be used to combine different memory models within a single application, or even across different applications.
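A minimal sketch of creating and accessing a segment with the GPI-2 C interface; the segment id, size and allocation policy below are illustrative placeholders, and error handling is reduced to the initialization call:

    #include <GASPI.h>
    #include <stdlib.h>

    int main (int argc, char *argv[])
    {
      /* start the GPI-2 process */
      if (gaspi_proc_init (GASPI_BLOCK) != GASPI_SUCCESS)
        exit (EXIT_FAILURE);

      gaspi_rank_t rank, num;
      gaspi_proc_rank (&rank);
      gaspi_proc_num (&num);

      /* create a 1 MiB segment that is visible to all ranks */
      const gaspi_segment_id_t seg_id = 0;
      gaspi_segment_create (seg_id, 1 << 20, GASPI_GROUP_ALL,
                            GASPI_BLOCK, GASPI_MEM_INITIALIZED);

      /* obtain a local pointer into the segment memory */
      gaspi_pointer_t ptr;
      gaspi_segment_ptr (seg_id, &ptr);

      gaspi_proc_term (GASPI_BLOCK);
      return EXIT_SUCCESS;
    }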

Groups

A group is a subset of all ranks. The group members share collective operations; a collective operation on a group is restricted to the ranks forming that group. There is an initial group (GASPI_GROUP_ALL) of which all ranks are members. Forming a group involves three steps: creation, addition of ranks and a commit. These steps must be performed by all ranks forming the group. The creation is performed using gaspi_group_create. If this operation is successful, ranks can be added to the created group using gaspi_group_add. To be able to use the created group, all added ranks must commit to it using gaspi_group_commit, a collective operation between the ranks in the group.
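A sketch of these three steps for two placeholder ranks, assuming an already initialized GPI-2 process; every rank that is added must execute the same sequence:

    #include <GASPI.h>

    /* form a group containing ranks 0 and 1 */
    void build_group (void)
    {
      gaspi_group_t grp;
      gaspi_group_create (&grp);

      gaspi_group_add (grp, 0);
      gaspi_group_add (grp, 1);

      /* collective among the ranks that were added */
      gaspi_group_commit (grp, GASPI_BLOCK);
    }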

One-sided communication

One-sided asynchronous communication is the basic communication mechanism provided by GPI-2. It comes in two flavours: read and write operations (single or as a list) from and into allocated segments. In addition, the write operations can be extended with notifications to provide remote completion events that a remote rank can react to. One-sided operations are non-blocking and asynchronous, allowing the program to continue executing while the data transfer proceeds.

GPI-2 provides the following communication routines (a short usage sketch follows the list):

gaspi_write
gaspi_write_list
gaspi_read
gaspi_read_list
gaspi_wait
gaspi_notify
gaspi_write_notify
gaspi_write_list_notify
gaspi_notify_waitsome
gaspi_notify_reset
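A minimal sketch of a notified write and the corresponding remote wait, assuming an initialized GPI-2 process and an existing segment 0 on both sides; all ids, offsets and sizes are placeholders:

    #include <GASPI.h>

    /* sender side: write 1 KiB from local offset 0 to remote offset 0
       on rank 1 and attach notification id 0 with value 1 */
    void send_block (void)
    {
      gaspi_write_notify (0 /* local segment */, 0 /* local offset */,
                          1 /* target rank */,
                          0 /* remote segment */, 0 /* remote offset */,
                          1024 /* size in bytes */,
                          0 /* notification id */, 1 /* notification value */,
                          0 /* queue */, GASPI_BLOCK);

      /* wait until the posted requests on queue 0 are locally complete */
      gaspi_wait (0, GASPI_BLOCK);
    }

    /* receiver side: block until notification id 0 arrives, then reset it */
    void wait_block (void)
    {
      gaspi_notification_id_t first;
      gaspi_notify_waitsome (0 /* local segment */, 0 /* first id */,
                             1 /* number of ids */, &first, GASPI_BLOCK);

      gaspi_notification_t old_val;
      gaspi_notify_reset (0, first, &old_val);
    }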

Queues

Communication requests can be submitted to different queues, where each request is assigned to exactly one queue. These queues improve scalability and can be used as channels for different types of requests: similar requests are queued together and later synchronised together, but independently of the requests in other queues (separation of concerns).
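A short sketch with two placeholder transfers on separate queues (segment ids, offsets and sizes are arbitrary), showing that waiting on one queue does not synchronise the other:

    #include <GASPI.h>

    /* submit two independent transfers on separate queues */
    void two_queues (void)
    {
      gaspi_write (0, 0,    1, 0, 0,    1024, 0 /* queue 0 */, GASPI_BLOCK);
      gaspi_write (0, 4096, 1, 0, 4096, 1024, 1 /* queue 1 */, GASPI_BLOCK);

      gaspi_wait (0, GASPI_BLOCK);   /* completes only the first request */
      gaspi_wait (1, GASPI_BLOCK);   /* completes the second request */
    }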

Global atomics

GPI-2 provides atomic operations with which values in global memory can be manipulated atomically. There are two basic atomic operations: fetch_and_add and compare_and_swap. Such values can be used as globally shared variables, for example to synchronise processes or events.
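A sketch of an atomic counter increment, assuming the counter lives at the start of segment 0 on rank 0 (segment, offset and target rank are placeholders):

    #include <GASPI.h>

    /* atomically add 1 to the counter on rank 0 and fetch its old value */
    void increment_counter (void)
    {
      gaspi_atomic_value_t old_value;
      gaspi_atomic_fetch_add (0 /* segment */, 0 /* offset */,
                              0 /* rank holding the value */,
                              1 /* value to add */,
                              &old_value, GASPI_BLOCK);
    }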

Timeouts

Fault-tolerant parallel programs require non-blocking communication calls. GPI-2 therefore provides a timeout mechanism for all potentially blocking procedures; timeouts are specified in milliseconds. For instance, GASPI_BLOCK is a predefined timeout value which blocks the procedure call until completion. GASPI_TEST is another predefined value which blocks the procedure only for the shortest time possible, i.e. the time in which the procedure call processes an atomic portion of its work.
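A sketch of a bounded wait on a queue (the one-second timeout is an arbitrary example); a GASPI_TIMEOUT return value indicates that the operation has not completed within the given time and may be retried:

    #include <GASPI.h>

    /* wait on queue 0 for at most one second */
    void wait_with_timeout (void)
    {
      gaspi_return_t ret = gaspi_wait (0, 1000 /* milliseconds */);

      if (ret == GASPI_TIMEOUT)
        {
          /* not yet complete: do other work, then try again */
        }
      else if (ret != GASPI_SUCCESS)
        {
          /* a real error occurred */
        }
    }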

Products using GPI

See also

References

  1. "GPI-2 project". Retrieved 2014-04-25.
  2. 1 2 "Scapos Parallel Software products". Retrieved 2014-04-25.
  3. "GASPI Project". Retrieved 2014-07-08.
  4. "Sharp Reflections". Retrieved 2014-07-08.

External links
