[section:introduction Introduction]

Boost.MPI is a library for message passing in high-performance
parallel applications. A Boost.MPI program is one or more processes
that can communicate either via sending and receiving individual
messages (point-to-point communication) or by coordinating as a group
(collective communication). Unlike communication in threaded
environments or using a shared-memory library, Boost.MPI processes can
be spread across many different machines, possibly with different
operating systems and underlying architectures.

Boost.MPI is not a completely new parallel programming
library. Rather, it is a C++-friendly interface to the standard
Message Passing Interface (_MPI_), the most popular library interface
for high-performance, distributed computing. MPI defines
a library interface, available from C, Fortran, and C++, for which
there are many _MPI_implementations_. Although there exist C++
bindings for MPI, they offer little functionality over the C
bindings. The Boost.MPI library provides an alternative C++ interface
to MPI that better supports modern C++ development styles, including
complete support for user-defined data types and C++ Standard Library
types, arbitrary function objects for collective algorithms, and the
use of modern C++ library techniques to maintain maximal
efficiency.

At present, Boost.MPI supports the majority of functionality in MPI
1.1. The thin abstractions in Boost.MPI allow one to easily combine it
with calls to the underlying C MPI library. Boost.MPI currently
supports:

* Communicators: Boost.MPI supports the creation,
  destruction, cloning, and splitting of MPI communicators, along with
  manipulation of process groups.
* Point-to-point communication: Boost.MPI supports
  point-to-point communication of primitive and user-defined data
  types with send and receive operations, with blocking and
  non-blocking interfaces.
* Collective communication: Boost.MPI supports collective
  operations such as [funcref boost::mpi::reduce `reduce`]
  and [funcref boost::mpi::gather `gather`] with both
  built-in and user-defined data types and function objects.
* MPI Datatypes: Boost.MPI can build MPI data types for
  user-defined types using the _Serialization_ library.
* Separating structure from content: Boost.MPI can transfer the shape
  (or "skeleton") of complex data structures (lists, maps,
  etc.) and then separately transfer their content. This facility
  optimizes for cases where the data within a large, static data
  structure needs to be transmitted many times.
Boost.MPI can be accessed either through its native C++ bindings, or
through its alternative, [link mpi.python Python interface].

[endsect:introduction]