[section:point_to_point Point-to-Point communication]

[section:blocking Blocking communication]

As a message passing library, MPI's primary purpose is to route
messages from one process to another, i.e., point-to-point. MPI
contains routines that can send messages, receive messages, and query
whether messages are available. Each message has a source process, a
target process, a tag, and a payload containing arbitrary data. The
source and target processes are the ranks of the sender and receiver
of the message, respectively. Tags are integers that allow the
receiver to distinguish between different messages coming from the
same sender.
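
The returned [classref boost::mpi::status status] object records the
source and tag of each incoming message, which is how a receiver can
sort traffic from the same sender. The fragment below is a minimal
sketch, not part of the tutorial program that follows: it assumes a
`world` communicator as in the programs below and a peer on rank 1
that sends with either tag 0 or tag 1.

  // Receive whichever message arrives next, then dispatch on its tag.
  std::string payload;
  mpi::status s = world.recv(1, mpi::any_tag, payload);
  if (s.tag() == 0)
    std::cout << "greeting: " << payload << std::endl;
  else
    std::cout << "reply: " << payload << std::endl;

  // iprobe checks whether a message is available without receiving
  // it; the returned optional is empty when nothing is pending.
  if (boost::optional<mpi::status> st = world.iprobe(1, 0))
    std::cout << "a tag-0 message from rank 1 is waiting" << std::endl;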

The following program uses two MPI processes to write "Hello, world!"
to the screen (`hello_world.cpp`):

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <string>
  #include <boost/serialization/string.hpp>
  namespace mpi = boost::mpi;

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    if (world.rank() == 0) {
      world.send(1, 0, std::string("Hello"));
      std::string msg;
      world.recv(1, 1, msg);
      std::cout << msg << "!" << std::endl;
    } else {
      std::string msg;
      world.recv(0, 0, msg);
      std::cout << msg << ", ";
      std::cout.flush();
      world.send(0, 1, std::string("world"));
    }

    return 0;
  }

The first processor (rank 0) passes the message "Hello" to the second
processor (rank 1) using tag 0. The second processor prints the string
it receives, along with a comma, then passes the message "world" back
to processor 0 with a different tag. The first processor then writes
this message with the "!" and exits. All sends are accomplished with
the [memberref boost::mpi::communicator::send
communicator::send] method and all receives use a corresponding
[memberref boost::mpi::communicator::recv
communicator::recv] call.
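
To try the program, compile it against Boost.MPI and
Boost.Serialization, then launch two processes with your MPI
implementation's process launcher. Exact wrapper and library names
vary by installation; a typical invocation might look like:

  mpic++ hello_world.cpp -o hello_world -lboost_mpi -lboost_serialization
  mpirun -np 2 ./hello_world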

[endsect:blocking]

[section:nonblocking Non-blocking communication]

The default MPI communication operations--`send` and `recv`--may have
to wait until the entire transmission is completed before they can
return. Sometimes this *blocking* behavior has a negative impact on
performance, because the sender could be performing useful computation
while it is waiting for the transmission to occur. More important,
however, are the cases where several communication operations must
occur simultaneously, e.g., a process will both send and receive at
the same time.

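With blocking calls, ordering also matters: if both ranks posted their
`send` before their `recv`, each send could wait on a receive that is
never reached. The fragment below is an illustrative sketch only (it
assumes exactly two ranks, so `1 - world.rank()` names the peer);
whether it actually deadlocks depends on how much the MPI
implementation buffers internally, so it may appear to work for small
messages.

  // Both ranks send first, then receive: a potential deadlock, since
  // neither blocking send is guaranteed to return until the matching
  // receive has been posted.
  std::string msg, out_msg = "ping";
  world.send(1 - world.rank(), 0, out_msg);
  world.recv(1 - world.rank(), 0, msg);
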
Let's revisit our "Hello, world!" program from the previous
[link mpi.tutorial.point_to_point.blocking section].
The core of this program transmits two messages:

    if (world.rank() == 0) {
      world.send(1, 0, std::string("Hello"));
      std::string msg;
      world.recv(1, 1, msg);
      std::cout << msg << "!" << std::endl;
    } else {
      std::string msg;
      world.recv(0, 0, msg);
      std::cout << msg << ", ";
      std::cout.flush();
      world.send(0, 1, std::string("world"));
    }

The first process passes a message to the second process, then
prepares to receive a message. The second process does the send and
receive in the opposite order. However, this sequence of events is
just that--a *sequence*--meaning that there is essentially no
parallelism. We can use non-blocking communication to ensure that the
two messages are transmitted simultaneously
(`hello_world_nonblocking.cpp`):

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <string>
  #include <boost/serialization/string.hpp>
  namespace mpi = boost::mpi;

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    if (world.rank() == 0) {
      mpi::request reqs[2];
      std::string msg, out_msg = "Hello";
      reqs[0] = world.isend(1, 0, out_msg);
      reqs[1] = world.irecv(1, 1, msg);
      mpi::wait_all(reqs, reqs + 2);
      std::cout << msg << "!" << std::endl;
    } else {
      mpi::request reqs[2];
      std::string msg, out_msg = "world";
      reqs[0] = world.isend(0, 1, out_msg);
      reqs[1] = world.irecv(0, 0, msg);
      mpi::wait_all(reqs, reqs + 2);
      std::cout << msg << ", ";
    }

    return 0;
  }

We have replaced calls to the [memberref
boost::mpi::communicator::send communicator::send] and
[memberref boost::mpi::communicator::recv
communicator::recv] members with similar calls to their non-blocking
counterparts, [memberref boost::mpi::communicator::isend
communicator::isend] and [memberref
boost::mpi::communicator::irecv communicator::irecv]. The
prefix *i* indicates that the operations return immediately with a
[classref boost::mpi::request mpi::request] object, which
allows one to query the status of a communication request (see the
[memberref boost::mpi::request::test test] method) or wait
until it has completed (see the [memberref
boost::mpi::request::wait wait] method). Multiple requests
can be completed at the same time with the [funcref
boost::mpi::wait_all wait_all] operation.

[important Regarding communication completion/progress:
The MPI standard requires users to keep the request
handle for a non-blocking communication, and to call the "wait"
operation (or successfully test for completion) to complete the send
or receive.
Unlike most C MPI implementations, which allow the user to
discard the request for a non-blocking send, Boost.MPI requires the
user to call "wait" or "test", since the request object might contain
temporary buffers that have to be kept until the send is
completed.
Moreover, the MPI standard does not guarantee that the
receive makes any progress before a call to "wait" or "test", although
most C MPI implementations do allow receives to progress before
the call to "wait" or "test".
Boost.MPI, on the other hand, generally
requires "test" or "wait" calls to make progress.
More specifically, Boost.MPI guarantees that calling "test" multiple
times will eventually complete the communication (because a
serialized communication is potentially a multi-step operation).]
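
One practical consequence: a non-blocking receive can be overlapped
with computation by polling [memberref boost::mpi::request::test
request::test] until it reports completion. The loop below is a
minimal sketch of rank 0 overlapping its receive from the program
above; `do_useful_work` is a hypothetical placeholder, not a
Boost.MPI function.

  // Poll test() until the transfer finishes; the repeated calls also
  // drive Boost.MPI's multi-step serialized transfers to completion.
  std::string msg;
  mpi::request req = world.irecv(1, 1, msg);
  while (!req.test())      // empty optional means "not done yet"
    do_useful_work();      // hypothetical helper, not defined here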

If you run this program multiple times, you may see some strange
results: namely, some runs will produce:

  Hello, world!

while others will produce:

  world!
  Hello,

or even some garbled version of the letters in "Hello" and
"world". This indicates that there is some parallelism in the program:
after both messages are (simultaneously) transmitted, both processes
concurrently execute their print statements. For both performance and
correctness, non-blocking communication operations are critical to
many parallel applications using MPI.

[endsect:nonblocking]
[endsect:point_to_point]