[section:communicators Communicators]
[section:managing Managing communicators]

Communication with Boost.MPI always occurs over a communicator. A
communicator contains a set of processes that can send messages among
themselves and perform collective operations. There can be many
communicators within a single program, each of which contains its own
isolated communication space that acts independently of the other
communicators.

When the MPI environment is initialized, only the "world" communicator
(called `MPI_COMM_WORLD` in the MPI C and Fortran bindings) is
available. The "world" communicator, accessed by default-constructing
a [classref boost::mpi::communicator mpi::communicator]
object, contains all of the MPI processes present when the program
begins execution. Other communicators can then be constructed by
duplicating or building subsets of the "world" communicator. For
instance, in the following program we split the processes into two
groups: one for processes generating data and the other for processes
that will collect the data. (`generate_collect.cpp`)

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <cstdlib>
  #include <boost/serialization/vector.hpp>
  namespace mpi = boost::mpi;

  enum message_tags {msg_data_packet, msg_broadcast_data, msg_finished};

  void generate_data(mpi::communicator local, mpi::communicator world);
  void collect_data(mpi::communicator local, mpi::communicator world);

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    bool is_generator = world.rank() < 2 * world.size() / 3;
    mpi::communicator local = world.split(is_generator? 0 : 1);
    if (is_generator) generate_data(local, world);
    else collect_data(local, world);

    return 0;
  }

When communicators are split in this way, their processes retain
membership in both the original communicator (which is not altered by
the split) and the new communicator. However, the ranks of the
processes may be different from one communicator to the next, because
the rank values within a communicator are always contiguous values
starting at zero. In the example above, the first two thirds of the
processes become "generators" and the remaining processes become
"collectors". The ranks of the "collectors" in the `world`
communicator will be 2/3 `world.size()` and greater, whereas the ranks
of the same collector processes in the `local` communicator will start
at zero. The following excerpt from `collect_data()` (in
`generate_collect.cpp`) illustrates how to manage multiple
communicators:

  mpi::status msg = world.probe();
  if (msg.tag() == msg_data_packet) {
    // Receive the packet of data
    std::vector<int> data;
    world.recv(msg.source(), msg.tag(), data);

    // Tell each of the collectors that we'll be broadcasting some data
    for (int dest = 1; dest < local.size(); ++dest)
      local.send(dest, msg_broadcast_data, msg.source());

    // Broadcast the actual data.
    broadcast(local, data, 0);
  }

The code in this excerpt is executed by the "master" collector, e.g.,
the node with rank 2/3 `world.size()` in the `world` communicator and
rank 0 in the `local` (collector) communicator. It receives a message
from a generator via the `world` communicator, then broadcasts the
message to each of the collectors via the `local` communicator.

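The generator side is symmetric. As a minimal sketch (the actual
`generate_data()` in `generate_collect.cpp` may differ in its details),
each generator could send a packet of data to the master collector over
the `world` communicator and then signal completion with the
`msg_finished` tag:

  void generate_data(mpi::communicator local, mpi::communicator world)
  {
    // The master collector is the first process after the generators,
    // i.e. rank 2 * world.size() / 3 in the world communicator.
    int master_collector = 2 * world.size() / 3;

    // Produce a small packet of data and ship it to the master collector.
    std::vector<int> data;
    for (int i = 0; i < 10; ++i)
      data.push_back(std::rand());
    world.send(master_collector, msg_data_packet, data);

    // Signal that this generator has no more data to send.
    world.send(master_collector, msg_finished);
  }
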
For more control in the creation of communicators for subgroups of
processes, the Boost.MPI [classref boost::mpi::group `group`] class
provides facilities to compute the union (`|`), intersection (`&`), and
difference (`-`) of two groups, generate arbitrary subgroups, etc.

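For illustration, here is a minimal sketch of combining these group
operations, using `communicator::group()` and the `communicator`
constructor that builds a communicator from a subgroup (the ranks chosen
here are purely illustrative, and the usual includes from the example
above are assumed):

  mpi::communicator world;
  mpi::group world_group = world.group();

  // Illustrative split: even-ranked processes form one subgroup.
  std::vector<int> even_ranks;
  for (int r = 0; r < world.size(); r += 2)
    even_ranks.push_back(r);

  mpi::group even_group = world_group.include(even_ranks.begin(), even_ranks.end());
  mpi::group odd_group  = world_group - even_group;   // set difference

  // A communicator containing only the processes in even_group; on
  // processes outside the group the result should be a null communicator,
  // which evaluates to false.
  mpi::communicator even_comm(world, even_group);
  if (even_comm)
    std::cout << "rank " << even_comm.rank() << " of " << even_comm.size()
              << " in even_comm\n";
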
[endsect:managing]

[section:cartesian_communicator Cartesian communicator]

A communicator can be organised as a Cartesian grid; here is a basic example:

  #include <vector>
  #include <iostream>

  #include <boost/mpi/communicator.hpp>
  #include <boost/mpi/collectives.hpp>
  #include <boost/mpi/environment.hpp>
  #include <boost/mpi/cartesian_communicator.hpp>

  #include <boost/test/minimal.hpp>

  namespace mpi = boost::mpi;
  int test_main(int argc, char* argv[])
  {
    mpi::environment  env;
    mpi::communicator world;

    // This example expects exactly 2 x 3 x 4 = 24 processes.
    if (world.size() != 24)  return -1;
    // A 2x3x4 grid, periodic in every dimension.
    mpi::cartesian_dimension dims[] = {{2, true}, {3,true}, {4,true}};
    mpi::cartesian_communicator cart(world, mpi::cartesian_topology(dims));
    // Print each process's rank and coordinates, one rank at a time.
    for (int r = 0; r < cart.size(); ++r) {
      cart.barrier();
      if (r == cart.rank()) {
        std::vector<int> c = cart.coordinates(r);
        std::cout << "rk :" << r << " coords: "
                  << c[0] << ' ' << c[1] << ' ' << c[2] << '\n';
      }
    }
    return 0;
  }

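Coordinates can also be mapped back to ranks, and the ranks of
neighbouring processes along a dimension can be queried. A brief sketch,
assuming the `rank(coordinates)` and `shifted_ranks(dimension,
displacement)` members of
[classref boost::mpi::cartesian_communicator `cartesian_communicator`],
continuing the example above:

  // Map this process's coordinates back to its rank.
  std::vector<int> coords = cart.coordinates(cart.rank());
  int same = cart.rank(coords);   // same == cart.rank()

  // Source and destination ranks one step along dimension 0; the
  // dimension is periodic, so the shift wraps around.
  std::pair<int, int> neighbours = cart.shifted_ranks(0, 1);
  std::cout << "previous: " << neighbours.first
            << " next: " << neighbours.second << '\n';
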
[endsect:cartesian_communicator]
[endsect:communicators]