.. Copyright (C) 2004-2009 The Trustees of Indiana University.
   Use, modification and distribution is subject to the Boost Software
   License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
   http://www.boost.org/LICENSE_1_0.txt)

============================
|Logo| MPI BSP Process Group
============================

.. contents::

Introduction
------------

The MPI ``mpi_process_group`` is an implementation of the `process
group`_ interface using the Message Passing Interface (MPI). It is the
primary process group used in the Parallel BGL at this time.

Where Defined
-------------

Header ``<boost/graph/distributed/mpi_process_group.hpp>``

Reference
---------

::

  namespace boost { namespace graph { namespace distributed {

  class mpi_process_group
  {
  public:
    typedef boost::mpi::communicator communicator_type;

    // Process group constructors
    mpi_process_group(communicator_type comm = communicator_type());
    mpi_process_group(std::size_t num_headers, std::size_t buffer_size,
                      communicator_type comm = communicator_type());

    mpi_process_group();
    mpi_process_group(const mpi_process_group&,
                      boost::parallel::attach_distributed_object);

    // Triggers
    template<typename Type, typename Handler>
      void trigger(int tag, const Handler& handler);

    template<typename Type, typename Handler>
      void trigger_with_reply(int tag, const Handler& handler);

    trigger_receive_context trigger_context() const;

    // Helper operations
    void poll();
    mpi_process_group base() const;
  };

  // Process query
  int process_id(const mpi_process_group&);
  int num_processes(const mpi_process_group&);

  // Message transmission
  template<typename T>
    void send(const mpi_process_group& pg, int dest, int tag, const T& value);

  template<typename T>
    void receive(const mpi_process_group& pg, int source, int tag, T& value);

  optional<std::pair<int, int> > probe(const mpi_process_group& pg);

  // Synchronization
  void synchronize(const mpi_process_group& pg);

  // Out-of-band communication
  template<typename T>
    void send_oob(const mpi_process_group& pg, int dest, int tag, const T& value);

  template<typename T, typename U>
    void
    send_oob_with_reply(const mpi_process_group& pg, int dest, int tag,
                        const T& send_value, U& receive_value);

  template<typename T>
    void receive_oob(const mpi_process_group& pg, int source, int tag, T& value);

  } } }

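As a usage illustration only (it is not part of the reference above),
the following sketch constructs a process group on the default
communicator, sends each process's rank to its neighbor, and ends the
BSP superstep with ``synchronize``. It assumes the program is started
through an MPI launcher such as ``mpirun`` and that messages sent
during a superstep can be retrieved once ``synchronize`` has completed.

::

  #include <boost/graph/distributed/mpi_process_group.hpp>
  #include <boost/mpi/environment.hpp>
  #include <iostream>

  int main(int argc, char* argv[])
  {
    boost::mpi::environment env(argc, argv);          // initialize MPI
    boost::graph::distributed::mpi_process_group pg;  // MPI_COMM_WORLD default

    int id = process_id(pg);     // rank of this process in the group
    int p  = num_processes(pg);  // total number of processes

    // Each process sends its rank to the next process in a ring.
    send(pg, (id + 1) % p, /* tag */ 0, id);

    // End the superstep; buffered messages are delivered by synchronize().
    synchronize(pg);

    int incoming = -1;
    receive(pg, (id + p - 1) % p, /* tag */ 0, incoming);
    std::cout << "Process " << id << " received " << incoming << std::endl;
    return 0;
  }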

Since the ``mpi_process_group`` is an implementation of the `process
group`_ interface, we omit the description of most of the functions in
the prototype. Two constructors deserve special mention:

::

    mpi_process_group(communicator_type comm = communicator_type());

The constructor can take an optional MPI communicator. By default, a
communicator constructed from ``MPI_COMM_WORLD`` is used.

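The sketch below (not taken from the library documentation) shows how an
explicit communicator can be passed instead of relying on the default;
the even/odd ``split`` of the world communicator is purely illustrative.

::

  #include <boost/graph/distributed/mpi_process_group.hpp>
  #include <boost/mpi/communicator.hpp>
  #include <boost/mpi/environment.hpp>

  int main(int argc, char* argv[])
  {
    boost::mpi::environment env(argc, argv);  // initialize MPI
    boost::mpi::communicator world;           // wraps MPI_COMM_WORLD

    // Illustrative only: build a sub-communicator containing every process
    // with the same rank parity, and create the process group on it.
    boost::mpi::communicator sub = world.split(world.rank() % 2);
    boost::graph::distributed::mpi_process_group pg(sub);

    return 0;
  }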

::

    mpi_process_group(std::size_t num_headers, std::size_t buffer_size,
                      communicator_type comm = communicator_type());

For performance fine-tuning, the maximum number of headers in a message
batch (``num_headers``) and the maximum combined size of batched messages
(``buffer_size``) can be specified. The maximum message size of a batch is
``16*num_headers + buffer_size``. Sensible default values were found by
optimizing a typical application on a cluster with an Ethernet network:
``num_headers=64k`` and ``buffer_size=1MB``, for a total maximum batch
message size of 2 MB.

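As an illustration only, the following sketch spells out the documented
defaults explicitly when constructing a tuned process group; MPI is
assumed to have been initialized already (for example through
``boost::mpi::environment``).

::

  #include <boost/graph/distributed/mpi_process_group.hpp>
  #include <cstddef>

  void build_tuned_group()
  {
    const std::size_t num_headers = 64 * 1024;    // 64k headers per batch
    const std::size_t buffer_size = 1024 * 1024;  // 1 MB of batched payload

    // Maximum batch size: 16 * num_headers + buffer_size
    //                   = 16 * 64k + 1 MB = 1 MB + 1 MB = 2 MB.
    boost::graph::distributed::mpi_process_group pg(num_headers, buffer_size);

    // ... use pg with the distributed graph algorithms ...
  }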

-----------------------------------------------------------------------------

Copyright (C) 2007 Douglas Gregor

Copyright (C) 2007 Matthias Troyer

.. |Logo| image:: pbgl-logo.png
            :align: middle
            :alt: Parallel BGL
            :target: http://www.osl.iu.edu/research/pbgl

.. _process group: process_group.html