[section:getting_started Getting started]

Getting started with Boost.MPI requires a working MPI implementation,
a recent version of Boost, and some configuration information.

[section:implementation MPI Implementation]
To get started with Boost.MPI, you will first need a working
MPI implementation. There are many conforming _MPI_implementations_
available. Boost.MPI should work with any of these
implementations, although it has only been tested extensively with:

* [@http://www.open-mpi.org Open MPI]
* [@http://www-unix.mcs.anl.gov/mpi/mpich/ MPICH2]
* [@https://software.intel.com/en-us/intel-mpi-library Intel MPI]

You can test your implementation using the following simple program,
which passes a message from one processor to another. Each processor
prints a message to standard output.

    #include <mpi.h>
    #include <iostream>

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);

      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (rank == 0) {
        int value = 17;
        int result = MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        if (result == MPI_SUCCESS)
          std::cout << "Rank 0 OK!" << std::endl;
      } else if (rank == 1) {
        int value;
        int result = MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                              MPI_STATUS_IGNORE);
        if (result == MPI_SUCCESS && value == 17)
          std::cout << "Rank 1 OK!" << std::endl;
      }
      MPI_Finalize();
      return 0;
    }

You should compile and run this program on two processors. To do this,
consult the documentation for your MPI implementation. With _OpenMPI_, for
instance, you compile with the `mpiCC` or `mpic++` compiler wrapper and run
your program via `mpirun`. For instance, if your program is called
`mpi-test.cpp`, use the following commands:

[pre
mpiCC -o mpi-test mpi-test.cpp
mpirun -np 2 ./mpi-test
]

When you run this program, you will see both `Rank 0 OK!` and `Rank 1
OK!` printed to the screen. However, they may be printed in any order
and may even overlap each other. The following output is perfectly
legitimate for this MPI program:

[pre
Rank Rank 1 OK!
0 OK!
]

If your output looks something like the above, your MPI implementation
appears to be working with a C++ compiler and we're ready to move on.
[endsect]

[section:config Configure and Build]

Like the rest of Boost, Boost.MPI uses version 2 of the
[@http://www.boost.org/doc/html/bbv2.html Boost.Build] system for
configuring and building the library binary.

Please refer to the general Boost installation instructions for
[@http://www.boost.org/doc/libs/release/more/getting_started/unix-variants.html#prepare-to-use-a-boost-library-binary Unix Variants]
(including Unix, Linux and MacOS) or
[@http://www.boost.org/doc/libs/1_58_0/more/getting_started/windows.html#prepare-to-use-a-boost-library-binary Windows].
The simplified build instructions should apply on most platforms, with a few specific modifications described below.

[section:bootstrap Bootstrap]

As explained in the Boost installation instructions, running the bootstrap script (`./bootstrap.sh` for Unix variants or `bootstrap.bat` for Windows) from the Boost root directory will produce a `project-config.jam` file. You need to edit that file and add the following line:

    using mpi ;

Alternatively, you can explicitly provide the list of Boost libraries you want to build.
Please refer to the `--help` option of the `bootstrap` script.
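As a concrete sketch (assuming a Unix-like shell, run from the Boost root directory after `./bootstrap.sh` has produced `project-config.jam`; editing the file in a text editor works just as well), the directive can be appended from the command line:

```shell
# Append the directive that enables Boost.MPI to project-config.jam.
# Note the mandatory space before the trailing ';' (Boost.Jam syntax).
echo 'using mpi ;' >> project-config.jam

# Verify that the directive is now present.
grep 'using mpi ;' project-config.jam
```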
[endsect:bootstrap]
[section:setup Setting up your MPI Implementation]

First, you need to scan the =include/boost/mpi/config.hpp= file to check whether some
settings need to be modified for your MPI implementation or preferences.

In particular, you will need to comment out the [macroref BOOST_MPI_HOMOGENEOUS] macro
if you plan to run on a heterogeneous set of machines. See the [link mpi.tutorial.performance_optimizations.homogeneous_machines optimization] notes below.

Most MPI implementations require specific compilation and link options.
In order to hide these details from the user, most MPI implementations provide
wrappers which silently pass those options to the compiler.

Depending on your MPI implementation, some work might be needed to tell Boost which
specific MPI options to use. This is done through the `using mpi ;` directive in the `project-config.jam` file, whose general form is (do not forget to leave spaces around *:* and before *;*):

[pre
using mpi
   : \[<MPI compiler wrapper>\]
   : \[<compilation and link options>\]
   : \[<mpi runner>\] ;
]

Depending on your installation and MPI distribution, the build system might be able to find all the required information, in which case you just need to specify:

[pre
using mpi ;
]

[section:troubleshooting Troubleshooting]

Most of the time, especially with production HPC clusters, some work will need to be done.

Here is a list of the most common issues and suggestions on how to fix them.

* [*Your wrapper is not in your path or does not have a standard name]

You will need to tell the build system how to call it using the first parameter:

[pre
using mpi : /opt/mpi/bullxmpi/1.2.8.3/bin/mpicc ;
]

[warning
Boost.MPI only uses the C interface, so specifying the C wrapper should be enough. But some implementations will insist on importing the C++ bindings.
]

* [*Your wrapper is really eccentric or does not exist]

With some implementations, or with some specific integrations[footnote Some HPC clusters will insist that users use their own in-house interface to the MPI system.], you will need to provide the compilation and link options through the second parameter using 'jam' directives.
The following type of configuration used to be required for some specific Intel MPI installations (in such a case, the name of the wrapper can be left blank):

[pre
using mpi : mpiicc :
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib/release_mt
      <include>/softs/intel/impi/5.0.1.035/intel64/include
      <find-shared-library>mpifort
      <find-shared-library>mpi_mt
      <find-shared-library>mpigi
      <find-shared-library>dl
      <find-shared-library>rt ;
]

As a convenience, MPI wrappers usually have an option that prints the required information; it usually starts with `--show`. You can use it to find out the requested jam directives:
[pre
$ mpiicc -show
icc -I/softs/...\/include ... -L/softs/...\/lib ... -Xlinker -rpath -Xlinker \/softs/...\/lib .... -lmpi -ldl -lrt -lpthread
$
]
[pre
$ mpicc --showme
icc -I/opt/...\/include -pthread -L/opt/...\/lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$ mpicc --showme:compile
-I/opt/mpi/bullxmpi/1.2.8.3/include -pthread
$ mpicc --showme:link
-pthread -L/opt/...\/lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$
]

To see the results of MPI auto-detection, pass `--debug-configuration` on
the bjam command line.

* [*The launch syntax cannot be detected]

[note This is only used when [link mpi.getting_started.config.tests running the tests].]
If you need to use a special command to launch an MPI program, you will need to specify it through the third parameter of the `using mpi` directive.

So, assuming you launch the `all_gather_test` program with:

[pre
$ mpiexec.hydra -np 4 all_gather_test
]

The directive will look like:

[pre
using mpi : mpiicc :
     \[<compilation and link options>\]
   : mpiexec.hydra -n ;
]

[endsect:troubleshooting]
[endsect:setup]
[section:build Build]

To build the whole Boost distribution:
[pre
$ cd <boost distribution>
$ ./b2
]
To build the Boost.MPI library and its dependencies:
[pre
$ cd <boost distribution>\/libs/mpi/build
$ ..\/../../b2
]

[endsect:build]
[section:tests Tests]

You can run the regression tests with:
[pre
$ cd <boost distribution>\/libs/mpi/test
$ ..\/../../b2
]

[endsect:tests]
[section:installation Installation]

To install the whole Boost distribution:
[pre
$ cd <boost distribution>
$ ./b2 install
]

[endsect:installation]
[endsect:config]
[section:using Using Boost.MPI]

To build applications based on Boost.MPI, compile and link them as you
normally would for MPI programs, but remember to link against the
`boost_mpi` and `boost_serialization` libraries, e.g.,

[pre
mpic++ -I/path/to/boost/mpi my_application.cpp -Llibdir \
  -lboost_mpi -lboost_serialization
]

If you plan to use the [link mpi.python Python bindings] for
Boost.MPI in conjunction with the C++ Boost.MPI, you will also need to
link against the boost_mpi_python library, e.g., by adding
`-lboost_mpi_python-gcc` to your link command. This step will
only be necessary if you intend to [link mpi.python.user_data
register C++ types] or use the [link
mpi.python.skeleton_content skeleton/content mechanism] from
within Python.

[endsect:using]
[endsect:getting_started]