[section:getting_started Getting started]

Getting started with Boost.MPI requires a working MPI implementation, a
recent version of Boost, and some configuration information.

[section:implementation MPI Implementation]

To get started with Boost.MPI, you will first need a working MPI
implementation. There are many conforming _MPI_implementations_ available.
Boost.MPI should work with any of the implementations, although it has only
been tested extensively with:

* [@http://www.open-mpi.org Open MPI]
* [@http://www-unix.mcs.anl.gov/mpi/mpich/ MPICH2]
* [@https://software.intel.com/en-us/intel-mpi-library Intel MPI]

You can test your implementation using the following simple program, which
passes a message from one processor to another. Each processor prints a
message to standard output.

  #include <mpi.h>
  #include <iostream>

  int main(int argc, char* argv[])
  {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
      int value = 17;
      int result = MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      if (result == MPI_SUCCESS)
        std::cout << "Rank 0 OK!" << std::endl;
    } else if (rank == 1) {
      int value;
      int result = MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                            MPI_STATUS_IGNORE);
      if (result == MPI_SUCCESS && value == 17)
        std::cout << "Rank 1 OK!" << std::endl;
    }

    MPI_Finalize();
    return 0;
  }

You should compile and run this program on two processors. To do this,
consult the documentation for your MPI implementation. With _OpenMPI_, for
instance, you compile with the `mpiCC` or `mpic++` compiler wrapper and run
your program via `mpirun`; no separate daemon needs to be started. For
instance, if your program is called `mpi-test.cpp`, use the following
commands:

[pre
mpic++ -o mpi-test mpi-test.cpp
mpirun -np 2 ./mpi-test
]

When you run this program, you will see both `Rank 0 OK!` and `Rank 1 OK!`
printed to the screen. However, they may be printed in any order and may
even overlap each other. The following output is perfectly legitimate for
this MPI program:

[pre
Rank Rank 1 OK!
 0 OK!
]

If your output looks something like the above, your MPI implementation
appears to be working with a C++ compiler and we're ready to move on.

[endsect]

[section:config Configure and Build]

Like the rest of Boost, Boost.MPI uses version 2 of the
[@http://www.boost.org/doc/html/bbv2.html Boost.Build] system for
configuring and building the library binary.

Please refer to the general Boost installation instructions for
[@http://www.boost.org/doc/libs/release/more/getting_started/unix-variants.html#prepare-to-use-a-boost-library-binary Unix variants]
(including Unix, Linux and MacOS) or
[@http://www.boost.org/doc/libs/1_58_0/more/getting_started/windows.html#prepare-to-use-a-boost-library-binary Windows].
The simplified build instructions should apply on most platforms, with a
few specific modifications described below.

[section:bootstrap Bootstrap]

As explained in the Boost installation instructions, running the bootstrap
script (`./bootstrap.sh` for Unix variants or `bootstrap.bat` for Windows)
from the Boost root directory will produce a `project-config.jam` file. You
need to edit that file and add the following line:

[pre
using mpi ;
]

Alternatively, you can explicitly provide the list of Boost libraries you
want to build. Please refer to the `--help` option of the `bootstrap`
script.
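For illustration, on a Unix variant the whole step might look like the
following; the library list passed to `--with-libraries` (Boost.Serialization
is listed because Boost.MPI depends on it) and the use of `echo` to append
the line, rather than a text editor, are just one possible choice:

[pre
$ ./bootstrap.sh --with-libraries=mpi,serialization
$ echo "using mpi ;" >> project-config.jam
]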
[endsect:bootstrap]

[section:setup Setting up your MPI Implementation]

First, you need to check the =include/boost/mpi/config.hpp= file to see
whether some settings need to be modified for your MPI implementation or
preferences. In particular, you will need to comment out the
[macroref BOOST_MPI_HOMOGENEOUS] macro if you plan to run on a
heterogeneous set of machines. See the
[link mpi.tutorial.performance_optimizations.homogeneous_machines optimization]
notes below.

Most MPI implementations require specific compilation and link options. In
order to hide these details from the user, most MPI implementations provide
wrappers which silently pass those options to the compiler.

Depending on your MPI implementation, some work might be needed to tell
Boost which specific MPI options to use. This is done through the
`using mpi` directive in the `project-config.jam` file, whose general form
is (do not forget to leave spaces around each *:* and before the final *;*):

[pre
using mpi : \[<MPI compiler wrapper>\] : \[<compilation and link options>\] : \[<launcher command>\] ;
]

Depending on your installation and MPI distribution, the build system might
be able to find all the required information by itself, and you just need
to specify:

[pre
using mpi ;
]

[section:troubleshooting Troubleshooting]

Most of the time, especially with production HPC clusters, some work will
be needed. Here is a list of the most common issues and suggestions on how
to fix them.

* [*Your wrapper is not in your path or does not have a standard name]

You will need to tell the build system how to call it using the first
parameter:

[pre
using mpi : /opt/mpi/bullxmpi/1.2.8.3/bin/mpicc ;
]

[warning
Boost.MPI only uses the C interface, so specifying the C wrapper should be
enough. But some implementations will insist on importing the C++ bindings.
]

* [*Your wrapper is really eccentric or does not exist]

With some implementations, or with some specific
integrations[footnote Some HPC clusters insist that their users use their
own in-house interface to the MPI system.], you will need to provide the
compilation and link options through the second parameter using 'jam'
directives. The following type of configuration used to be required for
some specific Intel MPI installations (in such a case, the name of the
wrapper can be left blank):

[pre
using mpi : mpiicc :
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib/release_mt
      <include>/softs/intel/impi/5.0.1.035/intel64/include
      <find-shared-library>mpifort
      <find-shared-library>mpi_mt
      <find-shared-library>mpigi
      <find-shared-library>dl
      <find-shared-library>rt ;
]

As a convenience, MPI wrappers usually have an option that provides the
required information, and that option usually starts with `--show`. You can
use its output to find out the required jam directives:

[pre
$ mpiicc -show
icc -I/softs/...\/include ... -L/softs/...\/lib ... -Xlinker -rpath -Xlinker \/softs/...\/lib .... -lmpi -ldl -lrt -lpthread
$
]

[pre
$ mpicc --showme
icc -I/opt/...\/include -pthread -L/opt/...\/lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$ mpicc --showme:compile
-I/opt/mpi/bullxmpi/1.2.8.3/include -pthread
$ mpicc --showme:link
-pthread -L/opt/...\/lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$
]

To see the results of MPI auto-detection, pass `--debug-configuration` on
the bjam command line.

* [*The launch syntax cannot be detected]

[note This is only used when
[link mpi.getting_started.config.tests running the tests].]

If you need to use a special command to launch an MPI program, you will
need to specify it through the third parameter of the `using mpi`
directive. So, assuming you launch the `all_gather_test` program with:

[pre
$ mpiexec.hydra -np 4 all_gather_test
]

The directive will look like:

[pre
using mpi : mpiicc : \[\] : mpiexec.hydra -n ;
]
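The cases above can of course be combined. As a purely illustrative sketch
(reusing the bullxmpi wrapper path from the first example and a
hypothetical `mpirun -np` launch command), a directive that names an
explicit wrapper, lets that wrapper supply the compilation and link
options, and sets the launcher used for the tests could look like:

[pre
using mpi : /opt/mpi/bullxmpi/1.2.8.3/bin/mpicc : : mpirun -np ;
]

The empty second parameter simply means that the wrapper itself is trusted
to provide the right compilation and link options.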
[endsect:troubleshooting]

[endsect:setup]

[section:build Build]

To build the whole Boost distribution:

[pre
$ cd <boost root directory>
$ ./b2
]

To build the Boost.MPI library and its dependencies:

[pre
$ cd <boost root directory>\/libs/mpi/build
$ ..\/../../b2
]

[endsect:build]

[section:tests Tests]

You can run the regression tests with:

[pre
$ cd <boost root directory>\/libs/mpi/test
$ ..\/../../b2
]

[endsect:tests]

[section:installation Installation]

To install the whole Boost distribution:

[pre
$ cd <boost root directory>
$ ./b2 install
]

[endsect:installation]

[endsect:config]

[section:using Using Boost.MPI]

To build applications based on Boost.MPI, compile and link them as you
normally would for MPI programs, but remember to link against the
`boost_mpi` and `boost_serialization` libraries, e.g.,

[pre
mpic++ -I/path/to/boost/mpi my_application.cpp -Llibdir \
  -lboost_mpi -lboost_serialization
]

If you plan to use the [link mpi.python Python bindings] for Boost.MPI in
conjunction with the C++ Boost.MPI, you will also need to link against the
`boost_mpi_python` library, e.g., by adding `-lboost_mpi_python-gcc` to
your link command. This step will only be necessary if you intend to
[link mpi.python.user_data register C++ types] or use the
[link mpi.python.skeleton_content skeleton/content mechanism] from within
Python.

[endsect:using]

[endsect:getting_started]