Running in Parallel Mode Using MPI

The Message Passing Interface (MPI) is a framework for distributed computing, originally designed for use in supercomputers. Octeract Engine ships with MPI and can run in parallel mode out of the box.

Running on a Single Machine

The syntax to invoke MPI on a single machine is the following:
                
octeract-engine -n [number_of_processes] [problem_file]
                
            
The solver will then spawn and run n processes in parallel. It is highly recommended to use at most as many processes as there are physical cores in your system.
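For example, assuming a hypothetical problem file named problem.nl on an 8-core machine, the call could look like this:

octeract-engine -n 8 problem.nl

On Linux, the number of physical cores can be checked with a command such as lscpu.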

Running on a Computer Cluster

Octeract Engine should run on any Linux cluster out of the box. The syntax to invoke MPI on a distributed architecture is very similar to single-machine MPI mode:
                
octeract-engine -n [number_of_processes] -m [hostfile_path] [problem_file]
                
            
The hostfile, passed via -m (long form --mpi-hostfile), is required by MPI because it contains the IP addresses of all the machines that the solver is allowed to connect to. A sample hostfile could look like this:
                
10.200.30.1 : 32
10.200.30.2 : 8
10.200.30.45 : 2
10.200.30.32 : 12
                
            
This file contains two columns delimited by a colon. The IP addresses of the available machines are listed in the first column. In the second column, the user can optionally declare the maximum number of cores that can be used on each machine. If this column is omitted, MPI will use all cores by default. In this example, the first machine is allowed to utilise up to 32 cores.
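For example, assuming the sample hostfile above is saved as hostfile.txt and the problem file is named problem.nl (both names are placeholders), the cluster invocation could look like this:

octeract-engine -n 54 -m hostfile.txt problem.nl

Here 54 matches the total number of cores declared in the hostfile (32 + 8 + 2 + 12).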

Note

Octeract Engine will spawn as many processes as the user requests on startup. If that number is smaller than the total number of cores declared in the hostfile, some machines will not be used at all.
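For instance, the sample hostfile above declares 32 + 8 + 2 + 12 = 54 cores in total; a run started with -n 20 needs only a fraction of those cores, so at least some of the machines would remain idle.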
