Parallel N-Body Simulation is a project that simulates the gravitational interactions between celestial bodies in a 3D space using parallel computing with MPI. This approach allows for efficient computation of complex gravitational calculations by distributing the workload across multiple processors.
In astrophysics, the N-body problem involves predicting the motion of celestial bodies interacting through gravitational forces. Because every body interacts with every other, the cost of a direct simulation grows quadratically with the number of bodies, making parallel processing essential for handling large datasets.
This project uses MPI (Message Passing Interface) to break down these computations, enhancing performance and making the simulation feasible for thousands of interacting bodies.
```
parallel-nbody-simulation-mpi/
├── src/
│   ├── main.c      # Main program
│   ├── body.h      # Body struct and force calculation function
│   ├── body.c
│   ├── file_io.c   # Input/output functions
│   ├── file_io.h
│   ├── timer.c     # Timing functions
│   └── timer.h
├── data/           # Input data files
├── results/        # Output results
└── README.md       # Project documentation
```
- Parallel Computation: Uses MPI to distribute gravitational calculations across multiple processors.
- Optimized Performance: Compiled with `-O3` optimization for maximum speed.
- Configurable Simulation Size: Allows adjusting the number of bodies for different performance requirements.
- MPI: MPICH or OpenMPI.
- CMake: A build tool that generates makefiles.
- C Compiler: Supports C99 standard.
You can download a precompiled version from the latest release or build the project from source.
Visit the Releases section of this repository, download the latest release, and extract the contents. The release package includes:
- The `parallel_nbody` executable
- The `data` directory with input files
- The `results` directory for output
After extracting, you can run the program directly using `mpirun` (see the Running the Simulation section below for usage).
If you prefer to build the project from source, follow these steps:
1. Clone the Repository:

   ```bash
   git clone https://github.com/pcurz/parallel-nbody-simulation-mpi.git
   cd parallel-nbody-simulation-mpi
   ```

2. Create a Build Directory:

   ```bash
   mkdir build
   cd build
   ```

3. Run CMake:

   ```bash
   cmake ..
   ```

4. Compile the Project:

   ```bash
   make
   ```

This will generate an executable named `parallel_nbody` in the `build` directory.
Execute the program using `mpirun` with a specified number of processes:

```bash
mpirun -np <num_processes> ./parallel_nbody <power_of_2_for_bodies> <initial_data_file> <output_data_file>
```

- `<num_processes>`: Number of MPI processes (adjust based on available cores).
- `<power_of_2_for_bodies>`: Sets the number of bodies to 2^n (default: 4096).
- `<initial_data_file>`: File with initial body data (in `data/`).
- `<output_data_file>`: File for simulation results (in `results/`).
Example:

```bash
mpirun -np 4 ./parallel_nbody 12 data/initialized_4096 results/solution_4096_parallel.txt
```
- Result File: Stores body positions and velocities after each iteration.
- Performance Metrics: Execution time per iteration, average time, and billions of interactions per second.
Example output:

```
Total execution time: 0.432 seconds
Average time per iteration: 0.043 seconds
23.456 Billion interactions per second
```
- Data Distribution: Bodies are distributed among processes based on the number of available processors.
- Force Calculation: Each process calculates gravitational forces on its assigned bodies.
- Position Update: Local bodies' positions are updated based on the calculated forces.
- Synchronization: Processes synchronize data to ensure consistency across the simulation.
- Timing: Functions track execution time for performance evaluation.
The results are stored in `results/`, containing rows of body positions and velocities:

```
x y z vx vy vz
x y z vx vy vz
...
```
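A writer for this row format could look like the sketch below; the function name and `%f` precision are assumptions, and the project's `file_io.c` may differ.

```c
#include <stdio.h>

typedef struct { float x, y, z, vx, vy, vz; } Body;

/* Writes one "x y z vx vy vz" row per body, matching the format above.
 * Returns 0 on success, -1 if the file cannot be opened. */
int write_bodies(const char *path, const Body *b, int n) {
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    for (int i = 0; i < n; i++)
        fprintf(f, "%f %f %f %f %f %f\n",
                b[i].x, b[i].y, b[i].z, b[i].vx, b[i].vy, b[i].vz);
    fclose(f);
    return 0;
}
```

Because the format is plain whitespace-separated text, each row can be read back with a matching `fscanf(f, "%f %f %f %f %f %f", ...)` call.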
- Support for Adaptive Time Steps: Adjust time steps based on proximity of bodies to improve accuracy.
- Load Balancing: Distribute computations more evenly among processors.
- Visualization: Add support for visualizing the simulation in real-time.
Feel free to open issues or submit pull requests to improve the project! Contributions to enhance performance, add features, or fix bugs are always welcome.
This project is licensed under the MIT License. See `LICENSE` for more details.