Radiation particle transport problems use the Monte Carlo statistical method to predict the movement of particles through a substance. Modeling problems such as radiation particle transport is an important and effective way to better understand the behavior of particles at an atomic level. Within radiation transport, each particle is considered an independent entity that neither affects nor is affected by other particles. This property makes modeling the movement of the particles inherently, or "embarrassingly," parallel: if there exists a serial algorithm to move the particles through the substance, one simply divides the particles up and runs multiple copies of that serial algorithm simultaneously on different computers. The effects of the particles on the substance are then summed to produce the final answer.

Radiation particle transport problems are often parallelized to run on large distributed computers. If the problem description is too large to fit in the physical memory of a single node, breaking the description apart and dispersing it among the nodes of a supercomputer is a viable solution. Whether it is cutting down the number of particles per computer or "decomposing" the problem description, parallelization is an obvious solution for handling large and time-consuming radiation particle transport problems.
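The divide-and-sum scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the method of any particular transport package: the 1-D pure-absorber slab geometry, the worker count, and all function names here are illustrative assumptions. Each worker runs the same serial transport loop on its share of the particles, and the independent tallies are summed at the end.

```python
import math
import random
from multiprocessing import Pool


def transport_batch(args):
    """Serial Monte Carlo transport of one batch of independent particles.

    Hypothetical model: a 1-D slab of a pure absorber, so a particle is
    transmitted only if its sampled free-flight distance exceeds the
    slab thickness. Returns the batch's transmission tally.
    """
    n_particles, slab_thickness, mean_free_path, seed = args
    rng = random.Random(seed)  # independent stream per worker
    transmitted = 0
    for _ in range(n_particles):
        # Distance to first collision is exponentially distributed.
        distance = rng.expovariate(1.0 / mean_free_path)
        if distance >= slab_thickness:
            transmitted += 1
    return transmitted


def parallel_transport(n_total, slab_thickness, mean_free_path, n_workers=4):
    """Split the particles across workers, run the serial algorithm on
    each share simultaneously, and sum the tallies for the final answer."""
    per_worker = n_total // n_workers
    batches = [(per_worker, slab_thickness, mean_free_path, seed)
               for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        tallies = pool.map(transport_batch, batches)
    return sum(tallies) / (per_worker * n_workers)


if __name__ == "__main__":
    est = parallel_transport(200_000, slab_thickness=2.0, mean_free_path=1.0)
    # For a pure absorber the analytic transmission is exp(-thickness/mfp).
    print(f"transmission ~ {est:.3f} (analytic: {math.exp(-2.0):.3f})")
```

Because the particle histories never interact, the result is statistically identical to running the full particle count through one serial loop; only the wall-clock time changes.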
Currently there is no standard method of parallelizing radiation transport problems. While there are widely accepted parallelization tools such as MPI and OpenMP, implementations vary greatly. This is problematic for the following reasons:
Parallelization itself is a complex topic, and many developers of radiation transport physics packages are not familiar with the intricacies of parallel programming. This can give rise to the following problems:
There is a need, in the area of radiation transport modeling, for a common method of parallelization and for a tool that abstracts the parallel methods away from the developers of these packages.