Calculating the effective properties of random materials is not a trivial procedure. Because of their random composition, random phase shapes, and widely varying length scales, these properties cannot be determined analytically, but instead require numerical computation. Before any computing can begin, however, detailed microstructural information must be in hand. Some of this information may be obtained experimentally, using X-ray tomography or similar techniques, or may be generated from models. In either case, the input is converted to a 2-D or 3-D digital image that represents the overall structure of the composite.
One application is to the properties of cement and concrete. These are complex mixtures that can contain 20 to 30 distinct chemical phases. The nominal sizes of the data sets are 100³ to 300³ voxels, containing several thousand particles. Even though this seems to be a large number of voxels, one would like both to increase the potential problem size and to increase the digital resolution of the original data image. The possible overall sizes of problems, however, have ultimately been bounded by the available computational resources of a serial machine. In addition, the wall-clock (real) time to perform these calculations routinely reaches into the 100 hour regime, making even larger computations impractical.

The original programs from NISTIR-6269 calculate the overall effective properties of the above systems, but they suffer from several limitations. Resolution (and hence accuracy) is compromised if the dataset is too large (≥ 512³ voxels), since lack of computer memory becomes a critical issue. At present, datasets of this size must be split up into smaller files and processed individually, each producing a subset of results that must then be approximately linked together.
Using parallel processing gives one the combined computing power and storage capacity of several processors. It is then possible to divide a large dataset exactly across multiple processors so that, in essence, each processor operates on a dataset of reduced size. In addition, any large arrays computed in the serial version have only their corresponding sections computed on each processor. The overall functionality of the program is not compromised by operating on smaller sections, but one gains theoretical speed-ups in execution time on the order of the number of processors used, and hence the ability to handle large problems. A minimal sketch of such a decomposition follows.
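As a concrete illustration, the following Fortran 90 fragment sketches a one-dimensional slab decomposition of a 400³ image along the z axis. It is a minimal sketch only: the program name, variable names, and the assumption that the number of z planes divides evenly among the processors are illustrative and are not taken from the NISTIR-6269 codes.

      program decompose
         use mpi
         implicit none
         integer, parameter :: nx = 400, ny = 400, nz = 400
         integer :: ierr, rank, nprocs, nz_local, z0
         real, allocatable :: u(:,:,:)

         call MPI_Init(ierr)
         call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
         call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

         ! Each processor owns a contiguous slab of nz/nprocs planes
         ! (assumes nz divides evenly, i.e., an exact split).
         nz_local = nz / nprocs
         z0 = rank * nz_local     ! global index of the first owned plane

         ! Only the local slab is allocated, so per-processor memory
         ! shrinks by roughly a factor of nprocs.
         allocate(u(nx, ny, nz_local))
         ! ... operate on the local slab; global plane = z0 + local index ...
         deallocate(u)

         call MPI_Finalize(ierr)
      end program decompose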
Parallel processing also provides data transfer, i.e., send and receive mechanisms, between the processors. This is important for calculations involving nearest neighbors. Splitting the data across N processors creates N − 1 imposed boundaries in the data as a direct consequence of the split. Of course, for the program to operate correctly, it is necessary to know which data is needed by which processor, and when; a sketch of one such boundary exchange follows.
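One common way to service such an imposed boundary, sketched below in Fortran 90, is to pad each processor's slab with one ghost plane on either side and to swap boundary planes with the neighboring processors using MPI_Sendrecv. The subroutine, its argument layout, and the non-periodic treatment of the first and last processors are assumptions made for illustration, not the actual exchange used in the converted programs.

      ! Exchange one ghost plane with the ranks above and below this one.
      subroutine halo_exchange(u, nx, ny, nz_local, rank, nprocs)
         use mpi
         implicit none
         integer, intent(in) :: nx, ny, nz_local, rank, nprocs
         ! Local slab padded with ghost planes at k = 0 and k = nz_local+1.
         real, intent(inout) :: u(nx, ny, 0:nz_local+1)
         integer :: up, down, n, ierr, stat(MPI_STATUS_SIZE)

         n = nx * ny
         up = rank + 1
         down = rank - 1
         if (up >= nprocs) up = MPI_PROC_NULL   ! no neighbor beyond the ends
         if (down < 0) down = MPI_PROC_NULL     ! (non-periodic assumption)

         ! Send the top owned plane up; receive the lower ghost from below.
         call MPI_Sendrecv(u(:,:,nz_local), n, MPI_REAL, up, 0, &
                           u(:,:,0), n, MPI_REAL, down, 0, &
                           MPI_COMM_WORLD, stat, ierr)
         ! Send the bottom owned plane down; receive the upper ghost from above.
         call MPI_Sendrecv(u(:,:,1), n, MPI_REAL, down, 1, &
                           u(:,:,nz_local+1), n, MPI_REAL, up, 1, &
                           MPI_COMM_WORLD, stat, ierr)
      end subroutine halo_exchange

Because MPI_Sendrecv pairs each send with its matching receive, every processor knows exactly which planes it needs from its neighbors and when, and the exchange cannot deadlock.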
To accommodate these large calculations and to decrease the time needed to perform them, the suite of original programs from NISTIR-6269 has been converted to run in a parallel environment in FORTRAN90 with MPI (Message Passing Interface). This conversion allows datasets of size 400³ voxels or greater to be used routinely without any compromise of numerical accuracy.