This section examines the basics behind the parallel versions by discussing their data models, memory management, and algorithms. Although the principles of the programming model can be understood on their own, the mathematical background in the original NISTIR 6269 provides a more complete understanding of the relationships between the serial and parallel versions.
Briefly, these programs operate by performing a series of matrix operations (additions and multiplications) on very large arrays, dimensioned as large as the original data set or greater, in order to minimize the overall energy of the system in question. The minimization technique used is a conjugate gradient solver, an iterative procedure carried out until the energy gradient falls below a minimum threshold. The serial case defines these arrays as 1-D vectors and operates on each element sequentially; because each element is computed independently of the others, this is exactly the behavior a program should have if a parallel adaptation is to be made, since it ensures that the operations can proceed in parallel. Each processor is therefore given a piece of the original data array, and most of the calculations proceed independently until communication is required. Reducing the time spent communicating is essential to gaining the maximum time savings from parallel computation.
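To make the iteration concrete, the following is a minimal sketch of a conjugate gradient solver in the standard linear-algebra form, solving A x = b for a symmetric positive-definite A until the residual (the negative of the energy gradient) drops below a threshold. This is an illustration of the general technique only; the actual programs minimize physical energy functionals over much larger arrays, and the names and tolerances here are assumptions.

```python
# Sketch of a conjugate gradient iteration (assumed form, not the report's code).
# Minimizes the quadratic energy E(x) = x.A.x/2 - b.x, whose gradient is A x - b,
# iterating until the gradient norm falls below the threshold `tol`.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = [0.0] * n                       # initial guess
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]  # residual = -gradient
    p = r[:]                            # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        if rs_old ** 0.5 < tol:         # gradient threshold reached
            break
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)     # optimal step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        # New direction is the residual made conjugate to the previous one.
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small symmetric positive-definite example
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)            # x ≈ [1/11, 7/11]
```

Note that each step consists only of matrix-vector products, dot products, and vector additions, which is why the element-wise independence described above carries over directly to the parallel adaptation.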
The theoretical core behind each program is similar, so many of the parallel algorithms were readily modified to adapt to the specific physical case (mechanical/thermal stress, electric fields). Differences between the codes are manifested in a few simple items, namely the dimensionality of the problem (arrays), how many arrays are included in the minimization, the specific applied field, and whether the program solves the original partial differential equations by a finite element or finite difference method.
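The decomposition step described above, in which each processor receives a piece of the original 1-D data vector, can be sketched as a simple block partition. The function name and the remainder-handling policy here are illustrative assumptions, not the report's actual scheme.

```python
# Hypothetical block decomposition of a 1-D array of n elements over nproc
# processors: each rank gets a contiguous slice, with any remainder spread
# over the first ranks so the pieces differ in size by at most one element.

def partition(n, nproc):
    """Return (start, end) index pairs splitting n elements over nproc ranks."""
    base, extra = divmod(n, nproc)
    bounds = []
    start = 0
    for rank in range(nproc):
        size = base + (1 if rank < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

# Example: 10 elements over 4 processors
bounds = partition(10, 4)               # → [(0, 3), (3, 6), (6, 8), (8, 10)]
```

Between communication steps, each rank would work only on its own slice, which is what keeps the bulk of the calculation independent.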