A microstructure defined by a digital image is already naturally discretized and lends itself immediately to numerical computation of any number of quantities. The permeability calculation is based on a lattice-Boltzmann (LB) method using D3Q19 (a three-dimensional lattice with 19 possible momentum components; Qian et al., 1986). The implementation of the algorithm is similar to that detailed by Martys et al. (1999). The physical boundary condition at solid–fluid interfaces is the no-flow condition, which in LB methods is most simply realised by the bounce-back rule (Martys and Chen, 1996). The pressure gradient acting on the fluid is simulated by a body force (Ferreol and Rothman, 1995). Mirror-image boundary conditions (Martys et al., 1999) are applied in the plane perpendicular to the flow direction, and all simulations were performed on an L × L × 2L system; permeability is measured in the central L³ subset. Diagonal links in the D3Q19 model allow fluid to leak between neighbouring lattice points that share only a single edge, which can lead to inaccurate results for systems near disconnection (Manwart et al., 2002). To eliminate diagonal links that cross a solid boundary, we implement an erosion/dilation step on the pore space (Serra, 1982). This step reduces the porosity φ of the original tomographic image by a very small amount (≈0.04%) but ensures that no flow occurs across disconnected pores. In all cases, the LB relaxation parameter τ = 1.0 is used.
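The erosion/dilation step amounts to a morphological opening of the binary pore phase. The snippet below is a minimal sketch of that idea using scipy.ndimage on a random toy volume; the structuring element and test array are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy import ndimage

def open_pore_space(pore, structure=None):
    """Erode then dilate the boolean pore phase (morphological opening).

    Opening removes single-voxel necks and protrusions, so diagonal
    links across solid boundaries can no longer carry flow. It is
    anti-extensive: no pore voxel is ever added, so porosity can only
    decrease (in practice by a very small amount).
    """
    eroded = ndimage.binary_erosion(pore, structure=structure)
    return ndimage.binary_dilation(eroded, structure=structure)

# Toy example: a random 3D pore space at ~50% porosity.
rng = np.random.default_rng(0)
pore = rng.random((40, 40, 40)) < 0.5
opened = open_pore_space(pore)

phi_before = pore.mean()   # porosity of original image
phi_after = opened.mean()  # porosity after opening (<= phi_before)
```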
Before reporting numerical results, we must ensure that numerical artefacts are minimised. First, we consider finite-size effects; here we aim to ensure that the size of the image is large enough to give useful predictions. This requires comparison of the grid size to some statistical length scale (e.g., correlation length, average grain size). Errors will occur if too few pores/grains are used in the system to calculate the numerical permeability. We wish to choose a system size that has an acceptable finite-size error but is small enough to be computationally feasible. In Fig. 1(a), we compare data measured on the three higher-porosity images at different scales: 8 subsets generated at 240³, 27 subsets at 160³, and 64 subsets at 120³. The permeability measured on the smaller subsets is in good agreement with calculations at larger scales. While the variability observed in the smaller cells spans a factor of three, this variability is not unlike the experimental scatter observed in classical porosity/permeability data. The ability to use small block sizes to estimate permeability is an exciting prospect. First, the permeability calculation is computationally feasible on a common workstation. Second, the use of smaller image sizes on a large imaged core, combined with the natural heterogeneity of rock, allows us to derive the permeability of the samples across a larger range of φ. Finally, data sets at 120³ (side length ≈ 700 µm) are below the scale of drill cuttings.
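The subdivision into independent subsets can be sketched as follows. The helper below is a hypothetical illustration, run here on a scaled-down toy image (48³ split into 24³, 16³, and 12³ blocks, mirroring the 8/27/64 subset counts obtained from the full image at 240³/160³/120³).

```python
import numpy as np

def subdivide(image, block):
    """Split a 3D image into non-overlapping cubic subsets of side `block`.

    Only whole blocks are kept. Each subset can then be fed to the
    permeability solver independently, giving a distribution of k
    (and phi) across the sample.
    """
    nx, ny, nz = (s // block for s in image.shape)
    blocks = []
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                blocks.append(image[i*block:(i+1)*block,
                                    j*block:(j+1)*block,
                                    k*block:(k+1)*block])
    return blocks

# Toy 48^3 image: 8 blocks at 24^3, 27 at 16^3, 64 at 12^3.
image = np.zeros((48, 48, 48), dtype=bool)
subs = subdivide(image, 12)
```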
Another potential numerical error, particularly when simulating permeability, is insufficient image resolution. This was originally discussed by Martys et al. (1999), who performed simulations on microtomographic images of Fontainebleau sandstone at a large scale (510³). For low-porosity samples, the presence of small pores led to the conclusion that the resolution was not adequate to produce accurate flow fields. In that case, fine graining of the pore space by a factor of two was necessary to acquire sufficient resolution for realistic simulation.
It has been previously shown (Arns et al., 2001; Moctezuma-Berthier et al., 2002) that when simulating transport properties on voxelated data, a large (>30%) discretization error can be made. This error is due to the use of discrete voxels to represent continuum objects when simulating properties from microtomographic images. Errors due to discretization include, for example, inaccurate description of curved grain boundaries and closing of narrow pores. To measure this effect, one would ideally generate realisations of the original tomographic data sets at integer multiples of the resolution of the original image. Unfortunately, insufficient resolution in the coarse-grained digitized microstructures leads to poor numerical accuracy. To test for the importance of discretization on permeability, we therefore consider the permeability of a model morphology, one based on periodic arrays of spheres varying across a range of porosities, calculated at varying discretizations. The prediction in Fig. 1(b) shows that the errors in permeability due to discretization are minimal.
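The discretization test can be sketched by voxelating one unit cell of a simple-cubic sphere array on grids of increasing resolution and checking that a derived quantity converges to its continuum value; here porosity is used as a simple proxy (the sphere radius and grid sizes are illustrative assumptions, and the paper's actual test compares permeability).

```python
import numpy as np

def digitize_unit_cell(n, radius=0.4):
    """Voxelate one unit cell of a simple-cubic array of spheres.

    A voxel is solid if its centre lies inside the sphere of the given
    radius (in cell units) centred in the cell. The digitized porosity
    converges to the exact value 1 - (4/3)*pi*r^3 as n grows, so the
    discretization error can be measured directly.
    """
    coords = (np.arange(n) + 0.5) / n - 0.5   # voxel centres in [-0.5, 0.5)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    solid = x**2 + y**2 + z**2 <= radius**2
    return ~solid                              # True = pore

exact = 1.0 - 4.0 / 3.0 * np.pi * 0.4**3      # continuum porosity
errors = {n: abs(digitize_unit_cell(n).mean() - exact) for n in (20, 40, 80)}
```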
Fig. 1. (a) Results for numerical permeametry at 240³, 160³, and 120³ scales. This illustrates that finite-size errors at 120³ remain small. (b) No appreciable discretization error is noted for the permeability of a periodic array of spheres. (c) Evolution of the permeability solution with steps in the LB solver shows convergence.
For the subsets at φ = 13%, 15%, and 22%, we simulate the permeability on the 64 independent 120³ subsets in three orthogonal directions. In Fig. 1(c), we show the evolution of the numerical solution for permeability with the number of iterations for a 120³ subset. Relaxation to an asymptotic value for k is clearly observed within 5000 iterations, and this cutoff is used in all simulations at this scale. The runtime on a 1-GHz Alpha processor for a single sample at porosity φ = 10% is approximately 80 processor minutes; the runtime increases linearly with porosity. Generation of the permeability on all subsets of the original images requires the equivalent of 2 processor months.
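The choice of an iteration cutoff can be sketched as a simple convergence monitor on the running permeability estimate; the tolerance, window, and toy relaxation sequence below are assumptions for illustration, not the authors' criterion.

```python
def has_converged(history, tol=1e-4, window=5):
    """Return True once the permeability estimate has relaxed.

    `history` holds k estimates sampled periodically during the LB run;
    we require the relative change over the last `window` samples to
    stay below `tol`, mimicking the asymptotic flattening in Fig. 1(c).
    """
    if len(history) < window + 1:
        return False
    ref = history[-1]
    if ref == 0.0:
        return False
    return all(abs(ref - h) / abs(ref) < tol for h in history[-window - 1:-1])

# Toy sequence relaxing geometrically toward an asymptote of 1.0.
k_est = [1.0 - 0.5 * 0.5**i for i in range(30)]
```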
As discussed above, the image resolution for the low-porosity Fontainebleau samples (φ = 7.5%) is insufficient (not enough voxels within a single pore) to produce a reliable flow field. To allow for accurate simulation, we fine grain the original image: replacing each voxel by a 2 × 2 × 2 block of voxels of the same phase gives a 240² × 480 flow system at a 2.85-µm voxel size. A total of 10,000 iterations is used as a cutoff at this scale. These simulations on the 64 independent subsets along three orthogonal directions required approximately 3 months of equivalent processor time to complete.
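Fine graining by a factor of two is simply voxel replication along each axis; the geometry and porosity are unchanged while the voxel size halves, so narrow pores are spanned by more lattice nodes. A minimal numpy sketch (the toy array stands in for the tomographic data):

```python
import numpy as np

def fine_grain(image, factor=2):
    """Replace each voxel by a factor^3 block of the same phase.

    Repeating along every axis refines the grid without altering the
    digitized geometry, so the porosity is preserved exactly.
    """
    out = image
    for axis in range(image.ndim):
        out = np.repeat(out, factor, axis=axis)
    return out

# Toy L x L x 2L pore space; fine graining doubles each dimension.
pore = np.random.default_rng(1).random((12, 12, 24)) < 0.2
fine = fine_grain(pore)
```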