Cavitating and bubbly flows are encountered in many engineering problems involving propellers, pumps, valves, ultrasonic biomedical applications, etc. In this contribution, an OpenMP parallelized Euler–Lagrange model of two-phase flow problems and cavitation is presented. The two-phase medium is treated as a continuum and solved on an Eulerian grid, while the discrete bubbles are tracked in a Lagrangian fashion with their dynamics computed. The intimate coupling between the two description levels is realized through the local void fraction, which is computed from the instantaneous bubble volumes and locations, and provides the continuum properties. Since, in practice, any such flows will involve large numbers of bubbles, schemes for significant speedup are needed to reduce computation times. We present here a shared-memory parallelization scheme combining domain decomposition for the continuum domain and number decomposition for the bubbles, both selected to realize maximum speedup and good load balance. The Eulerian computational domain is subdivided based on geometry into several subdomains, while for the Lagrangian computations, the bubbles are subdivided based on their indices into several subsets. The number of fluid subdomains and bubble subsets matches the number of central processing unit (CPU) cores available in a shared-memory system. Computation of the continuum solution and the bubble dynamics proceeds sequentially. During each computation time step, all selected OpenMP threads are first used to evolve the fluid solution, with each handling one subdomain. Upon completion, the OpenMP threads selected for the Lagrangian solution are then used to execute the bubble computations. All data exchanges are executed through the shared memory. Extra steps are taken to localize the memory access pattern to minimize nonlocal data fetch latency, since a severe performance penalty may occur on a nonuniform memory access (NUMA) multiprocessing system, where thread access to nonlocal memory is much slower than to local memory. This parallelization scheme is illustrated on a typical nonuniform bubbly flow problem, cloud bubble dynamics near a rigid wall driven by an imposed pressure function (Ma et al., 2013, “Euler–Lagrange Simulations of Bubble Cloud Dynamics Near a Wall,” International Mechanical Engineering Congress and Exposition, San Diego, CA, Nov. 15–21, Paper No. IMECE2013-65191 and Ma et al., 2015, “Euler–Lagrange Simulations of Bubble Cloud Dynamics Near a Wall,” ASME J. Fluids Eng., 137(4), p. 041301).

Introduction

Cavitating bubbly flows occur in various engineering problems, such as in ultrasonic biomedical applications, cavitating jets for materials testing, cleaning, and biomass treatments, and unsteady sheet cavities on propeller blades [1–8]. Numerical modeling of such problems is very challenging, since it involves bubble–bubble, bubble–flow, and bubble–wall interactions [7]. In addition, the problem involves multiscale physics ranging from micron-scale individual bubbles to a meter-scale flow field (e.g., propeller blade size). In between lie very rich and complex mesoscale phenomena, such as those at the scale of bubble clouds, where the bubbles act collectively [1,2,9–13].

Fully resolved methods, such as direct numerical simulation or the boundary element method, can provide, within their limiting assumptions, details at several scales of interest, and impressive progress has been reported (e.g., Refs. [13–19]); however, they are limited to small, detailed problems and are not appropriate for industrial applications due to their high computational cost. In practical engineering applications, cavitating bubbly flows are usually modeled using one of several approaches: equivalent homogeneous continuum models [5,9,20], Eulerian two-fluid models [21–28], or Eulerian–Lagrangian approaches wherein the bubbles are treated as discrete elements. The former two are based on volume- or ensemble-averaged approximations and are thus suitable for cases where the bubbles are much smaller than the characteristic lengths associated with the overall dynamics of the bubbly mixture. They become inadequate when the scales associated with the individual bubbles are comparable to the mixture flow scales. Eulerian–Lagrangian methods have been developed to treat these cases (e.g., Refs. [29–35]); there, the carrier fluid is modeled using continuum fluid methods (Eulerian approach) and the bubbles are tracked individually (Lagrangian approach). These methods have intrinsic advantages owing to their capability of including the physics across scales down to each individual bubble [1,2]. However, the number of bubbles in actual industrial systems is so large that the computational cost prevents full-scale applications. Therefore, efficient parallelization schemes for significant speedup are needed to reduce computation times.

In this paper, we present an OpenMP shared-memory parallelization of Eulerian–Lagrangian modeling of cavitating bubbly flows, taking advantage of multicore multiprocessing computer systems. These systems usually consist of several multicore CPUs with a large shared-memory pool. In principle, an efficient parallelization should distribute the computation load of both the Eulerian and Lagrangian phases as evenly as possible across all the CPU cores (i.e., OpenMP threads) to reach an optimum load balance. However, bubbles do not remain uniformly distributed in the fluid domain and are therefore not well suited to domain decomposition [36,37]. Therefore, this study employs a hybrid parallelization scheme combining domain decomposition for the Eulerian fluid domain and number decomposition for the Lagrangian bubbles. A similar hybrid decomposition method has also been reported for the parallelization of discrete element modeling of particulate flows with ideal load balancing [38–40]. However, a drawback of mixing different decomposition methods is the high overhead involved in nonlocal data fetching when exchanging information between the fluid and particle phases during the two-way coupling. This overhead may become considerable and may cancel out the parallelization speedup on a NUMA multiprocessing system. Therefore, another major effort in this paper is to address the locality of memory access and reduce nonlocal data fetch latency, thus minimizing any performance penalties.

The paper is organized as follows: The main features of the fully coupled Euler–Lagrange model are summarized in the Eulerian–Lagrangian Model section. This is followed by a description of the OpenMP parallelization scheme. The scaling and efficiency of the scheme are then tested on the problem of the dynamics of a microbubble cloud near a rigid wall [1,2]. This case is selected because it is numerically challenging and presents a typical nonuniform bubbly flow configuration, where the bubbles are confined to a small spherical region in a much larger fluid domain. A comparison of the speedup between nonlocal and localized data access patterns is then provided. Finally, a summary of the findings is presented.

Eulerian–Lagrangian Model

The three-dimensional (3D) Eulerian–Lagrangian model employed in this study has been extensively validated and documented in our previous studies. These included the investigation of the effects of a propeller flow on the bubble size distribution in water [40–42], the modeling of propeller tip vortex cavitation inception [42,43], the simulation of the bubbly flow in a bubble-augmented propulsor [44], bubble entrainment in plunging jets [45], and wave propagation in bubbly media [1,2,33–35]. The present paper addresses acceleration of the computations through parallelization and does not consider additional model development. Therefore, we only provide below a brief summary of the key features of the model and concentrate on the parallelization aspects. Readers interested in more details on the modeling aspects may refer to the references provided above.

Continuum Model for Viscous Mixture Flow.

Following the theoretical work on bubbly flows of van Wijngaarden [46,47], Commander and Prosperetti [48], and Brennen [49], the unsteady continuity and momentum conservation equations for a liquid–gas mixture are as follows:
$$\frac{\partial \rho_m}{\partial t} + \nabla\cdot\left(\rho_m \mathbf{u}\right) = 0 \qquad (1)$$

$$\frac{\partial \mathbf{u}}{\partial t} + \left(\mathbf{u}\cdot\nabla\right)\mathbf{u} = -\frac{\nabla p}{\rho_m} + \nu_m \nabla^2 \mathbf{u} \qquad (2)$$
where ρm is the mixture density, u its velocity, p its pressure, and νm its kinematic viscosity. The mixture density and viscosity can be related to the liquid (subscript “l”) and gas (subscript “g”) properties by
$$\rho_m = (1-\alpha)\,\rho_l + \alpha\,\rho_g, \qquad \mu_m = (1-\alpha)\,\mu_l + \alpha\,\mu_g \qquad (3)$$

where α is the gas volume fraction.

Equations (1) and (2) are solved using the artificial-compressibility method [50], along with a Newton iterative pseudotime stepping procedure (within each physical time step) to obtain a time-accurate solution. The numerical scheme uses a finite-volume formulation, with a first-order Euler implicit difference formula applied to the time derivatives, and a flux-difference splitting scheme based on Roe's method [51] and van Leer's monotonic upstream-centered scheme for conservation laws (MUSCL) [52] for the spatial differencing of the convection terms. Second-order central differencing is used for the viscous terms.
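To recall schematically how the artificial-compressibility method recovers Eqs. (1) and (2) (this is a generic sketch of the approach of Ref. [50] with dual time stepping, not the exact discretized system used in the code), a pseudotime pressure derivative is appended to the continuity equation and the augmented system is iterated to convergence in the pseudotime τ within each physical time step:

$$\frac{1}{\beta}\frac{\partial p}{\partial \tau} + \frac{\partial \rho_m}{\partial t} + \nabla\cdot(\rho_m\mathbf{u}) = 0, \qquad \frac{\partial \mathbf{u}}{\partial \tau} + \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{\nabla p}{\rho_m} + \nu_m\nabla^2\mathbf{u},$$

where β is the artificial compressibility parameter. When the pseudotime iterations converge, the τ-derivatives vanish and Eqs. (1) and (2) are satisfied at the new physical time level.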

Lagrangian Modeling of Dispersed Phase.

For the bubbles dispersed in the medium, we use a Lagrangian bubble tracking method. All bubbles are treated as point sources to account for volume change and dipoles to account for translation. The bubble volume variations are obtained using a modified Rayleigh–Plesset–Keller–Herring equation [3], which accounts for the surrounding medium compressibility and nonuniform pressure field through the surface-averaged pressure model [7,31]
$$\left(1-\frac{\dot R}{c_m}\right) R\ddot R + \frac{3}{2}\left(1-\frac{\dot R}{3c_m}\right)\dot R^{2} = \frac{1}{\rho_m}\left(1+\frac{\dot R}{c_m}+\frac{R}{c_m}\frac{d}{dt}\right)\left[p_v + p_{g0}\left(\frac{R_0}{R}\right)^{3k} - p_{enc} - \frac{2\gamma}{R} - \frac{4\mu_m \dot R}{R}\right] + \frac{\left(\mathbf{u}_{enc}-\mathbf{u}_b\right)^{2}}{4} \qquad (4)$$
In the above equation, R is the bubble radius and dots represent time differentiation. μm is the two-phase medium dynamic viscosity, pv is the liquid vapor pressure, and pg = pg0(R0/R)^3k is the gas pressure in the bubble, with pg0 and R0 the initial gas pressure and bubble radius. k is the gas polytropic constant, equal to 1.4 for air under adiabatic conditions and to 1 under isothermal conditions, and γ is the surface tension of the liquid. cm is the local sound speed in the mixture, which is related to the local void fraction, α, by
(5)

where cl is the sound speed in the pure liquid [5]. penc and uenc are the mixture pressure and velocity averaged over the bubble surface, and ub is the bubble center translation velocity.

The term (uenc − ub)²/4 in Eq. (4) accounts for the slip between the bubble and the host medium. The averaged terms, penc and uenc, account for the nonuniform pressures and velocities along the bubble surface. They are obtained through an arithmetic average of the mixture quantities, p and u, of Eqs. (1) and (2) at six polar points on the bubble surface. The use of penc results in a major improvement over classical models, which use the pressure at the bubble center.

This approach has been validated in our previous studies [7,31,42]. For instance, in the simulation of the dynamics of a bubble captured by a vortex, the conventional spherical model significantly overpredicts the bubble size versus time when the pressure at the bubble center is used. On the other hand, using Eq. (4) enables a more realistic evaluation of the bubble dynamics, with the solution agreeing very well with a fully resolved moving grid method [42].
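To make Eq. (4) concrete, the following standalone Fortran sketch integrates its classical incompressible limit (cm → ∞, no slip, spatially uniform driving pressure, i.e., the Rayleigh–Plesset equation) for a single 50 μm air bubble driven at 10 kHz, using a fourth-order Runge–Kutta scheme. The property values, driving amplitude, and time step are illustrative assumptions; this is not the 3DynaFS bubble solver, which integrates the full compressible, surface-averaged form of Eq. (4) for every tracked bubble.

    program rp_bubble
      ! Hedged sketch: incompressible Rayleigh-Plesset limit of Eq. (4), advanced
      ! with classical fourth-order Runge-Kutta. All values are illustrative.
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      real(dp), parameter :: rho  = 998.0_dp      ! liquid density [kg/m3]
      real(dp), parameter :: mu   = 1.0e-3_dp     ! liquid dynamic viscosity [Pa s]
      real(dp), parameter :: gam  = 0.0725_dp     ! surface tension [N/m]
      real(dp), parameter :: pv   = 2.3e3_dp      ! vapor pressure [Pa]
      real(dp), parameter :: p0   = 1.0e5_dp      ! ambient pressure [Pa]
      real(dp), parameter :: r0   = 50.0e-6_dp    ! initial bubble radius [m]
      real(dp), parameter :: kpol = 1.4_dp        ! polytropic exponent (adiabatic air)
      real(dp), parameter :: dpa  = 0.3e5_dp      ! driving pressure amplitude [Pa]
      real(dp), parameter :: freq = 10.0e3_dp     ! driving frequency [Hz]
      real(dp), parameter :: pi   = 3.141592653589793_dp
      real(dp) :: pg0, y(2), t, dt
      integer  :: n
      pg0 = p0 + 2.0_dp*gam/r0 - pv               ! initial gas pressure from static equilibrium
      y   = [r0, 0.0_dp]                          ! state vector: radius R and velocity dR/dt
      t   = 0.0_dp
      dt  = 1.0e-9_dp
      do n = 1, 200000                            ! two periods of the 10 kHz driving
         call rk4(t, y, dt)
         t = t + dt
         if (mod(n, 20000) == 0) print '(2es14.5)', t, y(1)
      end do
    contains
      function penc(time) result(p)               ! imposed driving pressure at the bubble location
        real(dp), intent(in) :: time
        real(dp) :: p
        p = p0 - dpa*sin(2.0_dp*pi*freq*time)
      end function penc
      function rhs(time, s) result(f)             ! Rayleigh-Plesset right-hand side
        real(dp), intent(in) :: time, s(2)
        real(dp) :: f(2), r, rdot
        r = s(1); rdot = s(2)
        f(1) = rdot
        f(2) = ( (pv + pg0*(r0/r)**(3.0_dp*kpol) - penc(time)          &
                  - 2.0_dp*gam/r - 4.0_dp*mu*rdot/r)/rho               &
                 - 1.5_dp*rdot*rdot ) / r
      end function rhs
      subroutine rk4(time, s, h)                  ! one classical fourth-order Runge-Kutta step
        real(dp), intent(in)    :: time, h
        real(dp), intent(inout) :: s(2)
        real(dp) :: k1(2), k2(2), k3(2), k4(2)
        k1 = rhs(time,            s)
        k2 = rhs(time + 0.5_dp*h, s + 0.5_dp*h*k1)
        k3 = rhs(time + 0.5_dp*h, s + 0.5_dp*h*k2)
        k4 = rhs(time + h,        s + h*k3)
        s  = s + (h/6.0_dp)*(k1 + 2.0_dp*k2 + 2.0_dp*k3 + k4)
      end subroutine rk4
    end program rp_bubble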

The bubble trajectory is obtained using the motion equation of Johnson and Hsieh [53]
(6)
where ω is the local vorticity, and CL is the lift coefficient [54]. CD is the drag coefficient defined as a function of the bubble Reynolds number, Reb, as [55]
$$C_D = \frac{24}{Re_b}\left(1 + 0.197\,Re_b^{0.63} + 2.6\times 10^{-4}\,Re_b^{1.38}\right) \qquad (7)$$

Eulerian–Lagrangian Coupling.

The two-way coupling between the Eulerian continuum-based model and the Lagrangian discrete bubble model is realized as follows [1,2,34,35]:

  • The bubble dynamics and motion of the individual bubbles in the flow field are controlled by the two-phase medium local properties and gradients.

  • The local properties of the mixture are determined by the bubble sizes and distribution: the void fraction and local density are computed from the instantaneous bubble volumes and locations.

  • The mixture flow field has an evolving mixture density in space and time and satisfies mass and momentum conservation.

The key to this coupling scheme is deducing the void fraction from the instantaneous bubble sizes and locations. In our previous studies, the void fraction was obtained in each computational cell by dividing the total volume of the bubbles in the cell by the cell volume. We found that scheme not fully satisfactory because it requires many bubbles in each cell and, more importantly, because of the noncontinuous character of α across cells, which results in difficulties with differentiation [33].

To smooth the discontinuities and improve stability, we have implemented a Gaussian distribution scheme to smoothly “spread” each bubble's “void” effect across neighboring cells. This scheme has been found to significantly increase numerical stability and to enable handling of high void fraction bubbly flow simulations [1,2,34,35]. The void fraction is obtained using the following expression:
$$\alpha_i = \sum_{j=1}^{N_i} \frac{f_{i,j}\, V_j^{b}}{\sum_{k=1}^{N_{cells}} f_{k,j}\, V_k^{cell}} \qquad (8)$$

where Vjb and Vkcell are the volumes of bubble j and of cell k, respectively; Ni is the number of bubbles within the “influence range” of cell i; Ncells is the number of cells “influenced” by bubble j; and fi,j is the weight factor of the contribution of bubble j to cell i, which gradually decays to zero as the distance between the bubble and the cell increases. Here, a Gaussian distribution function is used, as illustrated in Fig. 1 [1,2,34,35].
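For illustration only (the precise width and cutoff used in the code are not reproduced here), a weight with the decay behavior described above can be written as

$$ f_{i,j} = \exp\!\left(-\frac{d_{ij}^{2}}{2\sigma_j^{2}}\right), $$

where dij is the distance between the center of cell i and the center of bubble j, and σj sets the “influence range” over which the volume of bubble j is spread (typically tied to the bubble radius and the local grid size); fi,j is taken as zero outside that range.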

Fig. 1: Illustration of the void fraction computation using the Gaussian distribution scheme [35]

Parallelization Algorithm and Implementation

Algorithm and Program Procedure.

Multicore, multiprocessing systems combine multiple multicore CPUs with a shared-memory pool. For example, dyna-omp, one of Dynaflow's multiprocessing workstations, has four 12-core AMD Opteron 2.80 GHz CPUs with 128 GB of shared memory. A shared-memory OpenMP parallelization scheme was implemented on dyna-omp to parallelize our 3DynaFS Eulerian/Lagrangian coupled model.

As illustrated in Fig. 2, a scheme combining domain decomposition for the continuum domain and number decomposition for the bubbles is used. Both decompositions are selected to realize maximum speedup and good load balance.

Fig. 2: Illustration of the OpenMP parallelization combining domain decomposition for the Eulerian fluid domain and particle decomposition for the Lagrangian bubbles
  • The Eulerian fluid domain is geometrically subdivided into subdomains, with the continuum computations in each subdomain handled by an OpenMP thread running on one CPU core.

  • The bubbles are also subdivided, based on their indices, into several subsets, with each subset handled by an OpenMP thread running on a single CPU core.

The overall flow chart of the hybrid parallelization algorithm is shown in Fig. 3. The computations of the continuum solution and of the bubble dynamics proceed sequentially, as sketched in the loop structure below. During each computation time step, all selected OpenMP threads (the maximum thread number being limited by the number of CPU cores available) are first used to execute the continuum computations, with each thread handling one subdomain. Upon completion, the OpenMP threads selected for the Lagrangian computations are then used to obtain the bubble dynamics. This procedure is repeated for each physical time step until the final step is reached.
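The following Fortran/OpenMP skeleton is a hedged sketch of this time-stepping structure; the routine names, loop bounds, and the explicit coupling call are illustrative placeholders (empty stubs here), not the actual 3DynaFS interfaces.

    program euler_lagrange_loop
      implicit none
      integer, parameter :: nsteps = 10, nblocks = 12, nsets = 12
      integer :: istep, m, jset

      do istep = 1, nsteps
         ! Eulerian sweep: one OpenMP thread advances the mixture flow in one subdomain
         !$omp parallel do schedule(static)
         do m = 1, nblocks
            call solve_fluid_block(m)
         end do
         !$omp end parallel do

         ! Coupling: void fraction from instantaneous bubble volumes and locations (cf. Eq. (8))
         call compute_void_fraction()

         ! Lagrangian sweep: one OpenMP thread advances the dynamics and motion of one bubble subset
         !$omp parallel do schedule(static)
         do jset = 1, nsets
            call solve_bubble_subset(jset)
         end do
         !$omp end parallel do
      end do

    contains

      subroutine solve_fluid_block(mb)      ! stub: continuum solution in subdomain mb
        integer, intent(in) :: mb
      end subroutine solve_fluid_block

      subroutine compute_void_fraction()    ! stub: Gaussian spreading of bubble volumes (Fig. 4)
      end subroutine compute_void_fraction

      subroutine solve_bubble_subset(js)    ! stub: bubble dynamics and trajectories for subset js
        integer, intent(in) :: js
      end subroutine solve_bubble_subset

    end program euler_lagrange_loop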

Fig. 3: Flow chart of the OpenMP parallelized Eulerian–Lagrangian coupled two-phase bubbly flow model

OpenMP Implementation.

The advantage of domain decomposition for the Eulerian fluid solver is that all cells in a given subdomain (assigned to one thread) are stored at contiguous memory addresses. This is very favorable for memory access since continuum cells need to interact only with their neighboring cells. Only the small portion of continuum cells near the borders of the subdomains requires information from cells located in different subdomains and has to fetch data from the global shared memory. The OpenMP parallelization is realized based on the multiblock structure of the code. This is done by enclosing the block (i.e., subdomain) loops with OpenMP directives: “!$omp parallel do … !$omp end parallel do.”

Similarly, the Lagrangian bubble computations are parallelized by enclosing the loops over the bubble dynamics and bubble motion computations with similar directives. The bubbles are divided into a multitude of subsets based on their indices, with each subset handled independently by one thread. Each thread can thus efficiently access a contiguous global memory space. Therefore, the OpenMP implementation of either the Eulerian or the Lagrangian component alone is straightforward.

However, an efficient parallel implementation of the coupling between the phases, in particular the void fraction computations (Eq. (8)), requires careful consideration of the data access pattern. The difficulty stems from the two different decomposition methods applied to the continuum and to the bubbles. Nonlocal data fetching is necessary since the bubbles located in a given continuum computation cell may belong to several different processor cores. This necessitates irregular, remote data fetching, which results in a performance penalty. On a NUMA multiprocessing system especially, thread access to nonlocal memory is much slower than to local memory. This high-latency data access mode creates very high overhead, which can cancel out any gains from load balancing.

To overcome this, extra steps are taken to localize the memory access pattern and minimize data fetch latency. This is illustrated with the pseudocode of the void fraction computation in Fig. 4. The code stores the void fractions in a global array, void. Following Eq. (8), for a given cell indexed by (i,j,k,m) (where i, j, and k are the indices along the x, y, and z directions, and m is the block index), multiple bubbles located inside and around the cell contribute void effects to it. To avoid remote data access (i.e., having each bubble access the global memory address corresponding to void(i,j,k,m) and modify it by adding its void contribution), a temporary array, “void_tmp”, is created by each thread and temporarily stores the void contributions of all the bubbles assigned to that thread (Fig. 4(b)). Once the bubble loop is completed by all threads, the void values stored in the void_tmp arrays are summed and transferred to the global array void. The temporary arrays are then released. At the beginning of each loop, these temporary arrays must be allocated and initialized independently by each thread in order to ensure the locality of data placement by following the “first touch” policy [36]. According to this policy, array data are stored in memory local to the thread which allocates and initializes the array.
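The Fortran/OpenMP sketch below illustrates this thread-local accumulation pattern. It is a simplified, self-contained example under several assumptions: a single uniform grid of spacing dx with its origin at zero (no multiblock indexing), an unnormalized Gaussian weight (the normalization denominator of Eq. (8) is omitted), a spreading width tied to the bubble radius, and a critical section to merge the thread-private arrays. It is not the 3DynaFS routine of Fig. 4, which differs in these details.

    subroutine accumulate_void(nbub, ni, nj, nk, dx, xb, rb, void)
      implicit none
      integer, intent(in)  :: nbub, ni, nj, nk
      real,    intent(in)  :: dx                   ! uniform cell size (assumed)
      real,    intent(in)  :: xb(3, nbub), rb(nbub)! bubble centers and radii
      real,    intent(out) :: void(ni, nj, nk)     ! global (shared) void fraction array
      real, allocatable :: void_tmp(:,:,:)         ! thread-private temporary array
      real    :: sigma, w, vb, d2
      integer :: n, i, j, k, imin, imax, jmin, jmax, kmin, kmax
      real, parameter :: pi = 3.1415927
      void = 0.0
      !$omp parallel default(shared) private(void_tmp, n, i, j, k, imin, imax, &
      !$omp&   jmin, jmax, kmin, kmax, sigma, w, vb, d2)
      allocate(void_tmp(ni, nj, nk))               ! allocated and zeroed by each thread (first touch)
      void_tmp = 0.0
      !$omp do schedule(static)
      do n = 1, nbub                               ! each thread owns a contiguous subset of bubbles
         sigma = 3.0*rb(n)                         ! assumed spreading width
         vb    = (4.0/3.0)*pi*rb(n)**3             ! bubble volume
         imin  = max(1,  int((xb(1,n) - 3.0*sigma)/dx) + 1)
         imax  = min(ni, int((xb(1,n) + 3.0*sigma)/dx) + 1)
         jmin  = max(1,  int((xb(2,n) - 3.0*sigma)/dx) + 1)
         jmax  = min(nj, int((xb(2,n) + 3.0*sigma)/dx) + 1)
         kmin  = max(1,  int((xb(3,n) - 3.0*sigma)/dx) + 1)
         kmax  = min(nk, int((xb(3,n) + 3.0*sigma)/dx) + 1)
         do k = kmin, kmax
            do j = jmin, jmax
               do i = imin, imax
                  d2 = ((i-0.5)*dx - xb(1,n))**2 + ((j-0.5)*dx - xb(2,n))**2 &
                     + ((k-0.5)*dx - xb(3,n))**2
                  w  = exp(-0.5*d2/sigma**2)       ! Gaussian weight, cf. Eq. (8)
                  void_tmp(i,j,k) = void_tmp(i,j,k) + w*vb/dx**3
               end do
            end do
         end do
      end do
      !$omp end do
      !$omp critical                               ! merge the thread-local arrays into the shared array
      void = void + void_tmp
      !$omp end critical
      deallocate(void_tmp)
      !$omp end parallel
    end subroutine accumulate_void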

Fig. 4: Pseudocode of the void fraction computation illustrating the conversion from remote data access (a) to localized data access (b). Here, nbub is the total number of bubbles, and void and void_tmp are the global and thread-local void fraction arrays. (imin, imax), (jmin, jmax), and (kmin, kmax) delimit the range of cells influenced by a bubble's void effect along the x, y, and z directions, respectively, and (SDmin, SDmax) denotes the subdomain range.

Test Simulation and Results

To evaluate the developed OpenMP parallelized Euler–Lagrange model, we apply it to simulate the dynamics of a cloud of microbubbles located near a rigid wall and subjected to a sinusoidal pressure time variation imposed on the far-field boundaries of the computational domain [1,2]. This is a numerically challenging problem since it involves violent bubble volume changes and strong coupling effects fed back to the flow [2,5,9,13]. The problem also involves a typical nonuniform void fraction distribution, since the bubbles are present only in a small region of space while the resolved fluid domain extends in 3D much farther away than the initial cloud volume. Figure 5 illustrates the dynamics of a bubble cloud of initial radius 1.5 mm, consisting of initially monodispersed bubbles of radius 50 μm and an average void fraction of 5%. No-slip boundary conditions are imposed at the rigid wall, and zero velocity gradients and the imposed pressure are applied on the other “far-field” domain boundaries [1]. The cloud was excited by an imposed sinusoidal pressure variation at infinity with an amplitude of 1.5 atm and a frequency of 10 kHz. The frequency was selected to match the cloud natural frequency [5]. As known from previous studies [1,2], at this frequency a collective cluster behavior occurs: the cloud experiences a cascading collapse, with the bubbles farthest from the wall collapsing first and the nearest ones collapsing last, accumulating the energy. This results in a very high pressure peak at the wall at the end of the cloud collapse, as illustrated in the right panel of Fig. 5. More simulation details and result analysis can be found in Refs. [1,2] and are not repeated here for brevity. We focus instead on the major objective of this paper, i.e., parallelization efficiency tests and analysis.

Fig. 5: Snapshot of the pressures and velocity vectors for a bubble cloud near a wall at the end of its collapse following a sinusoidal pressure excitation with a frequency matching the cloud natural frequency (left). Pressure histories monitored at three locations on the wall during the cloud dynamics (right) [1,2].

Workload Decomposition and Balancing.

Figure 6 shows how the workload for a case with 65,000 Eulerian grid cells and 170,000 Lagrangian bubbles is distributed, with different colors (or gray scales in print) corresponding to different OpenMP threads. Since the mesh is finer in the cloud region than away from it, the corresponding subdomains are smaller than elsewhere to ensure that each subdomain contains the same number of continuum cells. On the other hand, the bubbles are evenly subdivided into subsets of equal size, independent of their locations (with 12 threads, each thread thus handles roughly 5400 cells and about 14,000 bubbles). Therefore, a good load balance is achieved through this decomposition.

Fig. 6: Decomposition of the fluid and particle computation load: for both particles and fluid cells, a different color (or gray scale in print) corresponds to a different OpenMP thread running on a CPU core. The right plot is a zoomed view of the bubble cloud region.

Scaling Tests and Speedup Analysis.

To examine the scalability of the developed scheme, we tested the code for the same bubble cloud shown above on an Intel Xeon CPU node of a Linux cluster. This node has dual six-core 3 GHz Xeon CPUs and a total of 12 GB of shared memory. Figure 7 (top) shows the wall time per computation time step and (bottom) the speedup factor when varying the number of OpenMP threads. Here, a time step includes both the Eulerian and Lagrangian solutions and the coupling through the void fraction computation. The wall time per time step drops from 923 s to about 81.6 s when the thread number increases from one to 12, i.e., a speedup of about 11.3 and a parallel efficiency of roughly 94%. The speedup factor is nearly linear and close to ideal up to 12 threads, as shown in Fig. 7 (bottom).

Fig. 7: Wall time per step for the bubble cloud case as the number of threads increases on 3 GHz Intel Xeon CPUs (left). Speedup factors (right).

Comparison Between Nonlocal and Localized Memory Accesses.

To quantify the gains from the data access optimization procedure for the void fraction computation, the speedup factors for each call of the void fraction subroutine are compared in Fig. 8 between nonlocal data access and optimized local data access. The benefit of the optimization is seen to gradually increase as more threads are used. With 12 threads, a 17% improvement is observed.

Fig. 8: Speedup factor using 1–12 OpenMP threads on 3 GHz Intel Xeon CPUs for a call of the void fraction computation subroutine

To further demonstrate the portability of this optimization procedure, similar tests were done on a more challenging NUMA multiprocessing system, dyna-omp. Each of the four CPUs in dyna-omp consists of two NUMA units, each with its own local memory. Thread access to memory belonging to a different NUMA unit has high latency and is very inefficient. The corresponding overhead may nullify the acceleration achieved by the OpenMP parallelization. This can be clearly observed in the top plot of Fig. 9, which shows that, if the localization of data access is not used, increasing the number of threads beyond six may increase the cost of the void fraction computation instead of reducing it. The number six is critical because beyond six threads more than two NUMA units are involved and remote data fetches become necessary. In contrast, after applying the localized data access optimization, significant speedup is achieved with higher thread numbers. Figure 9 shows that by increasing the number of threads from one to 48, the speedup factor grows to 44, i.e., a parallelization efficiency of about 92%. These examples also indicate good portability of the developed parallelization scheme, since good speedups were achieved on both Intel and AMD multicore multiprocessing systems, which have very different architectures, core counts, memory sizes, and memory types.

Fig. 9: Wall time per call of the void fraction computation for different numbers of OpenMP threads on 2.8 GHz AMD Opteron CPUs (left). Speedup factors versus number of threads (right).

Conclusion

An OpenMP shared-memory parallelized 3D Euler–Lagrange model was developed for two-phase bubbly flow problems and cavitating flows. The parallelization scheme combines domain decomposition for the continuum domain and number decomposition for the bubbles to guarantee a good load balance, even for nonuniform bubble distributions. The executions of the Eulerian and Lagrangian solvers proceed sequentially. All available OpenMP threads are first used to execute the continuum computations, with each thread handling one subdomain. The same threads are then used to execute the bubble computations, with each thread handling one bubble subset. All data exchanges are executed through the shared memory. Special consideration is given in the programming to localizing memory access as much as possible, through the use of temporary thread-local arrays, in order to minimize remote data fetch latency.

Scaling tests were conducted using the problem of the dynamics of a bubble cloud near a rigid wall. A nearly ideal speedup was achieved both on a 12-core Intel Xeon node and on a 48-core NUMA AMD Opteron workstation.

Acknowledgment

This work was supported by the Department of Energy under SBIR Phase II DE-FG02-07ER84839. The authors would also like to acknowledge support from the Office of Naval Research under Contract No. N00014-09-C-0676 monitored by Dr. Ki-Han Kim. This work was also supported in part by grants of computer time from The National Energy Research Scientific Computing Center (NERSC), a scientific computing facility for the Office of Science in the U.S. Department of Energy.

Nomenclature

CD = drag coefficient
CL = lift coefficient
Cm = mixture sound speed on bubble surface
dij = distance between points i and j
k = polytropic gas constant
p = pressure
pg = gas pressure
pv = vapor pressure
pg0 = initial gas pressure
R = bubble radius
Reb = Reynolds number
SD = subdomain
t = time
u = velocity vector
α = void fraction
α0 = initial void fraction
γ = surface tension of liquid
μ = absolute viscosity
μm = absolute viscosity of the mixture
ρ = density
ρm = density of the mixture
ω = vorticity

Subscripts

b, bubble = bubble property
enc = bubble-encountered mixture property
g = gas property
i, j, k, m = indices
l = liquid property
m = mixture property
max = maximum
min = minimum
0 = initial condition

References

1. Ma, J., Hsiao, C.-T., and Chahine, G. L., 2013, “Euler–Lagrange Simulations of Bubble Cloud Dynamics Near a Wall,” International Mechanical Engineering Congress and Exposition, San Diego, CA, Nov. 15–21, Paper No. IMECE2013-65191.
2. Ma, J., Hsiao, C.-T., and Chahine, G. L., 2015, “Euler–Lagrange Simulations of Bubble Cloud Dynamics Near a Wall,” ASME J. Fluids Eng., 137(4), p. 041301. DOI: 10.1115/1.4028853
3. Plesset, M. S., and Prosperetti, A., 1977, “Bubble Dynamics and Cavitation,” Annu. Rev. Fluid Mech., 9, pp. 145–185. DOI: 10.1146/annurev.fl.09.010177.001045
4. Blake, J. R., and Gibson, D. C., 1987, “Cavitation Bubbles Near Boundaries,” Annu. Rev. Fluid Mech., 19(1), pp. 99–123. DOI: 10.1146/annurev.fl.19.010187.000531
5. Brennen, C. E., 1995, Cavitation and Bubble Dynamics, Oxford University Press, New York.
6. Crowe, C. T., Troutt, T. R., and Chung, J. N., 1996, “Numerical Models for Two-Phase Turbulent Flows,” Annu. Rev. Fluid Mech., 28, pp. 11–43. DOI: 10.1146/annurev.fl.28.010196.000303
7. Chahine, G. L., 2009, “Numerical Simulation of Bubble Flow Interactions,” J. Hydrodyn., 21(3), pp. 316–332. DOI: 10.1016/S1001-6058(08)60152-3
8. Balachandar, S., and Eaton, J. K., 2010, “Turbulent Dispersed Multiphase Flow,” Annu. Rev. Fluid Mech., 42, pp. 111–133. DOI: 10.1146/annurev.fluid.010908.165243
9. van Wijngaarden, L., 1966, “On the Collective Collapse of a Large Number of Gas Bubbles in Water,” Proceedings of the 11th International Congress of Applied Mechanics, Springer, Berlin, pp. 854–861.
10. Morch, K. A., 1981, “Cavity Cluster Dynamics and Cavitation Erosion,” ASME Proceedings of the Cavitation and Polyphase Flow Forum, Boulder, CO, June 22–24, pp. 1–10.
11. Chahine, G. L., 1983, “Cloud Cavitation: Theory,” Proceedings of the 14th Symposium on Naval Hydrodynamics, National Academy Press, Ann Arbor, MI, pp. 165–194.
12. Chahine, G. L., and Liu, H.-L., 1985, “A Singular Perturbation Theory of the Growth of a Bubble Cluster in a Superheated Liquid,” J. Fluid Mech., 156, pp. 257–279. DOI: 10.1017/S0022112085002087
13. Chahine, G. L., and Duraiswami, R., 1992, “Dynamical Interaction in a Multi-Bubble Cloud,” ASME J. Fluids Eng., 114(4), pp. 680–686. DOI: 10.1115/1.2910085
14. Esmaeeli, A., and Tryggvason, G., 1998, “Direct Numerical Simulations of Bubbly Flows. Part 1. Low Reynolds Number Array,” J. Fluid Mech., 377, pp. 313–345. DOI: 10.1017/S0022112098003176
15. Lu, J., Biswas, S., and Tryggvason, G., 2006, “A DNS Study of Laminar Bubbly Flows in a Vertical Channel,” Int. J. Multiphase Flow, 32(6), pp. 643–660. DOI: 10.1016/j.ijmultiphaseflow.2006.02.003
16. Calvisi, M. L., Iloreta, J. I., and Szeri, A. J., 2008, “Dynamics of Bubbles Near a Rigid Surface Subjected to a Lithotripter Shock Wave. Part 2. Reflected Shock Intensifies Nonspherical Cavitation Collapse,” J. Fluid Mech., 616, pp. 63–97. DOI: 10.1017/S0022112008003054
17. Seo, J. H., Lele, S. K., and Tryggvason, G., 2010, “Investigation and Modeling of Bubble-Bubble Interaction Effect in Homogeneous Bubbly Flows,” Phys. Fluids, 22(6), p. 063302. DOI: 10.1063/1.3432503
18. Wang, Q., 2014, “Multi-Oscillations of a Bubble in a Compressible Liquid Near a Rigid Boundary,” J. Fluid Mech., 745, pp. 509–536. DOI: 10.1017/jfm.2014.105
19. Peng, G., Tryggvason, G., and Shimizu, S., 2015, “Two-Dimensional Direct Numerical Simulation of Bubble Cloud Cavitation by Front-Tracking Method,” IOP Conf. Ser.: Mater. Sci. Eng., 72(1), p. 012001. DOI: 10.1088/1757-899X/72/1/012001
20. Gilmore, F. R., 1952, “The Growth and Collapse of a Spherical Bubble in a Viscous Compressible Liquid,” California Institute of Technology, Hydrodynamics Laboratory Report No. 26-4.
21. Drew, D. A., and Passman, S., 1999, Theory of Multicomponent Fluids, Springer, Berlin. DOI: 10.1007/b97678
22. Prosperetti, A., Sundaresan, S., Pannala, S., and Zhang, D. Z., 2007, “Segregated Methods for Two-Fluid Models,” Computational Methods for Multiphase Flow, Cambridge University Press, New York, pp. 320–385. DOI: 10.1017/CBO9780511607486.011
23. Tamura, Y., and Matsumoto, Y., 2009, “Improvement of Bubble Model for Cavitating Flow Simulations,” J. Hydrodyn., Ser. B, 21(1), pp. 41–46. DOI: 10.1016/S1001-6058(08)60117-1
24. Peng, G., and Shimizu, S., 2013, “Progress in Numerical Simulation of Cavitating Water Jets,” J. Hydrodyn., Ser. B, 25(4), pp. 502–509. DOI: 10.1016/S1001-6058(11)60389-3
25. Ma, J., Oberai, A., Drew, D., Lahey, R., Jr., and Moraga, J., 2010, “A Quantitative Sub-Grid Air Entrainment Model for Bubbly Flows—Plunging Jet,” J. Comput. Fluids, 39(1), pp. 77–86. DOI: 10.1016/j.compfluid.2009.07.004
26. Ma, J., Oberai, A., Hyman, M., Drew, D. A., and Lahey, R. T., Jr., 2011, “Two-Fluid Modeling of Bubbly Flows Around Surface Ships Using a Phenomenological Subgrid Air Entrainment Model,” Comput. Fluids, 52, pp. 50–57. DOI: 10.1016/j.compfluid.2011.08.015
27. Xiang, M., Cheung, S. C. P., Tu, J. Y., and Zhang, W. H., 2010, “On the Numerical Study of Bubbly Wakes Generated by Ventilated Cavity Using Population Balance Approach,” J. Comput. Multiphase Flows, 2(2), pp. 101–117. DOI: 10.1260/1757-482X.2.2.101
28. Ma, J., Oberai, A., Drew, D. A., and Lahey, R. T., Jr., 2012, “A Two-Way Coupled Polydispersed Two-Fluid Model for the Simulation of Air Entrainment Beneath a Plunging Liquid Jet,” ASME J. Fluids Eng., 134(10), p. 101304. DOI: 10.1115/1.4007335
29. Elghobashi, S., and Truesdell, G., 1993, “On the Two-Way Interaction Between Homogeneous Turbulence and Dispersed Solid Particles. I: Turbulence Modification,” Phys. Fluids A: Fluid Dyn., 5(7), pp. 1790–1801. DOI: 10.1063/1.858854
30. Spelt, P. D. M., and Biesheuvel, A., 1997, “On the Motion of Gas Bubbles in Homogeneous Isotropic Turbulence,” J. Fluid Mech., 336, pp. 221–244. DOI: 10.1017/S0022112096004739
31. Hsiao, C.-T., Chahine, G. L., and Liu, H.-L., 2003, “Scaling Effects on Prediction of Cavitation Inception in a Line Vortex Flow,” ASME J. Fluids Eng., 125(1), pp. 53–60. DOI: 10.1115/1.1521956
32. Mattson, M. D., and Mahesh, K., 2012, “A One-Way Coupled, Euler–Lagrangian Simulation of Bubble Coalescence in a Turbulent Pipe Flow,” Int. J. Multiphase Flow, 40, pp. 68–82. DOI: 10.1016/j.ijmultiphaseflow.2011.11.013
33. Raju, R., Singh, S., Hsiao, C.-T., and Chahine, G. L., 2011, “Study of Pressure Wave Propagation in a Two-Phase Bubbly Mixture,” ASME J. Fluids Eng., 133(12), p. 121302. DOI: 10.1115/1.4005263
34. Ma, J., Singh, S., Hsiao, C.-T., Choi, J.-K., and Chahine, G. L., 2012, “Spherical Bubble Dynamics in a Bubbly Medium,” International Conference on Numerical Methods in Multiscale Flows, Philadelphia, PA, June 12–14.
35. Ma, J., Chahine, G. L., and Hsiao, C.-T., 2015, “Spherical Bubble Dynamics in a Bubbly Medium Using an Euler–Lagrange Model,” Chem. Eng. Sci., 128, pp. 64–81. DOI: 10.1016/j.ces.2015.01.056
36. Darmana, D., Deen, N. G., and Kuipers, J. A. M., 2006, “Parallelization of an Euler–Lagrange Model Using Mixed Domain Decomposition and a Mirror Domain Technique: Application to Dispersed Gas–Liquid Two-Phase Flow,” J. Comput. Phys., 220(1), pp. 216–248. DOI: 10.1016/j.jcp.2006.05.011
37. Yakubov, S., Cankurt, B., Abdel-Maksoud, M., and Rung, T., 2013, “Hybrid MPI/OpenMP Parallelization of an Euler–Lagrange Approach to Cavitation Modelling,” Comput. Fluids, 80, pp. 365–371. DOI: 10.1016/j.compfluid.2012.01.020
38. Xu, M., Chen, F., Liu, X., Ge, W., and Li, J., 2012, “Discrete Particle Simulation of Gas–Solid Two-Phase Flows With Multi-Scale CPU–GPU Hybrid Computation,” Chem. Eng. J., 207–208(1), pp. 746–757. DOI: 10.1016/j.cej.2012.07.049
39. Amritkar, A., Deb, S., and Tafti, D., 2014, “Efficient Parallel CFD-DEM Simulations Using OpenMP,” J. Comput. Phys., 256, pp. 501–519. DOI: 10.1016/j.jcp.2013.09.007
40. Liu, H., Tafti, D. K., and Li, T., 2014, “Hybrid Parallelism in MFIX CFD-DEM Using OpenMP,” Powder Technol., 259, pp. 22–29. DOI: 10.1016/j.powtec.2014.03.047
41. Hsiao, C.-T., and Chahine, G. L., 2012, “Effect of a Propeller and Gas Diffusion on Bubble Nuclei Distribution in a Liquid,” J. Hydrodyn., Ser. B, 24(6), pp. 809–822. DOI: 10.1016/S1001-6058(11)60308-9
42. Hsiao, C.-T., and Chahine, G. L., 2004, “Prediction of Tip Vortex Cavitation Inception Using Coupled Spherical and Nonspherical Bubble Models and Navier–Stokes Computations,” J. Mar. Sci. Technol., 8(3), pp. 99–108. DOI: 10.1007/s00773-003-0162-6
43. Hsiao, C.-T., and Chahine, G., 2008, “Numerical Study of Cavitation Inception Due to Vortex/Vortex Interaction in a Ducted Propulsor,” J. Ship Res., 52(2), pp. 114–123.
44. Wu, X., Choi, J.-K., Hsiao, C.-T., and Chahine, G. L., 2010, “Bubble Augmented Waterjet Propulsion: Numerical and Experimental Studies,” 28th Symposium on Naval Hydrodynamics.
45. Hsiao, C. T., Wu, X., Ma, J., and Chahine, G. L., 2013, “Numerical and Experimental Study of Bubble Entrainment Due to a Horizontal Plunging Jet,” Int. Shipbuild. Prog., 60(1–4), pp. 435–469. DOI: 10.3233/ISP-130093
46. van Wijngaarden, L., 1968, “On the Equations of Motion for Mixtures of Liquid and Gas Bubbles,” J. Fluid Mech., 33(3), pp. 465–474. DOI: 10.1017/S002211206800145X
47. van Wijngaarden, L., 1972, “One-Dimensional Flow of Liquids Containing Small Gas Bubbles,” Annu. Rev. Fluid Mech., 4, pp. 369–396. DOI: 10.1146/annurev.fl.04.010172.002101
48. Commander, K. W., and Prosperetti, A., 1989, “Linear Pressure Waves in Bubbly Liquids: Comparison Between Theory and Experiments,” J. Acoust. Soc. Am., 85(2), pp. 732–746. DOI: 10.1121/1.397599
49. Brennen, C. E., 2005, Fundamentals of Multiphase Flow, Cambridge University Press, New York. DOI: 10.1017/CBO9780511807169
50. Chorin, A. J., 1967, “A Numerical Method for Solving Incompressible Viscous Flow Problems,” J. Comput. Phys., 2, pp. 12–26. DOI: 10.1016/0021-9991(67)90037-X
51. Roe, P. L., 1981, “Approximate Riemann Solvers, Parameter Vectors, and Difference Schemes,” J. Comput. Phys., 43, pp. 357–372. DOI: 10.1016/0021-9991(81)90128-5
52. Van Leer, B., Thomas, J. L., Roe, P. L., and Newsome, R. W., 1987, “A Comparison of Numerical Flux Formulas for the Euler and Navier–Stokes Equation,” AIAA Paper No. 87-1104-CP. DOI: 10.2514/6.87-1104-CP
53. Johnson, V. E., and Hsieh, T., 1966, “The Influence of the Trajectories of Gas Nuclei on Cavitation Inception,” 6th Symposium on Naval Hydrodynamics, pp. 163–179.
54. Saffman, P., 1965, “The Lift on a Small Sphere in a Slow Shear Flow,” J. Fluid Mech., 22(2), pp. 385–400. DOI: 10.1017/S0022112065000824
55. Haberman, W. L., and Morton, R. K., 1953, “An Experimental Investigation of the Drag and Shape of Air Bubbles Rising in Various Liquids,” DTMB, Report No. 802.