Parallel

During a distributed-parallel simulation, communication among processors is crucial. In KitAMR.jl, this communication is supported by MPI.jl.

Ghost layers

KitAMR.jl decomposes the computational domain in physical space. With an explicit spatial discretization, the update of a cell depends only on its neighboring cells. If some of these neighboring cells lie on other processors, they are locally marked as ghost cells, whose variables are updated by MPI communication. The related functions are

KitAMR.solid_exchange! - Function
solid_exchange!(
    p4est::Union{Ptr{KitAMR.P4est.LibP4est.p4est}, Ptr{KitAMR.P4est.LibP4est.p8est}},
    ka::KA{DIM, NDF}
)

Update df in Ghost_VsData for the update of immersed boundaries. Currently, the communication occurs through all ghost layers. The difference from data_exchange! is that only the mirror_data of immersed-boundary-related ghost cells is updated.

source
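The ghost-layer idea can be illustrated with a minimal sketch that is independent of KitAMR.jl's actual data structures: in a contiguous block decomposition with a first-neighbor stencil, the ghost cells of a rank are exactly the cells just outside its owned slice. The function name and signature below are illustrative, not part of the package.

```julia
# Minimal sketch of ghost-cell identification in a 1-D block decomposition.
# `nranks` processors each own a contiguous slice of `ncells` cells; with a
# first-neighbor stencil, a rank's ghost layer consists of the cells just
# outside either end of its slice.
function ghost_cells(rank::Int, nranks::Int, ncells::Int)
    lo = div(rank * ncells, nranks) + 1      # first owned cell (1-based)
    hi = div((rank + 1) * ncells, nranks)    # last owned cell
    ghosts = Int[]
    lo > 1      && push!(ghosts, lo - 1)     # left neighbor owned by another rank
    hi < ncells && push!(ghosts, hi + 1)     # right neighbor owned by another rank
    return ghosts
end

# rank 1 of 4, owning cells 26:50 out of 100, needs cells 25 and 51 as ghosts
ghost_cells(1, 4, 100)
```

In the package itself, the ghost layer is obtained from p4est (cf. the p4est_ghost_t arguments in the signatures above) rather than computed by hand.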

In contrast to traditional CFD solvers, KitAMR.jl also discretizes velocity space. The resolution of this discretization varies with time and physical coordinates, so the size of the communicated data changes as well. Currently, the size of the communicated data is unified to hold the largest velocity space that exists in the ghost layers. This size is obtained by

KitAMR.get_vs_num - Function
get_vs_num(
    forest::Union{Ptr{KitAMR.P4est.LibP4est.p4est}, Ptr{KitAMR.P4est.LibP4est.p8est}},
    ghost::Union{Ptr{KitAMR.P4est.LibP4est.p4est_ghost_t}, Ptr{KitAMR.P4est.LibP4est.p8est_ghost_t}}
) -> Any

Get the globally largest number of velocity cells in the ghost layers.

source

Whether a unified communication size noticeably decreases efficiency still requires testing.
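The unified communication size can be sketched as padding each cell's distribution data to a common width. This is a toy illustration: pack_padded is a hypothetical name, the maximum is taken locally here, and in practice the global maximum would come from get_vs_num.

```julia
# Sketch of a fixed-size communication buffer: each cell's distribution
# function may have a different number of velocity cells, so all buffers
# are padded to the largest count before communication.
function pack_padded(dfs::Vector{Vector{Float64}})
    vs_num = maximum(length, dfs)        # unified buffer width
    buf = zeros(vs_num, length(dfs))     # one column per ghost cell
    for (j, df) in enumerate(dfs)
        buf[1:length(df), j] .= df       # trailing entries remain as padding
    end
    return buf, vs_num
end

buf, vs_num = pack_padded([[1.0, 2.0], [3.0, 4.0, 5.0]])
# every message now has the same size (3 entries per cell), at the cost
# of sending padding for coarser velocity grids
```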

Partition

After an AMR step, the grid density on different processors may change considerably. To balance the load, a partition can be performed by calling

KitAMR.partition! - Function
partition!(
    p4est::Ptr{KitAMR.P4est.LibP4est.p4est}
) -> Tuple{Vector{Int64}, Vector{Int64}, Int32, Int32}
partition!(
    p4est::Ptr{KitAMR.P4est.LibP4est.p4est},
    weight::Function
) -> Tuple{Vector{Int64}, Vector{Int64}, Int32, Int32}
source

The grids in physical space are encoded as a one-dimensional sequence by Morton codes and then partitioned according to the weights provided by

KitAMR.partition_weight - Function
partition_weight(
    p4est::Union{Ptr{KitAMR.P4est.LibP4est.p4est}, Ptr{KitAMR.P4est.LibP4est.p8est}},
    which_tree,
    quadrant::Union{Ptr{KitAMR.P4est.LibP4est.p4est_quadrant}, Ptr{KitAMR.P4est.LibP4est.p8est_quadrant}}
) -> Any
source
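The Morton (Z-order) encoding itself can be sketched as bit interleaving; the function below is illustrative, since p4est implements this ordering internally.

```julia
# Illustrative 2-D Morton (Z-order) encoding: interleave the bits of the
# (i, j) cell coordinates so that nearby cells tend to be nearby in the
# resulting 1-D sequence.
function morton2d(i::UInt32, j::UInt32)
    code = UInt64(0)
    for b in 0:31
        code |= (UInt64(i >> b) & 1) << (2b)       # bits of i at even positions
        code |= (UInt64(j >> b) & 1) << (2b + 1)   # bits of j at odd positions
    end
    return code
end

# cell (2, 3): i = 0b10 on even bit positions, j = 0b11 on odd ones
morton2d(UInt32(2), UInt32(3))   # returns 14 == 0b1110
```

Sorting cells by this code yields the one-dimensional sequence that partition! splits among processors.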

The backend functions are provided by p4est, ensuring high efficiency.
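The effect of a weighted partition along the Morton sequence can be sketched as follows; this is a simplified stand-in for p4est's algorithm, and split_by_weight is a hypothetical name.

```julia
# Sketch of weighted partitioning of a Morton-ordered cell sequence:
# split the sequence into `nranks` contiguous chunks so that each chunk
# carries roughly 1/nranks of the total weight.
function split_by_weight(weights::Vector{Int}, nranks::Int)
    total = sum(weights)
    owner = similar(weights)
    acc = 0
    for (k, w) in enumerate(weights)
        # assign cell k to the rank whose weight quota its midpoint falls in
        owner[k] = min(nranks - 1, div((2acc + w) * nranks, 2total))
        acc += w
    end
    return owner   # owner[k] in 0:nranks-1, nondecreasing along the sequence
end

# a heavy cell (weight 4) pulls the cut point to the left, so rank 0
# receives fewer cells than rank 1
split_by_weight([1, 1, 4, 1, 1], 2)   # returns [0, 0, 1, 1, 1]
```

With uniform weights this reduces to an even split of the cell count, which is why a weight function reflecting per-cell cost (e.g. the velocity-space size) matters for load balance.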

When the partition is finished, a data exchange is performed to transfer all of the data to where it is required. Communication occurs only for the cells that have been moved by the partition.

All of these functionalities are wrapped into

KitAMR.ps_partition! - Function
ps_partition!(
    p4est::Union{Ptr{KitAMR.P4est.LibP4est.p4est}, Ptr{KitAMR.P4est.LibP4est.p8est}},
    ka::KA
)

Re-partition grids in physical space to balance the computational load according to the weight function.

source