Grid generation
JustRelax uses staggered Cartesian grids. The main entry point is `Geometry`, which stores:

- `xci`: cell-centered coordinates
- `xvi`: vertex coordinates
- `xi_vel`: staggered velocity coordinates
- `di`: grid spacing at cell centers, vertices, and velocity locations
For most workflows you either build a uniform grid from the number of cells and domain size, or a nonuniform grid from explicit vertex coordinates.
Uniform grids
Use `Geometry(ni, li; origin = ...)` to create a uniform grid:

```julia
using JustRelax

ni     = (128, 64)       # number of cells
li     = (1.0e6, 3.0e5)  # physical domain size
origin = (0.0, -3.0e5)

grid = Geometry(ni, li; origin = origin)

xci = grid.xci                  # cell-centered coordinates
xvi = grid.xvi                  # vertex coordinates
grid_vx, grid_vy = grid.xi_vel  # staggered velocity coordinates
dx, dy = grid.di.center
```

In serial, the grid covers the full domain directly. If ImplicitGlobalGrid is already initialized, the same constructor returns the local MPI subdomain, while still using the global domain lengths `li` to compute the spacing.
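For orientation, the uniform spacing and coordinate ranges follow directly from `ni`, `li`, and `origin`. A back-of-the-envelope sketch of that arithmetic (illustrative only, not the JustRelax internals):

```julia
ni     = (128, 64)
li     = (1.0e6, 3.0e5)
origin = (0.0, -3.0e5)

di = li ./ ni   # uniform spacing: (7812.5, 4687.5)
# vertices span the whole domain; centers are offset by half a cell
xv = range(origin[1], origin[1] + li[1]; length = ni[1] + 1)
xc = range(origin[1] + di[1] / 2; step = di[1], length = ni[1])
```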
Nonuniform grids
Use explicit vertex coordinates when you want local refinement or nonuniform spacing:
```julia
using JustRelax

xv = [0.0, 0.1, 0.2, 0.4, 0.7, 1.0]
yv = [-1.0, -0.7, -0.45, -0.2, 0.0]

grid = Geometry(xv, yv)

xci = grid.xci
xvi = grid.xvi
grid_vx, grid_vy = grid.xi_vel
dx = grid.di.vertex[1]
dy = grid.di.vertex[2]
```

This constructor derives:

- cell-centered coordinates from the vertex coordinates
- nonuniform spacings with `diff.(xvi)`
- staggered velocity grids with the required ghost points
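The first two derivations are simple enough to sketch in plain Julia (illustrative, not the package internals):

```julia
xv = [0.0, 0.1, 0.2, 0.4, 0.7, 1.0]
xc = 0.5 .* (xv[1:end-1] .+ xv[2:end])  # cell centers: midpoints between vertices
dx = diff(xv)                           # nonuniform spacing, ≈ [0.1, 0.1, 0.2, 0.3, 0.3]
```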
If you want the coordinate arrays stored in a specific array type, pass an array constructor as the first argument:
```julia
grid = Geometry(Array, xv, yv)
```

MPI-distributed grids
For distributed runs, initialize ImplicitGlobalGrid first and then construct the grid exactly as in serial:
```julia
using JustRelax

nx, ny = 128, 64
igg = IGG(init_global_grid(nx, ny, 1; init_MPI = true)...)

grid = Geometry((nx, ny), (1.0e6, 3.0e5); origin = (0.0, -3.0e5))
```

Here `grid.xci`, `grid.xvi`, and `grid.xi_vel` correspond to the local rank, while the spacing is computed from the global grid dimensions returned by ImplicitGlobalGrid.
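When the distributed run finishes, the global grid (and, if desired, MPI itself) should be released with ImplicitGlobalGrid's own teardown function:

```julia
# tear down the global grid; finalize_MPI = true also shuts down MPI
finalize_global_grid(; finalize_MPI = true)
```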
Particle initialization
Recent particle-related updates use the staggered velocity grids stored in Geometry directly:
```julia
using JustPIC, JustPIC._2D

nxcell    = 24
max_xcell = 36
min_xcell = 12

particles = init_particles(backend, nxcell, max_xcell, min_xcell, grid.xi_vel...)
```

This is the preferred setup in the current examples and tests. You only need `velocity_grids(xci, xvi, di)` explicitly if you want the staggered coordinates outside of `Geometry`.
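If you do need them standalone, the call mirrors the fields stored in `Geometry`. A sketch, where passing the cell-centered spacing `grid.di.center` is an assumption about which spacing `velocity_grids` expects:

```julia
# hypothetical standalone use; grid.di.center assumed as the spacing argument
grid_vx, grid_vy = velocity_grids(grid.xci, grid.xvi, grid.di.center)
particles = init_particles(backend, nxcell, max_xcell, min_xcell, grid_vx, grid_vy)
```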
API reference
JustRelax.IGG Type
```julia
IGG(me, dims, nprocs, coords, comm_cart)
```

Container for the Cartesian MPI topology returned by `ImplicitGlobalGrid.init_global_grid`.
This is typically created as:
```julia
igg = IGG(init_global_grid(nx, ny, nz; init_MPI = true)...)
```

and then passed around so code can access the current rank, Cartesian coordinates, and communicator associated with the distributed grid decomposition.
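The stored fields can then gate rank-specific work. A brief sketch using the field names from the constructor signature (and assuming MPI.jl is loaded):

```julia
using MPI

if igg.me == 0   # root rank only
    println("running on ", igg.nprocs, " ranks, Cartesian dims = ", igg.dims)
end
MPI.Barrier(igg.comm_cart)   # synchronize across the Cartesian communicator
```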
JustRelax.Geometry Type
```julia
struct Geometry{nDim,V,D,T}
```

A staggered Cartesian grid in `nDim` dimensions.
Geometry stores the domain size, origin, cell spacing, cell-centered coordinates, vertex coordinates, and the staggered velocity grids used throughout JustRelax.
JustRelax.lazy_grid Function
```julia
lazy_grid(di, ni, Li; origin = ntuple(_ -> zero(T1), Val(N)))
```

Create cell-centered and vertex coordinates for a serial uniform grid.
`di` gives the spacing in each direction, `ni` the number of cells, and `Li` the physical lengths of the domain.
JustRelax.velocity_grids Function
```julia
velocity_grids(xci, xvi, di)
```

Build staggered velocity coordinates from cell-centered and vertex grids.
For each velocity component, the coordinate along that component lives on vertices, while the transverse directions are extended with one ghost point on either side. Both uniform spacings and nonuniform spacing vectors are supported in 2D and 3D.
Arguments
- `xci`: cell-centered coordinates in each direction.
- `xvi`: vertex coordinates in each direction.
- `di`: cell spacing, either scalars for a uniform grid or vectors for a nonuniform grid.
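As a concrete illustration of the staggering in 2D (a sketch of the layout, not the implementation): for the x-velocity, coordinates along x live on vertices, while the y-coordinates are cell-centered and padded with one ghost point per side:

```julia
xv = 0.0:0.25:1.0                                # vertices along x
yc = 0.125:0.25:0.875                            # cell centers along y
dy = 0.25
yc_ghost = (first(yc) - dy):dy:(last(yc) + dy)   # one ghost point on either side
grid_vx = (xv, yc_ghost)                         # x-velocity lives at these locations
```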