Add regularization to Laplacian and preconditioners for ConjugateGradientPoissonSolver #3848

Open
wants to merge 13 commits into
base: main
6 changes: 6 additions & 0 deletions src/ImmersedBoundaries/ImmersedBoundaries.jl
@@ -304,6 +304,12 @@ end

isrectilinear(ibg::IBG) = isrectilinear(ibg.underlying_grid)

# TODO: eliminate when ImmersedBoundaries precedes Solvers
# This is messy because IBG does _not_ have a Poisson solver, only the underlying grid does
import Oceananigans.Solvers: has_fft_poisson_solver, fft_poisson_solver
has_fft_poisson_solver(ibg::IBG) = has_fft_poisson_solver(ibg.underlying_grid)
fft_poisson_solver(ibg::IBG) = fft_poisson_solver(ibg.underlying_grid)

@inline fractional_x_index(x, locs, grid::ImmersedBoundaryGrid) = fractional_x_index(x, locs, grid.underlying_grid)
@inline fractional_y_index(x, locs, grid::ImmersedBoundaryGrid) = fractional_y_index(x, locs, grid.underlying_grid)
@inline fractional_z_index(x, locs, grid::ImmersedBoundaryGrid) = fractional_z_index(x, locs, grid.underlying_grid)
7 changes: 6 additions & 1 deletion src/Solvers/Solvers.jl
@@ -28,7 +28,7 @@ using Oceananigans.Grids: XYRegularRG, XZRegularRG, YZRegularRG, XYZRegularRG

Return the `M`th root of unity raised to the `k`th power.
"""
@inline ω(M, k) = exp(-2im*π*k/M)
@inline ω(M, k) = exp(-2im * π * k / M)

reshaped_size(N, dim) = dim == 1 ? (N, 1, 1) :
dim == 2 ? (1, N, 1) :
@@ -51,8 +51,13 @@ include("heptadiagonal_iterative_solver.jl")
const GridWithFFTSolver = Union{XYZRegularRG, XYRegularRG, XZRegularRG, YZRegularRG}
const GridWithFourierTridiagonalSolver = Union{XYRegularRG, XZRegularRG, YZRegularRG}

# TODO: will be unnecessary when ImmersedBoundaries precedes Solvers
has_fft_poisson_solver(grid) = false
has_fft_poisson_solver(::GridWithFFTSolver) = true

fft_poisson_solver(grid::XYZRegularRG) = FFTBasedPoissonSolver(grid)
fft_poisson_solver(grid::GridWithFourierTridiagonalSolver) =
FourierTridiagonalPoissonSolver(grid.underlying_grid)

end # module

109 changes: 80 additions & 29 deletions src/Solvers/conjugate_gradient_poisson_solver.jl
@@ -1,4 +1,4 @@
using Oceananigans.Operators: divᶜᶜᶜ, ∇²ᶜᶜᶜ
using Oceananigans.Operators
tomchor marked this conversation as resolved.
using Statistics: mean

using KernelAbstractions: @kernel, @index
@@ -34,33 +34,47 @@ end
@inbounds ∇²ϕ[i, j, k] = ∇²ᶜᶜᶜ(i, j, k, grid, ϕ)
end

function compute_laplacian!(∇²ϕ, ϕ)
struct RegularizedLaplacian{D}
δ :: D
end

function (L::RegularizedLaplacian)(Lϕ, ϕ)
grid = ϕ.grid
arch = architecture(grid)
fill_halo_regions!(ϕ)
launch!(arch, grid, :xyz, laplacian!, ∇²ϕ, grid, ϕ)
launch!(arch, grid, :xyz, laplacian!, Lϕ, grid, ϕ)

if !isnothing(L.δ)
# Add regularization
ϕ̄ = mean(ϕ)
parent(Lϕ) .+= L.δ * ϕ̄
Yixiao-Zhang (Contributor), Oct 17, 2024:

Do we need to mask after this operation? It seems to me that the masked region also gets a non-zero value after this line.

glwagner (Member, Author):

Oh, good catch. While dot and norm should ignore immersed regions, I'm actually not sure they do right now, so this may matter. I will test it.

end

return nothing
end
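
To see why adding a mean term removes the Laplacian's null space, here is a small standalone illustration (plain Julia with LinearAlgebra, not Oceananigans code; the 1D matrix and δ = 1 are for demonstration only): a Laplacian with homogeneous Neumann conditions annihilates constants and is therefore singular, while the regularized operator has full rank.

```julia
using LinearAlgebra

n = 5

# 1D Laplacian with homogeneous Neumann boundary conditions (unit spacing)
L = Matrix(Tridiagonal(fill(1.0, n-1), fill(-2.0, n), fill(1.0, n-1)))
L[1, 1] = -1.0   # one-sided (Neumann) stencil at the left boundary
L[n, n] = -1.0   # one-sided (Neumann) stencil at the right boundary

rank(L)          # n - 1: constants lie in the null space

# Regularized operator: Lreg * ϕ = L * ϕ + δ * mean(ϕ), mirroring RegularizedLaplacian
δ = 1.0
Lreg = L .+ δ / n
rank(Lreg)       # n: the regularized operator is invertible
```

Invertibility holds for either sign of δ in this toy example; the sign question debated below concerns definiteness, not rank.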

struct DefaultPreconditioner end

function ConjugateGradientPoissonSolver(grid;
regularization = nothing,
Collaborator:

I think it would be clearer to call this the regularization_constant. What do you think?

glwagner (Member, Author):

How about regularizer?

Collaborator:

Works for me!

preconditioner = DefaultPreconditioner(),
reltol = sqrt(eps(grid)),
abstol = sqrt(eps(grid)),
kw...)

if preconditioner isa DefaultPreconditioner # try to make a useful default
if grid isa ImmersedBoundaryGrid && grid.underlying_grid isa GridWithFFTSolver
preconditioner = fft_poisson_solver(grid.underlying_grid)
if has_fft_poisson_solver(grid)
preconditioner = fft_poisson_solver(grid)
else
preconditioner = DiagonallyDominantPreconditioner()
preconditioner = AsymptoticPoissonPreconditioner()
end
end

rhs = CenterField(grid)
operator = RegularizedLaplacian(regularization)
Contributor:

It might be a stupid question. It seems to me that regularization and δ are set to a positive number (1, for example, in the demo). However, the Laplacian operator is actually negative semi-definite (although our previous conversations were about positive semi-definite operators), and thus the regularization (or, say, δ) should be negative as well. Am I missing something?

glwagner (Member, Author):

Hmm, right... I played around with this and found that the solver "doesn't blow up" only if we use positive regularization. But this may actually be a clue that there's a bug somewhere.

glwagner (Member, Author):

I'm now convinced that there is a bug or some conceptual error, given that this doesn't work with $\delta < 0$.

preconditioner = RegularizedPreconditioner(preconditioner, rhs, regularization)

conjugate_gradient_solver = ConjugateGradientSolver(compute_laplacian!;
conjugate_gradient_solver = ConjugateGradientSolver(operator;
reltol,
abstol,
preconditioner,
@@ -110,41 +124,72 @@ function compute_preconditioner_rhs!(solver::FourierTridiagonalPoissonSolver, rh
return nothing
end

const FFTBasedPreconditioner = Union{FFTBasedPoissonSolver, FourierTridiagonalPoissonSolver}
struct RegularizedPreconditioner{P, R, D}
preconditioner :: P
rhs :: R
regularization :: D
end

const SolverWithFFT = Union{FFTBasedPoissonSolver, FourierTridiagonalPoissonSolver}
const FFTBasedPreconditioner = RegularizedPreconditioner{<:SolverWithFFT}

function precondition!(p, preconditioner::FFTBasedPreconditioner, r, args...)
compute_preconditioner_rhs!(preconditioner, r)
p = solve!(p, preconditioner)
function precondition!(p, regularized::FFTBasedPreconditioner, r, args...)
solver = regularized.preconditioner
compute_preconditioner_rhs!(solver, r)
p = solve!(p, solver)

δ = regularized.regularization
rhs = regularized.rhs
mean_p = mean(p)
Yixiao-Zhang (Contributor), Oct 18, 2024:

I may have found why the solver blows up for positive regularization.

Here, p is the solution on the whole grid from the FFT Poisson solver, but mean_p calculates the mean of p on the immersed boundary grid. In my test, mean_p is usually of order 1e-5, which is much larger than round-off error. I also calculated the eigenvalues of the preconditioner with regularization=0, and I found both negative and positive eigenvalues.

Removing the subtraction of mean_p enables convergence for positive regularization according to my test.

glwagner (Member, Author):

That is interesting! I'll test it. I don't completely understand why this is what's needed, but let's think about it.

glwagner (Member, Author), Oct 18, 2024:

I'm still getting NaNs. What regularization did you use? Did you make any other changes? EDIT: the regularization does seem to matter; now I get non-blow-up with regularization ~ 1/L^2.

Yixiao-Zhang (Contributor), Oct 18, 2024:

I tested the code in a simpler case without FlatExtrapolationOpenBoundaryCondition. That may explain why the result is different. Besides, I usually check the distribution of eigenvalues to judge whether the solver is numerically stable, rather than running the simulation. I am looking into the case with FlatExtrapolationOpenBoundaryCondition and aim to understand how it changes the distribution of eigenvalues.

I just ran your demo and got NaNs as well.

glwagner (Member, Author), Oct 18, 2024:

Just to write out the idea, we are trying to solve a "regularized" Poisson equation:

$$ \tilde L \phi = R $$

where $\tilde L = \nabla^2 - \delta V^{-1} \int \mathrm{d V}$ is the regularized Laplace operator (on an immersed boundary grid) with the singularity removed.

We're proposing to use a preconditioner that directly inverts (with an FFT, which is fast) the "non-immersed" (but regularized) Laplace operator,

$$ L_u = \nabla_u^2 - \delta V_\star^{-1} \int_{\Omega_\star} \mathrm{d V} $$

where $\nabla_u^2$ applies on the "underlying grid" (ie ignoring the immersed boundary). The present discussion is how to define the mean, which is taken over the domain $\Omega_\star$ with volume $V_\star$. The current code takes $V_\star = V$, but we can also consider taking $V_\star = V_u$ and $\Omega_\star = \Omega_u$.
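
For reference, taking the volume mean of the regularized equation in this comment shows how $\delta$ fixes the otherwise-undetermined mean of $\phi$ (a sketch using the definitions above, and assuming the mean of the Laplacian vanishes under homogeneous Neumann or periodic conditions):

$$ \overline{\tilde L \phi} = \overline{\nabla^2 \phi} - \delta \bar \phi = - \delta \bar \phi = \bar R \quad \Longrightarrow \quad \bar \phi = - \bar R / \delta , $$

so a nonzero $\bar R$ selects a unique $\bar \phi$ rather than leaving the constant mode free.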

glwagner (Member, Author):

By the way @Yixiao-Zhang I think this example really meaningfully exposes the effect of regularization, because it's only with this kind of open boundary condition that $\bar R \ne 0$. I believe that with impenetrable (and periodic of course) boundary conditions $\bar R$ is very small.

Contributor:

I did not realize that $\overline{R} \neq 0$ for this kind of open boundary condition. However, I think the current pressure solver cannot handle FlatExtrapolationOpenBoundaryCondition properly. As far as I understand, the pressure solver is unaware of the open boundary condition, but the open boundary condition should affect how fill_halo_regions! works for rhs. The original boundary condition for rhs is $\hat{n}\cdot \nabla r = 0$, but should it be $(\hat{n}\cdot \nabla)^2 r = 0$ for open boundary conditions? (I am not sure.)

Right now, rhs is defined in this way:

rhs = CenterField(grid)

The good news is that the Laplace operator becomes negative definite if we take the open boundary into account. So, there is no need to regularize the Poisson solver. (The preconditioner is still semi-definite, though.)

glwagner (Member, Author):

"the pressure solver is unaware of the open boundary condition"

This isn't quite true. The inhomogeneous part of any boundary condition is incorporated into the Poisson equation RHS. This happens implicitly by (1) filling halo regions on the predictor velocity prior to the pressure solve, and then (2) taking the divergence of the predictor velocity. This is why the operator / FFT solve must use homogeneous Neumann conditions for the algorithm to work.
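
Schematically, the standard pressure-projection argument referenced in this comment reads (a sketch, not code or notation from this PR):

$$ u^{n+1} = u^\star - \Delta t \, \nabla p , \qquad \nabla \cdot u^{n+1} = 0 \quad \Longrightarrow \quad \nabla^2 p = \frac{\nabla \cdot u^\star}{\Delta t} , \qquad \hat n \cdot \nabla p \, \big|_{\partial \Omega} = 0 , $$

with any inhomogeneous boundary information entering only through the halo-filled predictor velocity $u^\star$ on the right-hand side, so the Poisson operator itself only ever sees homogeneous Neumann conditions.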


if !isnothing(δ)
mean_rhs = mean(rhs)
Contributor:

In principle, the preconditioner solves the Poisson equation in the underlying grid. Following this logic, it is the mean of rhs calculated in the underlying grid that should be subtracted. I have not tested which is better, but I feel the mean in the underlying grid is more consistent mathematically. (I assume mean(rhs) here is the mean of rhs calculated in the immersed boundary grid.)

glwagner (Member, Author):

Correct, mean(rhs) accounts for the immersed boundary. My objective with this was to define a preconditioner that (approximately) solves the same problem as the CG iteration. This means using mean over the whole grid.

Put another way, if we use mean over the whole grid, then the mean pressure produced by the preconditioner would be different than the mean pressure that would minimize the conjugate gradient residual?

It's worth testing, though. To implement this properly, we need to know the volume of the immersed region, which will require an additional computation.

Δp = mean_p - mean_rhs / δ
else
Δp = mean_p
end

grid = p.grid
arch = architecture(grid)
launch!(arch, grid, :xyz, subtract_and_mask!, p, grid, mean_p)
launch!(arch, grid, :xyz, subtract_and_mask!, p, grid, Δp)

return p
end
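
A short derivation of the $\Delta p$ shift used above, and again in the asymptotic preconditioner below (a sketch following the sign convention of `RegularizedLaplacian` in this diff, where the operator adds $+\delta$ times the mean): taking the mean of $L p = r$ and assuming the mean of the Laplacian vanishes gives

$$ \overline{L p} = \overline{\nabla^2 p} + \delta \bar p = \delta \bar p = \bar r \quad \Longrightarrow \quad \bar p_{\rm target} = \bar r / \delta , \qquad \Delta p = \bar p - \bar r / \delta . $$

When the regularization is `nothing`, only the mean is removed ($\Delta p = \bar p$), recovering the previous behavior.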

@kernel function subtract_and_mask!(a, grid, b)
i, j, k = @index(Global, NTuple)
active = !inactive_cell(i, j, k, grid)
a[i, j, k] = (a[i, j, k] - b) * active
@inbounds a[i, j, k] = (a[i, j, k] - b) * active
end

#####
##### The "DiagonallyDominantPreconditioner" (Marshall et al 1997)
##### The "AsymptoticPoissonPreconditioner" (Marshall et al 1997)
#####

struct DiagonallyDominantPreconditioner end
Base.summary(::DiagonallyDominantPreconditioner) = "DiagonallyDominantPreconditioner"
struct AsymptoticPoissonPreconditioner end
const RegularizedNDPP = RegularizedPreconditioner{<:AsymptoticPoissonPreconditioner}
Base.summary(::AsymptoticPoissonPreconditioner) = "AsymptoticPoissonPreconditioner"

@inline function precondition!(p, ::DiagonallyDominantPreconditioner, r, args...)
@inline function precondition!(p, preconditioner::RegularizedNDPP, r, args...)
grid = r.grid
arch = architecture(p)
fill_halo_regions!(r)
launch!(arch, grid, :xyz, _diagonally_dominant_precondition!, p, grid, r)
launch!(arch, grid, :xyz, _asymptotic_poisson_precondition!, p, grid, r)

δ = preconditioner.regularization
rhs = preconditioner.rhs
mean_p = mean(p)
launch!(arch, grid, :xyz, subtract_and_mask!, p, grid, mean_p)

if !isnothing(δ)
mean_rhs = mean(rhs)
Δp = mean_p - mean_rhs / δ
else
Δp = mean_p
end

grid = p.grid
arch = architecture(grid)
launch!(arch, grid, :xyz, subtract_and_mask!, p, grid, Δp)

return p
end
@@ -161,17 +206,23 @@ end
Ay⁻(i, j, k, grid) - Ay⁺(i, j, k, grid) -
Az⁻(i, j, k, grid) - Az⁺(i, j, k, grid)

@inline heuristic_residual(i, j, k, grid, r) =
@inbounds 1 / Ac(i, j, k, grid) * (r[i, j, k] - 2 * Ax⁻(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i-1, j, k, grid)) * r[i-1, j, k] -
2 * Ax⁺(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i+1, j, k, grid)) * r[i+1, j, k] -
2 * Ay⁻(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i, j-1, k, grid)) * r[i, j-1, k] -
2 * Ay⁺(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i, j+1, k, grid)) * r[i, j+1, k] -
2 * Az⁻(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i, j, k-1, grid)) * r[i, j, k-1] -
2 * Az⁺(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i, j, k+1, grid)) * r[i, j, k+1])

@kernel function _diagonally_dominant_precondition!(p, grid, r)
@inline function heuristic_poisson_solution(i, j, k, grid, r)
@inbounds begin
a⁰⁰⁰ = r[i, j, k]
a⁻⁰⁰ = 2 * Ax⁻(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i-1, j, k, grid)) * r[i-1, j, k]
a⁺⁰⁰ = 2 * Ax⁺(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i+1, j, k, grid)) * r[i+1, j, k]
a⁰⁻⁰ = 2 * Ay⁻(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i, j-1, k, grid)) * r[i, j-1, k]
a⁰⁺⁰ = 2 * Ay⁺(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i, j+1, k, grid)) * r[i, j+1, k]
a⁰⁰⁻ = 2 * Az⁻(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i, j, k-1, grid)) * r[i, j, k-1]
a⁰⁰⁺ = 2 * Az⁺(i, j, k, grid) / (Ac(i, j, k, grid) + Ac(i, j, k+1, grid)) * r[i, j, k+1]
end

return (a⁰⁰⁰ - a⁻⁰⁰ - a⁺⁰⁰ - a⁰⁻⁰ - a⁰⁺⁰ - a⁰⁰⁻ - a⁰⁰⁺) / Ac(i, j, k, grid)
end

@kernel function _asymptotic_poisson_precondition!(p, grid, r)
i, j, k = @index(Global, NTuple)
active = !inactive_cell(i, j, k, grid)
@inbounds p[i, j, k] = heuristic_residual(i, j, k, grid, r) * active
@inbounds p[i, j, k] = heuristic_poisson_solution(i, j, k, grid, r) * active
end
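
A minimal usage sketch for the new keyword (hypothetical setup: the grid, the bump geometry, and the value of the regularization constant are illustrative only, and the keyword may still be renamed to `regularizer` per the discussion above):

```julia
using Oceananigans
using Oceananigans.Solvers: ConjugateGradientPoissonSolver

underlying_grid = RectilinearGrid(size = (64, 64, 16),
                                  x = (0, 1000), y = (0, 1000), z = (-100, 0))

# Immerse a Gaussian bump in the bottom of the domain
bump(x, y) = -100 + 50 * exp(-((x - 500)^2 + (y - 500)^2) / (2 * 100^2))
grid = ImmersedBoundaryGrid(underlying_grid, GridFittedBottom(bump))

# Regularization of order 1 / L², the magnitude reported to behave well above
solver = ConjugateGradientPoissonSolver(grid; regularization = 1 / 100^2)
```

With this diff, the solver picks its preconditioner automatically: an FFT-based solver on the underlying grid when `has_fft_poisson_solver(grid)` is true, and the `AsymptoticPoissonPreconditioner` otherwise, each wrapped in a `RegularizedPreconditioner`.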

25 changes: 13 additions & 12 deletions src/Solvers/conjugate_gradient_solver.jl
@@ -94,18 +94,18 @@ function ConjugateGradientSolver(linear_operation;
FT = eltype(grid)

return ConjugateGradientSolver(arch,
grid,
linear_operation,
FT(reltol),
FT(abstol),
maxiter,
0,
zero(FT),
linear_operator_product,
search_direction,
residual,
preconditioner,
precondition_product)
grid,
linear_operation,
FT(reltol),
FT(abstol),
maxiter,
0,
zero(FT),
linear_operator_product,
search_direction,
residual,
preconditioner,
precondition_product)
end

"""
@@ -269,3 +269,4 @@ function Base.show(io::IO, solver::ConjugateGradientSolver)
"├── abstol: ", prettysummary(solver.abstol), "\n",
"└── maxiter: ", solver.maxiter)
end

10 changes: 7 additions & 3 deletions src/Solvers/fft_based_poisson_solver.jl
@@ -79,7 +79,7 @@ end
Solve the "generalized" Poisson equation,

```math
(∇² + m) ϕ = b,
(∇² + m) ϕ + μ ϕ̄ = b,
```

where ``m`` is a number, using an eigenfunction expansion of the discrete Poisson operator
@@ -92,7 +92,7 @@ elements (typically the same type as `solver.storage`).
Equation ``(∇² + m) ϕ = b`` is sometimes referred to as the "screened Poisson" equation
when ``m < 0``, or the Helmholtz equation when ``m > 0``.
"""
function solve!(ϕ, solver::FFTBasedPoissonSolver, b=solver.storage, m=0)
function solve!(ϕ, solver::FFTBasedPoissonSolver, b=solver.storage, m=0, μ=0)
arch = architecture(solver)
topo = TX, TY, TZ = topology(solver.grid)
Nx, Ny, Nz = size(solver.grid)
@@ -112,7 +112,11 @@ function solve!(ϕ, solver::FFTBasedPoissonSolver, b=solver.storage, m=0)
# If m === 0, the "zeroth mode" at `i, j, k = 1, 1, 1` is undetermined;
# we set this to zero by default. Another slant on this "problem" is that
# λx[1, 1, 1] + λy[1, 1, 1] + λz[1, 1, 1] = 0, which yields ϕ[1, 1, 1] = Inf or NaN.
m === 0 && CUDA.@allowscalar ϕc[1, 1, 1] = 0
if m === μ === 0
CUDA.@allowscalar ϕc[1, 1, 1] = 0
elseif μ > 0
CUDA.@allowscalar ϕc[1, 1, 1] = - b[1, 1, 1] / (μ - m)
end

# Apply backward transforms in order
for transform! in solver.transforms.backward