
Add regularization to Laplacian and preconditioners for ConjugateGradientPoissonSolver #3848

Open · wants to merge 13 commits into main
Conversation

@glwagner (Member) commented Oct 16, 2024

This PR experiments with "regularizing" the Poisson operator in ConjugateGradientPoissonSolver in an attempt to resolve #3831. The issue there seems specific to FlatExtrapolationOpenBoundaryCondition. So far, the regularization in this PR doesn't seem to help. However, there might be some details in the implementation that need tweaking, which is why I'm opening this PR.

There's related discussion on #3802.

cc @ali-ramadhan @jagoosw @Yixiao-Zhang @xkykai

Here is a script that I've been working with on this journey:

using Printf
using Statistics
using Oceananigans
using Oceananigans.Grids: with_number_type
using Oceananigans.BoundaryConditions: FlatExtrapolationOpenBoundaryCondition
using Oceananigans.Solvers: ConjugateGradientPoissonSolver, fft_poisson_solver
using Oceananigans.Simulations: NaNChecker
using Oceananigans.Utils: prettytime

N = 16          # grid points in each direction
h, w = 50, 20   # seamount height and width [m]
H, L = 100, 100 # domain depth and horizontal extent [m]
x = y = (-L/2, L/2)
z = (-H, 0)

grid = RectilinearGrid(size=(N, N, N); x, y, z, halo=(2, 2, 2), topology=(Bounded, Periodic, Bounded))

# Gaussian seamount bathymetry
mount(x, y=0) = h * exp(-(x^2 + y^2) / 2w^2)
bottom(x, y=0) = -H + mount(x, y)
grid = ImmersedBoundaryGrid(grid, GridFittedBottom(bottom))

# Prescribed inflow on the west; flat-extrapolation outflow on the east
prescribed_flow = OpenBoundaryCondition(0.01)
extrapolation_bc = FlatExtrapolationOpenBoundaryCondition()
u_bcs = FieldBoundaryConditions(west = prescribed_flow,
                                east = extrapolation_bc)
                                #east = prescribed_flow)

boundary_conditions = (; u=u_bcs)
# Build an FFT-based preconditioner on a reduced-precision copy of the underlying grid
reduced_precision_grid = with_number_type(Float32, grid.underlying_grid)
preconditioner = fft_poisson_solver(reduced_precision_grid)
pressure_solver = ConjugateGradientPoissonSolver(grid; preconditioner, maxiter=1000, regularization=1/N^3)

model = NonhydrostaticModel(; grid, boundary_conditions, pressure_solver)
simulation = Simulation(model; Δt=0.1, stop_iteration=1000)
conjure_time_step_wizard!(simulation, cfl=0.5)

u, v, w = model.velocities
δ = ∂x(u) + ∂y(v) + ∂z(w) # divergence, which the pressure solve should keep near zero

function progress(sim)
    model = sim.model
    u, v, w = model.velocities
    @printf("Iter: %d, time: %.1f, Δt: %.2e, max|δ|: %.2e",
            iteration(sim), time(sim), sim.Δt, maximum(abs, δ))

    r = model.pressure_solver.conjugate_gradient_solver.residual
    @printf(", solver iterations: %d, max|r|: %.2e\n",
            iteration(model.pressure_solver), maximum(abs, r))
end

add_callback!(simulation, progress)

simulation.output_writers[:fields] =
    JLD2OutputWriter(model, model.velocities; filename="3831.jld2", schedule=IterationInterval(10), overwrite_existing=true)

run!(simulation)

using GLMakie

ds = FieldDataset("3831.jld2")
fig = Figure(size=(1000, 500))

n = Observable(1)
times = ds["u"].times
title = @lift @sprintf("time = %s", prettytime(times[$n]))

Nx, Ny, Nz = size(grid)
j = round(Int, Ny/2)
k = round(Int, Nz/2)
u_surface = @lift view(ds["u"][$n], :, :, k)
u_slice = @lift view(ds["u"][$n], :, j, :)

ax1 = Axis(fig[1, 1]; title = "u (xy)", xlabel="x", ylabel="y")
hm1 = heatmap!(ax1, u_surface, colorrange=(-0.01, 0.01), colormap=:balance)
Colorbar(fig[1, 2], hm1, label="m/s")

ax2 = Axis(fig[1, 3]; title = "u (xz)", xlabel="x", ylabel="z")
hm2 = heatmap!(ax2, u_slice, colorrange=(-0.01, 0.01), colormap=:balance)
Colorbar(fig[1, 4], hm2, label="m/s")

fig[0, :] = Label(fig, title)

record(fig, "3831.mp4", 1:length(times), framerate=10) do i
    n[] = i
end

@glwagner (Member Author)

I'll add that I also tried zeroing the mean of the RHS of the Poisson equation, by inserting some code after this line:

launch!(arch, grid, :xyz, _compute_source_term!, rhs, grid, Δt, Ũ)
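
A minimal sketch of that experiment (hypothetical, not the PR's actual change), assuming rhs is the source-term field filled by the kernel above:

using Statistics: mean

# Hypothetical: remove the volume mean from the RHS so that it is
# compatible with the homogeneous-Neumann Poisson problem, which
# requires mean(R) = 0 for solvability.
rhs_mean = mean(rhs)
parent(rhs) .-= rhs_mean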

@glwagner glwagner marked this pull request as ready for review October 17, 2024 16:22
@glwagner glwagner changed the title Add regularization to Laplacian operator for ConjugateGradientPoissonSolver Add regularization to Laplacian and preconditioners for ConjugateGradientPoissonSolver Oct 17, 2024
@glwagner (Member Author)

Hi @ali-ramadhan @xkykai @jagoosw @Yixiao-Zhang, this PR is ready to be reviewed. I think we can merge as is, since it implements a number of improvements, and continue work in a new PR.

In short, this PR implements an optional "regularization" of the Laplace operator:

$$ \nabla^2 \mapsto \underbrace{\nabla^2 + \frac{\delta}{V} \int \mathrm{d} V}_{\equiv \tilde L} $$

where $\delta$ is some constant and $\tilde L$ denotes the regularized Laplace operator. Using this to solve for pressure results in the "regularized Poisson equation",

$$ \tilde L p = R $$

which has the property that the mean pressure $\bar p \equiv \frac{1}{V} \int p \mathrm{d} V$ is now determined, since the mean of the regularized Poisson equation yields

$$ \bar p = \bar R / \delta $$
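
To spell out that step: with the homogeneous Neumann boundary conditions used by the solver, the volume mean of $\nabla^2 p$ vanishes, so averaging the regularized equation over the domain gives

$$ \overline{\tilde L p} = \underbrace{\overline{\nabla^2 p}}_{= \, 0} + \delta \bar p = \bar R \quad \Rightarrow \quad \bar p = \bar R / \delta . $$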

Similarly $\tilde L c = \delta c$ for some constant $c$, so $\delta$ is the zeroth eigenvalue of the regularized Laplace operator corresponding to the constant eigenfunction / eigenvector.

This PR also modifies the preconditioners for the solver to account for the regularization, so that the preconditioners provide an approximate solution to the regularized Laplacian equation. This gives an additional benefit beyond regularizing the Laplacian equation alone.

I'm calling $\delta$ the regularization, and it's provided by keyword argument to ConjugateGradientPoissonSolver:

pressure_solver = ConjugateGradientPoissonSolver(grid; preconditioner, regularization=1)

Probably we can come up with a better name than "regularization". Note also that as formulated, $\delta$ has units of $1 / \mathrm{length}^2$. But maybe we should write the code so that the "regularization" has units of $\mathrm{length}^2$, or even just $\mathrm{length}$. On the other hand, the interpretation of the "regularization" as an eigenvalue is nice.

The choice of regularization affects the rate of convergence of the CG iteration. Note that the eigenvalues of the Poisson operator all scale with $1 / \Delta^2$, where $\Delta$ is the grid spacing (for highly anisotropic grids, take the finer direction for this scaling argument). Empirically, though, I found that regularization=1 works best for the MWE above, which is quite a bit larger than the inverse grid spacing squared (here $1/\Delta^2 = (16/100)^2 \approx 0.026$).

The regularization seems to stabilize the CG iteration, so that we can iterate all the way to maxiter. Disappointingly, though, the convergence with FlatExtrapolationOpenBoundaryCondition is still very slow. Also interesting: because the convergence is so slow, the difference in error / divergence between maxiter=N^3 and maxiter=10 is not great. Here's example output using regularization=1 and maxiter=N^3=4096:

julia> include("mwe.jl")
[ Info: Initializing simulation...
Iter: 0, time: 0.0, Δt: 1.10e-01, max|δ|: 0.00e+00, mean(p): 0.00e+00, solver iterations: 0, max|r|: 0.00e+00
[ Info:     ... simulation initialization complete (2.976 seconds)
[ Info: Executing initial time step...
[ Info:     ... initial time step complete (9.965 seconds).
Iter: 1, time: 0.1, Δt: 1.10e-01, max|d|: 4.17e-04, mean(p): -5.98e-03, solver iterations: 4096, max|r|: 6.97e-03
Iter: 2, time: 0.2, Δt: 1.10e-01, max|d|: 6.88e-04, mean(p): 3.33e-03, solver iterations: 4096, max|r|: 2.21e-02
Iter: 3, time: 0.3, Δt: 1.10e-01, max|d|: 5.67e-04, mean(p): -4.26e-03, solver iterations: 4096, max|r|: 1.12e-02

Above d is the divergence and r is the residual of the Poisson equation. But then with maxiter=10,

julia> include("mwe.jl")
[ Info: Initializing simulation...
Iter: 0, time: 0.0, Δt: 1.10e-01, max|d|: 0.00e+00, mean(p): 0.00e+00, solver iterations: 0, max|r|: 0.00e+00
[ Info:     ... simulation initialization complete (5.899 seconds)
[ Info: Executing initial time step...
[ Info:     ... initial time step complete (29.592 ms).
Iter: 1, time: 0.1, Δt: 1.10e-01, max|d|: 5.91e-04, mean(p): -9.52e-04, solver iterations: 10, max|r|: 1.52e-02
Iter: 2, time: 0.2, Δt: 1.10e-01, max|d|: 2.45e-04, mean(p): -7.63e-04, solver iterations: 10, max|r|: 5.91e-03
Iter: 3, time: 0.3, Δt: 1.10e-01, max|d|: 2.65e-04, mean(p): -5.79e-06, solver iterations: 10, max|r|: 7.22e-03

Turning regularization off with regularization=nothing yields

julia> include("mwe.jl")
[ Info: Initializing simulation...
Iter: 0, time: 0.0, Δt: 1.10e-01, max|d|: 0.00e+00, mean(p): 0.00e+00, solver iterations: 0, max|r|: 0.00e+00
[ Info:     ... simulation initialization complete (2.973 seconds)
[ Info: Executing initial time step...
[ Info:     ... initial time step complete (17.534 ms).
Iter: 1, time: 0.1, Δt: 1.10e-01, max|d|: 4.30e-04, mean(p): -1.22e-16, solver iterations: 10, max|r|: 1.17e-02
Iter: 2, time: 0.2, Δt: 1.10e-01, max|d|: 2.24e-04, mean(p): 3.78e-17, solver iterations: 10, max|r|: 6.10e-03
Iter: 3, time: 0.3, Δt: 1.10e-01, max|d|: 1.16e-04, mean(p): 8.26e-17, solver iterations: 10, max|r|: 3.15e-03

So in conclusion:

  • Regularization stabilizes the CG iteration for our current example; without regularization, CG is unstable and diverges if we take too many iterations.
  • Interestingly, however, the solver converges more quickly over a small number of iterations (eg maxiter=10) without regularization (where we instead apply the constraint mean(p) = 0).
  • My current recommendation is to cap maxiter at a small number, or to use regularization if not capping it.

I think there is probably more work to do. First, I am not sure that the regularization is applied correctly; the rather bland results suggest there could be improvement. Second, and separately, I wonder whether there is another way to stabilize the non-regularized CG iteration: applying the linear constraint within the CG solver itself, eg to the search direction (see the sketch below). The solver may be more efficient if we apply the constraint directly (setting mean(p) = 0) rather than modifying the linear operator. This makes sense to me: when we modify the linear operator, the CG algorithm is also tasked with finding mean(p), whereas when we apply the constraint, we insert our desired solution for mean(p) directly.
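
A minimal sketch of that idea (a hypothetical helper, not part of this PR): project the constant mode out of the CG iterate, and optionally the search direction, at every iteration.

using Statistics: mean
using Oceananigans: interior

# Hypothetical: enforce mean(ϕ) = 0 by removing the volume mean in place,
# so the iteration cannot wander along the null space of the Laplacian.
function project_out_mean!(ϕ)
    interior(ϕ) .-= mean(ϕ)
    return ϕ
end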

Feedback welcome, I think we should merge this PR and continue work in future PRs.

@Yixiao-Zhang (Contributor)

Great job! Though I have not looked at the code, I have two comments:

  1. I am surprised that the choice of regularization affects the convergence rate. I will think about it more.
  2. The rate of convergence depends on the distribution of eigenvalues of the preconditioned linear operator. I have a piece of code that outputs the preconditioned linear operator as a dense Matrix, which can be useful for testing the convergence rate and new algorithms without invoking Oceananigans, and which could help explain why FlatExtrapolationOpenBoundaryCondition results in much slower convergence. Besides, I have found that grids with better symmetry converge faster because the resulting Laplacian operators have more repeated eigenvalues: in exact arithmetic, the number of iterations the CG method needs to reach the exact solution is at most the number of distinct eigenvalues.
using LinearAlgebra
using Oceananigans
using Oceananigans.Models.NonhydrostaticModels: ImmersedPoissonSolver
using Oceananigans.ImmersedBoundaries: active_cells_map, immersed_cell, mask_immersed_field!
using Oceananigans.Solvers: solve!
using Statistics: norm, mean

using Oceananigans.Solvers: precondition!

ENV["JULIA_DEBUG"] = "Solvers"

# ---------------------------------------------------------------------- #
# Define Parameters

# Numerical technique
const arch = CPU()

# Grid
const Nx = 10
const Ny = 10
const Nz = 10
const Lx = 1.0
const Ly = 1.0
const Lz = 1.0

const Δz = Lz / 2  # elevation difference at the top

# ---------------------------------------------------------------------- #
# Define Utils

# Height at Top
@inline function z_top(y::R) where {R<:Real}
    # return Lz - Δz * sin(π/2 * y/Ly) - Δz * 0.2
    return Lz - Δz
end

# ---------------------------------------------------------------------- #
# Define the Simulation

# Grid
ib_grid = begin
    underlying_grid = RectilinearGrid(
        arch,
        size = (Nx, Ny, Nz),
        x = (-Lx / 2, Lx / 2),
        y = (0.0, Ly),
        z = (0.0, Lz),
        topology = (Periodic, Bounded, Bounded),
        halo = (4, 4, 4),
    )

    @inline function is_ib(x::R, y::R, z::R) where {R<:Real}
        return z > z_top(y)
        # return false
    end

    ImmersedBoundaryGrid(
        underlying_grid,
        GridFittedBoundary(is_ib)
    )
end

# pressure solver
pressure_solver = ImmersedPoissonSolver(
    ib_grid,
    preconditioner = :FFT,
    solver_method = :PreconditionedConjugateGradient,
    reltol = 0,
    abstol = 0,
    maxiter = 20,
)

# ---------------------------------------------------------------------- #
# test the solver

rhs = Field((Center, Center, Center), ib_grid)
output = Field((Center, Center, Center), ib_grid)

set!(output, 0.0)

output[1, 1, 1] = 1.0  # a point source as the test solution

pressure_solver.pcg_solver.linear_operation!(rhs, output)

rhs ./= norm(rhs)

set!(output, 0.0)

solve!(output, pressure_solver.pcg_solver, rhs)

# ---------------------------------------------------------------------- #
# calculate the eigenvalues

const c0 = 0.1
const do_gauge_fixing = true
const active_cells = active_cells_map(ib_grid, ib_grid.immersed_boundary)

const n_active_cells = length(active_cells)


if do_gauge_fixing
    @assert c0 > 0
end

function apply_linear_operation!(w, v, solver)
    solver.pcg_solver.linear_operation!(w, v)
    if do_gauge_fixing
        w .-= c0 * mean(v)
    end
end

function apply_preconditioner!(w, v, solver)
    precondition!(w, solver.pcg_solver.preconditioner, v)
    mask_immersed_field!(w)
    if do_gauge_fixing
        w .-= inv(c0) * mean(v)
    end
end

function get_matrix(f!, ib_grid, pressure_solver)

    # define fields
    rhs = Field((Center, Center, Center), ib_grid)
    p = Field((Center, Center, Center), ib_grid)

    # use a matrix to represent the preconditioner

    matrix = zeros((n_active_cells, n_active_cells))

    for i in 1:n_active_cells
        rhs .= 0.
        rhs.data[active_cells[i]...] = 1.0
        f!(p, rhs, pressure_solver)
        for j in 1:n_active_cells
            v = p.data[active_cells[j]...]
            matrix[i, j] = v
        end
    end
    return -Symmetric(matrix)
end

const A = get_matrix(apply_linear_operation!, ib_grid, pressure_solver)
const M⁻¹ = get_matrix(apply_preconditioner!, ib_grid, pressure_solver)

C = cholesky(M⁻¹)

A1 =  Symmetric(C.U * A * C.L)

@info "Distribution of eigenvalues" eigvals(A1)

# should be the same as A1
# @info "Distribution of eigenvalues" eigvals(A * M⁻¹)

mean_p = mean(p)

if !isnothing(δ)
    mean_rhs = mean(rhs)
Contributor:

In principle, the preconditioner solves the Poisson equation on the underlying grid. Following this logic, it is the mean of rhs calculated on the underlying grid that should be subtracted. I have not tested which is better, but I feel the mean on the underlying grid is more consistent mathematically. (I assume mean(rhs) here is the mean of rhs calculated on the immersed boundary grid.)

Member Author:

Correct, mean(rhs) accounts for the immersed boundary. My objective with this was to define a preconditioner that (approximately) solves the same problem as the CG iteration. This means using the mean over the whole grid.

Put another way: if we use the mean over the whole grid, then wouldn't the mean pressure produced by the preconditioner differ from the mean pressure that would minimize the conjugate gradient residual?

It's worth testing though. To implement this properly, we need to know the volume of the immersed region, which will require an additional computation.

if !isnothing(L.δ)
    # Add regularization
    ϕ̄ = mean(ϕ)
    parent(Lϕ) .+= L.δ * ϕ̄
@Yixiao-Zhang (Contributor) commented Oct 17, 2024:

Do we need to mask after this operation? It seems to me that the masked region also gets a non-zero value after this line.

Member Author:

Oh, good catch. While dot and norm should ignore immersed regions, I'm actually not sure they do right now, so this may matter. I will test it (see the sketch below).

end
end
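
A minimal sketch of the masking fix discussed above, using mask_immersed_field! (as imported elsewhere in this thread):

using Oceananigans.ImmersedBoundaries: mask_immersed_field!

# Zero the immersed region after adding the regularization term, so that
# reductions such as dot and norm are not polluted by masked cells.
parent(Lϕ) .+= L.δ * ϕ̄
mask_immersed_field!(Lϕ)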

rhs = CenterField(grid)
operator = RegularizedLaplacian(regularization)
Contributor:

It might be a stupid question, but it seems to me that regularization (δ) is set to a positive number, 1 for example in the demo. However, the Laplacian operator is actually negative semi-definite (although our previous conversations were about positive semi-definite operators), and thus the regularization (or say δ) should be negative as well. Am I missing something?

Member Author:

Hmm, right... I played around with this and found that the solver "doesn't blow up" only if we use positive regularization. But this may actually be a clue that there's a bug somewhere.

Member Author:

I'm now convinced that there is a bug or some conceptual error, given that this doesn't work with $\delta < 0$.

@xkykai (Collaborator) commented Oct 17, 2024

> The regularization seems to stabilize the CG iteration, so that we can iterate all the way to maxiter.

Comparing maxiter = 4096 and maxiter = 10 in the experiment with a regularizer, it appears that the residual is larger for maxiter = 4096. If I'm understanding correctly, does this mean that the CG solver is still weakly unstable, ie the residual grows with the number of iterations? I'm also curious whether you know if the residual decreases with every CG iteration.

Looking at the divergence as well, it even seems to me that maxiter = 10 does better than maxiter = 4096.

@glwagner (Member Author) commented Oct 17, 2024

> The regularization seems to stabilize the CG iteration, so that we can iterate all the way to maxiter.
>
> Comparing maxiter = 4096 and maxiter = 10 in the experiment with a regularizer, it appears that the residual is larger for maxiter = 4096. If I'm understanding correctly, does this mean that the CG solver is still weakly unstable, ie the residual grows with the number of iterations? I'm also curious whether you know if the residual decreases with every CG iteration.
>
> Looking at the divergence as well, it even seems to me that maxiter = 10 does better than maxiter = 4096.

Actually I didn't look into that; I just implemented the regularization and ran a few simulations. I think it would be useful to investigate whether the residual is decreasing. It does seem like there may be a problem with the implementation, given both this and @Yixiao-Zhang's comment about the sign of the regularization.

@glwagner (Member Author)

After yesterday's discussion I made the following changes:

  1. Mask the product / operation so that it is 0 in immersed regions.
  2. Change the sign of regularization so that regularization > 0 implies that $\tilde L$ is negative-definite.

I'm no longer convinced that this PR actually makes progress on #3831. But it could be the basis for further investigation.

I also think that RegularizedPreconditioner needs some improvement; it has a property preconditioner (the non-regularized preconditioner) which leads to silly constructs like pressure_solver.conjugate_gradient_solver.preconditioner.preconditioner.

@glwagner (Member Author)

This PR implements an optional "regularization" of the Laplace operator:

$$ \nabla^2 \mapsto \underbrace{\nabla^2 - \frac{\delta}{V} \int \mathrm{d} V}_{\equiv \tilde L} $$

where $\delta$ is some constant and $\tilde L$ denotes the regularized Laplace operator. Using this to solve for pressure results in the "regularized Poisson equation",

$$ \tilde L p = R $$

which has the property that the mean pressure $\bar p \equiv \frac{1}{V} \int p \mathrm{d} V$ is now determined, since the mean of the regularized Poisson equation yields

$$ \bar p = - \bar R / \delta $$

Similarly $\tilde L c = - \delta c$ for some constant $c$, so $- \delta$ is the zeroth eigenvalue of the regularized Laplace operator corresponding to the constant eigenfunction / eigenvector. With $\delta > 0$ the operator $\tilde L$ is negative definite.

The choice of regularization affects the rate of convergence of the CG iteration. Note that the eigenvalues of the Poisson operator have scales between $1/L^2$ and $1 / \Delta^2$, where $L$ is the linear domain extent and $\Delta$ is the grid spacing (for highly anisotropic grids, take the shortest/finest direction for this scaling argument).
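
For the MWE above ($L = 100$ m and $N = 16$), these bounds evaluate to

$$ \frac{1}{L^2} = 10^{-4} \, \mathrm{m}^{-2} \qquad \text{and} \qquad \frac{1}{\Delta^2} = \left( \frac{N}{L} \right)^2 \approx 2.6 \times 10^{-2} \, \mathrm{m}^{-2} , $$

so the extreme eigenvalue scales differ by a factor of $N^2 = 256$.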

@ali-ramadhan (Member) left a comment:

@glwagner and @Yixiao-Zhang Thank you for looking into all of this!

> I'm no longer convinced that this PR actually makes progress on #3831. But it could be the basis for further investigation.

If merging this PR means we don't need to manually play around with maxiter to avoid blowups, isn't that an improvement, even if the regularization doesn't improve the rate of convergence?

I'm curious to play around with this next week! I think I have a few more tests in mind beyond the MWE from #3831.

return nothing
end

struct DefaultPreconditioner end

function ConjugateGradientPoissonSolver(grid;
                                        regularization = nothing,
Collaborator:

I think it would be clearer to call this the regularization_constant. What do you think?

Member Author:

How about regularizer?

Collaborator:

Works for me!


function regularize_poisson_solution!(p, regularized)
    δ = regularized.regularization
    rhs = regularized.rhs
    mean_p = mean(p)
@Yixiao-Zhang (Contributor) commented Oct 18, 2024:

I may have found why the solver blows up for positive regularization.

Here, p is the solution on the whole grid from the FFT Poisson solver, but mean_p is the mean of p over the immersed boundary grid. In my test, mean_p is usually of order 1e-5, which is much larger than round-off error. I also calculated the eigenvalues of the preconditioner with regularization=0, and I found both negative and positive eigenvalues.

Removing the subtraction of mean_p enables convergence for positive regularization, according to my test.
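
For concreteness, a guess at what that change might look like, assuming the purpose of regularize_poisson_solution! is to impose the mean $\bar p = - \bar R / \delta$ implied by the regularized Poisson equation (hypothetical; the full body of the PR's function is not shown in this thread):

using Statistics: mean

# Hypothetical variant without the mean_p subtraction: shift the
# preconditioned solution by the mean implied by the regularized
# equation instead of first removing its own (immersed-grid) mean.
function regularize_poisson_solution!(p, regularized)
    δ = regularized.regularization
    rhs = regularized.rhs
    parent(p) .-= mean(rhs) / δ
    return nothing
end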

Member Author:

That is interesting! I'll test it. I don't completely understand why this is what's needed, but let's think about it.

@glwagner (Member Author) commented Oct 18, 2024:

I'm still getting NaNs. What regularization did you use? Did you make any other changes? EDIT: the regularization does seem to matter, now I get non-blow-up with regularization ~ 1/L^2.

@Yixiao-Zhang (Contributor) commented Oct 18, 2024:

I tested the code in a simpler case without FlatExtrapolationOpenBoundaryCondition, which may explain why the result is different. Besides, I usually check the distribution of eigenvalues to judge whether the solver is numerically stable, rather than running the simulation. I am looking into the case with FlatExtrapolationOpenBoundaryCondition and aim to understand how it changes the distribution of eigenvalues.

I just ran your demo and got NaNs as well.

@glwagner (Member Author) commented Oct 18, 2024:

Just to write out the idea, we are trying to solve a "regularized" Poisson equation:

$$ \tilde L \phi = R $$

where $\tilde L = \nabla^2 - \delta V^{-1} \int \mathrm{d} V$ is the regularized Laplace operator (on an immersed boundary grid) with the singularity removed.

We're proposing to use a preconditioner that directly inverts (with an FFT --- fast) the "non-immersed" (but regularized) Laplace operator,

$$ L_u = \nabla_u^2 - \delta V_\star^{-1} \int_{\Omega_\star} \mathrm{d} V $$

where $\nabla_u^2$ applies on the "underlying grid" (ie ignoring the immersed boundary). The present discussion is how to define the mean, which is taken over the domain $\Omega_\star$ with volume $V_\star$. The current code takes $V_\star = V$, but we can also consider taking $V_\star = V_u$ and $\Omega_\star = \Omega_u$.

Member Author:

By the way @Yixiao-Zhang, I think this example meaningfully exposes the effect of regularization, because it's only with this kind of open boundary condition that $\bar R \ne 0$. I believe that with impenetrable (and, of course, periodic) boundary conditions $\bar R$ is very small.

Contributor:

I had not realized that $\overline{R} \neq 0$ for this kind of open boundary condition. However, I think the current pressure solver cannot handle FlatExtrapolationOpenBoundaryCondition properly. As far as I understand, the pressure solver is unaware of the open boundary condition, but the open boundary condition should affect how fill_halo_region! works for rhs. The original boundary condition for rhs is $\hat{n}\cdot \nabla r = 0$, but should it be $(\hat{n}\cdot \nabla)^2 r = 0$ for open boundary conditions? (I am not sure.)

Right now, rhs is defined in this way:

rhs = CenterField(grid)

The good news is that the Laplace operator becomes negative definite if we take the open boundary into account. So, there is no need to regularize the Poisson solver. (The preconditioner is still semi-definite, though.)

Member Author:

> the pressure solver is unaware of the open boundary condition

This isn't quite true. The inhomogeneous part of any boundary condition is incorporated into the Poisson equation RHS. This happens implicitly by (1) filling halo regions on the predictor velocity prior to the pressure solve, and then (2) taking the divergence of the predictor velocity. This is why the operator / FFT solve must use homogeneous Neumann conditions for the algorithm to work.
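
A schematic of that sequence, reusing the source-term kernel call quoted earlier in this thread (the surrounding lines are a sketch, not the actual source):

# 1. Fill halo regions on the predictor velocity Ũ; the inhomogeneous
#    boundary data (open boundary conditions included) enters here.
fill_halo_regions!(Ũ)

# 2. The divergence of Ũ becomes the Poisson RHS, so the boundary data
#    is baked into R and the Poisson operator itself can use homogeneous
#    Neumann conditions.
launch!(arch, grid, :xyz, _compute_source_term!, rhs, grid, Δt, Ũ)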
