Global mass conservation and diffusion equation #124

Open · JanSuntajs opened this issue Sep 9, 2024 · 4 comments
Labels: question (Further information is requested)

JanSuntajs commented Sep 9, 2024

Hello, I am following the code in Example 106, since I am interested in implementing the nonlinear diffusion case in 1D for some benchmarking purposes. However, upon running the code, I've noticed the mass is not conserved globally in the example.

If I understand the tutorials and GitHub issues correctly, homogeneous Neumann boundary conditions are the default when no boundary conditions are specified, so the global mass of the system should remain conserved at all times. Even a slight increase of the final simulation time in the example, followed by summing the solution vectors, shows that this is not the case. I am appending a slightly modified example below in which I changed the initial condition (it is now a step function) and am simulating the standard linear diffusion equation for simplicity.
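
The continuous-level argument, as I understand it, is that with zero normal flux $D\,\partial_n u = 0$ on the boundary, the divergence theorem gives $\frac{d}{dt}\int_\Omega u\,dx = \int_\Omega \nabla\cdot(D\nabla u)\,dx = \int_{\partial\Omega} D\,\partial_n u\,ds = 0$.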

module CheckMassConservation
using Printf
using VoronoiFVM
using ExtendableGrids
using GridVisualize



function step(x, a, b)
    return (a < x < b) ? 1.0 : 0.0
end

function main(; n = 200, m = 2, Plotter = nothing, verbose = false,
              unknown_storage = :sparse, tend = 1., tstep = 0.001, assembly = :edgewise)

    # Create a one-dimensional discretization
    h = 1.0 / convert(Float64, n / 2)
    X = collect(-1:h:1)
    grid = simplexgrid(X)

    # Flux function which describes the flux
    # between neighboring control volumes
    function flux!(f, u, edge)
        f[1] = u[1, 1] - u[1, 2]
    end

    # Storage term
    function storage!(f, u, node)
        f[1] = u[1]
    end

    # Create a physics structure
    physics = VoronoiFVM.Physics(; flux = flux!,
                                 storage = storage!)

    # Create a finite volume system - either
    # in the dense or the sparse version.
    # The difference is in the way the solution object
    # is stored - as a dense or a sparse matrix.
    sys = VoronoiFVM.System(grid, physics;
                            unknown_storage = unknown_storage, assembly = assembly)

    # Add species 1 to region 1
    enable_species!(sys, 1, [1])

    # boundary_neumann!(sys, 1, 1, 0.)
    # boundary_neumann!(sys, 1, 2, 0.)
    # Create a solution array
    inival = unknowns(sys)
    t0 = 0.001

    # Broadcast the initial value
    inival[1, :] .= map(x -> step(x, -0.5, 0.5), X)

    # Create solver control info for constant time step size
    control = VoronoiFVM.NewtonControl()
    control.verbose = verbose
    control.Δt_min = tstep
    control.Δt_max = tstep
    control.Δt = tstep
    control.Δu_opt = 1

    tsol = solve(sys; inival, times = [t0, tend], control)

    return grid, tsol, sum.(tsol.u)
end

using Test

grid, tsol, sumsol = main()
println(sumsol)
end

Given that the sums of the solution vectors increase from $\approx 462$ within a rather short time span, something is clearly not right. Am I implementing the boundary conditions incorrectly (the same also happens if I explicitly specify them using boundary_neumann!(...)), is there an error in my post-processing, or should I perhaps impose stricter solver tolerances?

Apart from this, is it possible there is a bug in the code that the tests do not catch?

Regards,
Jan

EDIT:

After some more tinkering and trying out different initial profiles, it looks as if I chose an initial profile that was too difficult for the solver to handle in the initial stages of relaxation. I will report back once I am convinced of that.

j-fu (Member) commented Sep 10, 2024

Hi, thanks for bringing this up.

The subtle point is the following: the "mass" to be conserved on the continuous level is $\int_\Omega u\,dx$. The discrete analogue of this integral is $\sum_i u_i |\omega_i|$, where $\omega_i$ is the control volume around node $x_i$.

For a grid with $N$ nodes and step size $h$, we have $|\omega_i| = h$ for all nodes except $i = 1$ and $i = N$, where $|\omega_1| = |\omega_N| = h/2$. In the calculation with sum, the first and last values therefore count twice as much as they should. As mass diffuses towards the boundary nodes over time, the over-weighted boundary contributions grow, which explains the slight growth of the summed solution.

The proper way to work here would be to define e.g.

mass(u)=integrate(sys,storage!,u)[1,1]

and return mass.(tsol.u). See https://j-fu.github.io/VoronoiFVM.jl/stable/post/#Solution-integrals
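
A minimal sketch of how this fits the example above, assuming X, sys, storage!, and tsol are available in one scope (in the posted code they are local to main); weights, naive, byhand, and proper are made-up names purely for illustration:

    # Discrete mass with the proper control volume weights, written out by
    # hand for the equidistant 1D grid: |ω_i| = h in the interior, h/2 at
    # the two boundary nodes.
    h = X[2] - X[1]
    weights = fill(h, length(X))
    weights[1] = weights[end] = h / 2

    naive  = [sum(u) for u in tsol.u]                  # over-counts the boundary nodes
    byhand = [sum(weights .* u[1, :]) for u in tsol.u] # sum_i u_i |ω_i|

    # The same quantity via the built-in post-processing routine:
    mass(u) = integrate(sys, storage!, u)[1, 1]
    proper = mass.(tsol.u)

    # byhand and proper should agree and stay essentially constant in time,
    # while naive drifts upwards as mass reaches the boundary nodes.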

EDIT: Note to self: make an example from this

JanSuntajs (Author) commented
Thanks for your prompt reply!

Regarding the following lines:

The proper way to work here would be to define e.g.

mass(u)=integrate(sys,storage!,u)[1,1]

and return mass.(tsol.u). See https://j-fu.github.io/VoronoiFVM.jl/stable/post/#Solution-integrals

EDIT: Note to self: make an example from this

I was already looking this routine up, but didn't know how to correctly use it. Since the routine takes sys as the input parameter, I assume it can also correctly handle different geometries and their corresponding Jacobian factors?

On another note (note #1, since there will be 2), I have tried implementing weighted averaging of the diffusion constants for the $u$-dependent diffusion coefficient. I have succeeded by doing this:

    # Flux function which describes the flux
    # between neighboring control volumes
    function flux!(f, u, edge, data)

        @inbounds begin

            _D::Float64 = data.D
            # correction to the standard diffusion
            _k::Float64 = data.k
            
            # get the corrected diffusion constants
            # at nodes

            d1::Float64 = _D * (1. + _k * u[1, 1])#_corrected_D(u[1, 1], _D, _k)
            d2::Float64 =  _D * (1. + _k * u[1, 2])#_corrected_D(u[1, 2], _D, _k)
            # (d1, d2) = _corrected_D.(u[1, :], _D, _k)
            # get the harmonic average of the diffusion constants
            # df::Float64 = _harm_D(d1, d2)
            if d1 == 0 || d2 == 0
                df = 0.
            elseif (d1 + d2) == 0
                df = 0.
            else
                df::Float64 = 2. * d1 * d2 / (d1 + d2)
            end
            
            f[1] = df * (u[1, 1] - u[1, 2])

        end
    end

However, when I tried to break the above code into smaller functions (which would improve readability in more complex cases), allocation warnings were issued in the assembly loop. Is there a way around this? I have read about the allocation warnings, but I am unsure what the culprit would be in this case.

My note #2 is actually related to the former one, since I would also like to implement inverse-distance weighting; using

nvols = nodevolumes(sys)

I was able to obtain the node volumes outside the flux function. Then, inside the flux function, I call

x1 = nvols[edge.node[1]]

which again prompts allocation warnings. Is there a way around this as well? I've noticed that no allocation warnings were issued when I tested the tutorial on the generic operators, which probably attempts to achieve something similar (in the sense that it accesses and indexes an external array).

Thank you and best regards,

Jan

j-fu (Member) commented Sep 17, 2024

As for the flux function:
It is counterproductive to declare the types of intermediate values: this prevents computing the full Jacobian via automatic differentiation. Just leave all types to auto-detection. (Coming from C++ it took me a while as well to grasp this; OTOH, in a way, Julia functions are similar to C++20 generic lambdas.)

After a short glance at the code, the only line where I would expect allocations is (d1, d2) = _corrected_D.(u[1, :], _D, _k). Here, u[1, :] creates a new heap-allocated vector. It might help to prefix this with @views.
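
For illustration, a sketch of how the flux above might be factored into small helpers with the annotations removed; corrected_D and harm_D are made-up names, and the point is simply that all intermediate values stay generic, so the dual numbers used by automatic differentiation can pass through:

    # Hypothetical helpers: no type annotations, so they work both for
    # Float64 and for the dual numbers used by autodiff.
    corrected_D(u, D, k) = D * (1 + k * u)
    harm_D(d1, d2) = (d1 + d2) == 0 ? zero(d1) : 2 * d1 * d2 / (d1 + d2)

    function flux!(f, u, edge, data)
        d1 = corrected_D(u[1, 1], data.D, data.k)
        d2 = corrected_D(u[1, 2], data.D, data.k)
        f[1] = harm_D(d1, d2) * (u[1, 1] - u[1, 2])
        return nothing
    end

Since the harmonic average is already zero whenever one of the two coefficients vanishes, the separate d1 == 0 || d2 == 0 branch from the original version is not needed.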

j-fu (Member) commented Sep 17, 2024

As for nodevolumes, it is a bit unfortunate that, at the moment, these data can only be created after creating the system.
In v2.0 (which will be registered this week) it will be possible to pass data to solve. So you can create the system with a prototype of the data, compute the node volumes, and then solve with another data instance containing the node volumes. Avoiding access to global values is one of the steps to take to avoid allocations.
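
A rough sketch of that workflow, assuming the data keyword for solve described above; the MyData container and its field names are made up for illustration, and grid, inival, t0, tend, and control are taken from the first example:

    # Hypothetical data container; nvols is filled in once the system exists.
    Base.@kwdef struct MyData{T}
        D::Float64 = 1.0
        k::Float64 = 1.0
        nvols::T = nothing
    end

    function flux!(f, u, edge, data)
        # an inverse-distance style weighting could read
        # data.nvols[edge.node[1]] and data.nvols[edge.node[2]] here
        f[1] = data.D * (u[1, 1] - u[1, 2])
        return nothing
    end

    storage!(f, u, node, data) = (f[1] = u[1]; nothing)

    # Create the system with a prototype of the data ...
    physics = VoronoiFVM.Physics(; flux = flux!, storage = storage!, data = MyData())
    sys = VoronoiFVM.System(grid, physics; unknown_storage = :sparse)
    enable_species!(sys, 1, [1])

    # ... compute the node volumes, and solve with a second data instance
    # that carries them.
    data = MyData(; nvols = nodevolumes(sys))
    tsol = solve(sys; inival, times = [t0, tend], control, data)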

jpthiele added the question (Further information is requested) label on Nov 21, 2024