
Line integration #299

Open: wants to merge 14 commits into base: master
Conversation

@mrhardman (Collaborator) commented Jan 2, 2025

NB: This PR is based on #291, which should therefore be merged first to reduce the number of apparent changes to review.

A feature building on the Lagrange polynomial methods first introduced in the gauss_legendre.jl module: a facility to calculate the primitive of a function $f(x)$, i.e.,

$$F(x) = \int^x f(x^\prime) d x^\prime$$

with the constant of integration chosen such that $F(x)=0$ when $x$ is the lowest point on the coordinate grid. This feature is tested for all three discretization options (spectral methods based on Lagrange polynomials with matrix multiplication on elements for the Gauss-Legendre and Chebyshev options, and the trapezium rule for the finite-difference option). An attempt to incorporate the feature into em_fields.jl is made, in anticipation that an accurate calculation of phi will be required to get Er in cases with a radial dimension. A direct test of the function calculate_phi_from_Epar!() is available by running, e.g.,

mpirun -n 8 --output-filename mpitest julia --project -O3 test_scripts/phi_from_Epar_test.jl

which prints out the maximum absolute values of error on each process calculating phi in a manufactured solutions test.
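As a standalone illustration of what such a manufactured-solutions check does (a Python sketch, not the Julia test script; the manufactured potential phi(z) = sin(z) is an assumption for this example): since Epar = -dphi/dz, integrating Epar back along z with a trapezium rule recovers phi up to second-order error.

```python
import numpy as np

# Manufactured solution (assumed for this sketch): phi(z) = sin(z),
# so the parallel electric field is Epar = -dphi/dz = -cos(z).
z = np.linspace(0.0, 2.0 * np.pi, 201)
Epar = -np.cos(z)

# phi(z) = -int_{z0}^{z} Epar dz', with the constant chosen so phi(z0) = sin(z0) = 0.
dz = np.diff(z)
phi = np.concatenate(([0.0], -np.cumsum(0.5 * (Epar[1:] + Epar[:-1]) * dz)))

# Maximum absolute error against the manufactured solution;
# second-order trapezium error, below 1e-3 at this resolution.
print(np.max(np.abs(phi - np.sin(z))))
```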

@johnomotani @LucasMontoya4 Please make suggestions of documentation, and a suitable test case to debug the run-time implementation of the calculation of phi.

Documentation for how the numerical method is implemented: to calculate the primitive $F(x)$ at the i-th collocation point $x_i$ we apply the matrix $A_{ij}$ to the vector $f_j$ from the elemental expression $f(x) = \sum_j f_j l_j(x)$, with $l_i(x)$ the i-th Lagrange interpolating polynomial on the element, i.e., $F_i = \sum_j A_{ij} f_j$. The matrix for integration is defined by

$$A_{ij} = \left(\frac{x_i + 1}{2}\right) \int^1_{-1} l_j \left( \frac{(x_i + 1)y + x_i - 1}{2} \right) dy,$$

or in discretised form

$$A_{ij} = \left(\frac{x_i + 1}{2}\right) \sum_k l_j \left( \frac{(x_i + 1)y_k + x_i - 1}{2} \right) w_k,$$

with $y_k$ and $w_k$ Gauss-quadrature points and weights, respectively.
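The discretised formula above can be exercised numerically. The following is an illustrative Python sketch of the formula on a single element, not the gauss_legendre.jl implementation; the choice of Chebyshev-Lobatto nodes and the quadrature order are assumptions made for the example.

```python
import numpy as np

def lagrange_basis(x_nodes, j, t):
    # l_j(t) = prod over m != j of (t - x_m) / (x_j - x_m)
    val = np.ones_like(t, dtype=float)
    for m, xm in enumerate(x_nodes):
        if m != j:
            val *= (t - xm) / (x_nodes[j] - xm)
    return val

def integration_matrix(x_nodes, nquad=16):
    # A[i, j] = ((x_i + 1)/2) * sum_k l_j( ((x_i + 1) y_k + x_i - 1)/2 ) w_k,
    # with (y_k, w_k) Gauss-Legendre points and weights on [-1, 1].
    y, w = np.polynomial.legendre.leggauss(nquad)
    n = len(x_nodes)
    A = np.zeros((n, n))
    for i, xi in enumerate(x_nodes):
        t = ((xi + 1.0) * y + xi - 1.0) / 2.0  # maps [-1, 1] -> [-1, x_i]
        for j in range(n):
            A[i, j] = 0.5 * (xi + 1.0) * np.sum(lagrange_basis(x_nodes, j, t) * w)
    return A

# Check: the primitive of f(x) = 3x^2 on [-1, 1] with F(-1) = 0 is x^3 + 1.
x = np.cos(np.pi * np.arange(5)[::-1] / 4)  # 5 Chebyshev-Lobatto nodes, ascending
A = integration_matrix(x)
F = A @ (3.0 * x**2)
print(np.max(np.abs(F - (x**3 + 1.0))))  # machine-precision error
```

The interpolation of a quadratic on five nodes is exact, so the only residual error is floating-point rounding.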

…on for Gauss-Legendre grids, using a precomputed elemental matrix, much the same as how the differentiation matrix is handled. Distributed memory MPI is not presently supported. Tested with test_script/GaussLobattoLegendre_test.jl. The lower limit of the integral is taken to be coord.grid[1].
…stent with chebyshev_radau_points(). This will permit us to reuse some functions from gauss_legendre.jl for line integration.
…ion, using the Lagrange polynomial-based integration method from the gauss_legendre.jl module. Note that the FFT-based method cannot support this kind of operation, as integration increases the number of polynomials required to store the primitive of f. The Lagrange polynomial method avoids this problem by doing the integral numerically rather than by relying on relations between orthogonal polynomials to get the primitive. The automatic tests are extended and error constraints relaxed slightly to accommodate the Chebyshev option.
…sing a second-order accurate trapezium rule. An elemental formulation is used to allow quick reuse of functions designed for the spectral methods.
…buted calculation of the potential. The script test_scripts/phi_from_Epar_test.jl has been run to confirm that the code is working for a mix of r and z resolutions on 2, 4, and 8 cores.
@johnomotani johnomotani changed the base branch from master to wall-bc-Jacobian January 11, 2025 13:00
@johnomotani johnomotani changed the base branch from wall-bc-Jacobian to master January 11, 2025 13:00
@johnomotani (Collaborator) left a comment:

Does this integration routine exactly invert differentiation where the element boundaries are 'centred'? I feel like that might be a nice property to have numerically (up to rounding errors). The routine can only exactly invert differentiation with (at most) one of: centred element boundaries; positive-velocity upwind element boundaries; negative-velocity upwind element boundaries. It's not immediately obvious to me which (if any!) this routine corresponds to - @mrhardman maybe you've thought about it already?

Comment on lines +57 to +74
function indefinite_integral_elements_to_full_grid!(pf, coord)
    pf2d = coord.scratch_2d
    nelement_local = coord.nelement_local
    ngrid = coord.ngrid
    igrid_full = coord.igrid_full
    j = 1 # the first element
    for i in 1:ngrid
        pf[i] = pf2d[i,j]
    end
    for j in 2:nelement_local
        ilast = igrid_full[ngrid,j-1] # the grid index of the last point in the previous element
        for k in 2:ngrid
            i = igrid_full[k,j] # the index in the full grid
            pf[i] = pf2d[k,j] + pf[ilast]
        end
    end
    return nothing
end
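The stitching this function performs can be mimicked in a simplified NumPy sketch (an illustration, not the coord data structure; it assumes each element's primitive starts from zero at the element's first node and that adjacent elements share a boundary node):

```python
import numpy as np

def elements_to_full_grid(pf2d):
    # pf2d[k, j]: elemental primitive on element j at local node k,
    # with pf2d[0, j] == 0. Adjacent elements share a boundary node,
    # so the full grid has ngrid + (nelement - 1) * (ngrid - 1) points.
    ngrid, nelement = pf2d.shape
    n_full = ngrid + (nelement - 1) * (ngrid - 1)
    pf = np.empty(n_full)
    pf[:ngrid] = pf2d[:, 0]  # first element copied directly
    for j in range(1, nelement):
        offset = pf[(ngrid - 1) * j]  # accumulated value at the shared boundary node
        start = (ngrid - 1) * j + 1
        pf[start:start + ngrid - 1] = pf2d[1:, j] + offset
    return pf

# Two linear elements covering [0, 1] and [1, 2] for f(x) = 1, so F(x) = x.
pf2d = np.array([[0.0, 0.0], [1.0, 1.0]])  # elemental primitives, ngrid = 2
print(elements_to_full_grid(pf2d))  # [0. 1. 2.]
```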
Collaborator

The handling of distributed-MPI is done in calculate_phi_from_Epar!(). It might be nicer to handle it generically within this function, to avoid code repetition? Maybe we could add an optional argument or two that would select whether to preserve the value on the lower boundary or the upper boundary.

Collaborator Author

I thought about doing this, but I didn't want to have to implement all the possible choices for which constant to preserve; I just wanted to show that accurate line integration was possible! I am happy to implement the MPI communication as send-receive operations for a general BC in a generic function, if it is judged useful. (I think the broadcasting operation scales worse because all the processes have to recompute something on every broadcast, but I should check my notes.) I also wasn't sure if this operation would be needed anywhere else in the code, and so I left the MPI as a special case for now.

Collaborator

I think that's fine (for now!), but please do add a docstring saying that the indefinite integral functions only work on a single block, and distributed-MPI needs to be implemented elsewhere, but could be refactored into these functions if/when it becomes useful.

Collaborator Author

OK. Do you have a preference for where I put the documentation for how the method works (my addition to the description at the head of the PR)? The functions are spread around a bit; I would be tempted to put the description for the whole method in a docstring in gauss_legendre.jl.

Collaborator

Sounds good, that's where the main numerical heavy-lifting is done.

moment_kinetics/src/em_fields.jl (outdated; resolved)
@johnomotani johnomotani added the enhancement New feature or request label Jan 11, 2025
@mrhardman (Collaborator Author)

> Does this integration routine exactly invert differentiation where the element boundaries are 'centred'? I feel like that might be a nice property to have numerically (up to rounding errors). The routine can only exactly invert differentiation with (at most) one of: centred element boundaries; positive-velocity upwind element boundaries; negative-velocity upwind element boundaries. It's not immediately obvious to me which (if any!) this routine corresponds to - @mrhardman maybe you've thought about it already?

I haven't tested or thought about this. I doubt that the method would have any special conservative property, since it does not know anything about centring or upwinding. All the method does to calculate the primitive $F(x)$ at the i-th collocation point $x_i$ is apply the matrix $A_{ij}$ to the vector $f_j$ from the elemental expression $f(x) = \sum_j f_j l_j(x)$, with $l_i(x)$ the i-th Lagrange interpolating polynomial on the element, i.e., $F_i = \sum_j A_{ij} f_j$. The matrix for integration is defined by

$$A_{ij} = \left(\frac{x_i + 1}{2}\right) \int^1_{-1} l_j \left( \frac{(x_i + 1)y + x_i - 1}{2} \right) dy,$$

or in discretised form

$$A_{ij} = \left(\frac{x_i + 1}{2}\right) \sum_k l_j \left( \frac{(x_i + 1)y_k + x_i - 1}{2} \right) w_k,$$

with $y_k$ and $w_k$ Gauss-quadrature points and weights, respectively.

You can be fairly sure (with evidence in the automatic tests) that the integration is anyway far more accurate than differentiation for the usual reason that integration adds polynomial order, whereas differentiation takes it away.
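This order-raising versus order-lowering effect is easy to see in a quick experiment (an illustrative Python sketch using NumPy's Chebyshev helpers, not the moment_kinetics discretization; the node count and test function are assumptions): interpolate exp(x) on a handful of nodes, then compare the error of the interpolant's derivative with that of its antiderivative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
x = np.cos(np.pi * np.arange(n) / (n - 1))  # Chebyshev-Lobatto nodes on [-1, 1]
c = C.chebfit(x, np.exp(x), n - 1)          # degree n-1 interpolant of exp

xx = np.linspace(-1.0, 1.0, 1001)
# Error of the differentiated interpolant against exp'(x) = exp(x).
deriv_err = np.max(np.abs(C.chebval(xx, C.chebder(c)) - np.exp(xx)))
# Error of the antiderivative (with F(-1) = 0) against exp(x) - exp(-1).
integ_err = np.max(np.abs(C.chebval(xx, C.chebint(c, lbnd=-1)) - (np.exp(xx) - np.exp(-1))))

print(deriv_err, integ_err)  # integration error is markedly smaller
```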

@mrhardman (Collaborator Author)

@johnomotani Added the documentation that hopefully will satisfy you for now.

I would like to consider adding test_scripts/phi_from_Epar_test.jl to the automatic tests, if you think it is necessary, but I don't know how to handle the requirement for the element splits, which must be chosen based on the number of cores available. If this is too hard for now, we can stop here.

Labels
enhancement New feature or request
Development

Successfully merging this pull request may close these issues.

Spectrally accurate 1D indefinite integration
2 participants