Line integration #299
base: master
…on for Gauss-Legendre grids, using a precomputed elemental matrix, much the same as how the differentiation matrix is handled. Distributed memory MPI is not presently supported. Tested with test_script/GaussLobattoLegendre_test.jl. The lower limit of the integral is taken to be coord.grid[1].
…stent with chebyshev_radau_points(). This will permit us to reuse some functions from gauss_legendre.jl for line integration.
…ion, using Lagrange polynomial-based integration method from the gauss_legendre.jl module. Note that the FFT-based method cannot support this kind of operation, as integration increases the number of polynomials required to store the primitive of f. The Lagrange polynomial method avoids this problem by doing the integral numerically rather than by relying on relations between orthogonal polynomials to get the primitive. The automatic tests are extended and error constraints relaxed slightly to accommodate the Chebyshev option.
…sing a second-order accurate trapezium rule. An elemental formulation is used to allow quick reuse of functions designed for the spectral methods. (A sketch of this rule follows the commit list below.)
…_Epar!() in em_fields.jl
…eatures of calculate_phi_from_Epar!().
…buted calculation of the potential. The script test_scripts/phi_from_Epar_test.jl has been run to confirm that the code is working for a mix of r and z resolutions on 2, 4, and 8 cores.
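To illustrate the trapezium-rule commit above, here is a minimal sketch of a second-order running primitive on a single element, with $F = 0$ at the element's lower boundary. This is a toy for illustration, not the PR's implementation; the function name `primitive_trapezium` is invented here.

```julia
# Minimal sketch (not the PR's code): second-order trapezium-rule
# primitive on one element, with F = 0 at the element's lower boundary.
function primitive_trapezium(f::AbstractVector, x::AbstractVector)
    F = similar(f)
    F[1] = 0.0
    for i in 2:length(f)
        # accumulate the area of each trapezium panel
        F[i] = F[i-1] + 0.5 * (f[i] + f[i-1]) * (x[i] - x[i-1])
    end
    return F
end

x = collect(range(0.0, 1.0, length = 101))
F = primitive_trapezium(sin.(x), x)  # ≈ 1 .- cos.(x), to second order
```

Per-element results like this would then be stitched together across element boundaries, exactly as in the spectral-element case.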
Does this integration routine exactly invert differentiation where the element boundaries are 'centred'? I feel like that might be a nice property to have numerically (up to rounding errors). The routine can only exactly invert differentiation with (at most) one of: centred element boundaries; positive-velocity upwind element boundaries; negative-velocity upwind element boundaries. It's not immediately obvious to me which (if any!) this routine corresponds to - @mrhardman maybe you've thought about it already?
```julia
function indefinite_integral_elements_to_full_grid!(pf, coord)
    pf2d = coord.scratch_2d
    nelement_local = coord.nelement_local
    ngrid = coord.ngrid
    igrid_full = coord.igrid_full
    j = 1 # the first element
    # copy the first element's primitive directly: its lower boundary is
    # the lower limit of the integral, so no offset is needed
    for i in 1:ngrid
        pf[i] = pf2d[i,j]
    end
    for j in 2:nelement_local
        ilast = igrid_full[ngrid,j-1] # the grid index of the last point in the previous element
        for k in 2:coord.ngrid
            i = coord.igrid_full[k,j] # the index in the full grid
            # offset this element's primitive by the accumulated integral
            # at the end of the previous element, so pf is continuous
            pf[i] = pf2d[k,j] + pf[ilast]
        end
    end
    return nothing
end
```
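As a quick check of the stitching logic, the function can be exercised with a stand-in for `coord` (a NamedTuple carrying the four fields the snippet reads; the layout of `igrid_full` is inferred from the code, so treat this as a sketch):

```julia
# Stand-in for coord: 2 elements of ngrid = 3 points sharing one
# boundary point, so 5 points on the full grid.
# igrid_full[k, j] = full-grid index of local point k in element j.
igrid_full = [1 3;
              2 4;
              3 5]
# Per-element primitives (zero at each element's own lower boundary).
scratch_2d = [0.0 0.0;
              1.0 2.0;
              3.0 1.0]
coord = (; scratch_2d, nelement_local = 2, ngrid = 3, igrid_full)
pf = zeros(5)
indefinite_integral_elements_to_full_grid!(pf, coord)
@show pf  # expected: [0.0, 1.0, 3.0, 5.0, 4.0]
```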
The handling of distributed-MPI is done in calculate_phi_from_Epar!(). It might be nicer to handle it generically within this function, to avoid code repetition? Maybe we could add an optional argument or two that would select whether to preserve the value on the lower boundary or the upper boundary.
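For what it's worth, a rough sketch of the send–receive chaining such a generic option might use (the MPI.jl calls are real, but the communicator, rank layout, and function name are assumptions for illustration; this variant preserves the value on the lower boundary of the lowest block):

```julia
using MPI

# Sketch: chain the integration constant upward through the blocks of a
# distributed coordinate. Each block adds the accumulated primitive from
# the block below, then forwards its own top value to the block above.
function chain_integration_constant!(pf::Vector{Float64}, comm::MPI.Comm)
    rank = MPI.Comm_rank(comm)
    nproc = MPI.Comm_size(comm)
    offset = Ref(0.0)
    if rank > 0
        MPI.Recv!(offset, comm; source = rank - 1, tag = 0)
        pf .+= offset[]   # shift this block by the running constant
    end
    if rank < nproc - 1
        MPI.Send(Ref(pf[end]), comm; dest = rank + 1, tag = 0)
    end
    return nothing
end
```

Preserving the upper boundary instead would just run the chain in the opposite direction, and a point-to-point chain avoids the per-broadcast recomputation cost mentioned below.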
I thought about doing this, but I didn't want to have to implement all the possible choices for which constant to preserve; I just wanted to show that accurate line integration was possible! I am happy to implement the MPI communication as send-receive operations for a general BC in a generic function, if it is judged useful. (I think the broadcasting operation is slower in scaling because all the processes have to recompute something at every broadcast, but I should check my notes.) I also wasn't sure if this operation would be needed anywhere else in the code, and so I left the MPI as a special case for now.
I think that's fine (for now!), but please do add a docstring saying that the indefinite integral functions only work on a single block, and distributed-MPI needs to be implemented elsewhere, but could be refactored into these functions if/when it becomes useful.
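A possible wording for that docstring (the phrasing is mine, not the repo's):

```julia
"""
    indefinite_integral_elements_to_full_grid!(pf, coord)

Stitch the per-element indefinite integrals stored in `coord.scratch_2d`
into the full-grid primitive `pf`. This function (and the other
indefinite-integral functions) only operates on a single block:
reconciling the constant of integration between distributed-MPI blocks
must currently be handled by the caller (see `calculate_phi_from_Epar!()`
in `em_fields.jl`), but could be refactored into these functions if/when
that becomes useful elsewhere.
"""
```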
OK. Do you have a preference for where I put the documentation for how the method works (my addition to the head description of the PR)? The functions are spread around a bit; I would be tempted to put the description for the whole method in a docstring in gauss_legendre.jl.
Sounds good, that's where the main numerical heavy-lifting is done.
I haven't tested or thought about this. I doubt that the method would have any special conservative property, since it does not know anything about centring or upwinding. All the method does to calculate the primitive is apply $F_i = \sum_j A_{ij} f_j$, with the integration matrix $A_{ij} = \int_{-1}^{x_i} l_j(x)\,\mathrm{d}x$, or in discretised form $A_{ij} = \sum_k w_k\, l_j(y_k)$, with $y_k$ and $w_k$ Gauss-quadrature points and weights on $[-1, x_i]$. You can be fairly sure (with evidence in the automatic tests) that the integration is anyway far more accurate than differentiation, for the usual reason that integration adds polynomial order, whereas differentiation takes it away.
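To spell out that last point for a monomial of degree $n$:

$$\frac{\mathrm{d}}{\mathrm{d}x}\,x^{n} = n\,x^{n-1}, \qquad \int_{x_0}^{x} t^{n}\,\mathrm{d}t = \frac{x^{n+1} - x_0^{n+1}}{n+1},$$

so differentiating a degree-$n$ interpolant loses one polynomial order, while integrating it gains one.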
Avoid allocating an intermediate array for -Epar. Co-authored-by: John Omotani <[email protected]>
@johnomotani Added the documentation that hopefully will satisfy you for now. I would like to consider adding …
NB: This PR is based on #291, which should therefore be merged first to reduce the number of apparent changes to review.
A feature building on the Lagrange polynomial methods first introduced in the `gauss_legendre.jl` module. Facility to calculate the primitive of a function $f(x)$, i.e.,

$$F(x) = \int_{x_{\rm min}}^{x} f(x^\prime)\,\mathrm{d}x^\prime,$$

with the constant of integration chosen such that $F(x)=0$ when $x$ is the lowest point on the coordinate grid. This feature is tested for all three discretization options (spectral methods based on Lagrange polynomials with matrix multiplication on elements for the Gauss-Legendre and Chebyshev options, and the trapezium rule for the finite-difference option). An attempt to incorporate the feature into `em_fields.jl` is made, in anticipation that accurate calculation of `phi` will be required to get `Er` in cases with a radial dimension. A direct test of the function `calculate_phi_from_Epar!()` is available by running, e.g., `test_scripts/phi_from_Epar_test.jl`, which prints out the maximum absolute values of error on each process calculating `phi` in a manufactured-solutions test.

@johnomotani @LucasMontoya4 Please make suggestions of documentation, and a suitable test case to debug the run-time implementation of the calculation of `phi`.
Documentation for how the numerical method is implemented: to calculate the primitive $F(x)$ at the i-th collocation point $x_i$, we apply the matrix $A_{ij}$ to the vector $f_j$ from the elemental expression $f(x) = \sum_j f_j l_j(x)$, with $l_i(x)$ the i-th Lagrange interpolating polynomial on the element, i.e., $F_i = \sum_j A_{ij} f_j$. The matrix for integration is defined by

$$A_{ij} = \int_{-1}^{x_i} l_j(x)\,\mathrm{d}x,$$

or in discretised form

$$A_{ij} = \sum_k w_k\, l_j(y_k),$$

with $y_k$ and $w_k$ Gauss-quadrature points and weights, respectively, mapped to the interval $[-1, x_i]$.
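To make the above concrete, here is a self-contained sketch of building $A_{ij}$ on the reference element $[-1,1]$ by Gauss quadrature. This is an illustration, not the code from `gauss_legendre.jl`; FastGaussQuadrature.jl is an assumed dependency, and the function names are invented here.

```julia
using FastGaussQuadrature  # provides gausslegendre(); assumed dependency

# Evaluate the j-th Lagrange interpolating polynomial, defined on the
# collocation points `x`, at the point `z`.
function lagrange_poly(j, x, z)
    p = one(z)
    for k in eachindex(x)
        k == j && continue
        p *= (z - x[k]) / (x[j] - x[k])
    end
    return p
end

# Build A such that F_i = sum_j A[i,j] f_j = ∫_{-1}^{x_i} f(x) dx for f
# expanded in Lagrange polynomials on collocation points `x` ⊂ [-1,1].
function indefinite_integral_matrix(x::AbstractVector)
    n = length(x)
    A = zeros(n, n)
    y, w = gausslegendre(n)  # exact for degree ≤ 2n-1 ≥ deg(l_j) = n-1
    for i in 1:n
        # affine map of nodes/weights from [-1,1] onto [-1, x[i]]
        scale = 0.5 * (x[i] + 1.0)
        shift = 0.5 * (x[i] - 1.0)
        for j in 1:n, k in 1:n
            A[i, j] += scale * w[k] * lagrange_poly(j, x, scale * y[k] + shift)
        end
    end
    return A
end

# Quick check on the 3-point Gauss-Lobatto grid, with f(x) = x², so that
# F(x_i) = (x_i³ + 1)/3 exactly.
x = [-1.0, 0.0, 1.0]
A = indefinite_integral_matrix(x)
@show A * (x .^ 2)  # ≈ [0.0, 1/3, 2/3]
```

Because the quadrature is exact for the Lagrange polynomials, the only approximation in $F_i = \sum_j A_{ij} f_j$ is the polynomial interpolation of $f$ itself.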