Merge fkpl collisions #149

Merged — 338 commits merged into master from merge_fkpl_collisions on Dec 14, 2023

Conversation

@mrhardman (Collaborator) commented Nov 20, 2023:

Pull request containing the weak-form Fokker-Planck self-collision operator for ions and associated features (check-in tests, underlying ability to support cross-ion-species collisions). Creating this request now to gather feedback before the merge. The major additions are the fokker_planck*.jl modules and the gauss_legendre.jl module, which provides the weak-form matrices necessary for the implementation.
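As a rough orientation for what "weak-form matrices" means here, the sketch below builds the single-element mass and stiffness matrices from Gauss-Lobatto-Legendre points and weights. It is a generic illustration, not the gauss_legendre.jl code; the function and variable names are made up for the example, and FastGaussQuadrature (already used for the quadrature work in this PR) is assumed to be available.

```julia
# Generic sketch only: weak-form matrices on a single Gauss-Lobatto-Legendre
# element on [-1, 1]. Not the gauss_legendre.jl implementation; names are
# illustrative. Assumes the FastGaussQuadrature and LinearAlgebra packages.
using FastGaussQuadrature, LinearAlgebra

# Barycentric-Lagrange differentiation matrix: D[i, j] = l_j'(x[i]).
function derivative_matrix(x)
    n = length(x)
    # barycentric weights for the Lagrange basis on the points x
    w = [1.0 / prod(x[j] - x[k] for k in 1:n if k != j) for j in 1:n]
    D = zeros(n, n)
    for i in 1:n, j in 1:n
        i == j && continue
        D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
    end
    for i in 1:n
        # negative-sum trick for the diagonal entries
        D[i, i] = -sum(D[i, :])
    end
    return D
end

ngrid = 9
x, wgts = gausslobatto(ngrid)   # collocation points and quadrature weights
D = derivative_matrix(x)

# Mass and stiffness matrices evaluated with the Lobatto quadrature rule:
M = Diagonal(wgts)              # M[i,j] ≈ ∫ l_i l_j dx (diagonal for this rule)
K = D' * Diagonal(wgts) * D     # K[i,j] ≈ ∫ l_i' l_j' dx

# Weak second derivative of data f on the element, surface terms omitted:
f = @. exp(-x^2)
d2f_weak = M \ (-(K * f))
```

The weak second derivative of data on the element then comes from solving M*d2f = -K*f plus the surface terms from integration by parts, which is the structure the boundary-term discussion further down the thread refers to.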

To do list (before merging):

  • (COMPLETE) Include gauss_legendre.jl methods in calculus_tests.jl
  • (COMPLETE) Tests for second derivative function provided by the gauss_legendre.jl module.
  • (COMPLETE) Remove Q from second_derivative functions for chebyshev methods.
  • Retest the manufactured solutions tests (@johnomotani's Krook operator appears to break the MMS tests, again).
  • (COMPLETE) Clean up the earlier "strong-form" implementation of the collision operator.
  • (COMPLETE) Test the collision operator in a domain with sheath boundary conditions to check for reasonable behaviour.
  • (COMPLETE) Structure calculus.jl and the chebyshev.jl, gauss_legendre.jl modules in a fashion that is compatible with the new structure in master.
  • (COMPLETE) Example input file.
  • (COMPLETE) Instructions/checks for when to use "chebyshev_pseudospectral" vs. "gausslegendre_pseudospectral" options.
  • (COMPLETE) Determine how to usefully store interactive tests scripts "2D_FEM_assembly_test.jl" "fkpl_functional_test.jl" and "GaussLobattoLegendre_test.jl".
  • Any other business (keep this list updated).

To do list (after merging):

mrhardman and others added 30 commits August 2, 2023 13:43
…ms to show a minimum achievable error of ~ 1e-7 that may be due to the limits of available precision.
…g of printout, and addition of L2 norm for diagnostic tests
…testing, and switch to a flow-free Maxwellian for testing.
…where there is a divergence. The previous method put the divergence-accommodating grid wherever ielement_vpa or ielement_vperp were separately the indices for a diverging element, when divergences only occur when both element indices match those of the diverging 2D element.
…sion integration should be possible for F with ngrid = 17 and nelement = 8.
…loop, using Gauss-Legendre points with 2*ngrid points in all elements on the primed grid.
…emental integration weights. Currently the out-of-the-box cubature rule package is many orders of magnitude slower than the FastGaussQuadrature-based quadrature.
…)/sqrt(1-z) integrand to 10^-15 precision. ArbNumerics is only leveraged to improve accuracy of K(z), exp, log, and sqrt functions. Extra accuracy is not used in the Gauss Laguerre quadrature, which is still computed by FastGaussQuadrature.
…iptic integrals, sqrt and exp functions in the potential integrands. A significant slowdown is observed in the standard quick test with ngrid = 5 and nelement = 2, from a compute time of order seconds to a compute time of order minutes.
…By looking at a single field location we can speed up testing, at the cost of not seeing the largest possible errors over all vpa and vperp.
… consisting of Gauss-Legendre points far from the divergences and Gauss-Laguerre in the points closest to the divergence.
…ors in the output potentials as a function of vpa and vperp are now extremely localised to vperp element boundaries for vpa small, at the cost of a longer evaluation time.
…re to use a quadrature for vperp integration that has divergence accommodation only in the (vpa,vperp) element where the divergence is present. In principle the same change has to be made to the loops over ivpa and ivperp within each element.
…e for Lagrange polynomial integrals where divergence in integrand is present (where the divergence lines up with the collocation point where the polynomial is 1).
…ure within a 2D element, for all Lagrange polynomials
…t_kinetics code and standalone test script demonstrating the correct implementation.
@mrhardman (Collaborator, Author) commented:

I have noticed that in initial_conditions.jl we are now using the centred version of reconcile_element_boundaries_MPI! rather than the upwind version. This might explain the bugs that I am seeing with spikes at element boundaries on distributed memory -- commits to follow. We should make sure distributed memory is tested to prevent bugs like this creeping in.

@views reconcile_element_boundaries_MPI!(f,

@views reconcile_element_boundaries_MPI!(pdf,

@views reconcile_element_boundaries_MPI!(f,

@views reconcile_element_boundaries_MPI!(pdf,
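For reference, the distinction being discussed can be sketched in isolation. This is purely illustrative and not the moment_kinetics routine; the names below are hypothetical.

```julia
# Illustrative sketch, not the reconcile_element_boundaries_MPI! implementation.
# Two neighbouring subdomains each hold their own copy of a shared
# element-boundary point; `f_lower`/`f_upper` are those copies and `speed` is
# the advection speed at that point (hypothetical names).

# "Centred" reconciliation: average the two copies.
centred_reconcile(f_lower, f_upper) = 0.5 * (f_lower + f_upper)

# "Upwind" reconciliation: keep the copy from the side the flow comes from.
function upwind_reconcile(f_lower, f_upper, speed)
    speed > 0 && return f_lower   # flow arrives from the lower subdomain
    speed < 0 && return f_upper   # flow arrives from the upper subdomain
    return 0.5 * (f_lower + f_upper)
end
```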

@johnomotani (Collaborator) commented:

> Do we have any ideas on why the MacOS checks are failing consistently?

Don't really know. Sometimes seems to just get timed out for being slow, but sometimes they segfault or seem to hang. My current hope is that when we move the postprocessing to separate packages, getting rid of some dependencies (especially Python) might help.

…ng of element boundaries to upwind choice for element boundaries. This requires extra dummy arrays to be passed to the boundary condition functions.
@johnomotani (Collaborator) commented:

> I have noticed that in initial_conditions.jl we are now using the centred version of reconcile_element_boundaries_MPI! rather than the upwind version. This might explain the bugs that I am seeing with spikes at element boundaries on distributed memory -- commits to follow. We should make sure distributed memory is tested to prevent bugs like this creeping in.

What's changed? I can't see that we ever used a different version of reconcile_element_boundaries_MPI!() in these functions that enforce boundary conditions. I also wouldn't expect to - the element boundary points at MPI boundaries should just have the same value on both processes. There is no 'advection speed' passed to these functions to use for upwinding.

@mrhardman (Collaborator, Author) commented:

@johnomotani It would be handy if you could take a look at the last commit and give your comments. It doesn't fix the bug in issue #169, but I think it is an important correction.

@mrhardman (Collaborator, Author) commented:

> I have noticed that in initial_conditions.jl we are now using the centred version of reconcile_element_boundaries_MPI! rather than the upwind version. This might explain the bugs that I am seeing with spikes at element boundaries on distributed memory -- commits to follow. We should make sure distributed memory is tested to prevent bugs like this creeping in.
>
> What's changed? I can't see that we ever used a different version of reconcile_element_boundaries_MPI!() in these functions that enforce boundary conditions. I also wouldn't expect to - the element boundary points at MPI boundaries should just have the same value on both processes. There is no 'advection speed' passed to these functions to use for upwinding.

Perhaps I am getting confused. In any case, when we reconcile the boundaries here, I would have thought that we would want to take the upwind value as the "correct" value of the pdf. Do you disagree?

@johnomotani (Collaborator) commented:

> Perhaps I am getting confused. In any case, when we reconcile the boundaries here, I would have thought that we would want to take the upwind value as the "correct" value of the pdf. Do you disagree?

I think it shouldn't make any difference. Communication was already taken care of in the derivative routines, so the two values should be identical. The communication being done in the boundary conditions functions is, I think, only to be doubly sure that some rounding error difference does not cause the values to diverge.
If it did make any difference, I think your update would not be quite correct, as the boundary conditions are applied at the end of a time-step stage and after the RK update is done, so the distribution function, etc. are the updated values, but the adv_fac was calculated with the old values, so it doesn't seem consistent to use it here.

So I think you'll find that 816e058 makes no difference at all, and we should revert it. If it does make any difference, then I am wrong and need to look much harder at the communication in those boundary condition functions.

@mrhardman (Collaborator, Author) commented:

> Perhaps I am getting confused. In any case, when we reconcile the boundaries here, I would have thought that we would want to take the upwind value as the "correct" value of the pdf. Do you disagree?
>
> I think it shouldn't make any difference. Communication was already taken care of in the derivative routines, so the two values should be identical. The communication being done in the boundary conditions functions is, I think, only to be doubly sure that some rounding error difference does not cause the values to diverge. If it did make any difference, I think your update would not be quite correct, as the boundary conditions are applied at the end of a time-step stage and after the RK update is done, so the distribution function, etc. are the updated values, but the adv_fac was calculated with the old values, so it doesn't seem consistent to use it here.
>
> So I think you'll find that 816e058 makes no difference at all, and we should revert it. If it does make any difference then I am wrong and need to look much harder at the communication in those boundary condition functions.

OK. I am open to doing this. There was a minor bug that I did catch there where one of the advect structs was used incorrectly. I should have fixed it separately but it was tempting to commit it all at once.

@johnomotani (Collaborator) commented:

Ah, sorry - l.1016-1017 in gauss_legendre.jl

@. QQ[1,:] -= lobatto.Dmat[1,:]/scale_factor
@. QQ[coord.ngrid,:] += lobatto.Dmat[coord.ngrid,:]/scale_factor

@johnomotani (Collaborator) commented:

Those boundary terms end up being something like the difference of the first derivative evaluated in the elements on either side of the boundary, once you add the contributions from the two elements that share the boundary point, right? So for smooth input data they converge to zero as the resolution increases? That probably makes them a bit hard to test. Maybe you could cook something up like calculate the second derivative of abs(sin(x))? I don't know if it'd be possible to calculate analytically the expected result for that though...
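For reference, away from its zeros the expected result is available in closed form. Writing $k = 2\pi/L$,

$$\frac{d^2}{dx^2}\,\big|\sin(kx)\big| \;=\; -k^2\,\big|\sin(kx)\big| \qquad \text{wherever } \sin(kx) \neq 0,$$

with an additional distributional contribution $2k\,\delta(x - x_n)$ at each zero $x_n$, where the slope of $|\sin(kx)|$ jumps from $-k$ to $+k$. A pointwise comparison away from the cusp is therefore straightforward, even if the cusp point itself is not.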

@mrhardman (Collaborator, Author) commented:

> Ah, sorry - l.1016-1017 in gauss_legendre.jl
>
> @. QQ[1,:] -= lobatto.Dmat[1,:]/scale_factor
> @. QQ[coord.ngrid,:] += lobatto.Dmat[coord.ngrid,:]/scale_factor

I thought I had tested these terms in https://github.com/mabarnes/moment_kinetics/blob/merge_fkpl_collisions/test_scripts/GaussLobattoLegendre_test.jl. The signs look correct to me. What we want is

$$\int \phi\,\frac{d^2 f}{dx^2}\,dx \;=\; \left[\phi\,\frac{df}{dx}\right]_{x_{\min}}^{x_{\max}} \;-\; \int \frac{d\phi}{dx}\,\frac{df}{dx}\,dx.$$

Does that make sense?

@mrhardman (Collaborator, Author) commented:

> Those boundary terms end up being something like the difference of the first derivative evaluated in the elements on either side of the boundary, once you add the contributions from the two elements that share the boundary point, right? So for smooth input data they converge to zero as the resolution increases? That probably makes them a bit hard to test. Maybe you could cook something up like calculate the second derivative of abs(sin(x))? I don't know if it'd be possible to calculate analytically the expected result for that though...

Initially I would remove the internal boundary points and check that the result was the same.

@johnomotani (Collaborator) commented:

I agree, flipping those signs seems to be wrong - I tried calculating a second derivative of abs(sin(2*pi*vpa.grid/vpa.L)), and the version with flipped signs gave large errors everywhere.

It does look like a problem at element boundaries though - in your simulation that 'goes wrong', one symptom of the problem seems to be that there is a peak in f at vpa=0, but the vpa-diffusion evaluates the second derivative as positive at vpa=0 and so makes the peak grow, which seems clearly wrong. I don't understand why it does that though!

@mrhardman (Collaborator, Author) commented:

Using the updated test script in the REPL, we can confirm the correctness for a single derivative.

julia> include("test_scripts/GaussLobattoLegendre_test.jl")

julia> gausslegendre_test(ngrid=17,nelement=16, L_in=6.0)

vpa test

F_err: 2.220446049250313e-16 F_exact: 1.7724538509055159 F_num: 1.772453850905516
max(df_err) (interpolation): 8.770761894538737e-15
max(d2f_err) (double first derivative by interpolation): 6.433603649824704e-13
max(df_err) (weak form): 6.140921104957897e-14
max(d2f_err) (weak form): 2.6008240006092365e-10

vperp test

F_err: 3.3306690738754696e-16 F_exact: 1.0 F_num: 0.9999999999999997
max(df_err) (interpolation): 1.2101430968414206e-14
max(d2f_err) (double first derivative by interpolation): 1.3371082019375535e-11
max(divg_err) (weak form): 8.908651594197181e-12
max(d2f_err) (weak form): 2.0351857976663723e-9
max(laph_err) (weak form): 1.0388290228036112e-9
max(divg_err) (interpolation): 2.1036505870597466e-12
max(laph_err) (interpolation): 4.0971048775872987e-10

However, I did turn off the explicit boundary terms in the collision operator already because they caused bad behaviour in the elliptic solvers. It might be that the terms should never be used because of a numerical instability.

@johnomotani (Collaborator) commented:

I think turning off the terms at interior element boundaries might be the way to go. Eliminating them at (non-periodic) global boundaries would give large errors, but eliminating them at internal boundaries I think is equivalent to assuming that the first derivative is continuous between elements. For smooth functions this makes no noticeable difference, but if there is a cusp on an element boundary, the version without 'explicit boundary terms' at interior boundaries evaluates in a way that would decrease the size of the cusp (although it also introduces large 'errors' at nearby points when the cusp is large, presumably due to Gibbs ringing) - I checked this with the abs(sin(x)) test. I think that would give better numerical stability, because if there is a kink at an element boundary, the version without 'explicit boundary terms' at internal element boundaries will smooth it.
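Spelling out the equivalence (a sketch of the standard integration-by-parts argument, not something taken from the code): summing the weak form element by element gives

$$\int \phi\,\frac{d^2 f}{dx^2}\,dx \;=\; \sum_e\left(\Big[\phi\,\frac{df}{dx}\Big]_{\partial e} \;-\; \int_e \frac{d\phi}{dx}\,\frac{df}{dx}\,dx\right),$$

and at an interior boundary shared by two elements the two surface contributions combine into $\phi$ multiplied by the jump in $df/dx$ across that boundary. Dropping the interior surface terms is therefore the same as asserting that this jump vanishes, i.e. that the first derivative is continuous between elements; only the surface terms at the global boundaries $x_{\min}$ and $x_{\max}$ are kept.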

… L (Laplacian) stiffness matrices so that internal boundary terms are not included. This appears to improve the numerical stability of the diffusion operators without losing accuracy in a single application of the derivative on a smooth function.
@mrhardman (Collaborator, Author) commented:

> I think turning off the terms at interior element boundaries might be the way to go. Eliminating them at (non-periodic) global boundaries would give large errors, but eliminating them at internal boundaries I think is equivalent to assuming that the first derivative is continuous between elements. For smooth functions this makes no noticeable difference, but if there is a cusp on an element boundary, the version without 'explicit boundary terms' at interior boundaries evaluates in a way that would decrease the size of the cusp (although it also introduces large 'errors' at nearby points when the cusp is large, presumably due to Gibbs ringing) - I checked this with the abs(sin(x)) test. I think that would give better numerical stability, because if there is a kink at an element boundary, the version without 'explicit boundary terms' at internal element boundaries will smooth it.

I have carried this out successfully (according to checks so far) in d306695. There is an example input file in 6dfd0d9 that could form the basis for a check-in test. The z and r dissipations could be tested in separate 1D tests to give coverage of these features.

@mrhardman (Collaborator, Author) commented:

See update to #169.

…on is now explicitly intended for constructing weak matrices for use with numerical differentiation and not for the solving of 1D ODEs. This is because the experimental features for imposing boundary conditions on the matrix are removed.
…buted_memory_MPI_for_weights_precomputation() to likely correct value based on comparison to corresponding lines in setup_distributed_memory_MPI().
… averaging of element boundaries to upwind choice for element boundaries. This requires extra dummy arrays to be passed to the boundary condition functions."

This reverts commit 816e058.
@mrhardman (Collaborator, Author) commented:

> Perhaps I am getting confused. In any case, when we reconcile the boundaries here, I would have thought that we would want to take the upwind value as the "correct" value of the pdf. Do you disagree?
>
> I think it shouldn't make any difference. Communication was already taken care of in the derivative routines, so the two values should be identical. The communication being done in the boundary conditions functions is, I think, only to be doubly sure that some rounding error difference does not cause the values to diverge. If it did make any difference, I think your update would not be quite correct, as the boundary conditions are applied at the end of a time-step stage and after the RK update is done, so the distribution function, etc. are the updated values, but the adv_fac was calculated with the old values, so it doesn't seem consistent to use it here.
>
> So I think you'll find that 816e058 makes no difference at all, and we should revert it. If it does make any difference then I am wrong and need to look much harder at the communication in those boundary condition functions.

Reverting my commit affecting the initial_conditions.jl module with ca15efd does not fix issue #169. Another change which affects distributed memory MPI must be responsible.

@johnomotani (Collaborator) left a review comment:

I think we should merge this now.
🎉

@mrhardman (Collaborator, Author) commented:

> I think we should merge this now. 🎉

Let's go for it.

@mrhardman merged commit 9c2cc23 into master on Dec 14, 2023
14 of 16 checks passed
@mrhardman (Collaborator, Author) commented:

Once the tests are run on master and I have the simulation I want, we can delete the development branch.

@johnomotani deleted the merge_fkpl_collisions branch on October 8, 2024 at 13:34
Labels: enhancement (New feature or request)

Successfully merging this pull request may close these issues: vperp BC for chebyshev pseudospectral option