Merge fkpl collisions #149
Conversation
…ms to show a minimum achievable error of ~ 1e-7 that may be due to the limits of available precision.
…g of printout, and addition of L2 norm for diagnostic tests
…testing, and switch to a flow-free Maxwellian for testing.
…ction instead of using global variable.
…where there is a divergence. The previous method put the divergence-accommodating grid wherever ielement_vpa or ielement_vperp was separately the index of a diverging element, whereas divergences only occur when both element indices match those of the diverging 2D element.
…sion integration should be possible for F with ngrid = 17 and nelement = 8.
…loop, using Gauss-Legendre points with 2*ngrid points in all elements on the primed grid.
…emental integration weights. Currently the out-of-the-box cubature rule package is many orders of magnitude slower than the FastGaussQuadrature-based quadrature.
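A minimal sketch (not the moment_kinetics implementation; the function name and the uniform [0, L] element layout are assumptions for illustration) of how composite Gauss-Legendre weights of this kind can be built with FastGaussQuadrature:

```julia
# Sketch only: elemental Gauss-Legendre integration weights on a uniform
# grid of `nelement` elements covering [0, L], with 2*ngrid points per element.
using FastGaussQuadrature

function composite_gausslegendre(ngrid, nelement, L)
    x, w = gausslegendre(2 * ngrid)        # reference nodes/weights on [-1, 1]
    nodes, weights = Float64[], Float64[]
    for ielement in 1:nelement
        a = (ielement - 1) * L / nelement   # element lower boundary
        b = ielement * L / nelement         # element upper boundary
        scale = 0.5 * (b - a)               # Jacobian of the map [-1,1] -> [a,b]
        append!(nodes, a .+ scale .* (x .+ 1))
        append!(weights, scale .* w)
    end
    return nodes, weights
end

# quick check: ∫_0^6 v exp(-v^2) dv = (1 - exp(-36))/2 ≈ 0.5
nodes, weights = composite_gausslegendre(5, 2, 6.0)
println(sum(weights .* nodes .* exp.(-nodes .^ 2)))
```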
…)/sqrt(1-z) integrand to 10^-15 precision. ArbNumerics is only leveraged to improve the accuracy of the K(z), exp, log, and sqrt functions. The extra accuracy is not used in the Gauss-Laguerre quadrature, which is still computed by FastGaussQuadrature.
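For context (a standard asymptotic, not taken from the code, with the caveat that the constant inside the logarithm depends on whether z is the parameter or the modulus): the special treatment near $z = 1$ is needed because the complete elliptic integral of the first kind diverges logarithmically there,

$$
K(z) \sim \tfrac{1}{2}\,\ln\frac{16}{1-z} \qquad (z \to 1^-),
$$

so the $K(z)/\sqrt{1-z}$ integrand is integrable but singular, which plain Gauss-Legendre quadrature resolves poorly.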
…iptic integrals, sqrt and exp functions in the potential integrands. A significant slowdown is observed in the standard quick test with ngrid = 5 and nelement = 2, from a compute time of order seconds to a compute time of order minutes.
…By looking at a single field location we can speed up testing, at the cost of not seeing the largest possible errors over all vpa and vperp.
… consisting of Gauss-Legendre points far from the divergences and Gauss-Laguerre points closest to the divergence.
…ors in the output potentials as a function of vpa and vperp are now extremely localised to vperp element boundaries for vpa small, at the cost of a longer evaluation time.
…re to use a quadrature for vperp integration that has divergence accommodation only in the (vpa,vperp) element where the divergence is present. In principle the same change has to be made to the loops over ivpa and ivperp within each element.
…id = 3 to be used.
…rgence of integrals.
…e for Lagrange polynomial integrals where a divergence in the integrand is present (where the divergence lines up with the collocation point at which the polynomial is 1).
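For reference, a minimal sketch (names mine, not from the repo) of the Lagrange cardinal polynomial being discussed, which is 1 at its own collocation point and 0 at the others; it is integrals of such a polynomial against a factor that diverges at that same point which need the special quadrature:

```julia
# Lagrange cardinal polynomial ℓ_j on collocation points xs:
# ℓ_j(xs[j]) == 1 and ℓ_j(xs[k]) == 0 for all k != j.
lagrange(x, xs, j) = prod((x - xs[k]) / (xs[j] - xs[k]) for k in eachindex(xs) if k != j)

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
println(lagrange(0.0, xs, 3))   # 1.0: j = 3 is the collocation point x = 0
println(lagrange(0.5, xs, 3))   # 0.0: vanishes at every other collocation point
```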
…ure within a 2D element, for all Lagrange polynomials
…tions needed for the collision operator.
…t_kinetics code and standalone test script demonstrating the correct implementation.
I have noticed that in initial_conditions.jl we are now using the centred version of reconcile_element_boundaries_MPI! rather than the upwind version. This might explain the bugs that I am seeing with spikes at element boundaries on distributed memory -- commits to follow. We should make sure distributed memory is tested to prevent bugs like this creeping in.
moment_kinetics/src/initial_conditions.jl Line 1197 in ee12e1a
moment_kinetics/src/initial_conditions.jl Line 1247 in ee12e1a
moment_kinetics/src/initial_conditions.jl Line 1379 in ee12e1a
moment_kinetics/src/initial_conditions.jl Line 1430 in ee12e1a
Don't really know. Sometimes they seem to just get timed out for being slow, but sometimes they segfault or seem to hang. My current hope is that when we move the postprocessing to separate packages, getting rid of some dependencies (especially Python) might help.
…ng of element boundaries to upwind choice for element boundaries. This requires extra dummy arrays to be passed to the boundary condition functions.
What's changed? I can't see that we ever used a different version of …
@johnomotani It would be handy if you could take a look at the last commit and give your comments. It doesn't fix my current bug, issue #169, but I think it is an important correction.
Perhaps I am getting confused. In any case, when we reconcile the boundaries here, I would have thought that we would want to take the upwind value as the "correct" value of the pdf. Do you disagree?
I think it shouldn't make any difference. Communication was already taken care of in the derivative routines, so the two values should be identical. The communication being done in the boundary condition functions is, I think, only to be doubly sure that some rounding-error difference does not cause the values to diverge. So I think you'll find that 816e058 makes no difference at all, and we should revert it. If it does make any difference then I am wrong and need to look much harder at the communication in those boundary condition functions.
OK, I am open to doing this. There was a minor bug that I did catch there, where one of the advect structs was used incorrectly. I should have fixed it separately, but it was tempting to commit it all at once.
Ah, sorry - l.1016-1017 in gauss_legendre.jl: moment_kinetics/src/gauss_legendre.jl Lines 1016 to 1017 in 816e058
Those boundary terms end up being something like the difference of the first derivative evaluated in the elements on either side of the boundary, once you add the contributions from the two elements that share the boundary point, right? So for smooth input data they converge to zero as the resolution increases? That probably makes them a bit hard to test. Maybe you could cook something up, like calculating the second derivative of …
I thought I had tested these terms in https://github.com/mabarnes/moment_kinetics/blob/merge_fkpl_collisions/test_scripts/GaussLobattoLegendre_test.jl. The signs look correct to me. What we want is $\int \phi \, \frac{d^2 f}{dx^2} \, dx = \left[\phi \frac{df}{dx}\right]^{x_{max}}_{x_{min}} - \int \frac{d\phi}{dx}\frac{df}{dx} \, dx$. Does that make sense?
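(A quick, hedged numerical check of that identity, not taken from the test script; it assumes QuadGK is available and uses arbitrary smooth test functions.)

```julia
# Verify ∫ phi f'' dx = [phi f']_{x_min}^{x_max} - ∫ phi' f' dx numerically.
using QuadGK

phi(x)  = cos(x);        dphi(x) = -sin(x)
f(x)    = exp(-x^2);     df(x)   = -2x * exp(-x^2)
d2f(x)  = (4x^2 - 2) * exp(-x^2)
x_min, x_max = -3.0, 3.0

lhs, _   = quadgk(x -> phi(x) * d2f(x), x_min, x_max)
surface  = phi(x_max) * df(x_max) - phi(x_min) * df(x_min)
bulk, _  = quadgk(x -> dphi(x) * df(x), x_min, x_max)

println(lhs ≈ surface - bulk)   # true: the signs in the identity are consistent
```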
Initially I would remove the internal boundary points and check that the result was the same.
I agree, flipping those signs seems to be wrong - I tried calculating a second derivative of … It does look like a problem at element boundaries, though - in your simulation that 'goes wrong', one symptom of the problem seems to be that there is a peak in f at vpa=0, but the vpa-diffusion evaluates the second derivative as positive at vpa=0 and so makes the peak grow, which seems clearly wrong. I don't understand why it does that, though!
Using the updated test script in the REPL, we can confirm the correctness for a single derivative.
julia> include("test_scripts/GaussLobattoLegendre_test.jl")
julia> gausslegendre_test(ngrid=17,nelement=16, L_in=6.0)
vpa test
F_err: 2.220446049250313e-16 F_exact: 1.7724538509055159 F_num: 1.772453850905516
vperp test
F_err: 3.3306690738754696e-16 F_exact: 1.0 F_num: 0.9999999999999997
However, I did turn off the explicit boundary terms in the collision operator already because they caused bad behaviour in the elliptic solvers. It might be that the terms should never be used because of a numerical instability.
I think turning off the terms at interior element boundaries might be the way to go. Eliminating them at (non-periodic) global boundaries would give large errors, but eliminating them at internal boundaries is, I think, equivalent to assuming that the first derivative is continuous between elements. For smooth functions this makes no noticeable difference, but if there is a cusp on an element boundary, the version without 'explicit boundary terms' at interior boundaries evaluates in a way that would decrease the size of the cusp (although it also introduces large 'errors' at nearby points when the cusp is large, presumably due to Gibbs ringing) - I checked this with the abs(sin(x)) test. I think that would give better numerical stability, because if there is a kink at an element boundary, the version without 'explicit boundary terms' at internal element boundaries will smooth it.
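To spell out the equivalence (my notation, following the integration-by-parts identity above): at an interior boundary $x_b$ shared by two elements, the per-element surface terms for a test function $\phi_i$ combine to

$$
\phi_i(x_b)\left(\left.\frac{df}{dx}\right|_{x_b^-} - \left.\frac{df}{dx}\right|_{x_b^+}\right),
$$

i.e. minus the jump in $df/dx$ across the boundary. Dropping this term changes nothing when $df/dx$ is continuous there, and when there is a kink it acts to remove it, consistent with the smoothing behaviour described above.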
… L (Laplacian) stiffness matrices so that internal boundary terms are not included. This appears to improve the numerical stability of the diffusion operators without losing accuracy in a single application of the derivative on a smooth function.
…ith numerical dissipation.
I have carried this out successfully (according to checks so far) in d306695. There is an example input file in 6dfd0d9 that could form the basis for a check-in test. The z and r dissipations could be tested in separate 1D tests to give coverage of these features.
See update to #169.
…on is now explicitly intended for constructing weak matrices for use with numerical differentiation, and not for solving 1D ODEs, because the experimental features for imposing boundary conditions on the matrix have been removed.
…buted_memory_MPI_for_weights_precomputation() to the likely correct value, based on comparison with the corresponding lines in setup_distributed_memory_MPI().
… averaging of element boundaries to upwind choice for element boundaries. This requires extra dummy arrays to be passed to the boundary condition functions." This reverts commit 816e058.
Reverting my commit affecting the initial_conditions.jl module with ca15efd does not fix issue #169. Another change affecting distributed-memory MPI must be responsible.
I think we should merge this now.
🎉
Let's go for it.
Once the tests are run on master and I have the simulation I want, we can delete the development branch.
Pull request containing the weak-form Fokker-Planck self-collision operator for ions and associated features (check-in tests, underlying ability to support cross-ion-species collisions). Creating this request now to gather feedback before the merge. The major additions are the fokker_planck*.jl modules and the gauss_legendre.jl module, which provides the weak-form matrices necessary for the implementation.
To do list (before merging):
To do list (after merging):