Line integration #299

Open
wants to merge 14 commits into base: master
54 changes: 54 additions & 0 deletions moment_kinetics/src/calculus.jl
@@ -2,9 +2,14 @@
"""
module calculus

# Import moment_kinetics so that we can refer to it in docstrings
import moment_kinetics

export derivative!, second_derivative!, laplacian_derivative!
export elementwise_indefinite_integration!
export reconcile_element_boundaries_MPI!
export integral
export indefinite_integral!

using ..moment_kinetics_structs: discretization_info, null_spatial_dimension_info,
null_velocity_dimension_info, weak_discretization_info
@@ -30,6 +35,55 @@ Result is stored in coord.scratch_2d.
"""
function elementwise_derivative! end

"""
"""
function elementwise_indefinite_integration! end

"""
indefinite_integral!(pf, f, coord, spectral)

Indefinite line integral.

This function is designed to work on local-in-memory data only;
distributed-memory MPI is not implemented here.
A function that integrates along a line distributed in memory exists in
[`moment_kinetics.em_fields`](@ref) as `calculate_phi_from_Epar!()`. The
distributed-memory functionality could be ported to a generic function,
similar to how the `derivative!` functions are generalised in
[`moment_kinetics.derivatives`](@ref).
"""
function indefinite_integral!(pf, f, coord, spectral::discretization_info)
# get the indefinite integral at each grid point within each element and store in
# coord.scratch_2d
elementwise_indefinite_integration!(coord, f, spectral)
# map the integral from the elemental grid to the full grid;
# taking care to match integral constants at the boundaries
# we assume that the lower limit of the indefinite integral
# is the lowest value in coord.grid
indefinite_integral_elements_to_full_grid!(pf, coord)
end

"""
"""
function indefinite_integral_elements_to_full_grid!(pf, coord)
pf2d = coord.scratch_2d
nelement_local = coord.nelement_local
ngrid = coord.ngrid
igrid_full = coord.igrid_full
j = 1 # the first element
for i in 1:ngrid
pf[i] = pf2d[i,j]
end
for j in 2:nelement_local
ilast = igrid_full[ngrid,j-1] # the grid index of the last point in the previous element
for k in 2:ngrid
i = igrid_full[k,j] # the index in the full grid
pf[i] = pf2d[k,j] + pf[ilast]
end
end
return nothing
end
Comment on lines +68 to +85

Collaborator:
The handling of distributed-MPI is done in calculate_phi_from_Epar!(). It might be nicer to handle it generically within this function, to avoid code repetition? Maybe we could add an optional argument or two that would select whether to preserve the value on the lower boundary or the upper boundary.

Collaborator Author:

I thought about doing this, but I didn't want to have to implement all the possible choices for which constant to preserve; I just wanted to show that accurate line integration was possible! I am happy to implement the MPI communication as send-receive operations for a general BC in a generic function, if it is judged useful. (I think the broadcasting operation scales worse because all the processes have to recompute something every broadcast, but I should check my notes.) I also wasn't sure if this operation would be needed anywhere else in the code, and so I left the MPI as a special case for now.

Collaborator:
I think that's fine (for now!), but please do add a docstring saying that the indefinite integral functions only work on a single block, and distributed-MPI needs to be implemented elsewhere, but could be refactored into these functions if/when it becomes useful.

Collaborator Author:

OK. Do you have a preference for where I put the documentation for how the method works (my addition to the head description of the PR)? The functions are spread around a bit; I would be tempted to put the description for the whole method in a docstring in gauss_legendre.jl.

Collaborator:

Sounds good, that's where the main numerical heavy-lifting is done.
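The boundary-matching step in `indefinite_integral_elements_to_full_grid!` can be sketched standalone: each element's indefinite integral starts from zero at that element's lower boundary, so every element after the first is shifted by the accumulated value at the previous element's last point. A minimal, self-contained sketch (toy code, not the moment_kinetics API; `stitch_elements` is a hypothetical name):

```julia
# Toy version of indefinite_integral_elements_to_full_grid!: stitch per-element
# indefinite integrals (each starting at 0) into one continuous primitive.
# pf2d[k, j] = indefinite integral on element j, evaluated at local point k.
function stitch_elements(pf2d::Matrix{Float64})
    ngrid, nelement = size(pf2d)
    n = nelement * (ngrid - 1) + 1  # adjacent elements share a boundary point
    pf = zeros(n)
    pf[1:ngrid] .= pf2d[:, 1]       # the first element needs no offset
    for j in 2:nelement
        # full-grid index of the last point of the previous element
        ilast = (j - 1) * (ngrid - 1) + 1
        for k in 2:ngrid
            pf[ilast + k - 1] = pf2d[k, j] + pf[ilast]
        end
    end
    return pf
end

# Integrate f(x) = 1 on [0, 1] split into two elements of 3 points each;
# the local primitive runs from 0 to 0.25 to 0.5 on each element.
pf2d = [0.0 0.0; 0.25 0.25; 0.5 0.5]
pf = stitch_elements(pf2d)
# pf ≈ [0.0, 0.25, 0.5, 0.75, 1.0], i.e. the primitive x on the full grid
```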


"""
derivative!(df, f, coord, adv_fac, spectral)

83 changes: 69 additions & 14 deletions moment_kinetics/src/chebyshev.jl
@@ -17,10 +17,11 @@ using ..type_definitions: mk_float, mk_int
using ..array_allocation: allocate_float, allocate_complex
using ..clenshaw_curtis: clenshawcurtisweights
import ..calculus: elementwise_derivative!
import ..calculus: elementwise_indefinite_integration!
using ..communication
import ..interpolation: single_element_interpolate!
using ..moment_kinetics_structs: discretization_info

using ..gauss_legendre: integration_matrix!
"""
Chebyshev pseudospectral discretization
"""
@@ -43,6 +44,8 @@ struct chebyshev_base_info{TForward <: FFTW.cFFTWPlan, TBackward <: AbstractFFTs
Dmat::Array{mk_float,2}
# elementwise differentiation vector (ngrid) for the point x = -1
D0::Array{mk_float,1}
# elementwise antidifferentiation matrix
Amat::Array{mk_float,2}
end

struct chebyshev_info{TForward <: FFTW.cFFTWPlan, TBackward <: AbstractFFTs.ScaledPlan} <: discretization_info
@@ -141,9 +144,12 @@ function setup_chebyshev_pseudospectral_lobatto(coord, fftw_flags)
cheb_derivative_matrix_elementwise!(Dmat,coord.ngrid)
D0 = allocate_float(coord.ngrid)
D0 .= Dmat[1,:]
Amat = allocate_float(coord.ngrid, coord.ngrid)
x = chebyshevpoints(coord.ngrid)
integration_matrix!(Amat,x,coord.ngrid)
# return a structure containing the information needed to carry out
# a 1D Chebyshev transform
return chebyshev_base_info(fext, fcheby, dcheby, forward_transform, backward_transform, Dmat, D0)
return chebyshev_base_info(fext, fcheby, dcheby, forward_transform, backward_transform, Dmat, D0, Amat)
end

function setup_chebyshev_pseudospectral_radau(coord, fftw_flags)
@@ -164,9 +170,12 @@ function setup_chebyshev_pseudospectral_radau(coord, fftw_flags)
cheb_derivative_matrix_elementwise_radau_by_FFT!(Dmat, coord, fcheby, dcheby, fext, forward_transform)
D0 = allocate_float(coord.ngrid)
cheb_lower_endpoint_derivative_vector_elementwise_radau_by_FFT!(D0, coord, fcheby, dcheby, fext, forward_transform)
Amat = allocate_float(coord.ngrid, coord.ngrid)
x = chebyshev_radau_points(coord.ngrid)
integration_matrix!(Amat,x,coord.ngrid)
# return a structure containing the information needed to carry out
# a 1D Chebyshev transform
return chebyshev_base_info(fext, fcheby, dcheby, forward_transform, backward_transform, Dmat, D0)
return chebyshev_base_info(fext, fcheby, dcheby, forward_transform, backward_transform, Dmat, D0, Amat)
end

"""
@@ -190,11 +199,10 @@ imax -- the array of maximum indices of each element on the extended grid.
"""
function scaled_chebyshev_grid(ngrid, nelement_local, n,
element_scale, element_shift, imin, imax)
# initialize chebyshev grid defined on [1,-1]
# initialize chebyshev grid defined on [-1,1]
# with n grid points chosen to facilitate
# the fast Chebyshev transform (aka the discrete cosine transform)
# needed to obtain Chebyshev spectral coefficients
# this grid goes from +1 to -1
chebyshev_grid = chebyshevpoints(ngrid)
# create array for the full grid
grid = allocate_float(n)
@@ -206,9 +214,9 @@ function scaled_chebyshev_grid(ngrid, nelement_local, n,
@inbounds for j ∈ 1:nelement_local
scale_factor = element_scale[j]
shift = element_shift[j]
# reverse the order of the original chebyshev_grid (ran from [1,-1])
# take the original chebyshev_grid (ran from [-1,1])
# and apply the scale factor and shift
grid[imin[j]:imax[j]] .= (reverse(chebyshev_grid)[k:ngrid] * scale_factor) .+ shift
grid[imin[j]:imax[j]] .= (chebyshev_grid[k:ngrid] * scale_factor) .+ shift
# after first element, increase minimum index for chebyshev_grid to 2
# to avoid double-counting boundary element
k = 2
@@ -219,11 +227,10 @@ end

function scaled_chebyshev_radau_grid(ngrid, nelement_local, n,
element_scale, element_shift, imin, imax, irank)
# initialize chebyshev grid defined on [1,-1]
# initialize chebyshev grid defined on [-1,1]
# with n grid points chosen to facilitate
# the fast Chebyshev transform (aka the discrete cosine transform)
# needed to obtain Chebyshev spectral coefficients
# this grid goes from +1 to -1
chebyshev_grid = chebyshevpoints(ngrid)
chebyshev_radau_grid = chebyshev_radau_points(ngrid)
# create array for the full grid
@@ -242,9 +249,9 @@ function scaled_chebyshev_radau_grid(ngrid, nelement_local, n,
@inbounds for j ∈ 2:nelement_local
scale_factor = element_scale[j]
shift = element_shift[j]
# reverse the order of the original chebyshev_grid (ran from [1,-1])
# take the original chebyshev_grid (ran from [-1,1])
# and apply the scale factor and shift
grid[imin[j]:imax[j]] .= (reverse(chebyshev_grid)[k:ngrid] * scale_factor) .+ shift
grid[imin[j]:imax[j]] .= (chebyshev_grid[k:ngrid] * scale_factor) .+ shift
end
wgts = clenshaw_curtis_radau_weights(ngrid, nelement_local, n, imin, imax, element_scale)
else
@@ -255,9 +262,9 @@
@inbounds for j ∈ 1:nelement_local
scale_factor = element_scale[j]
shift = element_shift[j]
# reverse the order of the original chebyshev_grid (ran from [1,-1])
# take the original chebyshev_grid (ran from [-1,1])
# and apply the scale factor and shift
grid[imin[j]:imax[j]] .= (reverse(chebyshev_grid)[k:ngrid] * scale_factor) .+ shift
grid[imin[j]:imax[j]] .= (chebyshev_grid[k:ngrid] * scale_factor) .+ shift
# after first element, increase minimum index for chebyshev_grid to 2
# to avoid double-counting boundary element
k = 2
@@ -621,6 +628,7 @@ end

"""
returns the Chebyshev-Gauss-Lobatto grid points on an n point grid
in the range [-1,1]
"""
function chebyshevpoints(n)
grid = allocate_float(n)
@@ -631,9 +639,14 @@ function chebyshevpoints(n)
grid[j] = cospi((j-1)*nfac)
end
end
return grid
# return grid on z in [-1,1]
return reverse(grid)
end

"""
returns the Chebyshev-Gauss-Radau grid points on an n point grid
in the range (-1,1]
"""
function chebyshev_radau_points(n)
grid = allocate_float(n)
nfac = 1.0/(n-0.5)
@@ -979,4 +992,46 @@ https://people.maths.ox.ac.uk/trefethen/pdetext.html
D[1] = -sum(D[:])
end

"""
A function that takes the indefinite integral in each element of `coord.grid`,
leaving the result (element-wise) in `coord.scratch_2d`.
"""
function elementwise_indefinite_integration!(coord, ff, chebyshev::chebyshev_info)
# the primitive of f
pf = coord.scratch_2d
# define local variable nelement for convenience
nelement = coord.nelement_local
# check array bounds
@boundscheck nelement == size(pf,2) && coord.ngrid == size(pf,1) || throw(BoundsError(pf))

# variable k will be used to avoid double counting of overlapping point
k = 0
j = 1 # the first element
imin = coord.imin[j]-k
# imax is the maximum index on the full grid for this (jth) element
imax = coord.imax[j]
if coord.radau_first_element && coord.irank == 0 # integrate this element with the Radau scheme
@views mul!(pf[:,j],chebyshev.radau.Amat[:,:],ff[imin:imax])
else # integrate using the Lobatto scheme
@views mul!(pf[:,j],chebyshev.lobatto.Amat[:,:],ff[imin:imax])
end
# transform back to the physical coordinate scale
for i in 1:coord.ngrid
pf[i,j] *= coord.element_scale[j]
end
# calculate the integral on each of the remaining elements
@inbounds for j ∈ 2:nelement
k = 1
imin = coord.imin[j]-k
# imax is the maximum index on the full grid for this (jth) element
imax = coord.imax[j]
@views mul!(pf[:,j],chebyshev.lobatto.Amat[:,:],ff[imin:imax])
# transform back to the physical coordinate scale
for i in 1:coord.ngrid
pf[i,j] *= coord.element_scale[j]
end
end
return nothing
end

end
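The `integration_matrix!(Amat, x, ngrid)` call imported from `gauss_legendre` builds the elementwise antidifferentiation matrix used above; its construction lives in `gauss_legendre.jl`. The underlying idea, a matrix `A` with `(A*f)[i] ≈ ∫_{x[1]}^{x[i]} f` that is exact for polynomials supported by the grid, can be sketched with a monomial Vandermonde solve (illustration only; `integration_matrix_sketch` is a hypothetical name, and the real implementation uses better-conditioned Lagrange/Legendre machinery):

```julia
using LinearAlgebra

# (A * f)[i] ≈ ∫_{x[1]}^{x[i]} f(s) ds, exact for polynomials of degree < n.
# Monomial Vandermonde construction: fine for small n on [-1, 1], but not the
# numerically robust construction used in gauss_legendre.jl.
function integration_matrix_sketch(x::Vector{Float64})
    n = length(x)
    V = [x[i]^(p - 1) for i in 1:n, p in 1:n]  # f(x[i]) = sum_p c[p] * x[i]^(p-1)
    # integral of the monomial s^(p-1) from x[1] to x[i]
    W = [(x[i]^p - x[1]^p) / p for i in 1:n, p in 1:n]
    return W / V                                # A = W * inv(V)
end

# Check with f(x) = x^2 on four nodes in [-1, 1]; the primitive from -1 is (x^3 + 1)/3.
x = [-1.0, -0.5, 0.5, 1.0]
A = integration_matrix_sketch(x)
pf = A * (x .^ 2)
# pf[i] ≈ (x[i]^3 + 1) / 3; in particular pf[end] ≈ 2/3
```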
50 changes: 35 additions & 15 deletions moment_kinetics/src/em_fields.jl
@@ -4,6 +4,8 @@ module em_fields

export setup_em_fields
export update_phi!
# for testing
export calculate_phi_from_Epar!

using ..type_definitions: mk_float
using ..array_allocation: allocate_shared_float
@@ -16,6 +18,7 @@ using ..velocity_moments: update_density!
using ..derivatives: derivative_r!, derivative_z!
using ..electron_fluid_equations: calculate_Epar_from_electron_force_balance!
using ..gyroaverages: gyro_operators, gyroaverage_field!
using ..calculus: indefinite_integral!

using MPI

@@ -121,7 +124,7 @@ function update_phi!(fields, fvec, vperp, z, r, composition, collisions, moments
composition.n_neutral_species, collisions.reactions.electron_charge_exchange_frequency,
composition.me_over_mi, fvec.density_neutral, fvec.uz_neutral,
fvec.electron_upar)
calculate_phi_from_Epar!(fields.phi, fields.Ez, r, z)
calculate_phi_from_Epar!(fields.phi, fields.Ez, r, z, z_spectral)
end
## can calculate phi at z = L and hence phi_wall(z=L) using jpar_i at z =L if needed
_block_synchronize()
@@ -174,34 +177,49 @@ function update_phi!(fields, fvec, vperp, z, r, composition, collisions, moments
return nothing
end

function calculate_phi_from_Epar!(phi, Epar, r, z)
function calculate_phi_from_Epar!(phi, Epar, r, z, z_spectral)
# simple calculation of phi from Epar for now. Assume phi is already set at the
# lower-z boundary, e.g. by the kinetic electron boundary condition.
begin_serial_region()

dz = z.cell_width
@serial_region begin
# Need to broadcast the lower-z boundary value, because we only communicate
# delta_phi below, rather than passing the boundary values directly from block to
# block.
MPI.Bcast!(@view(phi[1,:]), z.comm; root=0)

if z.irank == 0
# Don't want to change the lower-z boundary value, so save it here so we can
# restore it at the end
@views @. r.scratch3 = phi[1,:]
end
# Broadcast the lower-z boundary value, so that we can communicate
# delta_phi = phi[end,:] - phi[1,:] below, rather than passing the boundary values directly
# from block to block.
MPI.Bcast!(r.scratch3, z.comm; root=0)
if z.irank == z.nrank - 1
# Don't want to change the upper-z boundary value, so save it here so we can
# restore it at the end
@views @. r.scratch = phi[end,:]
end


# calculate phi on each local rank, up to a constant
# phi[1,:] = 0.0 by convention here
@loop_r ir begin
for iz ∈ 2:z.n
phi[iz,ir] = phi[iz-1,ir] - dz[iz-1]*Epar[iz,ir]
end
@views @. z.scratch = -Epar[:,ir]
@views indefinite_integral!(phi[:,ir], z.scratch, z, z_spectral)
end

# Restore the constant offset from the lower boundary
# to all ranks so that we can use this_delta_phi below
@loop_r_z ir iz begin
@views phi[iz,ir] += r.scratch3[ir]
end

# Add contributions to integral along z from processes at smaller z-values than
# this one.
this_delta_phi = r.scratch2 .= phi[end,:] .- phi[1,:]
this_delta_phi = r.scratch2
for irank ∈ 0:z.nrank-2
# update this_delta_phi before broadcasting
@loop_r ir begin
this_delta_phi[ir] = phi[end,ir] - phi[1,ir]
end
# broadcast this_delta_phi from z.irank = irank to all processes
# updating this_delta_phi everywhere
MPI.Bcast!(this_delta_phi, z.comm; root=irank)
if z.irank > irank
@loop_r_z ir iz begin
@@ -212,7 +230,9 @@

if z.irank == z.nrank - 1
# Restore the upper-z boundary value
@views @. phi[end,:] = r.scratch
@loop_r ir begin
@views phi[end,ir] = r.scratch[ir]
end
end
end

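The distributed stitching in `calculate_phi_from_Epar!` (each z-block integrates locally from zero, then every rank adds the accumulated `delta_phi = phi[end] - phi[1]` of the blocks below it) can be imitated serially to see the algorithm without the MPI calls. A sketch with a hypothetical name, not the moment_kinetics API:

```julia
# Serial imitation of the block stitching in calculate_phi_from_Epar!: each
# "rank" holds a local primitive starting from 0; every block adds the
# accumulated delta_phi of all blocks below it.
function stitch_blocks(local_phi::Vector{Vector{Float64}})
    offset = 0.0
    stitched = Vector{Float64}[]
    for phi in local_phi
        push!(stitched, phi .+ offset)  # offset accumulated from lower blocks
        offset += phi[end] - phi[1]     # this block's delta_phi
    end
    return stitched
end

# Three z-blocks of phi(z) = z on [0, 3], each integrated locally from 0:
blocks = [[0.0, 0.5, 1.0], [0.0, 0.5, 1.0], [0.0, 0.5, 1.0]]
out = stitch_blocks(blocks)
# out == [[0.0, 0.5, 1.0], [1.0, 1.5, 2.0], [2.0, 2.5, 3.0]]
```

In the actual implementation the accumulation is performed with one `MPI.Bcast!` per lower rank, which is the broadcast-based scaling the author discusses in the review thread above.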