
Tensor product differentiation operators #313

Closed
Changes from 6 commits (69 commits in total):

b1be58a  Update _reference_derivative_matrices to recognize TensorProductEleme… (a-alveyblanc, Aug 1, 2023)
8fa3321  Stub in tensor product gradient computation in _gradient_kernel (a-alveyblanc, Aug 1, 2023)
3778edf  First version of grad routine (a-alveyblanc, Aug 1, 2023)
45e859e  Initial working version of tensor product gradient operator application (a-alveyblanc, Aug 1, 2023)
22ffacd  Add 3 dimensional test and order 3 test for 2D and 3D (a-alveyblanc, Aug 1, 2023)
d0bd17e  Add arg names to geometric factor application, refine some comments (a-alveyblanc, Aug 1, 2023)
ef667bb  Divergence operator version 0.0 (a-alveyblanc, Aug 2, 2023)
eed2516  Prototype of divergence kernel. Needs work, but it passes currently i… (a-alveyblanc, Aug 4, 2023)
1e40a11  Remove random import included by CoC autocomplete (a-alveyblanc, Aug 4, 2023)
f2b0275  Generate einsum specification dynamically instead of using if-else (a-alveyblanc, Aug 5, 2023)
036681c  Rename vandermonde and vandermonde derivative matrices (a-alveyblanc, Aug 5, 2023)
7ad9017  Give einsums a single source of truth. Still only valid for dim <= 3 (a-alveyblanc, Aug 7, 2023)
c645dbe  Move TP array context to array_context.py, other minor changes (a-alveyblanc, Aug 10, 2023)
dcd7ca0  Update tensor product grad test to match the other grad test case (a-alveyblanc, Aug 10, 2023)
683cdd8  Update tensor product divergence test to match original test case. (a-alveyblanc, Aug 10, 2023)
1563950  Divergence kernel functioning again (a-alveyblanc, Aug 14, 2023)
e1380fe  Update some comments, begin weak form matrices work (a-alveyblanc, Aug 21, 2023)
33a54e4  TMP: Use outside actx in TP grad (inducer, Sep 6, 2023)
e446bb7  Add TP transform cartoon (inducer, Sep 6, 2023)
264192c  Temporary changes to get tensor product gradient working again (a-alveyblanc, Sep 12, 2023)
ba03b3f  Tensor product array context related changes (a-alveyblanc, Sep 13, 2023)
8263333  Update example (a-alveyblanc, Sep 14, 2023)
6b50028  Add code for printing generated differentiation code (a-alveyblanc, Sep 14, 2023)
5d36bfb  Update strong tp diff example (a-alveyblanc, Sep 18, 2023)
92991a3  Version 0.1 of weak gradient computation (a-alveyblanc, Sep 30, 2023)
49116ab  Weak form divergence version 0.1 (a-alveyblanc, Sep 30, 2023)
d87c19f  Move TP array contexts. Add acoustic pulse TP example (a-alveyblanc, Oct 1, 2023)
aec8bdb  Merge branch 'inducer:main' into tensor-product-differentiation-opera… (a-alveyblanc, Oct 8, 2023)
c14f08d  Merge branch 'inducer:main' into tensor-product-differentiation-opera… (a-alveyblanc, Oct 17, 2023)
b8e24c4  Start simplifying einsum (a-alveyblanc, Oct 8, 2023)
645a504  Remove unnecessary code (a-alveyblanc, Nov 3, 2023)
9b70d0b  Correct arg names in weak form gradient (a-alveyblanc, Nov 3, 2023)
ed8aacf  Correct arg names in weak form operator application (a-alveyblanc, Nov 3, 2023)
b1c312d  Same as last commit (a-alveyblanc, Nov 3, 2023)
0cb1830  Offload simplicial grad/div to their own function. Slight refactors. (a-alveyblanc, Dec 7, 2023)
984a984  More refactoring. Add FIXMEs for future changes (another PR probably) (a-alveyblanc, Dec 7, 2023)
77d6470  Chain geometric factors for strong form. Move tp metadata processing … (a-alveyblanc, Dec 10, 2023)
c3f7543  Add chained einsums to grad and div. Uncover some new bugs (a-alveyblanc, Dec 11, 2023)
06f5738  Small adjustment to buggy code (a-alveyblanc, Dec 11, 2023)
e31a8c4  Add math explanation and some other small changes (a-alveyblanc, Dec 11, 2023)
e293bd0  Update comment (a-alveyblanc, Dec 11, 2023)
0792c0b  Update requirements.txt to accurately reflect requirements (a-alveyblanc, Dec 11, 2023)
58c39bd  Run test_inverse_metric for tensor product meshes (inducer, Dec 11, 2023)
b8b239c  test_inverse_metric: Rotate mesh in non-affine case (inducer, Dec 11, 2023)
be8a26b  Rotate mesh in test_gradient (inducer, Dec 11, 2023)
3f8a886  Add TP divergence thm test. Make diff. op. einsums more general (a-alveyblanc, Dec 12, 2023)
e136538  Fixed tensor product operators, needs major refactor but it works for… (a-alveyblanc, Dec 16, 2023)
82b2c09  Update weak form div (a-alveyblanc, Dec 16, 2023)
2bb9228  Update pulse example. Update grudge tensor product gradient (a-alveyblanc, Jan 4, 2024)
7e389a7  Start updating div to be a single einsum (a-alveyblanc, Jan 8, 2024)
9951d65  start move to single axis operator application (a-alveyblanc, Jan 16, 2024)
f722b27  Update div (a-alveyblanc, Jan 22, 2024)
4b1e0b6  temp changes (a-alveyblanc, Jan 27, 2024)
47d6674  Working div recast as series of single axis operator applications (a-alveyblanc, Jan 27, 2024)
201c018  Update _apply_inverse_mass_operator for TP elements (a-alveyblanc, Jan 28, 2024)
b2e909e  Minor formatting (a-alveyblanc, Jan 28, 2024)
220c483  Merge branch 'inducer:main' into tensor-product-differentiation-opera… (a-alveyblanc, Feb 11, 2024)
fdcf4b8  Add simplicial test back to div convergence test (a-alveyblanc, Feb 11, 2024)
d9dbffe  Start working on TP DOF axis tagging (nothing working yet) (a-alveyblanc, Feb 29, 2024)
d611023  another TP DOF axis tagging commit (still nothing working yet) (a-alveyblanc, Feb 29, 2024)
f9a5330  Start implementing call_loopy in grudge eager actx (a-alveyblanc, Mar 1, 2024)
f216355  Revert eager changes. Remove unnecessary code in lazy actx. Update ex… (a-alveyblanc, Mar 1, 2024)
90d7092  Update TP operator and data tagging (a-alveyblanc, Apr 2, 2024)
8ebccc7  Update a comment (a-alveyblanc, Apr 2, 2024)
29be28d  Add TensorProductOperatorAxisTag (a-alveyblanc, Apr 2, 2024)
607d368  Add some clarifying comments (a-alveyblanc, Apr 2, 2024)
f03230d  Update docs. Testing IgnoredForEqualityTag (a-alveyblanc, Apr 2, 2024)
cadb98a  Rename some tags, change to IgnoredForPropagationTag in TP operator a… (a-alveyblanc, Apr 3, 2024)
a662445  Small change (a-alveyblanc, Apr 9, 2024)
grudge/op.py: 163 changes (147 additions, 16 deletions)

@@ -166,6 +166,38 @@
)


# {{{ Temporary tools for tensor product operators
# NOTE: Will possibly be removed in a future version of tensor product operator
# development since (I think) it is not entirely necessary
from pytools.tag import Tag
class OutputIsTensorProductDOFArrayOrdered(Tag):
Owner: Move this to somewhere near the array context.

Owner: The "I'm a tensor product" tags should probably have axis granularity (instead of whole-array).

    pass


from grudge.array_context import PyOpenCLArrayContext
class TensorProductArrayContext(PyOpenCLArrayContext):
Owner: Make this part of the "normal" grudge array context.

    def transform_loopy_program(self, t_unit):
        if len(t_unit.callables_table) == 1:

Owner: Could do this for all?

            knl = t_unit.default_entrypoint
            if knl.tags_of_type(OutputIsTensorProductDOFArrayOrdered):
                new_args = []
                for arg in knl.args:
                    if arg.is_output:
                        arg = arg.copy(dim_tags=(
                            f"N{len(arg.shape)-1},"
                            + ",".join(f"N{i}"
                                       for i in range(len(arg.shape)-1))
                        ))

                    new_args.append(arg)

                knl = knl.copy(args=new_args)
                t_unit = t_unit.with_kernel(knl)

        return super().transform_loopy_program(t_unit)

Owner: This is also going to have a lazy tentacle.

# }}}
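
As an aside (my illustration, not part of the PR), the dim_tags string assembled in transform_loopy_program above can be reproduced for a rank-3 output array, e.g. one of shape (nelements, nnodes1d, nnodes1d); the example shape below is hypothetical:

# Illustration only: the dim_tags string built for a rank-3 output array.
shape = (128, 4, 4)
dim_tags = (
    f"N{len(shape)-1},"
    + ",".join(f"N{i}" for i in range(len(shape)-1)))
assert dim_tags == "N2,N0,N1"

That is, the element axis (axis 0) is assigned nesting tag N2 while the two DOF axes receive N0 and N1.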


# {{{ common derivative "kernels"

def _single_axis_derivative_kernel(
@@ -202,19 +234,97 @@ def _gradient_kernel(actx, out_discr, in_discr, get_diff_mat, inv_jac_mat, vec,
        *, metric_in_matvec):
    # See _single_axis_derivative_kernel for comments on the usage scenarios
    # (both strong and weak derivative) and their differences.

    def compute_tensor_product_grad(actx, grp, diff_mat, vec, ijm):
        """Exploits tensor product structure to differentiate each coordinate
        axis using a single differentiation matrix of shape (nnodes1d, nnodes1d)

Owner: Specify what this does with math. :)

        """

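        # Sketch of the math (added here for clarity, not part of the original
        # diff): in 2D, with the 1D differentiation matrix D of shape
        # (nnodes1d, nnodes1d) and DOFs u[e, i, j] stored in tensor-product
        # order,
        #
        #     (du/dr)[e, i, j] = sum_l D[i, l] * u[e, l, j]   # "il,elj->eij"
        #     (du/ds)[e, i, j] = sum_l D[j, l] * u[e, i, l]   # "jl,eil->eij"
        #
        # i.e. the reference gradient is ((D (x) I) u, (I (x) D) u); in 3D, D
        # is applied along each of the three DOF axes in turn. The inverse
        # Jacobian factors applied afterwards map the reference (r, s, t)
        # derivatives to (x, y, z) derivatives.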
        actx_tp = TensorProductArrayContext(
            actx.queue,
            allocator=actx.allocator,
            force_device_scalars=actx._force_device_scalars)

        from modepy.tools import (
            reshape_array_for_tensor_product_space,
            unreshape_array_for_tensor_product_space)

        # reshape u to expose tensor product structure
        vec = reshape_array_for_tensor_product_space(grp.space, vec)
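        # NOTE (illustration, not part of the original diff): for a 2D
        # tensor-product element of degree p, this takes the flat DOF array of
        # shape (nelements, (p+1)**2) to (nelements, p+1, p+1); in 3D,
        # (nelements, (p+1)**3) becomes (nelements, p+1, p+1, p+1), one axis
        # per 1D factor of the tensor-product space.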

        # apply differentiation matrix to vec
        # check len(vec.shape) since shape is expected to be
        # (nelements, nnodes1d, nnodes1d)
        if len(vec.shape) == 3:
            specs = ["il,elj->eij",
                     "jl,eil->eij"]
        elif len(vec.shape) == 4:
            specs = ["il,eljk->eijk",
                     "jl,eilk->eijk",
                     "kl,eijl->eijk"]

Owner: Can you make this dimension-independent?

        else:
            specs = None
        assert specs is not None

Owner: Use assert only for invariants that you think you can prove.

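        # NOTE (illustration only, not part of the diff): one
        # dimension-independent way to build the same specs, in the spirit of
        # the reviewer's question above. "z" is used as the contraction index
        # so it cannot clash with the output indices:
        #
        #     dim = len(vec.shape) - 1
        #     out_idx = "ijklmn"[:dim]
        #     specs = [
        #         f"{out_idx[ax]}z,e{out_idx[:ax]}z{out_idx[ax + 1:]}->e{out_idx}"
        #         for ax in range(dim)]
        #
        # For dim == 2 this yields ["iz,ezj->eij", "jz,eiz->eij"], matching the
        # hard-coded specs above up to renaming the contraction index.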
        diff_mat = get_diff_mat(actx, grp, grp)
        grad = make_obj_array([
            actx_tp.einsum(
                spec,
                diff_mat,
                vec,
                arg_names=("diff_mat", "vec"),
                tagged=(FirstAxisIsElementsTag(),

Owner: FirstAxisIsElementsTag() is old-timey, should not use in new code.

                        OutputIsTensorProductDOFArrayOrdered()))
            for spec in specs
        ])

        # unreshape grad to apply geometric factors
        # NOTE: In a future version, do not reshape before application of
        # geometric factors. Can possibly "chain" the einsum. For example, the
        # simplicial case below has einsum with spec
        #     ("xrei,rij,ei->ei")
        # for the strong local gradient case
        grad = make_obj_array([
            unreshape_array_for_tensor_product_space(grp.space, grad[i])
            for i in range(grad.shape[0])
        ])

        # apply geometric factors to current grad
        # FIXME: using einsum spec ("xrei,xei->xei") throws error:
        # "Loopy does not directly support object arrays"
        grad = make_obj_array([
            actx_tp.einsum(
                "rei,ei->ei",
                ijm[i],
                grad[i],
                arg_names=("inv_jac_t", "vec"),
                tagged=(FirstAxisIsElementsTag(),))
            for i in range(grad.shape[0])
        ])

        return grad

    from meshmode.discretization.poly_element import \
        TensorProductElementGroupBase
    per_group_grads = [
        compute_tensor_product_grad(actx, in_grp, get_diff_mat, vec_i, ijm_i)
        if isinstance(in_grp, TensorProductElementGroupBase)

        # r for rst axis
        # x for xyz axis
        actx.einsum("xrej,rij,ej->xei" if metric_in_matvec else "xrei,rij,ej->xei",
            ijm_i,
            get_diff_mat(
                actx,
                out_element_group=out_grp,
                in_element_group=in_grp
            ),
            vec_i,
            arg_names=("inv_jac_t", "ref_stiffT_mat", "vec"),
            tagged=(FirstAxisIsElementsTag(),))
        else actx.einsum(

Owner: Factor this into a function also.

            "xrej,rij,ej->xei" if metric_in_matvec else "xrei,rij,ej->xei",
            ijm_i,
            get_diff_mat(
                actx,
                out_element_group=out_grp,
                in_element_group=in_grp
            ),
            vec_i,
            arg_names=("inv_jac_t", "ref_stiffT_mat", "vec"),
            tagged=(FirstAxisIsElementsTag(),))

        for out_grp, in_grp, vec_i, ijm_i in zip(
            out_discr.groups, in_discr.groups, vec,
            inv_jac_mat)]
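
For orientation (added summary, not part of the diff): in the simplicial branch above, the two specs differ only in where the metric factor enters. Writing the per-reference-axis operator returned by get_diff_mat (named ref_stiffT_mat in the call) as M[r, i, j] and the inverse-Jacobian factors as ijm:

    strong form, "xrei,rij,ej->xei":  out[x, e, i] = sum over r, j of ijm[x, r, e, i] * M[r, i, j] * u[e, j]
    weak form,   "xrej,rij,ej->xei":  out[x, e, i] = sum over r, j of ijm[x, r, e, j] * M[r, i, j] * u[e, j]

so with metric_in_matvec (the weak form) the metric is sampled at the column index j, inside the matrix-vector product, while the strong form applies it at the output node index i.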
Expand Down Expand Up @@ -259,16 +369,37 @@ def _reference_derivative_matrices(actx: ArrayContext,
# _reference_stiffness_transpose_matrices.
assert out_element_group is in_element_group

from meshmode.mesh import TensorProductElementGroup

    @keyed_memoize_in(
        actx, _reference_derivative_matrices,
        lambda grp: grp.discretization_key())
    def get_ref_derivative_mats(grp):
        from meshmode.discretization.poly_element import diff_matrices
        return actx.freeze(
            actx.tag_axis(
                1, DiscretizationDOFAxisTag(),
                actx.from_numpy(
                    np.asarray(diff_matrices(grp)))))

        from meshmode.discretization.poly_element import \
            TensorProductElementGroupBase
        if isinstance(grp, TensorProductElementGroupBase):
            import modepy as mp
            import numpy.linalg as la

            space1d = grp.space.bases[0]
            shape1d = grp.shape.bases[0]

            nodes1d = mp.edge_clustered_nodes_for_space(space1d, shape1d)
            basis1d = mp.basis_for_space(space1d, shape1d)

            vdm1d = mp.vandermonde(basis1d.functions, nodes1d)
            vdm_p1d = mp.vandermonde(basis1d.gradients, nodes1d)[0]

            return actx.freeze(actx.from_numpy(vdm_p1d @ la.inv(vdm1d)))
Owner:
  • Should use the "official" source of truth for nodes/basis, in the element group.
  • Should make a way to get the "official" 1D nodes.

Contributor Author: possibly fixed in meshmode PR 384


        else:
            from meshmode.discretization.poly_element import diff_matrices
            return actx.freeze(
                actx.tag_axis(
                    1, DiscretizationDOFAxisTag(),
                    actx.from_numpy(
                        np.asarray(diff_matrices(grp)))))

    return get_ref_derivative_mats(out_element_group)
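
For context, a minimal standalone sketch of the 1D construction above: it builds D = V' V^{-1}, which differentiates polynomials in the space exactly at the nodes. The modepy calls mirror the ones used in the diff; the particular space, shape, and order choices below are mine, not taken from the PR.

import numpy as np
import numpy.linalg as la
import modepy as mp

# 1D reference interval, polynomials of degree <= 3
shape1d = mp.Simplex(1)
space1d = mp.PN(1, 3)

nodes1d = mp.edge_clustered_nodes_for_space(space1d, shape1d)
basis1d = mp.basis_for_space(space1d, shape1d)

vdm1d = mp.vandermonde(basis1d.functions, nodes1d)
vdm_p1d = mp.vandermonde(basis1d.gradients, nodes1d)[0]
diff_1d = vdm_p1d @ la.inv(vdm1d)   # nodal 1D differentiation matrix

u = nodes1d[0]**3                   # u(x) = x**3 lies in the space
assert np.allclose(diff_1d @ u, 3*nodes1d[0]**2)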


test/test_op.py: 76 changes (76 additions, 0 deletions)

@@ -32,6 +32,7 @@

import pytest

from grudge.discretization import make_discretization_collection
from grudge.array_context import PytestPyOpenCLArrayContextFactory
from arraycontext import pytest_generate_tests_for_array_contexts
pytest_generate_tests = pytest_generate_tests_for_array_contexts(
@@ -159,6 +160,81 @@ def get_flux(u_tpair):
    assert (eoc_rec.order_estimate() >= order - 0.5
            or eoc_rec.max_error() < 1e-11)


@pytest.mark.parametrize("form", ["strong"])
@pytest.mark.parametrize("dim", [2, 3])
@pytest.mark.parametrize("order", [2, 3])
@pytest.mark.parametrize(("vectorize", "nested"), [
    (False, False)
])
def test_tensor_product_gradient(actx_factory, form, dim, order, vectorize,
                                 nested, visualize=False):
    """A "one-dimensional tensor product element" does not make sense, so the
    one-dimensional case is excluded from this test.
    """
    actx = actx_factory()
    from pytools.convergence import EOCRecorder
    eoc_rec = EOCRecorder()

    from meshmode.mesh import TensorProductElementGroup
    from meshmode.discretization.poly_element import \
        LegendreGaussLobattoTensorProductGroupFactory as LGL
    for n in [4, 6, 8]:
        mesh = mgen.generate_regular_rect_mesh(
            a=(-1,)*dim,
            b=(1,)*dim,
            nelements_per_axis=(n,)*dim,
            group_cls=TensorProductElementGroup)

        import grudge.dof_desc as dd
        dcoll = make_discretization_collection(
            actx,
            mesh,
            discr_tag_to_group_factory={
                dd.DISCR_TAG_BASE: LGL(order)})

        def f(x):
            if dim == 2:
                ret = actx.np.cos(np.pi*x[0]) + actx.np.sin(np.pi*x[1])
            elif dim == 3:
                ret = actx.np.cos(np.pi*x[0]) + actx.np.sin(np.pi*x[1]) \
                    + actx.np.sin(np.pi*x[2])
            else:
                ret = None
            assert ret is not None

            return ret

        def grad_f(x):
            ret = make_obj_array([dcoll.zeros(actx) for _ in range(dim)])

            if dim == 2:
                ret[0] = -np.pi*actx.np.sin(np.pi*x[0])
                ret[1] = np.pi*actx.np.cos(np.pi*x[1])
            elif dim == 3:
                ret[0] = -np.pi*actx.np.sin(np.pi*x[0])
                ret[1] = np.pi*actx.np.cos(np.pi*x[1])
                ret[2] = np.pi*actx.np.cos(np.pi*x[2])

            return ret

        x = actx.thaw(dcoll.nodes())
        u = f(x)
        ref_grad = grad_f(x)
        grad = op.local_grad(dcoll, u)

        rel_linf_error = actx.to_numpy(op.norm(dcoll, ref_grad - grad, np.inf) /
                                       op.norm(dcoll, ref_grad, np.inf))
        eoc_rec.add_data_point(1./n, rel_linf_error)

    print("L^inf error:")
    print(eoc_rec)
    assert (eoc_rec.order_estimate() >= order - 0.5 or
            eoc_rec.max_error() < 1e-11)

# }}}

