
Tensor product differentiation operators #313

Conversation

@a-alveyblanc (Contributor) commented Aug 1, 2023

This PR seeks to special-case the local gradient, divergence, and derivative operators for tensor product finite elements.

  • Single axis derivatives?

grudge/op.py Outdated


from grudge.array_context import PyOpenCLArrayContext
class TensorProductArrayContext(PyOpenCLArrayContext):
Owner:

Make this part of the "normal" grudge array context.

grudge/op.py Outdated
# NOTE: Will possibly be removed in a future version of tensor product operator
# development since (I think) it is not entirely necessary
from pytools.tag import Tag
class OutputIsTensorProductDOFArrayOrdered(Tag):
Owner:

Move this to somewhere near the array context.

grudge/op.py Outdated
from grudge.array_context import PyOpenCLArrayContext
class TensorProductArrayContext(PyOpenCLArrayContext):
    def transform_loopy_program(self, t_unit):
        if len(t_unit.callables_table) == 1:
Owner:

Could you do this for all callables, not just when there is exactly one?
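
A minimal sketch of what applying the rewrite to every kernel in the translation unit could look like, assuming loopy's CallableKernel/subkernel interface; the rewrite_args helper stands in for the argument fixup in the snippet above and is hypothetical:

```python
import loopy as lp


def transform_all_kernels(t_unit, rewrite_args):
    # Apply the same argument rewrite to every callable kernel in the
    # translation unit, not only to the single-callable case.
    for clbl in t_unit.callables_table.values():
        if isinstance(clbl, lp.CallableKernel):
            knl = clbl.subkernel
            knl = knl.copy(args=rewrite_args(knl.args))
            t_unit = t_unit.with_kernel(knl)
    return t_unit
```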

grudge/op.py Outdated
knl = knl.copy(args=new_args)
t_unit = t_unit.with_kernel(knl)

return super().transform_loopy_program(t_unit)
Owner:

This is also going to have a lazy-evaluation (pytato) counterpart to deal with.

grudge/op.py Outdated
Comment on lines 258 to 264
if len(vec.shape) == 3:
    specs = ["il,elj->eij",
             "jl,eil->eij"]
elif len(vec.shape) == 4:
    specs = ["il,eljk->eijk",
             "jl,eilk->eijk",
             "kl,eijl->eijk"]
Owner:

Can you make this dimension-independent?
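
One way to make this dimension-independent is to generate the per-axis einsum specs from dim = len(vec.shape) - 1. A sketch (using "z" as the contraction index so it cannot clash with the axis letters):

```python
def make_axis_diff_specs(dim):
    # One spec per reference axis: contract the 1D differentiation matrix
    # with that axis, e.g. "iz,ezjk->eijk" for axis 0 in 3D.
    axes = "ijklmn"[:dim]
    specs = []
    for iaxis in range(dim):
        out = "e" + axes
        inp = "e" + axes[:iaxis] + "z" + axes[iaxis + 1:]
        mat = axes[iaxis] + "z"
        specs.append(f"{mat},{inp}->{out}")
    return specs
```

For dim = 2 and dim = 3 this reproduces the hard-coded lists above, up to the renamed contraction index.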

grudge/op.py Outdated
"kl,eijl->eijk"]
else:
specs = None
assert specs is not None
Owner:

Use assert only for invariants that you think you can prove.
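
For example, an unsupported input shape here is an error condition rather than a provable invariant, so an explicit exception (a sketch, reusing the names from the snippet above) would be more appropriate than the assert:

```python
dim = len(vec.shape) - 1
if dim not in (2, 3):
    raise ValueError(
        "tensor-product differentiation is only implemented for 2D and 3D, "
        f"got a {dim}-dimensional DOF array")
```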

grudge/op.py Outdated
diff_mat,
vec,
arg_names=("diff_mat", "vec"),
tagged=(FirstAxisIsElementsTag(),
Owner:

FirstAxisIsElementsTag() is outdated and should not be used in new code.
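
A hedged sketch of one more modern alternative: tag the axes of the einsum result with meshmode's per-axis metadata tags instead of FirstAxisIsElementsTag. The spec and argument names simply mirror the 2D call above and are illustrative:

```python
from meshmode.transform_metadata import (
    DiscretizationDOFAxisTag, DiscretizationElementAxisTag)


def apply_diff_2d(actx, diff_mat, vec):
    # Compute the derivative, then attach per-axis metadata: axis 0 is the
    # element axis, the remaining axes are DOF axes.
    result = actx.einsum("il,elj->eij", diff_mat, vec,
                         arg_names=("diff_mat", "vec"))
    result = actx.tag_axis(0, DiscretizationElementAxisTag(), result)
    for iaxis in range(1, len(result.shape)):
        result = actx.tag_axis(iaxis, DiscretizationDOFAxisTag(), result)
    return result
```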

grudge/op.py Outdated
# NOTE: Will possibly be removed in a future version of tensor product operator
# development since (I think) it is not entirely necessary
from pytools.tag import Tag
class OutputIsTensorProductDOFArrayOrdered(Tag):
Owner:

The "I'm a tensor product" tags should probably have axis granularity (instead of whole-array).

grudge/op.py Outdated
Comment on lines 385 to 394
space1d = grp.space.bases[0]
shape1d = grp.shape.bases[0]

nodes1d = mp.edge_clustered_nodes_for_space(space1d, shape1d)
basis1d = mp.basis_for_space(space1d, shape1d)

vdm1d = mp.vandermonde(basis1d.functions, nodes1d)
vdm_p1d = mp.vandermonde(basis1d.gradients, nodes1d)[0]

return actx.freeze(actx.from_numpy(vdm_p1d @ la.inv(vdm1d)))
Owner:

  • Should use the "official" source of truth for nodes/basis, in the element group.
  • Should make a way to get the "official" 1D nodes.

Contributor Author:

possibly fixed in meshmode PR 384


# {{{ tensor product div

def compute_tensor_product_div(actx, grp, diff_mat, vec, ijm):
Collaborator:

The result of the simplicial version of these operators lives on the base discretization. The TPE versions produce results that live on the quadrature discretization, preventing overintegration from working as expected.

Contributor Author:

Are you referring to the use of in_grp and out_grp between the two versions?

Collaborator:

> Are you referring to the use of in_grp and out_grp between the two versions?

That's definitely a code difference I can see, but I'm actually referring to the observation that if you pass in a vec that lives on the quadrature discretization, this routine produces a result on the quadrature discretization instead of on the base discretization, as the simplicial version does.

Contributor Author (@a-alveyblanc, Jun 4, 2024):

> if you send in vec that is on the quadrature discretization, this routine produces a result that lives on the quadrature discretization instead of one that lives on the base discretization

I think this happens because the routine only considers in_grp. I'm not sure why I wrote it to use only in_grp, but I remember some odd behavior (or outright errors) when grabbing the 1D operators.

Collaborator (@MTCam, Jun 4, 2024):

There seems to be a TPE case in this routine only when in_group == out_group:

https://github.com/a-alveyblanc/grudge/blob/a662445c43d38d97324cb9744d144e0fa9a47a42/grudge/op.py#L755

Contributor Author (@a-alveyblanc, Jun 4, 2024):

I remember writing it that way, but I can't remember exactly why. I'll look into it a bit and report back.
