Tensor product differentiation operators #313
Conversation
grudge/op.py
Outdated

```python
from grudge.array_context import PyOpenCLArrayContext

class TensorProductArrayContext(PyOpenCLArrayContext):
```
Make this part of the "normal" grudge array context.
grudge/op.py
Outdated

```python
# NOTE: Will possibly be removed in a future version of tensor product operator
# development since (I think) it is not entirely necessary
from pytools.tag import Tag

class OutputIsTensorProductDOFArrayOrdered(Tag):
```
Move this to somewhere near the array context.
grudge/op.py
Outdated

```python
from grudge.array_context import PyOpenCLArrayContext

class TensorProductArrayContext(PyOpenCLArrayContext):
    def transform_loopy_program(self, t_unit):
        if len(t_unit.callables_table) == 1:
```
Could do this for all?
grudge/op.py
Outdated

```python
            knl = knl.copy(args=new_args)
            t_unit = t_unit.with_kernel(knl)

        return super().transform_loopy_program(t_unit)
```
This is also going to have a lazy tentacle.
grudge/op.py
Outdated

```python
if len(vec.shape) == 3:
    specs = ["il,elj->eij",
             "jl,eil->eij"]
elif len(vec.shape) == 4:
    specs = ["il,eljk->eijk",
             "jl,eilk->eijk",
             "kl,eijl->eijk"]
```
Can you make this dimension-independent?
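A dimension-independent version is possible by generating the einsum specs from the dimension instead of hard-coding them. A minimal sketch (the helper name `make_diff_specs` is hypothetical, not part of the PR):

```python
def make_diff_specs(dim):
    """Build one einsum spec per reference axis, generalizing the
    hard-coded 2D/3D lists above (hypothetical helper)."""
    axes = "ijk"[:dim]  # output DOF axes; extend this string for dim > 3
    specs = []
    for d in range(dim):
        # contract the 1D differentiation matrix over DOF axis d;
        # "l" is the summation index, "e" the element axis
        operand = "e" + axes[:d] + "l" + axes[d + 1:]
        specs.append(f"{axes[d]}l,{operand}->e{axes}")
    return specs
```

For `dim=2` and `dim=3` this reproduces the two lists in the snippet above.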
grudge/op.py
Outdated

```python
             "kl,eijl->eijk"]
else:
    specs = None

assert specs is not None
```
Use `assert` only for invariants that you think you can prove.
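Since an unsupported array rank here is an input error rather than a provable invariant, one way to follow this advice is to raise a descriptive exception instead of falling through to `specs = None`. A sketch (the function name is illustrative, not from the PR):

```python
def diff_specs_for_shape(vec_shape):
    """Return the einsum specs for a (nelements, ndofs, ...) array,
    raising an explicit error for unsupported ranks (hypothetical helper)."""
    if len(vec_shape) == 3:
        return ["il,elj->eij", "jl,eil->eij"]
    elif len(vec_shape) == 4:
        return ["il,eljk->eijk", "jl,eilk->eijk", "kl,eijl->eijk"]
    else:
        raise ValueError(
            f"unsupported array rank: expected 3 or 4 axes, "
            f"got {len(vec_shape)}")
```

This keeps `assert` for genuine internal invariants while giving callers an actionable error message.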
grudge/op.py
Outdated

```python
    diff_mat,
    vec,
    arg_names=("diff_mat", "vec"),
    tagged=(FirstAxisIsElementsTag(),
```
`FirstAxisIsElementsTag()` is old-timey; it should not be used in new code.
grudge/op.py
Outdated

```python
# NOTE: Will possibly be removed in a future version of tensor product operator
# development since (I think) it is not entirely necessary
from pytools.tag import Tag

class OutputIsTensorProductDOFArrayOrdered(Tag):
```
The "I'm a tensor product" tags should probably have axis granularity (instead of whole-array).
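One possible shape for such an axis-granularity tag, as a sketch: the class name and the `iaxis` field are assumptions, and a fallback base class is defined so the snippet runs even without pytools installed.

```python
from dataclasses import dataclass

try:
    from pytools.tag import Tag
except ImportError:  # fallback so this sketch is runnable standalone
    class Tag:
        pass

@dataclass(frozen=True)
class AxisIsTensorProductDOFOrdered(Tag):
    """Hypothetical tag marking a single array axis as carrying
    tensor-product DOF ordering, instead of tagging the whole array."""
    iaxis: int
```

A consumer could then tag, e.g., only the DOF axis: `tagged=(AxisIsTensorProductDOFOrdered(iaxis=1),)`.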
grudge/op.py
Outdated

```python
space1d = grp.space.bases[0]
shape1d = grp.shape.bases[0]

nodes1d = mp.edge_clustered_nodes_for_space(space1d, shape1d)
basis1d = mp.basis_for_space(space1d, shape1d)

vdm1d = mp.vandermonde(basis1d.functions, nodes1d)
vdm_p1d = mp.vandermonde(basis1d.gradients, nodes1d)[0]

return actx.freeze(actx.from_numpy(vdm_p1d @ la.inv(vdm1d)))
```
- Should use the "official" source of truth for nodes/basis, in the element group.
- Should make a way to get the "official" 1D nodes.
possibly fixed in meshmode PR 384
…ncluded convergence tests
```python
# {{{ tensor product div

def compute_tensor_product_div(actx, grp, diff_mat, vec, ijm):
```
The result of the simplicial version of these operators lives on the base discretization. The TPE versions produce results that live on the quadrature discretization, preventing overintegration from working as expected.
Are you referring to the use of `in_grp` and `out_grp` between the two versions?
> Are you referring to the use of `in_grp` and `out_grp` between the two versions?

That's definitely a code difference I can see, but I'm actually referring to the observation that if you send in a `vec` that is on the quadrature discretization, this routine produces a result that lives on the quadrature discretization instead of one that lives on the base discretization, as the simplicial version does.
> if you send in vec that is on the quadrature discretization, this routine produces a result that lives on the quadrature discretization instead of one that lives on the base discretization

I think the reason this happens is that the routine only considers `in_grp`. I'm not sure why I wrote it to only use `in_grp`, but I think I remember some odd behavior (or outright errors) when grabbing the 1D operators.
There seems to be a TPE case in this routine only when `in_group == out_group`:
I remember writing it that way, but I can't remember exactly why. I'll look into it a bit and report back.
This PR seeks to special-case local gradient, divergence, and derivative operators for tensor product finite elements.