
Purpose of tensor_jacobian_product in adjoint optimization examples #2098

Answered by smartalecH
Andeloth asked this question in Q&A

What is the purpose of using tensor_jacobian_product here versus just populating it with the gradient directly?

Chain rule.

The adjoint solver doesn't know what other operations you perform on your design variables before you provide them to the solver itself. But your optimization algorithm needs to know the gradient w.r.t. the actual design parameters.

For example, if you filter and threshold your design variables ($\rho$), s.t. $\bar{\rho}=m(\rho)$ (where $m()$ is the final mapping function), the adjoint solver only ever sees $\bar{\rho}$, which means the gradient it provides is $\frac{\partial f}{\partial \bar{\rho}}$ when you really want $\frac{\partial f}{\partial \rho}$.
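Written out, the missing step is just the chain rule, i.e. a vector-Jacobian product through the mapping:

$$\frac{\partial f}{\partial \rho} = \left(\frac{\partial \bar{\rho}}{\partial \rho}\right)^{\top} \frac{\partial f}{\partial \bar{\rho}}$$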

So tensor_jacobian_product backpropagates $\frac{\partial f}{\partial \bar{\rho}}$ through $m()$ to recover $\frac{\partial f}{\partial \rho}$, i.e. it applies the chain rule for you.
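As a minimal sketch of that backpropagation step, here is autograd's tensor_jacobian_product (the function used in the Meep adjoint examples) applied to a toy tanh-projection mapping standing in for the full filter-and-threshold $m()$; the array sizes and values are placeholders:

```python
import autograd.numpy as npa
from autograd import tensor_jacobian_product

def mapping(rho, eta=0.5, beta=8.0):
    # Toy m(rho): a smoothed-Heaviside (tanh) projection standing in for the
    # full filter + threshold mapping. Any chain of autograd.numpy operations
    # can be backpropagated through in the same way.
    return (npa.tanh(beta * eta) + npa.tanh(beta * (rho - eta))) / (
        npa.tanh(beta * eta) + npa.tanh(beta * (1.0 - eta))
    )

rho = npa.linspace(0.0, 1.0, 10)  # design parameters the optimizer actually updates
dJ_drho_bar = npa.ones(10)        # placeholder for dJ/d(rho_bar) from the adjoint solver

# Backpropagate dJ/d(rho_bar) through m() to get dJ/d(rho), the gradient
# the optimization algorithm needs.
dJ_drho = tensor_jacobian_product(mapping, 0)(rho, dJ_drho_bar)
```

In an actual run, dJ_drho_bar would be the gradient the adjoint solver returns for $\bar{\rho}$, and dJ_drho is what you hand back to the optimizer.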
