[documentation] Explain coloring and detector kwargs in sparse backends #249
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Coverage Diff:

|          | main   | #249   | +/-    |
|----------|--------|--------|--------|
| Coverage | 95.48% | 95.44% | -0.04% |
| Files    | 13     | 13     |        |
| Lines    | 1439   | 1427   | -12    |
| Hits     | 1374   | 1362   | -12    |
| Misses   | 65     | 65     |        |

☔ View full report in Codecov by Sentry.
Looks good to me, but in general it might be useful to add more `@ref` and `@extref` cross-references (the latter from DocumenterInterLinks.jl).
docs/src/mixed.md (Outdated)

@@ -1,4 +1,4 @@
-# Build a hybrid NLPModel
+# Build an hybrid NLPModel
nope, it was correct before
```
H = hess(nlp, x)
```
The available backends for sparse derivatives (`SparseADJacobian`, `SparseADHessian` and `SparseReverseADHessian`) have keyword arguments `detector` and `coloring` to specify the sparsity pattern detector and the coloring algorithm, respectively.
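As a hedged illustration of how these keyword arguments might be used, here is a minimal sketch. It assumes that `ADNLPModel` forwards `detector` and `coloring` to the chosen sparse backend, and that the values come from SparseConnectivityTracer.jl and SparseMatrixColorings.jl; the exact constructor signature and defaults may differ from what is documented in this PR.

```julia
# Sketch only: assumes `detector` and `coloring` are forwarded to the sparse
# Hessian backend, as described in the paragraph above.
using ADNLPModels, NLPModels
using SparseConnectivityTracer: TracerSparsityDetector
using SparseMatrixColorings: GreedyColoringAlgorithm

# Extended Rosenbrock objective: its Hessian has a tridiagonal sparsity pattern.
f(x) = sum((x[i] - 1)^2 + 100 * (x[i + 1] - x[i]^2)^2 for i in 1:(length(x) - 1))
x0 = fill(-1.2, 10)

# Build the model with an explicit sparse Hessian backend and a custom
# sparsity pattern detector / coloring algorithm (keyword names from above).
nlp = ADNLPModel(f, x0;
    hessian_backend = ADNLPModels.SparseADHessian,
    detector = TracerSparsityDetector(),
    coloring = GreedyColoringAlgorithm(),
)

H = hess(nlp, x0)  # sparse Hessian assembled with the chosen pattern and coloring
```

If a given release handles backend options differently, the same `detector` and `coloring` arguments can presumably be passed when constructing the backend directly; the point is only that both the sparsity pattern detection and the coloring step are pluggable.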
It's rather weird to call it `SparseADHessian` versus `SparseReverseADHessian`; better names would be forward-over-forward vs. forward-over-reverse, if I understand correctly. But with a switch to DI those distinctions will disappear.
I didn't know that Tangi implemented `SparseReverseADHessian`. The name is not good; we should switch to DI.jl asap!
Merged c1b0919 into JuliaSmoothOptimizers:main
Related to #247.

@gdalle I renamed the keyword argument for the coloring algorithm, but I believe it's safe to do so. Since this option was not documented previously, I don't expect any users to rely on the `alg` option.