
[documentation] Explain coloring and detector kwargs in sparse backends #249

Merged — 3 commits merged into main on Jun 18, 2024

Conversation

@amontoison (Member) commented Jun 17, 2024

Related to #247
@gdalle

I renamed the keyword argument for the coloring algorithm, but I believe it's safe to do so.
Since this option was not previously documented, I don't expect any users to rely on the `alg` option.

github-actions bot (Contributor) commented Jun 17, 2024

Package name (latest / stable breakage badges):
CaNNOLeS.jl
DCISolver.jl
DerivativeFreeSolvers.jl
JSOSolvers.jl
NLPModelsIpopt.jl
OptimalControl.jl
OptimizationProblems.jl
Percival.jl
QuadraticModels.jl
SolverBenchmark.jl
SolverTools.jl

codecov bot commented Jun 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.44%. Comparing base (ed9756e) to head (6619c1b).
Report is 6 commits behind head on main.

Current head 6619c1b differs from pull request most recent head 1371bc7

Please upload reports for the commit 1371bc7 to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #249      +/-   ##
==========================================
- Coverage   95.48%   95.44%   -0.04%     
==========================================
  Files          13       13              
  Lines        1439     1427      -12     
==========================================
- Hits         1374     1362      -12     
  Misses         65       65              


@gdalle (Collaborator) left a comment:

Looks good to me, but in general it might be useful to add more `(@ref)` and `(@extref)` links (the latter from DocumenterInterLinks.jl).

```diff
@@ -1,4 +1,4 @@
-# Build a hybrid NLPModel
+# Build an hybrid NLPModel
```

@gdalle (Collaborator): nope, it was correct before

```julia
H = hess(nlp, x)
```

The available backends for sparse derivatives (`SparseADJacobian`, `SparseADHessian` and `SparseReverseADHessian`) have keyword arguments `detector` and `coloring` to specify the sparsity pattern detector and the coloring algorithm, respectively.
@gdalle (Collaborator):
It's rather weird to call it `SparseADHessian` versus `SparseReverseADHessian`; better names would be forward-over-forward vs. forward-over-reverse, if I understand correctly. But with a switch to DI, those distinctions will disappear.

@amontoison (Member, Author):

I didn't know that Tangi implemented `SparseReverseADHessian`.
The name is not good; we should switch to DI.jl ASAP!
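To illustrate the `detector` and `coloring` keyword arguments documented in this PR, here is a minimal sketch. It assumes these kwargs are forwarded from the `ADNLPModel` constructor to the sparse Hessian backend, and that the detector comes from SparseConnectivityTracer.jl and the coloring algorithm from SparseMatrixColorings.jl; the exact constructor signature and default values are assumptions, not taken from this page.

```julia
using ADNLPModels, NLPModels
using SparseConnectivityTracer: TracerSparsityDetector
using SparseMatrixColorings: GreedyColoringAlgorithm

# Chained Rosenbrock objective: its Hessian is banded, hence sparse.
f(x) = sum((x[i] - 1)^2 + 100 * (x[i+1] - x[i]^2)^2 for i in 1:length(x)-1)
x0 = fill(-1.2, 10)

# Hypothetical call: `detector` picks the sparsity pattern detector,
# `coloring` picks the coloring algorithm used to compress Hessian
# evaluations (kwarg names from this PR; forwarding is an assumption).
nlp = ADNLPModel(f, x0;
  detector = TracerSparsityDetector(),
  coloring = GreedyColoringAlgorithm(),
)

# The sparse Hessian is then assembled via the chosen detector/coloring.
H = hess(nlp, x0)
```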

docs/src/mixed.md — outdated, resolved
@amontoison merged commit c1b0919 into JuliaSmoothOptimizers:main on Jun 18, 2024
26 of 32 checks passed
@amontoison deleted the doc_smt branch on June 18, 2024 04:44