Compatibility issue with OrdinaryDiffEq due to sparsity analysis in v0.8.3 #280

abavoil opened this issue Jul 23, 2024 · 18 comments

abavoil commented Jul 23, 2024

Hello @amontoison @gdalle @tmigot,

First, thank you for your work on this project! I recently updated to v0.8.3 and encountered an issue with my code. I use OrdinaryDiffEq to solve an ODE within the constraint function of my problem. Before v0.8, no sparsity analysis was done by default, and everything worked smoothly. Since v0.8, however, the default sparsity analysis seems to cause OrdinaryDiffEq's solve function to throw an error. This behaviour seems to be the cause of #247 as well.

Here is a comparison of my code before and after the update:

Before:

model = ADNLPModel!(x -> obj(x, p), x, lb, ub, (c, x) -> cons!(c, x, p), lc, uc)

After:

my_backend = copy(ADNLPModels.predefined_backend[:default])
my_backend[:hessian_residual_backend]  = ADNLPModels.ForwardDiffADHessian
my_backend[:hessian_backend]           = ADNLPModels.ForwardDiffADHessian
my_backend[:jacobian_residual_backend] = ADNLPModels.ForwardDiffADJacobian
my_backend[:jacobian_backend]          = ADNLPModels.ForwardDiffADJacobian

model = ADNLPModel!(x -> obj(x, p), x, lb, ub, (c, x) -> cons!(c, x, p), lc, uc; my_backend...)

I understand that enabling sparsity by default can improve performance in many cases, but it seems to cause issues in this particular scenario. Would it be possible to provide an option to disable sparsity analysis, or is there another recommended approach to maintain compatibility?

Thank you for your time and assistance!

ping @jbcaillau


gdalle commented Jul 23, 2024

Hi! Can you provide a complete MWE with all necessary packages, problem definition and error stacktrace?


amontoison commented Jul 23, 2024

@abavoil
I can quickly add an option backend = :dense so that you can create an ADNLPModel with the previous default behavior.

model = ADNLPModel!(x -> obj(x, p), x, lb, ub, (c, x) -> cons!(c, x, p), lc, uc, backend = :dense)


abavoil commented Jul 24, 2024

Here is an MWE for OrdinaryDiffEq. I use the optimisation variable in different places, and I get a few different errors as a result.

using Pkg
Pkg.activate(; temp=true)
Pkg.add(["OrdinaryDiffEq", "ADNLPModels"])

using OrdinaryDiffEq, ADNLPModels

"""x[1] used as a global variable in the dynamics"""
function cons1!(out, x)
    solve(ODEProblem((u, p, t) -> [x[1]], [0.], 1))
    return out .= zero(x)
end

"""x[1] used in the initial condition"""
function cons2!(out, x)
    solve(ODEProblem((u, p, t) -> zero(u), [x[1]], 1))
    return out .= zero(x)
end

"""x[1] used in the tspan"""
function cons3!(out, x)
    solve(ODEProblem((u, p, t) -> zero(u), [0.], x[1]))
    return out .= zero(x)
end

"""x[1] used in the parameters"""
function cons4!(out, x)
    solve(ODEProblem((u, p, t) -> zero(u), [0.], 1, x[1]))
    return out .= zero(x)
end

for c! in [cons1!, cons2!, cons3!, cons4!]
    c!(zeros(1), zeros(1))  # OK
    try ADNLPModel!(x ->  zeros(1), zeros(1), c!, zeros(1), zeros(1)) catch e println(e) end
end


"""
All at once:
 - x[1] used as a global variable in the dynamics
 - x[2] used in the initial condition
 - x[3] used in the tspan
 - x[4] used in the parameters"""
function cons!(out, x)
    solve(ODEProblem((u, p, t) -> p .* x[1] .* one.(u), [x[2]], x[3], x[4]))
    return out .= zero(x)
end

cons!(zeros(4), zeros(4))  # OK
try ADNLPModel!(x ->  zeros(1), zeros(4), cons!, zeros(4), zeros(4)) catch e println(e) end

Prior to 0.8, these examples raised no error and behaved as expected.

Environment:
  • Output of using Pkg; Pkg.status()
Status `/private/var/folders/d0/h4w4kz912fg8ktj96vh4rbn1m20lhk/T/jl_nRC0qe/Project.toml`
⌃ [54578032] ADNLPModels v0.7.2
  [1dea7af3] OrdinaryDiffEq v6.86.0
Info Packages marked with ⌃ have new versions available and may be upgradable.
  • Output of using Pkg; Pkg.status(; mode = PKGMODE_MANIFEST)
Status `/private/var/folders/d0/h4w4kz912fg8ktj96vh4rbn1m20lhk/T/jl_nRC0qe/Manifest.toml`
⌃ [54578032] ADNLPModels v0.7.2
  [47edcb42] ADTypes v1.6.1
  [7d9f7c33] Accessors v0.1.37
  [79e6a3ab] Adapt v4.0.4
  [ec485272] ArnoldiMethod v0.4.0
  [4fba245c] ArrayInterface v7.12.0
  [4c555306] ArrayLayouts v1.10.2
  [62783981] BitTwiddlingConvenienceFunctions v0.1.6
  [2a0fbf3d] CPUSummary v0.2.6
  [d360d2e6] ChainRulesCore v1.24.0
  [fb6a15b2] CloseOpenIntervals v0.1.13
⌅ [ffa27691] ColPack v0.3.0
  [38540f10] CommonSolve v0.2.4
  [bbf7d656] CommonSubexpressions v0.3.0
  [f70d9fcc] CommonWorldInvalidations v1.0.0
  [34da2185] Compat v4.15.0
  [a33af91c] CompositionsBase v0.1.2
  [2569d6c7] ConcreteStructs v0.2.3
  [187b0558] ConstructionBase v1.5.6
  [adafc99b] CpuId v0.3.1
  [9a962f9c] DataAPI v1.16.0
  [864edb3b] DataStructures v0.18.20
  [e2d170a0] DataValueInterfaces v1.0.0
  [2b5f629d] DiffEqBase v6.151.5
  [163ba53b] DiffResults v1.1.0
  [b552c78f] DiffRules v1.15.1
  [a0c0ee7d] DifferentiationInterface v0.5.9
  [ffbed154] DocStringExtensions v0.9.3
  [4e289a0a] EnumX v1.0.4
  [f151be2c] EnzymeCore v0.7.7
  [d4d017d3] ExponentialUtilities v1.26.1
  [e2ba6199] ExprTools v0.1.10
⌅ [6b7a57c9] Expronicon v0.8.5
  [7034ab61] FastBroadcast v0.3.4
  [9aa1b823] FastClosures v0.3.2
  [29a986be] FastLapackInterface v2.0.4
  [1a297f60] FillArrays v1.11.0
  [6a86dc24] FiniteDiff v2.23.1
  [f6369f11] ForwardDiff v0.10.36
  [069b7b12] FunctionWrappers v1.1.3
  [77dc65aa] FunctionWrappersWrappers v0.1.3
  [46192b85] GPUArraysCore v0.1.6
  [c145ed77] GenericSchur v0.5.4
  [86223c79] Graphs v1.11.2
  [3e5b6fbb] HostCPUFeatures v0.1.17
  [615f187c] IfElse v0.1.1
  [d25df0c9] Inflate v0.1.5
  [3587e190] InverseFunctions v0.1.15
  [92d709cd] IrrationalConstants v0.2.2
  [82899510] IteratorInterfaceExtensions v1.0.0
  [692b3bcd] JLLWrappers v1.5.0
  [ef3ab10e] KLU v0.6.0
  [ba0b0d4f] Krylov v0.9.6
  [10f19ff3] LayoutPointers v0.1.17
  [5078a376] LazyArrays v2.1.9
  [d3d80556] LineSearches v7.2.0
  [5c8ed15e] LinearOperators v2.8.0
  [7ed4a6bd] LinearSolve v2.30.2
  [2ab3a3ac] LogExpFunctions v0.3.28
  [bdcacae8] LoopVectorization v0.12.171
  [d8e11817] MLStyle v0.4.17
  [1914dd2f] MacroTools v0.5.13
  [d125e4d3] ManualMemory v0.1.8
  [bb5d69b7] MaybeInplace v0.1.3
  [46d2c3a1] MuladdMacro v0.2.4
  [a4795742] NLPModels v0.21.1
  [d41bc354] NLSolversBase v7.8.3
  [77ba4419] NaNMath v1.0.2
  [8913a72c] NonlinearSolve v3.13.1
  [6fe1bfb0] OffsetArrays v1.14.1
  [bac558e1] OrderedCollections v1.6.3
  [1dea7af3] OrdinaryDiffEq v6.86.0
  [65ce6f38] PackageExtensionCompat v1.0.2
  [d96e819e] Parameters v0.12.3
  [f517fe37] Polyester v0.7.15
  [1d0040c9] PolyesterWeave v0.2.2
  [d236fae5] PreallocationTools v0.4.22
  [aea7be01] PrecompileTools v1.2.1
  [21216c6a] Preferences v1.4.3
  [3cdcf5f2] RecipesBase v1.3.4
  [731186ca] RecursiveArrayTools v3.26.0
  [f2c3362d] RecursiveFactorization v0.2.23
  [189a3867] Reexport v1.2.2
  [ae029012] Requires v1.3.0
  [37e2e3b7] ReverseDiff v1.15.3
  [7e49a35a] RuntimeGeneratedFunctions v0.5.13
  [94e857df] SIMDTypes v0.1.0
  [476501e8] SLEEFPirates v0.6.43
  [0bca4576] SciMLBase v2.45.0
  [c0aeaf25] SciMLOperators v0.3.8
  [53ae85a6] SciMLStructures v1.4.1
  [efcf1570] Setfield v1.1.1
  [727e6d20] SimpleNonlinearSolve v1.11.0
  [699a6c99] SimpleTraits v0.9.4
  [ce78b400] SimpleUnPack v1.1.0
  [47a9eef4] SparseDiffTools v2.19.0
  [0a514795] SparseMatrixColorings v0.3.5
  [e56a9233] Sparspak v0.3.9
  [276daf66] SpecialFunctions v2.4.0
  [aedffcd0] Static v1.1.1
  [0d7ed370] StaticArrayInterface v1.5.1
  [90137ffa] StaticArrays v1.9.7
  [1e83bf80] StaticArraysCore v1.4.3
  [7792a7ef] StrideArraysCore v0.5.7
  [2efcf032] SymbolicIndexingInterface v0.3.26
  [3783bdb8] TableTraits v1.0.1
  [bd369af6] Tables v1.12.0
  [8290d209] ThreadingUtilities v0.5.2
  [a759f4b9] TimerOutputs v0.5.24
  [d5829a12] TriangularSolve v0.2.1
  [410a4b4d] Tricks v0.1.8
  [781d530d] TruncatedStacktraces v1.4.0
  [3a884ed6] UnPack v1.0.2
  [3d5dd08c] VectorizationBase v0.21.70
  [19fa3120] VertexSafeGraphs v0.2.0
⌅ [f218ff0c] ColPack_jll v0.3.0+0
  [1d5cc7b8] IntelOpenMP_jll v2024.2.0+0
  [856f044c] MKL_jll v2024.2.0+0
  [efe28fd5] OpenSpecFun_jll v0.5.5+0
  [1317d2d5] oneTBB_jll v2021.12.0+0
  [0dad84c5] ArgTools v1.1.1
  [56f22d72] Artifacts
  [2a0f44e3] Base64
  [ade2ca70] Dates
  [8ba89e20] Distributed
  [f43a241f] Downloads v1.6.0
  [7b1f6079] FileWatching
  [9fa8497b] Future
  [b77e0a4c] InteractiveUtils
  [4af54fe1] LazyArtifacts
  [b27032c2] LibCURL v0.6.4
  [76f85450] LibGit2
  [8f399da3] Libdl
  [37e2e46d] LinearAlgebra
  [56ddb016] Logging
  [d6f4376e] Markdown
  [a63ad114] Mmap
  [ca575930] NetworkOptions v1.2.0
  [44cfe95a] Pkg v1.10.0
  [de0858da] Printf
  [3fa0cd96] REPL
  [9a3f8284] Random
  [ea8e919c] SHA v0.7.0
  [9e88b42a] Serialization
  [1a1011a3] SharedArrays
  [6462fe0b] Sockets
  [2f01184e] SparseArrays v1.10.0
  [10745b16] Statistics v1.10.0
  [4607b0f0] SuiteSparse
  [fa267f1f] TOML v1.0.3
  [a4e569a6] Tar v1.10.0
  [8dfed614] Test
  [cf7118a7] UUIDs
  [4ec0a83e] Unicode
  [e66e0078] CompilerSupportLibraries_jll v1.1.1+0
  [deac9b47] LibCURL_jll v8.4.0+0
  [e37daf67] LibGit2_jll v1.6.4+0
  [29816b5a] LibSSH2_jll v1.11.0+1
  [c8ffd9c3] MbedTLS_jll v2.28.2+1
  [14a3606d] MozillaCACerts_jll v2023.1.10
  [4536629a] OpenBLAS_jll v0.3.23+4
  [05823500] OpenLibm_jll v0.8.1+2
  [bea87d4a] SuiteSparse_jll v7.2.1+1
  [83775a58] Zlib_jll v1.2.13+1
  [8e850b90] libblastrampoline_jll v5.8.0+1
  [8e850ede] nghttp2_jll v1.52.0+1
  [3f19e933] p7zip_jll v17.4.0+2
Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`


gdalle commented Jul 24, 2024

Thanks!

First of all, the objective function in ADNLPModel! should be scalar-valued, not vector-valued. Fixing that makes the cons4! case work. As for the others, here are the stacktraces:

julia> ADNLPModel!(x ->  zero(eltype(x)), zeros(1), cons1!, zeros(1), zeros(1))
ERROR: TypeError: in typeassert, expected Float64, got a value of type SparseConnectivityTracer.GradientTracer{BitSet}
Stacktrace:
  [1] setindex!(A::Vector{Float64}, x::SparseConnectivityTracer.GradientTracer{BitSet}, i1::Int64)
    @ Base ./array.jl:1021
  [2] _unsafe_copyto!(dest::Vector{…}, doffs::Int64, src::Vector{…}, soffs::Int64, n::Int64)
    @ Base ./array.jl:299
  [3] unsafe_copyto!
    @ ./array.jl:353 [inlined]
  [4] _copyto_impl!
    @ ./array.jl:376 [inlined]
  [5] copyto!
    @ ./array.jl:363 [inlined]
  [6] copyto!
    @ ./array.jl:385 [inlined]
  [7] copyto_axcheck!
    @ ./abstractarray.jl:1177 [inlined]
  [8] Vector{Float64}(x::Vector{SparseConnectivityTracer.GradientTracer{BitSet}})
    @ Base ./array.jl:673
  [9] convert
    @ ./array.jl:665 [inlined]
 [10] setproperty!
    @ ./Base.jl:40 [inlined]
 [11] initialize!(integrator::OrdinaryDiffEq.ODEIntegrator{…}, cache::OrdinaryDiffEq.Tsit5ConstantCache)
    @ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/kMujN/src/perform_step/low_order_rk_perform_step.jl:733
 [12] initialize!(integrator::OrdinaryDiffEq.ODEIntegrator{…}, cache::OrdinaryDiffEq.DefaultCache{…})
    @ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/kMujN/src/perform_step/composite_perform_step.jl:34
 [13] __init(prob::ODEProblem{…}, alg::CompositeAlgorithm{…}, timeseries_init::Tuple{}, ts_init::Tuple{}, ks_init::Tuple{}, recompile::Type{…}; saveat::Tuple{}, tstops::Tuple{}, d_discontinuities::Tuple{}, save_idxs::Nothing, save_everystep::Bool, save_on::Bool, save_start::Bool, save_end::Nothing, callback::Nothing, dense::Bool, calck::Bool, dt::Float64, dtmin::Float64, dtmax::Float64, force_dtmin::Bool, adaptive::Bool, gamma::Rational{…}, abstol::Nothing, reltol::Nothing, qmin::Rational{…}, qmax::Int64, qsteady_min::Int64, qsteady_max::Int64, beta1::Nothing, beta2::Nothing, qoldinit::Rational{…}, controller::Nothing, fullnormalize::Bool, failfactor::Int64, maxiters::Int64, internalnorm::typeof(DiffEqBase.ODE_DEFAULT_NORM), internalopnorm::typeof(LinearAlgebra.opnorm), isoutofdomain::typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), unstable_check::typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), verbose::Bool, timeseries_errors::Bool, dense_errors::Bool, advance_to_tstop::Bool, stop_at_next_tstop::Bool, initialize_save::Bool, progress::Bool, progress_steps::Int64, progress_name::String, progress_message::typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), progress_id::Symbol, userdata::Nothing, allow_extrapolation::Bool, initialize_integrator::Bool, alias_u0::Bool, alias_du0::Bool, initializealg::OrdinaryDiffEq.DefaultInit, kwargs::@Kwargs{…})
    @ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:524
 [14] __init (repeats 4 times)
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:11 [inlined]
 [15] #__solve#500
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:6 [inlined]
 [16] __solve
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:1 [inlined]
 [17] solve_call(_prob::ODEProblem{…}, args::CompositeAlgorithm{…}; merge_callbacks::Bool, kwargshandle::Nothing, kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:612
 [18] solve_call
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:569 [inlined]
 [19] solve_up(prob::ODEProblem{…}, sensealg::Nothing, u0::Vector{…}, p::SciMLBase.NullParameters, args::CompositeAlgorithm{…}; kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1080
 [20] solve_up
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1066 [inlined]
 [21] #solve#51
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1003 [inlined]
 [22] solve
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:993 [inlined]
 [23] #__solve#505
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:547 [inlined]
 [24] __solve
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:546 [inlined]
 [25] #__solve#72
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1394 [inlined]
 [26] __solve
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1386 [inlined]
 [27] #solve_call#44
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:612 [inlined]
 [28] solve_call(::ODEProblem{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:569
 [29] solve_up(::ODEProblem{…}, ::Nothing, ::Vector{…}, ::SciMLBase.NullParameters; kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1072
 [30] solve_up
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1066 [inlined]
 [31] solve(::ODEProblem{…}; sensealg::Nothing, u0::Nothing, p::Nothing, wrap::Val{…}, kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1003
 [32] solve(::ODEProblem{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:993
 [33] cons1!(out::Vector{SparseConnectivityTracer.GradientTracer{…}}, x::Vector{SparseConnectivityTracer.GradientTracer{…}})
    @ Main ~/Downloads/Untitled-1.jl:9
 [34] trace_function
    @ ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/pattern.jl:39 [inlined]
 [35] jacobian_pattern
    @ ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/pattern.jl:214 [inlined]
 [36] jacobian_sparsity
    @ ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/adtypes.jl:46 [inlined]
 [37] #compute_jacobian_sparsity#53
    @ ~/.julia/packages/ADNLPModels/73jMi/src/sparsity_pattern.jl:24 [inlined]
 [38] compute_jacobian_sparsity
    @ ~/.julia/packages/ADNLPModels/73jMi/src/sparsity_pattern.jl:18 [inlined]
 [39] ADNLPModels.SparseADJacobian(nvar::Int64, f::Function, ncon::Int64, c!::typeof(cons1!); x0::Vector{…}, coloring::SparseMatrixColorings.GreedyColoringAlgorithm{…}, detector::SparseConnectivityTracer.TracerSparsityDetector{…}, kwargs::@Kwargs{})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/sparse_jacobian.jl:24
 [40] macro expansion
    @ ~/.julia/packages/ADNLPModels/73jMi/src/ad.jl:173 [inlined]
 [41] macro expansion
    @ ./timing.jl:395 [inlined]
 [42] ADNLPModels.ADModelBackend(nvar::Int64, f::var"#57#58", ncon::Int64, c!::typeof(cons1!); backend::Symbol, matrix_free::Bool, show_time::Bool, gradient_backend::Type, hprod_backend::Type, jprod_backend::Type, jtprod_backend::Type, jacobian_backend::Type, hessian_backend::Type, ghjvprod_backend::Type, kwargs::@Kwargs{})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/ad.jl:169
 [43] ADNLPModel!(f::Function, x0::Vector{…}, c!::Function, lcon::Vector{…}, ucon::Vector{…}; y0::Vector{…}, name::String, minimize::Bool, kwargs::@Kwargs{})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/nlp.jl:192
 [44] ADNLPModel!(f::Function, x0::Vector{Float64}, c!::Function, lcon::Vector{Float64}, ucon::Vector{Float64})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/nlp.jl:175
 [45] top-level scope
    @ ~/Downloads/Untitled-1.jl:31
Some type information was truncated. Use `show(err)` to see complete types.

julia> ADNLPModel!(x ->  zero(eltype(x)), zeros(1), cons2!, zeros(1), zeros(1))
ERROR: TypeError: non-boolean (SparseConnectivityTracer.GradientTracer{BitSet}) used in boolean context
Stacktrace:
  [1] nonstiffchoice(reltol::SparseConnectivityTracer.GradientTracer{BitSet})
    @ OrdinaryDiffEq.OrdinaryDiffEqDefault ~/.julia/packages/OrdinaryDiffEq/kMujN/lib/OrdinaryDiffEqDefault/src/default_alg.jl:58
  [2] default_autoswitch
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/lib/OrdinaryDiffEqDefault/src/default_alg.jl:90 [inlined]
  [3] AutoSwitchCache
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/composite_algs.jl:39 [inlined]
  [4] initialize!(integrator::OrdinaryDiffEq.ODEIntegrator{…}, cache::OrdinaryDiffEq.DefaultCache{…})
    @ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/kMujN/src/perform_step/composite_perform_step.jl:30
  [5] __init(prob::ODEProblem{…}, alg::CompositeAlgorithm{…}, timeseries_init::Tuple{}, ts_init::Tuple{}, ks_init::Tuple{}, recompile::Type{…}; saveat::Tuple{}, tstops::Tuple{}, d_discontinuities::Tuple{}, save_idxs::Nothing, save_everystep::Bool, save_on::Bool, save_start::Bool, save_end::Nothing, callback::Nothing, dense::Bool, calck::Bool, dt::Float64, dtmin::Float64, dtmax::Float64, force_dtmin::Bool, adaptive::Bool, gamma::Rational{…}, abstol::Nothing, reltol::Nothing, qmin::Rational{…}, qmax::Int64, qsteady_min::Int64, qsteady_max::Int64, beta1::Nothing, beta2::Nothing, qoldinit::Rational{…}, controller::Nothing, fullnormalize::Bool, failfactor::Int64, maxiters::Int64, internalnorm::typeof(DiffEqBase.ODE_DEFAULT_NORM), internalopnorm::typeof(LinearAlgebra.opnorm), isoutofdomain::typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), unstable_check::typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), verbose::Bool, timeseries_errors::Bool, dense_errors::Bool, advance_to_tstop::Bool, stop_at_next_tstop::Bool, initialize_save::Bool, progress::Bool, progress_steps::Int64, progress_name::String, progress_message::typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), progress_id::Symbol, userdata::Nothing, allow_extrapolation::Bool, initialize_integrator::Bool, alias_u0::Bool, alias_du0::Bool, initializealg::OrdinaryDiffEq.DefaultInit, kwargs::@Kwargs{…})
    @ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:524
  [6] __init (repeats 4 times)
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:11 [inlined]
  [7] __solve(::ODEProblem{…}, ::CompositeAlgorithm{…}; kwargs::@Kwargs{})
    @ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:6
  [8] __solve
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:1 [inlined]
  [9] solve_call(_prob::ODEProblem{…}, args::CompositeAlgorithm{…}; merge_callbacks::Bool, kwargshandle::Nothing, kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:612
 [10] solve_call
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:569 [inlined]
 [11] solve_up(prob::ODEProblem{…}, sensealg::Nothing, u0::Vector{…}, p::SciMLBase.NullParameters, args::CompositeAlgorithm{…}; kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1080
 [12] solve_up
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1066 [inlined]
 [13] #solve#51
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1003 [inlined]
 [14] solve
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:993 [inlined]
 [15] #__solve#505
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:547 [inlined]
 [16] __solve
    @ ~/.julia/packages/OrdinaryDiffEq/kMujN/src/solve.jl:546 [inlined]
 [17] #__solve#72
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1394 [inlined]
 [18] __solve
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1386 [inlined]
 [19] #solve_call#44
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:612 [inlined]
 [20] solve_call(::ODEProblem{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:569
 [21] solve_up(::ODEProblem{…}, ::Nothing, ::Vector{…}, ::SciMLBase.NullParameters; kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1072
 [22] solve_up
    @ ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1066 [inlined]
 [23] solve(::ODEProblem{…}; sensealg::Nothing, u0::Nothing, p::Nothing, wrap::Val{…}, kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1003
 [24] solve(::ODEProblem{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:993
 [25] cons2!(out::Vector{SparseConnectivityTracer.GradientTracer{…}}, x::Vector{SparseConnectivityTracer.GradientTracer{…}})
    @ Main ~/Downloads/Untitled-1.jl:15
 [26] trace_function
    @ ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/pattern.jl:39 [inlined]
 [27] jacobian_pattern
    @ ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/pattern.jl:214 [inlined]
 [28] jacobian_sparsity
    @ ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/adtypes.jl:46 [inlined]
 [29] #compute_jacobian_sparsity#53
    @ ~/.julia/packages/ADNLPModels/73jMi/src/sparsity_pattern.jl:24 [inlined]
 [30] compute_jacobian_sparsity
    @ ~/.julia/packages/ADNLPModels/73jMi/src/sparsity_pattern.jl:18 [inlined]
 [31] ADNLPModels.SparseADJacobian(nvar::Int64, f::Function, ncon::Int64, c!::typeof(cons2!); x0::Vector{…}, coloring::SparseMatrixColorings.GreedyColoringAlgorithm{…}, detector::SparseConnectivityTracer.TracerSparsityDetector{…}, kwargs::@Kwargs{})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/sparse_jacobian.jl:24
 [32] macro expansion
    @ ~/.julia/packages/ADNLPModels/73jMi/src/ad.jl:173 [inlined]
 [33] macro expansion
    @ ./timing.jl:395 [inlined]
 [34] ADNLPModels.ADModelBackend(nvar::Int64, f::var"#59#60", ncon::Int64, c!::typeof(cons2!); backend::Symbol, matrix_free::Bool, show_time::Bool, gradient_backend::Type, hprod_backend::Type, jprod_backend::Type, jtprod_backend::Type, jacobian_backend::Type, hessian_backend::Type, ghjvprod_backend::Type, kwargs::@Kwargs{})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/ad.jl:169
 [35] ADNLPModel!(f::Function, x0::Vector{…}, c!::Function, lcon::Vector{…}, ucon::Vector{…}; y0::Vector{…}, name::String, minimize::Bool, kwargs::@Kwargs{})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/nlp.jl:192
 [36] ADNLPModel!(f::Function, x0::Vector{Float64}, c!::Function, lcon::Vector{Float64}, ucon::Vector{Float64})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/nlp.jl:175
 [37] top-level scope
    @ ~/Downloads/Untitled-1.jl:32
Some type information was truncated. Use `show(err)` to see complete types.

julia> ADNLPModel!(x ->  zero(eltype(x)), zeros(1), cons3!, zeros(1), zeros(1))
ERROR: Function isnan requires primal value(s).
A dual-number tracer for local sparsity detection can be used via `local_jacobian_pattern`.
Stacktrace:
  [1] isnan(t::SparseConnectivityTracer.GradientTracer{BitSet})
    @ SparseConnectivityTracer ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/overload_dual.jl:18
  [2] _any(f::typeof(isnan), itr::Tuple{…}, ::Colon)
    @ Base ./reduce.jl:1220
  [3] any(f::Function, itr::Tuple{SparseConnectivityTracer.GradientTracer{…}, SparseConnectivityTracer.GradientTracer{…}})
    @ Base ./reduce.jl:1235
  [4] get_concrete_tspan(prob::ODEProblem{…}, isadapt::Bool, kwargs::@Kwargs{}, p::SciMLBase.NullParameters)
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1287
  [5] get_concrete_problem(prob::ODEProblem{…}, isadapt::Bool; kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1169
  [6] solve_up(::ODEProblem{…}, ::Nothing, ::Vector{…}, ::SciMLBase.NullParameters; kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1070
  [7] solve_up(::ODEProblem{…}, ::Nothing, ::Vector{…}, ::SciMLBase.NullParameters)
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1066
  [8] solve(::ODEProblem{…}; sensealg::Nothing, u0::Nothing, p::Nothing, wrap::Val{…}, kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:1003
  [9] solve(::ODEProblem{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/c8MAQ/src/solve.jl:993
 [10] cons3!(out::Vector{SparseConnectivityTracer.GradientTracer{…}}, x::Vector{SparseConnectivityTracer.GradientTracer{…}})
    @ Main ~/Downloads/Untitled-1.jl:21
 [11] trace_function
    @ ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/pattern.jl:39 [inlined]
 [12] jacobian_pattern
    @ ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/pattern.jl:214 [inlined]
 [13] jacobian_sparsity
    @ ~/.julia/packages/SparseConnectivityTracer/QlV0S/src/adtypes.jl:46 [inlined]
 [14] #compute_jacobian_sparsity#53
    @ ~/.julia/packages/ADNLPModels/73jMi/src/sparsity_pattern.jl:24 [inlined]
 [15] compute_jacobian_sparsity
    @ ~/.julia/packages/ADNLPModels/73jMi/src/sparsity_pattern.jl:18 [inlined]
 [16] ADNLPModels.SparseADJacobian(nvar::Int64, f::Function, ncon::Int64, c!::typeof(cons3!); x0::Vector{…}, coloring::SparseMatrixColorings.GreedyColoringAlgorithm{…}, detector::SparseConnectivityTracer.TracerSparsityDetector{…}, kwargs::@Kwargs{})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/sparse_jacobian.jl:24
 [17] macro expansion
    @ ~/.julia/packages/ADNLPModels/73jMi/src/ad.jl:173 [inlined]
 [18] macro expansion
    @ ./timing.jl:395 [inlined]
 [19] ADNLPModels.ADModelBackend(nvar::Int64, f::var"#61#62", ncon::Int64, c!::typeof(cons3!); backend::Symbol, matrix_free::Bool, show_time::Bool, gradient_backend::Type, hprod_backend::Type, jprod_backend::Type, jtprod_backend::Type, jacobian_backend::Type, hessian_backend::Type, ghjvprod_backend::Type, kwargs::@Kwargs{})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/ad.jl:169
 [20] ADNLPModel!(f::Function, x0::Vector{…}, c!::Function, lcon::Vector{…}, ucon::Vector{…}; y0::Vector{…}, name::String, minimize::Bool, kwargs::@Kwargs{})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/nlp.jl:192
 [21] ADNLPModel!(f::Function, x0::Vector{Float64}, c!::Function, lcon::Vector{Float64}, ucon::Vector{Float64})
    @ ADNLPModels ~/.julia/packages/ADNLPModels/73jMi/src/nlp.jl:175
 [22] top-level scope
    @ ~/Downloads/Untitled-1.jl:33
Some type information was truncated. Use `show(err)` to see complete types.

For context, the new sparsity detection mechanism uses SparseConnectivityTracer.jl, developed by @adrhill and myself. It works by operator overloading with a new Number subtype called Tracer, which "remembers" which inputs have influenced it. Our Tracer is a bit more restrictive than usual numbers: unless you switch to a local tracer, it doesn't support comparisons or things like isnan, which explains the errors we get for cons2! and cons3! (replacing [0.] with [zero(eltype(x))] makes cons1! behave the same as cons2!).

If (and only if) you are 100% confident that the sparsity pattern of your constraints and objective is independent from the input value, you can try passing TracerLocalSparsityDetector() instead of TracerSparsityDetector() to the ADNLPModels.jl sparsity detection pipeline (see this doc page). In that case, I suggest picking a random x0 instead of zeros(n) to avoid accidental zeros.
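
For illustration, here is a minimal sketch of that swap, assuming the detector keyword is forwarded from ADNLPModel! down to the sparse backend constructors, as the SparseADJacobian stacktraces above suggest:

using ADNLPModels, SparseConnectivityTracer

x0 = rand(4)  # random starting point, to avoid accidental zeros in the local pattern
# Hypothetical call: `detector` is the keyword visible in SparseADJacobian above,
# and cons! is the constraint from the MWE
model = ADNLPModel!(x -> zero(eltype(x)), x0, cons!, zeros(4), zeros(4),
                    detector = TracerLocalSparsityDetector())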

In the long run, some fixes might be possible on our end, like:

  • picking a reasonable output for isnan(::Tracer)
  • figuring out why reltol has type ::Tracer in the diffeq

@amontoison

@abavoil
It seems that what I wanted to do is already implemented:

model = ADNLPModel!(x -> obj(x, p), x, lb, ub, (c, x) -> cons!(c, x, p), lc, uc, backend = :generic)

Can you try it?


abavoil commented Jul 24, 2024

Can you try it?

Building the model works, and solving it gives the numerical result I had in v0.7.3! In predefined_backends.jl I saw those GenericForwardDiffAD... backends when I was looking for ForwardDiffAD... and assumed :generic was some kind of abstract backend, so I never even tried it... Thank you!


amontoison commented Jul 24, 2024

I agree that the name is misleading.
I will simplify that with the next major release and the support of DifferentiationInterface.jl.

I'm glad that backend = :generic solved your issue. I should add a note about it in the documentation.


jbcaillau commented Jul 24, 2024

Hi @gdalle @amontoison, many thanks for the feedback. Regarding this point:

It works by operator overloading using a new Number subtype called Tracer, and this type "remembers" which inputs have influenced it. Our Tracer is a bit more restrictive than usual numbers, because it doesn't support comparisons or things like isnan, which explains the errors we get for cons2! and cons3! (replacing [0.] with [zero(eltype(x))] makes cons1! behave the same as cons2!), unless you switch to a local tracer.

I think that the use case raised by @abavoil is rather common: an optimisation model that calls a solver (here an ODE solver) to evaluate the objective and/or the constraints. Dual numbers typically pass through this seamlessly using mere overloading¹, which does not seem to be the case for your specific type. Worth looking at it (although @abavoil's case is low-dimensional, dense, and solved by :generic).

Footnotes

  1. Even if this is not the proper way to compute the sensitivity of an ODE, but that's another story.


gdalle commented Jul 24, 2024

Dual numbers typically pass through this seamlessly using mere overloading

Well that's not entirely true: OrdinaryDiffEq and plenty of other packages are designed specifically to work with ForwardDiff. You can deduce it from the number of occurrences of ForwardDiff.something in the code base, or the existence of auxiliary packages like PreallocationTools which are mentioned in the FAQ. So there are indeed places in which overloading alone does not suffice. In such places, people have usually gone out of their way to support Dual numbers, but less so for our very recent Tracers.

Worth looking at it

We are looking at it, but some of the limitations I mentioned are intrinsic to sparsity detection. We want to deduce a sparsity pattern that is "global", meaning "independent of the point at which you evaluate the function". From there, it follows that a Tracer number should not carry a specific value. And in turn this means comparisons like x < y or checks like isnan(x) are likely to error.
There are sometimes workarounds, like using ifelse(x < y, a, b) instead of if x < y; a; else; b; end (because this is more compatible with operator overloading), and we could probably define isnan(::Tracer) = false without the sky falling down on us. But fundamentally, global Tracers will always be limited in what they can do because they represent a function over its whole input domain.
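To make the ifelse workaround concrete, here is a hedged sketch (entry points taken from the stacktraces above; exact names may differ between versions):

using SparseConnectivityTracer

# `if`/`?:` needs a Bool, but a comparison involving a tracer yields a tracer,
# hence the "non-boolean used in boolean context" error seen for cons2!:
f(x) = [x[1] < 0 ? x[1] : x[2]]

# `ifelse` is overloaded: both branches are traced and their patterns merged,
# so the global pattern depends on both x[1] and x[2]:
g(x) = [ifelse(x[1] < 0, x[1], x[2])]

jacobian_pattern(f, rand(2))  # errors under global tracing
jacobian_pattern(g, rand(2))  # 1×2 pattern with both entries set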
On the other hand, local Tracers do carry a value, but they yield a sparsity pattern that is only valid at the current point. In most cases that's not what we want, but if you can prove (through insider knowledge of your function) that this local pattern is in fact valid globally, then you can go ahead and use that.
Does this make things clearer?

@jbcaillau

@gdalle Crystal clear: thanks for the input 🙏🏽. Naive point: If a workaround like isnan(::Tracer) = false allows overloading at the price of missing some sparsity features, that's probably acceptable.

I was not aware of OrdinaryDiffEq (and others) having tailored (thus not extensible) features specific to ForwardDiff. Another incentive for a higher-level / uniform AD backend 🤞🏾.


gdalle commented Jul 24, 2024

I'll let @adrhill weigh in on isnan and co, but my gut says that even if we define isnan(::Tracer), we'll eventually run into the same error for every one of the four scenarios above. This error happens in the choice of the algorithm, when the specified tolerance is compared with a fixed threshold:
https://github.com/SciML/OrdinaryDiffEq.jl/blob/6774e5d159af328b2f52e1e5ee64b09dd03229d7/lib/OrdinaryDiffEqDefault/src/default_alg.jl#L57-L64
I think that at some point, the user-provided tolerance gets converted to a Tracer through promotion, and that's why the comparison fails. That's hard to get around, but maybe if you specify the solving algorithm yourself, you can avoid this comparison?
Of course I suspect solving this bug will only lead us to another place further down the road where comparisons happen. If that's the case, maybe local sparsity detection is a solution.
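
For instance, something along these lines might sidestep the default-algorithm heuristic (an untested sketch of a hypothetical variant of cons2! from the MWE; later steps of the solve may still fail on the tracer type, as noted):

using OrdinaryDiffEq

# Pinning a concrete solver avoids the AutoSwitch tolerance comparison:
function cons2_pinned!(out, x)
    solve(ODEProblem((u, p, t) -> zero(u), [x[1]], (0.0, 1.0)), Tsit5())
    return out .= zero(x)
end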

Regardless of everything I have said above, if you don't like the new system for sparsity detection, you can always use the old one! Alexis and I have made this whole thing parametrizable, so just use Symbolics.SymbolicsSparsityDetector() instead of SparseConnectivityTracer.TracerSparsityDetector() in your sparse model creation, and you'll be back to the old ways. 100x slower, sure, but with a slightly different set of abilities that might suit you better.
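
As another rough sketch (assuming, as above, that the detector keyword reaches the sparse backends):

using ADNLPModels, Symbolics

# Symbolic sparsity detection instead of tracing (slower, different abilities):
model = ADNLPModel!(x -> zero(eltype(x)), zeros(4), cons!, zeros(4), zeros(4),
                    detector = Symbolics.SymbolicsSparsityDetector())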


tmigot commented Jul 24, 2024

To add some more information: the original idea of backend = :generic is to provide backends that are not type-dependent, which therefore skip most of the computations done at model instantiation.

@abavoil I am not 100% sure I understand the use case here. Do you solve this problem with Hessian evaluations?

If you use no Jacobian or Hessian matrices, either because you rely on matrix-vector products only or because you skip them entirely, there is also the keyword matrix_free = true. If you do use Hessian evaluations, then computing the dense Hessian will be problematic for large problems.
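
A minimal sketch of that keyword, reusing the (user-defined) names from the opening post:

# Jacobians/Hessians are never materialized; only operator products are used:
model = ADNLPModel!(x -> obj(x, p), x, lb, ub,
                    (c, x) -> cons!(c, x, p), lc, uc,
                    matrix_free = true)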

@amontoison

@tmigot Is the option matrix_free = true documented?


tmigot commented Jul 24, 2024

https://jso.dev/ADNLPModels.jl/dev/performance/

@jbcaillau

Hi @tmigot, thanks for the feedback on this option. Regarding the problem solved by @abavoil:

  • constraint evaluation requires solving an ODE, and OrdinaryDiffEq is used; this works fine with ForwardDiff (though propagating dual numbers through a variable step-size RK scheme is not the best way to AD; see the footnote above)
  • yes, dense Hessians are needed; but since we use an ODE solver, the dimension is very low (around 10)

@abavoil any further comments?


abavoil commented Jul 25, 2024

To complement what @jbcaillau said:

I use automatic differentiation (ForwardDiff.jl for now) in the evolution function of an ODE (solved with OrdinaryDiffEq.jl), from which the non-linear constraints of an NLP are computed. To solve this NLP, Ipopt (NLPModelsIpopt.jl) differentiates its constraints, applying another layer of automatic differentiation.

Between ForwardDiff.jl and OrdinaryDiffEq.jl, I can't expect much from custom number types that try to infer the structure of my problem. For now, I rely solely on ForwardDiff.jl because it handles multiple layers of differentiation reliably. I hope this clarifies my use case 😄
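
As a small sketch of that nesting (hypothetical dynamics, just to show duals flowing through solve; Ipopt then adds a second AD layer via ADNLPModels):

using OrdinaryDiffEq, ForwardDiff

# A constraint-like map: the ODE solve must accept dual numbers in u0 and p
# for the outer differentiation to go through.
function c(x)
    prob = ODEProblem((u, p, t) -> -p .* u, [x[1]], (0.0, 1.0), x[2])
    return solve(prob, Tsit5()).u[end]
end

ForwardDiff.jacobian(c, [1.0, 0.5])  # 1×2 Jacobian of the final state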


tmigot commented Jul 25, 2024

Ok, thank you for the additional information.


adrhill commented Jul 25, 2024

SparseConnectivityTracer implements two tracer types:

  1. AbstractTracer
  2. A Dual type that wraps a primal value and an AbstractTracer

These have two different applications:

  • AbstractTracer (used via TracerSparsityDetector)
    • computes global sparsity patterns of f(x) over the entire input domain x
    • by design, these can only support a very limited amount of control flow (ifelse, but not much more), since they have to compute a conservative global sparsity pattern
  • Dual (used via TracerLocalSparsityDetector)
    • computes local sparsity patterns of f(x) at a specific input x
    • by design, these support control flow, since they only compute local sparsity at a specific input x

Dual numbers typically pass through this seamlessly using mere overloading, which does not seem to be the case for your specific type.

Using TracerLocalSparsityDetector, our Dual number type seamlessly passes through isnan using mere overloading:
https://github.com/adrhill/SparseConnectivityTracer.jl/blob/ac94586c854726370a0e464ee62074774295a929/src/overloads/dual.jl#L9

However, if we don't have a primal value, as is the case with the non-dual TracerSparsityDetector, we can't return either true or false, since we don't know whether the primal is NaN or not.

This is by design: if we made isnan return true or false, we would enter branches of an if-else block and no longer return the global pattern. This is the case for all is* functions, e.g. iseven, iszero, etc.
The exception where non-dual tracers support some control-flow is the aforementioned ifelse function. There, we trace through both expressions and compute a union of both output patterns.

TLDR:

  • The limitation of global sparsity tracing via operator-overloading (TracerSparsityDetector) is that we can't guarantee a correct, conservative sparsity pattern if your function has control flow. We therefore prefer to throw an error rather than to return a wrong result.
  • You can still use local sparsity tracing via TracerLocalSparsityDetector, which supports control flow via a Dual number type.
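
A small sketch of the difference, using the entry points named in the error message earlier in the thread (exact API may vary across versions):

using SparseConnectivityTracer

h(x) = [x[1] > 0 ? x[1] : x[2]]

# Global tracing has no primal value, so the branch cannot be resolved:
jacobian_pattern(h, rand(2))             # throws

# Local (Dual) tracing takes the branch at the given point; the pattern
# is only guaranteed to hold near that input:
local_jacobian_pattern(h, [1.0, 2.0])    # pattern [1 0]
local_jacobian_pattern(h, [-1.0, 2.0])   # pattern [0 1]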
