Describe the bug 🐞
NeuralPDE sometimes fails when the boundary conditions are trivial (i.e., the dependent variable doesn't show up in the boundary conditions).
Expected behavior
NeuralPDE should work even when there aren't boundary conditions or the boundary conditions are trivial.
Minimal Reproducible Example 👇
Based on the direct function test in NeuralPDE's test files:
using NeuralPDE
using Optimization, OptimizationOptimisers, Random, Lux
import OptimizationOptimJL: BFGS

Random.seed!(110)

@parameters x
@variables u(..)

func(x) = @. 2 + abs(x - 0.5)

eq = [u(x) ~ func(x)]
bcs = [
    [u(0) ~ u(0)],
    [0 ~ 0],
    []
]

x0 = 0
x_end = 2
dx = 0.001
domain = [x ∈ (x0, x_end)]

xs = collect(x0:dx:x_end)
func_s = func(xs)

hidden = 10
chain = Chain(Dense(1, hidden, tanh), Dense(hidden, hidden, tanh), Dense(hidden, 1))

strategies = [
    GridTraining(0.01),
    StochasticTraining(3000),
    QuasiRandomTraining(1000),
    QuadratureTraining()
]

for bc in bcs
    for strategy in strategies
        discretization = PhysicsInformedNN(chain, strategy)
        @named pde_system = PDESystem(eq, bc, domain, [x], [u(x)])
        prob = try
            discretize(pde_system, discretization)
        catch e
            println("bc = ", bc, " and strategy = ", strategy, " errored in discretize: ", e)
            continue
        end
        res = try
            res = solve(prob, Adam(0.05), maxiters = 300)
            prob = remake(prob, u0 = res.u)
            solve(prob, BFGS(initial_stepnorm = 0.01), maxiters = 500)
        catch e
            println("bc = ", bc, " and strategy = ", strategy, " errored in solve: ", e)
            continue
        end
        if !≈(discretization.phi(xs', res.u), func(xs'), rtol = 0.01)
            println("bc = ", bc, " and strategy = ", strategy, " gave bad solution")
        else
            println("bc = ", bc, " and strategy = ", strategy, " works fine")
        end
    end
    println()
end
Error & Stacktrace ⚠️
The above code gives the output below.
- bc = [u(0) ~ u(0)] works for any training strategy.
- bc = [0 ~ 0] works for GridTraining (thanks to #910) and QuadratureTraining, but not for StochasticTraining or QuasiRandomTraining, which error in discretize when trying to define the training set.
- bc = [] errors in solve for all training strategies.
bc = Any[u(0) ~ u(0)] and strategy = GridTraining{Float64}(0.01) works fine
bc = Any[u(0) ~ u(0)] and strategy = StochasticTraining(3000, 3000) works fine
bc = Any[u(0) ~ u(0)] and strategy = QuasiRandomTraining{QuasiMonteCarlo.LatinHypercubeSample}(1000, 1000, QuasiMonteCarlo.LatinHypercubeSample(TaskLocalRNG()), true, 0) works fine
bc = Any[u(0) ~ u(0)] and strategy = QuadratureTraining{Float64, Integrals.CubatureJLh}(Integrals.CubatureJLh(0), 0.001, 1.0e-6, 1000, 100) works fine
bc = Any[0 ~ 0] and strategy = GridTraining{Float64}(0.01) works fine
bc = Any[0 ~ 0] and strategy = StochasticTraining(3000, 3000) errored in discretize: ArgumentError("reducing over an empty collection is not allowed; consider supplying `init` to the reducer")
bc = Any[0 ~ 0] and strategy = QuasiRandomTraining{QuasiMonteCarlo.LatinHypercubeSample}(1000, 1000, QuasiMonteCarlo.LatinHypercubeSample(TaskLocalRNG()), true, 0) errored in discretize: ArgumentError("reducing over an empty collection is not allowed; consider supplying `init` to the reducer")
bc = Any[0 ~ 0] and strategy = QuadratureTraining{Float64, Integrals.CubatureJLh}(Integrals.CubatureJLh(0), 0.001, 1.0e-6, 1000, 100) works fine
bc = Any[] and strategy = GridTraining{Float64}(0.01) errored in solve: MethodError(zero, (Any,), 0xffffffffffffffff)
bc = Any[] and strategy = StochasticTraining(3000, 3000) errored in solve: MethodError(zero, (Any,), 0xffffffffffffffff)
bc = Any[] and strategy = QuasiRandomTraining{QuasiMonteCarlo.LatinHypercubeSample}(1000, 1000, QuasiMonteCarlo.LatinHypercubeSample(TaskLocalRNG()), true, 0) errored in solve: MethodError(zero, (Any,), 0xffffffffffffffff)
bc = Any[] and strategy = QuadratureTraining{Float64, Integrals.CubatureJLh}(Integrals.CubatureJLh(0), 0.001, 1.0e-6, 1000, 100) errored in solve: MethodError(zero, (Any,), 0xffffffffffffffff)
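For what it's worth, both failures reproduce in plain Julia without NeuralPDE, which suggests where they originate (my reading of the errors, not confirmed against NeuralPDE internals):

```julia
# Sketch of the two errors above in plain Julia (assumption: NeuralPDE hits
# these same base-Julia code paths; not verified against its internals).

# bc = [0 ~ 0] with StochasticTraining/QuasiRandomTraining: folding an empty
# collection without `init` raises the reported ArgumentError.
err1 = try
    reduce(vcat, [])
catch e
    e
end
println(err1 isa ArgumentError)  # true

# bc = []: summing an empty Vector{Any} calls zero(Any), which raises the
# reported MethodError — consistent with an empty set of boundary loss terms.
err2 = try
    sum(Any[])
catch e
    e
end
println(err2 isa MethodError)  # true
```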
Environment (please complete the following information):
Julia Version 1.11.1
Commit 8f5b7ca12ad (2024-10-16 10:53 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 4 × Intel(R) Core(TM) i7-7Y75 CPU @ 1.30GHz
  WORD_SIZE: 64
  LLVM: libLLVM-16.0.6 (ORCJIT, skylake)
Threads: 4 default, 0 interactive, 2 GC (on 4 virtual cores)
Environment:
JULIA_NUM_THREADS = auto
JULIA_EDITOR = code
Additional context
@ChrisRackauckas, we spoke about this on a call yesterday. NeuralLyapunov sometimes creates a boundary condition like 0 ~ 0 or 0 ~ 1e-16, which is why I want this fixed.
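Until this is fixed in NeuralPDE, one possible mitigation on the caller's side might be to filter out variable-free boundary conditions before building the PDESystem. This is only a sketch, assuming `Symbolics.get_variables` accepts an `Equation` (which I believe it does); `Symbolics.Equation(0, 0)` stands in for the `0 ~ 0` condition:

```julia
using Symbolics

# Hypothetical workaround sketch: drop boundary conditions that mention no
# variables. Assumes Symbolics.get_variables works on an Equation and returns
# the variables appearing on either side.
@variables y
bcs = [y ~ 1, Symbolics.Equation(0, 0)]  # second bc is the trivial 0 ~ 0
nontrivial = filter(bc -> !isempty(Symbolics.get_variables(bc)), bcs)
println(length(nontrivial))  # 1 — only y ~ 1 survives
```

Note that this can leave an empty bc list, which currently also errors (the `bc = []` case above), so at best it is a partial mitigation.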