
Galvanostatic mode #131

Open

j-fu opened this issue Oct 12, 2024 · 9 comments

Comments

@j-fu

j-fu commented Oct 12, 2024

Continue discussion from #119

@j-fu

j-fu commented Oct 12, 2024

Just tried out your MWE; it works. One hint: with recent SciML versions, it is sufficient to use OrdinaryDiffEqBDF instead of the whole of DifferentialEquations.
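A minimal sketch of that hint, with `sys` and `inival` as placeholders for the system and initial value from your MWE:

```julia
using VoronoiFVM
using OrdinaryDiffEqBDF            # instead of: using DifferentialEquations
using SciMLBase: ODEProblem, solve

# sys::VoronoiFVM.System and inival are assumed to come from the MWE
problem = ODEProblem(sys, inival, (0.0, 1.0))
odesol  = solve(problem, FBDF())
tsol    = reshape(odesol, sys)     # back to a VoronoiFVM transient solution
```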

It will take some time to think about the galvanostatic mode (next week is very busy...)

@JanSuntajs

JanSuntajs commented Oct 14, 2024

Thank you; for now, I can probably make it work just by assigning another species to the voltage and then looking for solutions of the Laplace equation with a homogeneous Neumann b.c. at the center and a properly chosen Dirichlet b.c. at the external boundary (this should yield spatially constant solutions). This is of course a bit redundant, since it doubles the storage requirements, but it is probably still feasible in 1D.
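A minimal sketch of that idea (the boundary region numbers and the `U_ext` data field are assumptions):

```julia
using VoronoiFVM

# Species 1: concentration; species 2: voltage, governed by a Laplace equation.
function flux!(f, u, edge, data)
    f[1] = u[1, 1] - u[1, 2]    # placeholder flux for the concentration
    f[2] = u[2, 1] - u[2, 2]    # Laplace flux for the voltage species
end

function bcondition!(f, u, bnode, data)
    # Homogeneous Neumann at the center is the default (nothing to do there);
    # Dirichlet at the external boundary (assumed region 2) pins the voltage:
    boundary_dirichlet!(f, u, bnode; species = 2, region = 2, value = data.U_ext)
end

sys = VoronoiFVM.System(grid; flux = flux!, bcondition = bcondition!,
                        data = (U_ext = 0.0,), species = [1, 2])
```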

Best,
Jan

@JanSuntajs

JanSuntajs commented Oct 16, 2024

Just a quick update:

I did manage to find a temporary solution/workaround along the lines of what I've explained above; namely, I assigned another species index to the voltage and formulated the PDE for the voltage as a Laplace equation with a zero Neumann boundary condition at one end. At the outer boundary, I used Roots.find_zero(...) to determine the voltage based on the boundary flux and boundary concentration. There were some issues with the handling of Dual types stemming from the use of automatic differentiation, but for now the code works (I was also able to make it work with my own implementation using ForwardDiff, but I could not make it work with the tools from NLsolve or NonlinearSolve). When I find the time, I will prepare an MWE in case anyone is interested in this.
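For illustration, a sketch of the ForwardDiff-friendly variant (all names here, including `jBV`, are placeholders): a hand-rolled Newton iteration written generically, so that Dual numbers pass through where the generic root finders gave trouble.

```julia
using ForwardDiff

# Solve jBV(cs, U) = japp for U; works when cs (and hence U) carry Dual numbers.
function boundary_voltage(jBV, cs, japp; U0 = 0.0, tol = 1e-12, maxiter = 50)
    U = U0 + zero(cs)                       # promote U to the element type of cs
    for _ in 1:maxiter
        g  = jBV(cs, U) - japp
        dg = ForwardDiff.derivative(u -> jBV(cs, u), U)   # nested Duals are fine
        dU = g / dg
        U -= dU
        abs(dU) < tol && break
    end
    return U
end
```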

@JanSuntajs

@j-fu on a somewhat related note to the discussion above, I am now trying to extend the functionality described there. I am wondering whether it is possible to access the global solution vector from within the standard callbacks such as storage!(...), reaction!(...), and flux!(...). That is, from a given node, is it possible to access degrees of freedom that are defined neither on that node nor on nodes directly adjacent to it? I know this can be done by means of the generic operator, but I am not yet skilled enough with it, particularly regarding the manual sparsity specification. I am also unsure how to correctly write down the generic operator: are changes of coordinate system (for instance, Cartesian vs. curvilinear coordinates) handled automatically, or should I handle them on my own?

Thank you and best regards,

Jan

@j-fu

j-fu commented Nov 7, 2024

Conceptually, there is no way to access the global solution vector from the callbacks. However, you could work around this by keeping a SystemState (available since v2.0) as part of the user data. The drawback is that there will be no differentiation against the corresponding DOFs, which in the best case results in a deterioration of Newton convergence to linear. The generic operator was created to handle this situation in a consistent manner.
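A rough sketch of that trick (the `SystemState` constructor and the `solution` field name are assumptions to be checked against the v2.0 docs):

```julia
mutable struct UserData
    state::Any                  # will hold a VoronoiFVM.SystemState after setup
end

function reaction!(f, u, node, data)
    # Values read from the stored state are constants for the Jacobian,
    # hence the loss of differentiation against those dofs mentioned above.
    uold = data.state.solution
    f[1] = u[1] - uold[2, 1]    # e.g. couple to species 2 at node 1
end

data = UserData(nothing)
sys = VoronoiFVM.System(grid; reaction = reaction!, data = data, species = [1, 2])
data.state = VoronoiFVM.SystemState(sys)   # attach the state after system creation
```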

I know it is not well documented, so if you have a minimal example covering the situation you need, I would take the occasion to update the docs.
Node volumes and edge coefficients accessed in the callbacks are precalculated with respect to the coordinate system, so you do not need to handle that yourself.

Also, there is the possibility to detect the sparsity pattern automatically. This step may take some time, but it is needed only once per simulation.
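A rough sketch (the `generic` keyword and the flat-dof-vector signature here are to be checked against the Physics docstring):

```julia
# The generic operator acts on the flat vector of all dofs and adds its
# result to the residual, so arbitrary (non-neighboring) dofs can be coupled.
function generic!(f, u, sys)
    f[1] = u[1] * u[end]     # e.g. couple the first dof to the last one
end

sys = VoronoiFVM.System(grid; flux = flux!, generic = generic!, species = [1, 2])
# With no user-given sparsity function, the pattern is detected automatically;
# this may take some time, but happens only once per simulation.
```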

@JanSuntajs

JanSuntajs commented Nov 7, 2024

Thank you, I will try to come up with a useful example; for now, it puzzles me a bit that I do not get the same solutions when I simply rewrite the reaction term as a generic operator. At this point, I am not yet trying to access dofs on non-adjacent nodes; I am just trying to come up with an equivalent formulation in terms of a generic operator, but to no avail. I will dig into this some more and then provide a minimal example.

Ultimately, what I'd like to achieve is the following: above, I mentioned a workaround that enabled me to simulate the galvanostatic mode by essentially looking for spatially constant solutions of the Laplace equation. Now, I also want to implement the term simulating the double layer. In a simplified form, the governing equation is the following:

$$ \frac{\partial U}{\partial t} = \frac{1}{C} \left(j_{app} - j_{BV} \right), $$

where $j_{app}=j_{app}(t)$ is the applied external current and $j_{BV}(c_s, U)$ is the Butler-Volmer current (with $U$ being the voltage and $c_s$ the surface concentration). Due to the boundary-layer effects, the boundary condition for the concentration at the external boundary changes, as the influx is now governed only by $j_{BV}$. I wanted to handle this by adding $U$ to the storage term. Since I did not know how to handle time dependence inside a generic operator, I put $\frac{1}{C} j_{app}(t)$ into the reaction term and tried to package $-\frac{1}{C} j_{BV}(c_s, U)$ into a generic operator.
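For concreteness, a sketch of that split (species 1 = concentration, species 2 = voltage; `japp` and `C` are the placeholders from the equation above, and `node.time` supplies $t$ in the reaction callback):

```julia
function storage!(f, u, node, data)
    f[1] = u[1]
    f[2] = u[2]                          # puts ∂U/∂t into the voltage equation
end

function reaction!(f, u, node, data)
    # VoronoiFVM assembles ∂ₜs(u) + ∇⋅j(u) + r(u) = 0, so the sign flips:
    f[2] = -japp(node.time) / data.C     # yields ∂U/∂t = japp(t)/C + generic part
end
```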

For the voltage boundary conditions, I set up homogeneous Neumann boundary conditions to enforce uniform spatial profiles, and I tried using stiff solvers (Rosenbrock23, Rodas4, Rodas4P, etc.) to handle the system. For now, I obtain a uniform voltage profile if I manually select a constant surface concentration in $j_{BV}$ and treat everything inside the reaction term, while the same approach fails if I try to break it into reaction and generic terms. The fact that I am using a spherically symmetric grid for both the concentration and the voltage (to avoid the hassle of two grids) may also play a role here (along with the crux of the problem, which is treating a 0D problem for the voltage as a 1D one). I hope this helps somewhat.

Best,
Jan

@JanSuntajs

@j-fu, in the attached Jupyter notebook, I have implemented an MWE that shows the difference between implementing the same equation via the reaction!(f, u, node, data) function and via generic_operator(...). In the latter case, the solver fails to reproduce the results of the former. This is even before trying the generic operators (also included in the notebook) that would take into account the surface concentration mentioned in the posts above.

I am also attaching the manifest and project files.
double_layer.zip

Please let me know if everything works as expected and whether anything should be changed.

Regards,
Jan

@j-fu

j-fu commented Nov 11, 2024

Ok, I tried to run it, but I get "UndefVarError: data_dl not defined" in the last cell.

Generally, it would be more convenient for me to share code via a git repo, and I also have a slight preference for Pluto notebooks.
But in fact I have not used Jupyter for quite some time since switching to Pluto, and I just realized that installing the Jupyter stack via conda is quite smooth nowadays, so I might be OK with it.

@JanSuntajs

JanSuntajs commented Nov 11, 2024

I have shared the project's git repo with you; it remains private for now. The notebook in question can be found under notebooks/05_double_layer.ipynb, and the mentioned error has been corrected. The project's README contains basic setup instructions which should hopefully get you going.

For convenience, I also include the zip archive here:
mwe.zip

(It is beneficial to install the IJulia package into the global Julia environment when working with Jupyter notebooks.) Since I have not used Pluto before, I can also prepare some plain scripts in case the above does not work.
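For instance, from the Julia REPL:

```julia
# Install IJulia into the default (global) environment, not the project one:
using Pkg
Pkg.activate()       # activate the default environment (e.g. @v1.11)
Pkg.add("IJulia")
```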
