Recommended executioner and preconditioner settings for conjugate heat transfer #1030
-
Hi @mattfalcone1997, is it possible to share the input files you are using? Another user has also reported issues with rapid divergence unless using lock-step time advancement, so I have it on my to-do list to add some additional heat flux/temperature data transfer options which might help stabilize things. But in either case, 100x longer for the heat conduction solve sounds very long and should be something we can get faster.
-
Hi @aprilnovak, this didn't seem to speed the solution up that much. I've managed to use some limited subcycling, which has helped (1 solid step for every 2-3 fluid steps). I will look into profiling the solver to see where it is spending most of its execution time.
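For reference, the subcycling is set up along these lines in the solid input (a sketch rather than my exact file; `nek.i` is just a placeholder name for the nekRS wrapper input):

```
[MultiApps]
  [nek]
    type = TransientMultiApp
    input_files = 'nek.i'     # placeholder for the nekRS wrapper input
    sub_cycling = true        # let the fluid take several smaller steps per solid step
    execute_on = timestep_end
  []
[]
```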
-
Hi,
I am running simulations of a doubly periodic channel flow with solid walls, using nekRS for the fluid and MOOSE for the solid. I am running on a cluster with 128 cores and 4 GPUs per node, with (hopefully) the CPUs handling the solid and the GPUs handling nekRS. However, I have noticed poor performance for the solid: it takes ~100 times longer than the fluid (roughly 0.1 s for a fluid solve vs 10 s for a solid solve). The solid mesh has 4.75 million elements and the fluid mesh has 22 million GLL points. I also find that sub-cycling, even with relatively low ratios of time steps, leads to divergence.
Firstly, I ought to check that the command I am running is correct:

```
mpirun -np 4 cardinal-opt -i solid.i --nekrs-backend CUDA --n-threads=32 --distributed-mesh
```
I am also wondering if there are some recommended settings for the transient heat conduction solve to help reduce the gap between the fluid and solid solve times, for example the executioner and preconditioner options sketched below.
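To be concrete, this is the kind of block I have in mind (a sketch only; the preconditioner choice and tolerance values are illustrative, not necessarily what I am running):

```
[Executioner]
  type = Transient
  solve_type = NEWTON
  # Illustrative preconditioner choice; part of what I am asking about
  petsc_options_iname = '-pc_type -pc_hypre_type'
  petsc_options_value = 'hypre boomeramg'
  line_search = 'none'
  nl_abs_tol = 1e-8         # illustrative tolerance
  dt = 1e-3                 # placeholder time step
[]
```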
Thanks in advance!