Compute material volumes in mesh elements based on raytracing #3129
base: develop
Conversation
It appears that you only ray-trace in a single direction within that bounding box. I'd recommend doing all three directions to avoid errors that come from thin slices aligned in the direction of ray fire. Uniform spacing of the rays can exacerbate this, as a given material/cell could be completely missed, even if it takes up a non-trivial fraction of the volume.
@gonuke Good point, I'll take a stab at implementing that.
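For illustration, here is one way the three-direction suggestion could be combined into a single estimate. This is a sketch only; `estimate_volumes_along_axis` is a hypothetical helper, not code from this PR:

```python
def combined_volume_estimate(estimate_volumes_along_axis):
    """Average per-axis ray-traced estimates so thin features aligned with
    any single ray direction are still sampled by the other two directions."""
    per_axis = [estimate_volumes_along_axis(axis) for axis in (0, 1, 2)]
    all_keys = set().union(*per_axis)   # every (element, material) pair seen
    return {key: sum(est.get(key, 0.0) for est in per_axis) / 3.0
            for key in all_keys}
```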
Fantastic PR @paulromano. I'm looking forward to diving into this. Along the lines of @gonuke's point, I wonder if there's a model we might be able to use to compare volume estimates for random ray and the raytracing method. For streaming gaps that are off-axis, I wonder if RR might capture those even better. Maybe @jtramm has thoughts there?
Very interesting pull request here @paulromano, seems like a useful feature! Like Patrick I also wonder how this performs against random ray for certain geometries. One issue with using random ray is that we'd need a very high ray count to hit every mesh cell, and it's not even guaranteed that we will, so I think the approach you have taken here is a good one generally speaking.
double d1 = width[ax1];
double d2 = width[ax2];
So we are assigning a rectangular cross-sectional area to the rays, right?
Correct, we are assigning a rectangular cross-sectional area.
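Schematically (a restatement of the idea, not the PR's C++ code), each ray segment contributes a volume equal to its track length times that rectangular area:

```python
def segment_volume(track_length, d1, d2):
    # Each ray stands in for a d1-by-d2 rectangle of the starting surface,
    # so a segment of length track_length contributes this much volume.
    return track_length * d1 * d2
```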
// Particle crosses surface
// TODO: off-by-one
What's the off-by-one situation you need to handle?
It has to do with surface half-spaces; see the discussion here: #1174 (comment)
Description
PR #2802 introduced a capability to compute material volumes within mesh elements. This feature worked by looping over each mesh element, sampling a number of points within the element, and then doing a "find cell" operation on each point. I've found recently that while this works well for meshes with a modest number of elements, it can become extremely slow when the mesh resolution becomes fine.
This PR completely overhauls the algorithm for computing material volumes by raytracing across a bounding box that contains the mesh. This has the benefit of accumulating volume information much faster than doing "find cell" operations on random points. The method works as follows: rays are traced across the bounding box containing the mesh, and the `Mesh::bins_crossed` method is used to determine which mesh elements were crossed and what the corresponding track lengths over the elements were. Note that rays are spaced uniformly over the starting surface.
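To make the accumulation step concrete, here is a rough Python-flavored sketch of the idea. This is illustration only; the actual implementation is in C++, and `trace_segments` below is a hypothetical stand-in for the geometry ray tracing plus `Mesh::bins_crossed`:

```python
def estimate_material_volumes(trace_segments, bbox, nx, ny):
    """Rough illustration of ray-traced volume accumulation.

    trace_segments(start) is a hypothetical tracer that yields
    (mesh_element, material_id, track_length) tuples for one ray fired
    across the bounding box.
    """
    (xmin, ymin, zmin), (xmax, ymax, zmax) = bbox
    d1 = (xmax - xmin) / nx   # uniform ray spacing along the first axis
    d2 = (ymax - ymin) / ny   # uniform ray spacing along the second axis
    volumes = {}              # (element, material) -> accumulated volume
    for i in range(nx):
        for j in range(ny):
            # Rays start on the lower z face of the bounding box,
            # spaced uniformly over that surface
            start = (xmin + (i + 0.5) * d1, ymin + (j + 0.5) * d2, zmin)
            for element, material, length in trace_segments(start):
                key = (element, material)
                # Each ray represents a d1 x d2 rectangular cross section
                volumes[key] = volumes.get(key, 0.0) + length * d1 * d2
    return volumes
```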
The interface here now has three parts:

- `openmc_mesh_material_volumes`
- `openmc.lib.Mesh.material_volumes`
- a `material_volumes` method on the normal `openmc.Mesh` class that uses `openmc.lib` under the hood
In addition to changing the underlying method for material volume in mesh calculations, I've updated the return type as well to a new `MeshMaterialVolumes` class that provides the result data in multiple modalities. First, it can be indexed by material ID, which gives an array whose size is equal to the number of mesh elements, where each value is the volume of that material in the corresponding element. If the user wants all material volumes for a single element, they can request this with the `by_element(mesh_index)` method. Here's an example of what this would look like in practice:
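(The snippet below is a minimal sketch of such usage rather than the PR's exact example; the `material_volumes()` call arguments, mesh ID, and material ID are placeholders.)

```python
import openmc.lib

# Assumes the model XML files are present in the working directory
openmc.lib.init()
mesh = openmc.lib.meshes[1]          # assumes a mesh with ID 1 exists

# Hypothetical call; the actual arguments (number of rays, etc.) may differ
vols = mesh.material_volumes()

# Indexing by material ID gives an array with one value per mesh element:
# the volume of that material in each element
volumes_of_mat_5 = vols[5]           # material ID 5 is just an example

# All material volumes for a single mesh element
print(vols.by_element(0))

openmc.lib.finalize()
```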
In addition to providing user convenience, the `MeshMaterialVolumes` class stores the resulting material volumes very compactly. This required having a fixed value for the maximum number of materials in a given element, but the implementation is smart enough to reallocate to a larger value if the original value was insufficient (a rough sketch of such a layout follows below).

One limitation of the new implementation is that the mesh must be fully contained in the geometry. I couldn't think of a good way around this because if you are raytracing, you have to start within the geometry.
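Here is that sketch: a guess at the kind of layout meant (not the actual C++ data structure), namely a fixed-width table with one row per element that is reallocated wider if any element overflows.

```python
import numpy as np

# Hypothetical illustration of fixed-width storage with reallocation.
class CompactVolumes:
    def __init__(self, n_elements, max_materials=4):
        self.max_materials = max_materials
        # (material_id, volume) slots per element; -1 marks an empty slot
        self.ids = np.full((n_elements, max_materials), -1, dtype=int)
        self.vols = np.zeros((n_elements, max_materials))

    def add(self, element, material, volume):
        row = self.ids[element]
        slots = np.where((row == material) | (row == -1))[0]
        if slots.size == 0:
            # Element has more materials than fit: grow the table and retry
            grow = self.max_materials
            self.max_materials *= 2
            self.ids = np.pad(self.ids, ((0, 0), (0, grow)), constant_values=-1)
            self.vols = np.pad(self.vols, ((0, 0), (0, grow)))
            return self.add(element, material, volume)
        j = slots[0]
        self.ids[element, j] = material
        self.vols[element, j] += volume
```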
Performance
I've been playing around with the new implementation on a few models and I'm seeing very good results. For example, on the FNG dose rate benchmark problem from SINBAD, I get the following timings:
It's not quite an apples-to-apples comparison, but choosing the number of rays equal to the number of point samples is the best I could think of. In these cases, we see that the raytracing approach is 12–17× faster.
Another model I've tested on is a simple CAD-based stellarator model with several layers (plasma, structure, blanket, etc.). On this model, I get the following timings:
In this case, you can see that the point sampling approach is excruciatingly slow for a mesh of this size. This is likely because point searches on the CAD geometry are much slower. To get reasonably converged material volumes would obviously require much more than 1 sample per element, meaning that the calculation would require hours of compute time. Raytracing here gives a 185× faster execution time.
Checklist