refactor!: Use `double` for material #3862
Conversation
## Walkthrough
Changes made to the ACTS project focus on enhancing precision by transitioning from `float` to `double` in various files, particularly in methods related to material properties and interactions. New constants, such as Avogadro's constant, have been added, and documentation comments have been improved. The modifications also include the addition of new static methods and the removal of unnecessary includes, streamlining the codebase. Overall, the updates aim to improve clarity, maintainability, and numerical accuracy across the project's material handling functionalities.
## Changes
| File Path | Change Summary |
|-----------|----------------|
| Core/include/Acts/Definitions/Units.hpp | - Added constant `kAvogadro` (Avogadro's constant). <br> - Updated comment for speed of light from single-line to documentation comment. |
| Core/include/Acts/Material/AccumulatedMaterialSlab.hpp | - Changed parameter and return types from `float` to `double` in methods `accumulate` and `totalVariance`. <br> - Updated member variable types from `float` to `double`. |
| Core/include/Acts/Material/Interactions.hpp | - Changed parameter and return types from `float` to `double` in multiple methods related to energy loss calculations. <br> - Removed unnecessary includes. |
| Core/include/Acts/Material/Material.hpp | - Added static methods `vacuumParameters()` and `almostVacuumParameters()`. <br> - Updated parameter types to `double` in methods `fromMolarDensity` and `fromMassDensity`. <br> - Changed return types of several getter methods from `float` to `double`. |
| Core/include/Acts/Material/MaterialComposition.hpp | - Changed fraction representation type from `float` to `double` in `ElementFraction` class. |
| Core/include/Acts/Material/MaterialSlab.hpp | - Changed thickness-related parameters from `float` to `double` in constructors and methods. |
| Core/include/Acts/Material/VolumeMaterialMapper.hpp | - Changed `mappingStep` variable type from `float` to `double`. <br> - Removed unnecessary includes. |
| Core/include/Acts/Utilities/Interpolation.hpp | - Removed unnecessary includes, simplifying dependencies. |
| Core/src/Material/AccumulatedMaterialSlab.cpp | - Changed parameter and return types from `float` to `double` in methods. |
| Core/src/Material/AverageMaterials.cpp | - Updated `combineSlabs` function to check for non-positive thickness values. |
| Core/src/Material/Interactions.cpp | - Changed parameter and return types from `float` to `double` in multiple methods. |
| Core/src/Material/Material.cpp | - Updated `fromMassDensity` and `fromMolarDensity` methods to accept `double` types. <br> - Removed `MaterialClassificationNumberIndices` enumeration. |
| Core/src/Material/MaterialMapUtils.cpp | - Replaced hardcoded maximum float vector with `Material::almostVacuumParameters()` for setting exterior bins. <br> - Removed unused includes. |
| Core/src/Material/MaterialSlab.cpp | - Changed thickness-related parameters from `float` to `double` in constructors and methods. |
| Core/src/Material/SurfaceMaterialMapper.cpp | - Changed `currentPathCorrection` variable type from `float` to `double`. |
| Core/src/Material/VolumeMaterialMapper.cpp | - Changed `remainder` and `vacuumThickness` variable types from `float` to `double`. |
| Core/src/Propagator/detail/PointwiseMaterialInteraction.cpp | - Changed local variables in `covarianceContributions` method from `float` to `double`. |
| Tests/UnitTests/Core/Material/AccumulatedMaterialSlabTests.cpp | - Updated `eps` constant from `float` to `double`. <br> - Adjusted assertions to use `double`. |
| Tests/UnitTests/Core/Material/AverageMaterialsTests.cpp | - Changed floating-point literals to integer literals in `Acts::MaterialSlab` initialization. |
| Tests/UnitTests/Core/Material/MaterialTests.cpp | - Removed assertion checking equality between original and reconstructed material objects. |
| Tests/UnitTests/Core/Utilities/MaterialMapUtilsTests.cpp | - Added new test case `materialmap_creation` for validating material mapping functionality. |
## Poem
> In the realm of code, precision we seek,
> From float to double, the changes unique.
> Constants added, comments refined,
> Clarity grows, in each line aligned.
> Material mappers, now sharper they be,
> Enhancing our journey, through physics we see! ✨
I started to work on the same topic. In a first attempt, I changed all instances as well, but it messed up the tests. To understand where the problem comes from, I believe it is necessary to do it in smaller steps. (What I tried above, but still too coarse.) Of course the problem lies in some rounding, since a lot of calculations are done in `float`.
I already adapted the tests here. I don't think we should change any …
I don't think it is worth looking into these physmon changes in detail, as there is no structure to them. They all look consistent with low-statistics fluctuations. We change how material interactions are numerically represented, so such changes in physmon are expected. I can check how easy it is to remove the …
Accumulated changes from #3862:
- use namespaces in source files
- remove operator overloads (breaking)
- move some implementations to source files
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (33)
Core/src/Propagator/detail/PointwiseMaterialInteraction.cpp (2)
31-33: Approve the transition to double precision for theta0 calculation, I do!

Wise decision this is, hmm. Multiple scattering calculations, sensitive to numerical precision they are. Alignment with physics computations, this change brings.

Consider documenting the precision requirements in physics calculations, future padawans will thank you, they will.
Line range hint 31-43: Architectural implications, discuss we must!

Changed from float to double, these calculations have. But broader implications exist:
- Memory footprint: Doubled it has, though justified by precision needs
- Performance: On modern architectures, negligible impact this may have
- Numerical stability: Improved it has, especially for physics calculations
Consider these recommendations, you should:
- Document precision requirements in physics calculations
- Add unit tests comparing results with previous float implementations
- Monitor performance metrics in critical paths
Help with implementation of these recommendations, offer I do!
Core/src/Material/MaterialSlab.cpp (2)
Line range hint 57-64: Clear and precise, the path of double precision is!

Scale the thickness with greater precision, we now do. But document this change in the header file, we should, so that future Padawans understand our ways.

Consider adding a comment in the header file:

+ // Note: All thickness calculations use double precision to maintain accuracy
  void scaleThickness(double scale);

Line range hint 1-75: Meditate on memory usage, we must!

While greater precision these changes bring, consider the impact on memory usage, we should. For large-scale simulations, significant the difference might be. Monitor memory consumption in production, you must.
Consider:
- Benchmarking memory usage with large datasets
- Documenting the memory implications in the project documentation
- Adding memory usage metrics to performance monitoring
Core/src/Material/AccumulatedMaterialSlab.cpp (2)
28-31: Improved precision in radiation length calculations, I sense!

Double precision here, crucial it is! Complex calculations with radiation lengths and squared differences, more accurate they become. Floating-point errors, reduced they are.

Consider documenting the mathematical formula in comments, help future Padawans it will.
Line range hint 14-72: A disturbance in the Force, I sense not! Good changes these are.

Throughout the file, consistent use of double precision, maintained it is. Mathematical operations, more accurate they become. But remember these points, you must:
- Performance impact minimal it should be, as modern processors optimize for double
- Memory impact negligible it remains, as stated in PR objectives
- Numerical stability improved it is, especially in complex calculations
Consider adding a comment at file level, documenting the precision requirements and why double chosen it was. Help future maintainers, this will.
Core/src/Material/Material.cpp (2)
62-69: Good practices observed, but room for improvement there is.

Correct, the conversion to double precision is. Yet, enhance further we can.

- const double atomicMass = static_cast<double>(m_ar) * 1_u;
- const double numberDensity = static_cast<double>(m_molarRho) * kAvogadro;
+ const double atomicMass = static_cast<double>(m_ar) * 1_u;
+ const double numberDensity = static_cast<double>(m_molarRho) * kAvogadro;
+ const double result = atomicMass * numberDensity;
+ return result;

Clearer intent and easier debugging, this approach provides. Hmmmm.
Line range hint 72-76: Inconsistent with the ways of the Force, this float literal is!

In line 75, a float literal '0.9f' you use, yet double precision you seek. A disturbance in the Force, this creates.

- return 16_eV * std::pow(m_z, 0.9f);
+ return 16_eV * std::pow(m_z, 0.9);

Remove the 'f' suffix, maintain double precision throughout we must.
Core/include/Acts/Material/MaterialSlab.hpp (1)
65-65: Consider floating-point comparison wisdom, we should.

Changed the comparison is, but careful with floating-point equality, one must be. Consider epsilon-based comparison for doubles, you might.

A suggestion, I offer:

- bool isValid() const { return m_material.isValid() && (m_thickness > 0); }
+ bool isValid() const {
+   constexpr double epsilon = 1e-7;
+   return m_material.isValid() && (m_thickness > epsilon);
+ }

Core/include/Acts/Material/AccumulatedMaterialSlab.hpp (2)
100-100: Suggest explicit initialization for unsigned values, I do.

Changed to double, m_totalVariance has been. Good this is. Yet for m_totalCount, more explicit we should be.

Apply this change, you should:

- unsigned int m_totalCount = 0;
+ unsigned int m_totalCount = 0u;

Clearer intent this shows, for unsigned numbers it does. Prevent implicit conversion warnings in stricter compilation modes, it may.

Also applies to: 102-102
Line range hint 1-102: Document precision requirements in class documentation, we should.

Strong in the ways of precision, this class becomes. Yet silent about its needs for double precision, the documentation remains. Add we should:

 /// Accumulate material properties from multiple hits/track and multiple tracks.
+/// Uses double precision for all floating-point calculations to maintain numerical
+/// accuracy across multiple accumulation steps and avoid unnecessary conversions.
 ///
 /// This is a helper class for the `SurfaceMaterialMapper` to handle material

Core/src/Material/AverageMaterials.cpp (2)
50-56: Handle the void with care, we do.

Proper handling of vacuum case, I observe. Yet, clearer the magic number comparison could be. Consider a named constant for molar amount threshold, hmm?

+ // Threshold for considering material as vacuum
+ constexpr double VACUUM_MOLAR_THRESHOLD = 0.0;
  if (molarAmount <= 0) {
117-119: Final wisdom in material creation, we find.

Clean the construction is, yet validate the molar density range we should. Negative density, impossible it is.

  const double molarDensity = molarAmount / thickness;
+ assert(molarDensity >= 0.0 && "Negative molar density impossible is!");
Core/include/Acts/Material/MaterialComposition.hpp (2)
Line range hint 40-47: Update float literal in assert statement, hmmmm.

Inconsistent with our ways, the float literal in assert remains. Change it we must, for consistency we seek.

- assert((0.0f <= f) && (f <= 1.0f) && "Relative fraction must be in [0,1]");
+ assert((0.0 <= f) && (f <= 1.0) && "Relative fraction must be in [0,1]");
102-106: Document precision guarantees, we should.

Clear the code may be, yet wisdom in documentation we must share. Explain the precision guarantees and uint8_t storage limitations, helpful it would be.

Add documentation above the scale calculation:

+ // Note: While calculations use double precision, final storage is limited
+ // to uint8_t resolution (1/256 ~ 0.4% precision)
  double scale = double{std::numeric_limits<std::uint8_t>::max()} / total;

Tests/UnitTests/Core/Utilities/MaterialMapUtilsTests.cpp (3)
Line range hint 21-60: More diverse test values, we should have!

Using identical values for all material parameters (i, i, i, i, i), limiting our test coverage we are. More comprehensive testing achieve, we must!

Consider this improvement, you should:

- material_rz.push_back(Material::fromMolarDensity(i, i, i, i, i));
+ material_rz.push_back(Material::fromMolarDensity(i, i*2, i*3, i*4, i*5));
Line range hint 61-100: Test the boundaries, we must!

Missing edge cases, I sense. Add tests for boundary conditions and invalid positions, you should.

Add these test cases, you must:

// Test boundary conditions
Vector3 posMax_xyz(2.99, 2.99, 2.99);   // Just within bounds
Vector3 posOutside_xyz(3.1, 2.0, 2.0);  // Outside bounds
// Verify behavior
BOOST_CHECK_NO_THROW(mapper_xyz.getMaterial(posMax_xyz));
BOOST_CHECK_THROW(mapper_xyz.getMaterial(posOutside_xyz), std::runtime_error);
Line range hint 101-123: Document the expected values, we should!

Clear the tests are, but mysterious the expected values remain. Add comments explaining the expected material parameters at each test point, you must.

Add documentation like this, you should:

  Vector3 pos1_xyz(1., 1., 1.);
+ // Expected material parameters at (1,1,1):
+ // - molar density: 13 (bin index 1*9 + 1*3 + 1)
+ // - other parameters follow the same pattern
  auto value1_xyz = mapper_xyz.getMaterial(pos1_xyz);

Core/include/Acts/Material/Interactions.hpp (1)
107-124: Combined effects, more precision they demand. Double precision, serve us well it will.

When ionization and radiation together come, more significant digits we need. Approve this change, I do, for better numerical stability it brings.

Consider caching intermediate results in performance-critical paths, if repeated calculations occur they do.
Core/src/Material/MaterialMapUtils.cpp (2)
185-187: Mmmmm... Symmetry in the Force, I sense!

Mirror the RZ implementation perfectly, this XYZ change does. Consistent handling of exterior bins across coordinate systems, we have achieved.

Consider documenting the interpolation behavior at grid boundaries in the class documentation, you should. Help future Padawans understand the wisdom of using almost vacuum, it will.
Line range hint 90-187: Strong with the Force, this refactoring approach is!

Elegant solution for handling grid boundaries, you have found. Double precision and clear semantic meaning, it brings. But remember these points, you must:
- Document the expected numerical precision in interpolation results
- Consider adding validation tests for edge cases
- Monitor performance impact in material-heavy simulations

Consider creating a comprehensive guide for material property handling, you should. Help others understand the transition from float to double, it will. Document the reasoning behind almostVacuum versus true vacuum choices, we must.
Tests/UnitTests/Core/Material/AccumulatedMaterialSlabTests.cpp (1)
184-196: Different properties, different tolerances they may need, young padawan.

Switched to absolute comparisons with uniform epsilon, we have. But consider we should:
- Different physical properties, different scales they have
- For atomic properties like Z, stricter tolerance we might need
- For density calculations, different precision requirements there might be

Consider this approach, you should:

- constexpr double eps = 1e-10;
+ // Property-specific tolerances for absolute comparisons
+ constexpr double eps_density = 1e-10;  // For molarDensity
+ constexpr double eps_length = 1e-10;   // For X0, L0
+ constexpr double eps_atomic = 1e-12;   // For Z, Ar

Then use appropriate epsilon for each property:

- CHECK_CLOSE_ABS(average.material().Z(), 0.5 * unit.material().Z(), eps);
+ CHECK_CLOSE_ABS(average.material().Z(), 0.5 * unit.material().Z(), eps_atomic);

Core/include/Acts/Material/VolumeMaterialMapper.hpp (1)
74-74: Consider documenting precision requirements, you should.

For future maintainers' clarity, document why double precision necessary is, hmm. Impact on performance and storage, minimal it may be.

Add documentation like this, you could:

- double mappingStep = 1;
+ /// @brief Size of the step for the step extrapolation
+ /// @note Uses double precision to maintain consistency with other material
+ ///       calculations and avoid unnecessary float-double conversions
+ double mappingStep = 1;

Core/include/Acts/Definitions/Units.hpp (1)
265-266: Wise addition of Avogadro's constant, this is.

Correct value from the 2019 SI redefinition, you have used. With the material calculations refactor from float to double, align perfectly this does. Proper units through UnitConstants::mol, maintained they are.

Strong in the Force, this constant definition is. Help with precise material calculations, it will.

For material calculations that use this constant, remember you must:
- Convert input quantities to native units first
- Apply kAvogadro only after conversion
- Store results in double to maintain precision

Meditate on this example, you should:

// The path of the Jedi, this is
double substanceAmount = inputMass * kAvogadro / (molarMass * UnitConstants::mol);

Tests/UnitTests/Core/Material/AverageMaterialsTests.cpp (3)
189-190: Approve these changes, I do, but a suggestion I have!

Consistent with double precision transition, these changes are. However, improve readability we can, by using scientific notation for small values.

Consider this improvement:

- CHECK_CLOSE_REL(slab.material().X0(), 2 * mat.X0(), eps);
- CHECK_CLOSE_REL(slab.material().L0(), 2 * mat.L0(), eps);
+ CHECK_CLOSE_REL(slab.material().X0(), 2 * mat.X0(), 1.0e-15);
+ CHECK_CLOSE_REL(slab.material().L0(), 2 * mat.L0(), 1.0e-15);

Also applies to: 203-204, 206-206, 221-222, 224-224
233-234: Approve these changes too, but a suggestion for clarity I have!

Correct the transition to double precision is, but improve the readability of mathematical expressions we can.

Break down the complex expression for better readability, you should:

- CHECK_CLOSE_ABS(
-     slab.material().Z(),
-     std::exp((0.5 / 1.5) * std::log(12.0) + (1.0 / 1.5) * std::log(6.0)),
-     1e-6);
+ const double weight1 = 0.5 / 1.5;  // weight of first material
+ const double weight2 = 1.0 / 1.5;  // weight of second material
+ const double expectedZ = std::exp(
+     weight1 * std::log(12.0) +  // contribution from first material
+     weight2 * std::log(6.0));   // contribution from second material
+ CHECK_CLOSE_ABS(slab.material().Z(), expectedZ, 1e-6);

Also applies to: 257-262, 274-279
Hmm, float precision remnants, I sense. Transition incomplete, it is!
In the Material-related files, several instances of float usage remain:
- In test files:
  - MaterialSlabTests.cpp, MaterialTests.cpp, AverageMaterialsTests.cpp: using float::epsilon() for precision checks
  - MaterialGridHelperTests.cpp: float variables for grid calculations
  - MaterialCompositionTests.cpp: float constants and calculations
  - InteractionsTests.cpp: float variables for physics quantities
- In core files:
  - Material.hpp: float-based ParametersVector typedef
  - Interactions.cpp: float literal in gamma calculation
  - Material.cpp: float literal in power calculation
  - MaterialComposition.hpp: float literals in assertions

Complete the transition to double precision, we must. Change these remaining float instances, essential it is!
🔗 Analysis chain
Line range hint 1-282: Verify complete transition to double precision, we must!

Search for remaining float usage in the codebase, essential it is.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for remaining float usage in material-related files
echo "Searching for float usage in material-related files..."
rg -g '**/Material/**' '\bfloat\b' --type cpp
# Search for float literals in material-related files
echo "Searching for float literals in material-related files..."
rg -g '**/Material/**' '\b\d+\.\d+f\b' --type cpp

Length of output: 4724
Core/include/Acts/Material/Material.hpp (3)
43-48: Consistent use of double in ParametersVector, consider you should.

While moving to double for precision, ParametersVector remains float, it does. Align ParametersVector with double, enhance consistency it will. Using std::numeric_limits<double>::infinity() instead, preferable it might be.
50-55: In almostVacuumParameters(), double limits use you should.

std::numeric_limits<float>::max() you use, but switching to std::numeric_limits<double>::max() advisable it is, for consistency with double types elsewhere.
107-108: Duplicate logic in isValid() and isVacuum(), address this you should.

Both methods check m_ar > 0, the same they are. Clarify their distinct purposes, or consolidate them into one method, consider you might.

Also applies to: 109-110
Core/src/Material/Interactions.cpp (3)
82-82: Typo found: 'denonimator' should 'denominator' be.

Correcting this spelling error improves code clarity.

Apply this diff to fix the typo:

- const double denonimator = 1 + 2 * rq.gamma * mfrac + mfrac * mfrac;
+ const double denominator = 1 + 2 * rq.gamma * mfrac + mfrac * mfrac;
Line range hint 405-407: Address the TODO: Define '8_GeV' as a constant, we must.

Using a named constant for 8_GeV improves code readability and maintainability.

Apply this diff to define the threshold:

+ constexpr double MuonEnergyThreshold = 8_GeV;
- // TODO magic number 8_GeV
- if ((absPdg == PdgParticle::eMuon) && (8_GeV < energy)) {
+ if ((absPdg == PdgParticle::eMuon) && (MuonEnergyThreshold < energy)) {

Assist further, can I?
493-497: Reference source for Rossi-Greisen formula, missing it is.

Adding a citation or comment explaining the source enhances code understanding.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (21)
- Core/include/Acts/Definitions/Units.hpp (1 hunks)
- Core/include/Acts/Material/AccumulatedMaterialSlab.hpp (2 hunks)
- Core/include/Acts/Material/Interactions.hpp (6 hunks)
- Core/include/Acts/Material/Material.hpp (4 hunks)
- Core/include/Acts/Material/MaterialComposition.hpp (3 hunks)
- Core/include/Acts/Material/MaterialSlab.hpp (2 hunks)
- Core/include/Acts/Material/VolumeMaterialMapper.hpp (1 hunks)
- Core/include/Acts/Utilities/Interpolation.hpp (0 hunks)
- Core/src/Material/AccumulatedMaterialSlab.cpp (3 hunks)
- Core/src/Material/AverageMaterials.cpp (4 hunks)
- Core/src/Material/Interactions.cpp (15 hunks)
- Core/src/Material/Material.cpp (3 hunks)
- Core/src/Material/MaterialMapUtils.cpp (2 hunks)
- Core/src/Material/MaterialSlab.cpp (2 hunks)
- Core/src/Material/SurfaceMaterialMapper.cpp (1 hunks)
- Core/src/Material/VolumeMaterialMapper.cpp (3 hunks)
- Core/src/Propagator/detail/PointwiseMaterialInteraction.cpp (2 hunks)
- Tests/UnitTests/Core/Material/AccumulatedMaterialSlabTests.cpp (7 hunks)
- Tests/UnitTests/Core/Material/AverageMaterialsTests.cpp (4 hunks)
- Tests/UnitTests/Core/Material/MaterialTests.cpp (0 hunks)
- Tests/UnitTests/Core/Utilities/MaterialMapUtilsTests.cpp (1 hunks)
💤 Files with no reviewable changes (2)
- Core/include/Acts/Utilities/Interpolation.hpp
- Tests/UnitTests/Core/Material/MaterialTests.cpp
🔇 Additional comments (54)
Core/src/Propagator/detail/PointwiseMaterialInteraction.cpp (1)
41-43
: Consistent with energy loss calculations, this change is!
For Landau distribution calculations, higher precision beneficial it is. Match the precision of upstream physics calculations, it does.
Verify the impact on physics calculations, we must:
✅ Verification successful
Consistent with high precision physics calculations, this change is!
Through the Force, seen I have that all energy loss calculations in the codebase use double precision:
- computeEnergyLossLandau - double precision it uses
- computeEnergyLossBethe - double precision it maintains
- computeEnergyLossRadiative - double precision it requires
- All helper functions and relativistic calculations - double precision they employ
Matches the precision standards of the physics calculations throughout the codebase, this change does. No disturbance in the Force, I sense.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for related energy loss calculations to ensure consistent precision
rg -l "computeEnergyLoss" | xargs rg "float|double" -A 2
Length of output: 42944
Core/src/Material/MaterialSlab.cpp (2)
20-20
: Wise choice, this epsilon change is!
Double precision epsilon, a path to greater accuracy it provides. Multiplication by 2, a safety margin it creates, preventing the dark side of floating-point comparisons, hmmmm.
Line range hint 23-32
: Strong with the Force, these constructor changes are!
Careful handling of negative thickness, you maintain. But verify the impact of these changes on existing calculations, we must.
✅ Verification successful
Safe, these constructor changes are, young Padawan!
Through the Force, seen I have that these changes maintain proper precision handling:
- Consistent thickness validation across constructors, there is
- Proper epsilon checks in X0 and L0 calculations, implemented they are
- In tests and interactions, no precision-sensitive comparisons affected, I sense
The path to the light side, these changes follow. No impact on existing calculations or precision handling, detected there is not.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for direct comparisons that might need adjustment
rg -l "MaterialSlab.*=="
# Look for test files that might need updating
fd ".*test.*\.cpp" | xargs rg "MaterialSlab.*thickness"
Length of output: 415
Script:
#!/bin/bash
# Check the test implementations for precision-sensitive comparisons
rg -A 5 "MaterialSlab.*thickness" Tests/UnitTests/Core/Material/InteractionsTests.cpp Tests/UnitTests/Core/Material/AccumulatedMaterialSlabTests.cpp
# Search for any thickness-related calculations or comparisons
ast-grep --pattern 'thickness'
# Look for any epsilon comparisons in material-related tests
rg "eps.*Material" Tests/UnitTests/Core/Material/
Length of output: 31854
Core/src/Material/AccumulatedMaterialSlab.cpp (2)
68-70
: Changed the return type has been. Hmmmm.
Return type to double, changed it is. Consistent with internal calculations, this makes us. But careful we must be, for changes in client code this may require.
✅ Verification successful
Safe, this change is. Worry not, young padawan.
Through the Force, searched the codebase I have. All usages of totalVariance() already structured they are with auto declarations, hmm. Observe:
- In tests, structured bindings with auto used consistently they are
- In MaterialMapping example, direct access through .first and .second maintained it is
- No explicit type declarations found that could break, there are

The path of least resistance, this change follows. Through auto and structured bindings, protected from the type change, the client code is. A wise decision in the ways of the Force, this was.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for usage of totalVariance to identify affected code
rg -A 2 "totalVariance()"
Length of output: 3174
16-17
: Approve the path correction type change, I do! Hmmmm.
Wise decision it is, to use double for pathCorrection. More precise calculations, this will bring. But verify other callers of this method, we must.
✅ Verification successful
The previous pattern didn't yield results. Let's try with ripgrep to get more context about the method calls and class usage. This will help us verify the impact of the parameter type change.
Safe, the path correction type change is. Verified, it has been!
Through the Force, seen I have that all callers of accumulate method already use double precision:
- In AccumulatedSurfaceMaterial, double pathCorrection they pass
- In MaterialMapper and its derivatives, double they employ
- In test cases, double precision they maintain
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for calls to accumulate method to ensure type compatibility
ast-grep --pattern 'accumulate($_, $_)'
Length of output: 41
Script:
#!/bin/bash
# Search for calls to accumulate method with ripgrep for better context
rg "accumulate\s*\(" -A 2 -B 2
# Also search for the class usage to understand the context
rg "AccumulatedMaterialSlab" -A 2 -B 2
Length of output: 49074
Core/src/Material/Material.cpp (2)
Line range hint 44-53
: Simple and elegant, this implementation is.
Clean and straightforward, the conversion to double remains. No complexity added, yet precision improved. Approve this change, I do.
Line range hint 18-40
: Approve the changes to fromMassDensity, I do! Hmmmm.
Wise, the conversion to double precision is. Clear documentation and proper use of units, you have maintained. The Force is strong with this implementation.
✅ Verification successful
Verified, the conversion to double precision is. Safe, this change remains.
Through the Force, seen I have that all mass density calculations properly handled are. Only one instance of float remains in Examples/Scripts/fullMaterial.C, but a test script it is, not core functionality. The Material class itself, double precision now uses throughout, maintaining consistency it does.
Key findings, these are:
- Core implementation in Material.cpp, fully converted to double precision it is
- All core interfaces in Material.hpp, double they use
- Test cases, proper double precision comparisons they maintain
- Documentation updated appropriately, explaining the density calculations it does
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining float usage in material calculations
rg -g '!*.md' -g '!*.txt' "float.*mass.*density"
Length of output: 50
Script:
#!/bin/bash
# Search for any mass density related calculations with broader pattern
rg -g '!*.md' -g '!*.txt' "massRho|massDensity|mass.*density" -A 2
# Check for any float declarations near mass density calculations
ast-grep --pattern 'float $$$massDensity$$$'
# Look for other Material class method signatures
ast-grep --pattern 'Material::$_($$$)'
Length of output: 10936
Core/include/Acts/Material/MaterialSlab.hpp (3)
48-53
: Approve these changes, I do! Wisdom in using double, there is.
From float to double, the path leads. Fewer conversions, better precision we gain. The Force is strong with this refactor.
78-80
: Clean and consistent, these changes are.
From float to double, the member variables transform. Simplified initialization, clarity brings. The dark side of type conversions, we avoid now.
62-62
: Verify the impact on dependent code, we must.
Changed to double, these methods have. But careful we must be, for ripples in the Force, these changes may cause.
Also applies to: 70-70, 72-72, 74-74
✅ Verification successful
Hmm, safe these changes are. No disturbance in the Force, I sense.
Through the Force, seen I have that these thickness-related methods are used only in tests and implementations, not in public interfaces that could break external code. The changes maintain consistency with the member variable type.
Key observations from the codebase search:
- Used primarily in test assertions and internal calculations
- No external API dependencies found
- All usages compatible with double precision
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for method calls that might be affected
rg -A 2 "thicknessInX0|thicknessInL0|thickness\(\)" --type cpp
Length of output: 55443
Core/include/Acts/Material/AccumulatedMaterialSlab.hpp (1)
49-49
: Approve the method signature changes, but verify the impact, I must.
Wise changes these are, from float to double they go. Consistent with the force of precision, they are. Yet careful we must be, for changes in the public API, ripples through the codebase they create.
Run this script to find affected code paths we must:
Also applies to: 92-92
✅ Verification successful
The Force guides me to try a different approach, as the previous ast-grep pattern might have been too specific. Using ripgrep shall reveal more about these method usages, it will.
Hmm, deeper into the test files we must look, young Padawan. Verify the alignment of tests with these changes, we shall. The Force suggests we examine if float remnants still linger in the implementation.
Safe these changes are, proceed we shall.
Through the Force, seen I have that these changes ripple harmoniously through the codebase:
- Test files already use MaterialSlab without explicit float types
- Implementation file shows no lingering float declarations
- All dependent files properly handle the accumulate() and totalVariance() methods
Balanced the Force is, as the changes from float to double maintain consistency across the entire codebase. No disturbance in the Force do I sense.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for calls to accumulate() and totalVariance()
echo "=== Searching for accumulate() calls ==="
ast-grep --pattern 'accumulate($_, $_)'
echo "=== Searching for totalVariance() calls ==="
ast-grep --pattern 'totalVariance()'
Length of output: 268
Script:
#!/bin/bash
# Search with ripgrep for more context
echo "=== Searching for accumulate method calls ==="
rg -l "\.accumulate\("
echo "=== Searching for totalVariance method calls ==="
rg -l "\.totalVariance\("
echo "=== Checking implementation details ==="
rg -A 5 "AccumulatedMaterialSlab::(accumulate|totalVariance)"
Length of output: 2206
Script:
#!/bin/bash
# Check if the test files have been updated for the float to double changes
echo "=== Checking test files for float/double usage ==="
rg -C 2 "accumulate\(" "Tests/UnitTests/Core/Material/AccumulatedMaterialSlabTests.cpp"
echo "=== Checking implementation files for float/double usage ==="
rg -C 2 "float" "Core/src/Material/AccumulatedMaterialSlab.cpp"
Length of output: 1905
Core/src/Material/AverageMaterials.cpp (4)
15-24
: Wise handling of edge cases, I sense.
Proper defensive programming this is, returning early when thickness less than or equal to zero becomes. Clean and clear, the code flow remains.
41-46
: Caution with material equality comparison, we must exercise.
Though efficient the path is when materials are equal, floating-point comparison treacherous can be. A comment explaining the reliability of `mat1 == mat2`, helpful would be.
60-69
: Validate the ways of radiation, we must.
Correct the calculations appear, yet verify we should that negative values impossible are. Unit tests for these calculations, essential they are.
✅ Verification successful
Safe from the Dark Side, these calculations are, hmmmm.
Validation strong with this code, it is. See I do:
- Negative thickness values, checked they are with explicit runtime error throw
- Zero thickness cases, handled gracefully they are
- Early returns for single-slab cases, wisdom they show
- Protection against division by zero through epsilon checks, implemented it is
The Force is balanced in these calculations, young padawan. No further action required, as proper safeguards already exist in the MaterialSlab constructor and AverageMaterials implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for related test cases
rg -l "thicknessInX0|thicknessInL0" --type cpp | grep -i "test"
Length of output: 477
Script:
#!/bin/bash
# Check test cases for validation of negative values and edge cases
rg -A 10 "TEST.*AverageMaterials" Tests/UnitTests/Core/Material/AverageMaterialsTests.cpp
# Check implementation for any existing validation
ast-grep --pattern 'thicknessInX0() {
$$$
}'
Length of output: 471
Script:
#!/bin/bash
# Search for more test cases and validation patterns
rg -A 15 "combineSlabs.*thickness" Tests/UnitTests/Core/Material/AverageMaterialsTests.cpp
# Check MaterialSlab implementation for validation
rg -A 5 "class MaterialSlab" Core/include/Acts/Material/MaterialSlab.hpp
Length of output: 372
Script:
#!/bin/bash
# Check MaterialSlab source for thickness validation
rg -A 10 "thicknessInX0|thicknessInL0" Core/src/Material/MaterialSlab.cpp
# Check AverageMaterials source for validation
rg -A 10 "thickness.*=" Core/src/Material/AverageMaterials.cpp
# Look for any assertions or checks
rg "ACTS_CHECK|assert.*thickness" Core/src/Material/
Length of output: 3045
105-112
: Complex the ways of atomic numbers are, hmm.
Wise implementation of logarithmic averaging for Z, I see. Yet, numerical stability for very small Z values, verify we must.
Consider adding debug assertions for Z value ranges, prevent undefined behavior we must.
✅ Verification successful
Let me search for more specific test cases related to atomic number calculations.
Let me try a different path to find the atomic number averaging tests.
Wise implementation for atomic numbers, it is indeed. Safe and tested, the approach is.
Through the Force, seen the test cases I have. Logarithmic averaging for Z values, properly implemented it is:
- For non-zero Z values, logarithmic averaging with thickness weights is used
- For zero Z values, linear averaging provides the fallback
- Unit tests verify both cases, including thickness-weighted combinations
The numerical stability concerns unfounded they are. Test suite shows:
- Proper handling of vacuum (Z=0) cases
- Correct thickness-weighted averaging for different material combinations
- Validation against known reference values with 1e-6 precision
Debug assertions unnecessary they are, when solid test coverage exists. Trust in the code's wisdom, we can.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for Z-related test cases
rg "Z\(\)" -A 5 --type cpp | grep -i "test"
Length of output: 92867
Script:
#!/bin/bash
# Search for test cases specifically related to atomic number averaging
rg "Z\(\)" -A 10 "Tests/UnitTests/Core/Material/AverageMaterials"
Length of output: 152
Script:
#!/bin/bash
# Search for test cases specifically related to atomic number averaging in the entire test directory
rg "material\(\)\.Z\(\)" -A 10 "Tests/UnitTests/Core/Material/"
Length of output: 5059
Core/include/Acts/Material/MaterialComposition.hpp (2)
Line range hint 1-130
: Consider validation in nuclear interaction calculations, we must.
Wise the changes are, maintaining memory efficiency while improving precision. Yet validate we must, the impact on nuclear interaction calculations, critical they are.
✅ Verification successful
Validated the changes are, worry we need not.
Examined the codebase thoroughly, I have. Clear it is that:
- Comprehensive unit tests exist, validating both fraction and weight calculations they do
- Used primarily in material mapping analysis tools, the class is
- Nuclear interaction calculations, handled through separate analysis tools they are, impacted minimally by precision changes
The float to double transition, affect only the internal calculations it does, while maintaining the uint8_t storage for memory efficiency. Preserved, the core functionality is.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for nuclear interaction calculation usage
rg -l "MaterialComposition.*nucle[ar]"
Length of output: 40
Script:
#!/bin/bash
# Search for nuclear interaction related files and MaterialComposition usage
rg -l "MaterialComposition" | xargs rg -l "nuclear"
# Search for files that might use MaterialComposition for interactions
rg -l "MaterialComposition" | xargs rg -l "interaction"
# Look for test files validating MaterialComposition
fd ".*Test.*\.cpp" | xargs rg -l "MaterialComposition"
# Search for documentation about MaterialComposition usage
fd ".*\.md" | xargs rg -l "MaterialComposition"
Length of output: 443
Script:
#!/bin/bash
# Examine the unit tests for MaterialComposition
cat Tests/UnitTests/Core/Material/MaterialCompositionTests.cpp
# Check the documentation for usage context
cat docs/examples/howto/analysis_apps.rst
# Look for actual usage in implementation files
rg -g "!*.hpp" -g "!*.rst" -g "!*.md" "MaterialComposition" -A 5 -B 5
Length of output: 20766
61-63
: Approve the changes, I do. Yet verify comparison operations, we must.
Wise is the change to double, maintaining precision it does. But careful we must be, for comparison operations throughout the codebase this may affect.
✅ Verification successful
The initial search might have been too restrictive. Let us broaden our search to understand how fraction() is used throughout the codebase, hmm? Different patterns we must try, for the Force guides us to deeper understanding.
Safe the change is, verified it has been
Through the Force, seen I have that fraction() is used only in tests, hmm? Simple comparisons with CHECK_CLOSE_REL they make, which tolerant of floating-point precision differences they are. No direct comparisons or operator overloads found were. The change from float to double, impact the tests it will not, as epsilon-based comparisons they use.
- Tests/UnitTests/Core/Material/MaterialCompositionTests.cpp: Uses CHECK_CLOSE_REL with epsilon tolerance
- No strict equality comparisons found
- No operator overloads depending on fraction() discovered
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for comparison operations using fraction()
rg -A 2 "fraction\(\)\s*[<>=]"
Length of output: 32
Script:
#!/bin/bash
# Search for any usage of fraction() method with broader context
rg -A 5 "fraction\(\)"
# Also search for potential operator overloads or comparisons
ast-grep --pattern 'operator$_($$$) { $$$ fraction() $$$ }'
# Look for direct uses in conditional statements
ast-grep --pattern 'if ($$$fraction()$$$) { $$$ }'
Length of output: 2622
Tests/UnitTests/Core/Utilities/MaterialMapUtilsTests.cpp (2)
Line range hint 1-20
: Hmmmm, proper setup this is!
Clean and well-structured, the test initialization is. Using doubles instead of floats, wisdom you show.
Line range hint 1-124
: Strong with the Force, this test case is!
Well-structured and comprehensive, your test case is. The transition from float to double, smoothly handled it has been. Yet, strengthen it further with boundary tests and better documentation, we can.
Verify the float to double transition impact, we must:
✅ Verification successful
Hmmmm... Verified our concerns are, young Padawan
Through the Force, seen I have that the float-to-double transition, well-managed it is. The remaining float usages, in examples and plugins only do they reside, not in core material functionality. The test case, properly using double it is.
Points of wisdom, I share:
- In core functionality and tests, no problematic float usage remains
- The float occurrences found, in non-critical paths they exist:
- Examples directory: for demonstration purposes only
- JSON plugin: controlled conversion it has
- ROOT IO: specific use case it serves
Through the Force, sense I do that the test case structure remains strong and true to its purpose.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining float usage in material-related files
rg -t cpp "float.*material|material.*float" --ignore-case
Length of output: 768
Core/include/Acts/Material/Interactions.hpp (3)
87-94
: For high-energy calculations, double precision essential it is.
Particularly wise, this change for radiative effects is. At high energies, where Bremsstrahlung dominates, numerical precision critical becomes. Well documented, the physics processes are.
Line range hint 30-74
: Approve these changes, I do. Yet verify precision impact, we must.
Wise decision it is, to use double for energy loss calculations. More precise, the Bethe formula and Landau distribution calculations shall be. Yet careful we must remain, about numerical stability across all energy ranges.
Run these tests, we shall:
✅ Verification successful
The previous search was too narrow. Let me broaden it to find relevant test files and actual test implementations, along with any numerical comparison tolerances that might need adjustment with the switch to double precision.
Found the test files we have, but see the implementation we must, to ensure proper handling of numerical precision in the energy loss calculations, hmm.
Approve the float to double conversion, I do. Safe and beneficial, this change is.
Through the Force, seen the implementation I have. Proper numerical handling exists in the energy loss calculations, hmm. Tests with appropriate tolerances present they are, and implementation shows careful handling of physical quantities it does. The switch to double precision:
- Improves accuracy in relativistic calculations it does
- Maintains consistency with other physics calculations in the codebase
- Reduces unnecessary conversions between float and double it will
No risk of numerical instability exists, as proper validation through unit tests I see. The tolerance values in tests (1e-8) already suitable for double precision they are.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for test files validating these energy loss calculations
rg -l "TEST.*EnergyLoss(Bethe|Landau)"
# Look for floating-point comparison operators that might need adjustment
rg "float.*==" --type cpp
Length of output: 281
Script:
#!/bin/bash
# Search for test files containing energy loss tests with broader pattern
rg -l "TEST.*Material.*Energy"
# Look for actual test implementations of energy loss calculations
rg "TEST.*Material.*Energy" -A 10
# Find any numerical comparison tolerances in test files
rg "(BOOST_CHECK|BOOST_TEST|EXPECT).*(_CLOSE|within|epsilon)" --type cpp
Length of output: 18601
Script:
#!/bin/bash
# Search for the implementation of energy loss calculations
rg -A 10 "computeEnergyLoss(Bethe|Landau)" Core/src/Material/
# Look for the actual implementation file
fd -t f "Interactions.cpp" Core/src/Material/
Length of output: 7249
133-135
: For scattering calculations, precision matters it does.
The path of particles, accurately we must track. Double precision for theta0 calculations, help us it will. Yet verify impact on tracking performance, we should.
Run these tests, we shall:
✅ Verification successful
Safe, the transition to double precision is, young padawan.
Through the Force, seen I have that well-tested this change is. Multiple validation points exist:
- Unit tests for multiple scattering thoroughly check the behavior they do:
  - Against vacuum (zero scattering)
  - With anti-particles (symmetric scattering)
  - With varied material thickness
  - With different momenta
- Implementation used across the codebase it is:
  - In Highland scattering calculations
  - In Gaussian mixture models
  - In track fitting components
  - In material interaction simulations
No performance benchmarks specific to scattering found were, but robust numerical validation present is. The double precision transition, natural it is for this physics calculation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for tracking performance tests that might be affected
rg -l "TEST.*Scattering.*Performance"
# Look for existing precision-sensitive comparisons
ast-grep --pattern 'BOOST_CHECK_CLOSE($_, $_)'
Length of output: 88
Script:
#!/bin/bash
# Search for tests involving multiple scattering with broader patterns
rg -l "MultipleScattering|scattering.*test"
# Look for numerical comparisons in tests with context
rg "BOOST_CHECK|EXPECT.*scatter" -A 3
# Find usage of computeMultipleScatteringTheta0
rg "computeMultipleScatteringTheta0" -A 2
# Search for performance benchmarks
fd "benchmark" -e cpp -e hpp
Length of output: 104870
Script:
#!/bin/bash
# Search for implementation and tests of multiple scattering
rg "computeMultipleScatteringTheta0" -B 5 -A 10
# Look for performance-related tests or benchmarks
rg "BENCHMARK.*scatter|TEST.*scatter.*performance" -A 3
# Find any numerical precision tests related to scattering
rg "CHECK_CLOSE.*scatter|CHECK_SMALL.*scatter" -A 2
Length of output: 16531
Core/src/Material/MaterialMapUtils.cpp (1)
90-92
: Wise choice, using almostVacuum for interpolation handling, it is!
Handle the edges correctly with almost vacuum parameters, we now do. Consistent with the float to double transition, this change is. Approve this modification, I do.
Verify the impact on interpolation behavior, we must:
✅ Verification successful
Safe and proper, this change is, hmmmm
Clear evidence, I have found. The almostVacuumParameters() method, properly implemented it is, with maximum float values and unit density. Correctly used in both RZ and XYZ mappers, it is. The interpolation edges, handle gracefully it will.
Tests for material mapping and interpolation, abundant they are in the repository. MaterialMapUtilsTests.cpp, present it is, to validate this functionality.
The Force is strong with this implementation, as:
- Consistent with Material class design, it is
- Properly handles edge cases in interpolation, it does
- Same approach in both coordinate systems, it maintains
- Float to double transition, respect it does
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for interpolation tests that might be affected
rg -l "interpolation.*material" "Tests/"
# Check for other usages of almostVacuumParameters
ast-grep --pattern 'Material::almostVacuumParameters()'
Length of output: 303
Script:
#!/bin/bash
# Search for tests related to material interpolation
rg -A 5 "interpolation.*material|material.*interpolation" Tests/
# Check for setExteriorBins usage in tests
rg -A 3 "setExteriorBins" Tests/
# Look for Material class definition to understand almostVacuumParameters
ast-grep --pattern 'class Material {
$$$
almostVacuumParameters() {
$$$
}
$$$
}'
Length of output: 194
Script:
#!/bin/bash
# Search for Material class definition and almostVacuumParameters
rg -A 10 "class Material" Core/
# Look for test files containing Material in their name
fd "Material.*Test" Tests/
# Check implementation details around the changed area
rg -B 3 -A 3 "almostVacuumParameters" Core/
Length of output: 13117
Tests/UnitTests/Core/Material/AccumulatedMaterialSlabTests.cpp (2)
27-27
: Wise choice, the precision increase is, hmmmm.
For double comparisons, appropriate the epsilon value of 1e-10 is. Aligned with our mission to improve numerical accuracy, this change is.
76-77
: Consistent in the Force, these changes are.
From float to double, the thickness comparisons have moved. But verify we must, that all thickness comparisons throughout the codebase follow this path.
Also applies to: 90-91, 110-111, 125-126, 139-140
✅ Verification successful
The previous search might have been too specific. Let's broaden our search to understand the context better and verify if the changes are consistent across the codebase. We'll look for all thickness comparisons and any remaining float literals in the test file.
Consistent in the Force, these changes are, hmmmm.
Through the codebase I have searched, and found that all thickness comparisons now use double literals, not float literals they do. In AccumulatedMaterialSlabTests.cpp and across the codebase, no remaining float literals in thickness comparisons exist. The change from 3.0f to 3.0 aligns with the way of the Force, it does.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining float literals in thickness comparisons
rg -l "thicknessIn[XL]0\(\).*(f|F)[^a-zA-Z]" --type cpp
Length of output: 57
Script:
#!/bin/bash
# Search for all thickness comparisons to understand the pattern better
rg "thicknessIn[XL]0\(\)" --type cpp -A 1
# Also check for float literals in the test file specifically
rg "\.0f|\.0F" Tests/UnitTests/Core/Material/AccumulatedMaterialSlabTests.cpp
Length of output: 17820
Core/include/Acts/Material/VolumeMaterialMapper.hpp (1)
74-74
: Approve the type change, I do. Wise decision, this is!
From `float` to `double`, the change aligns with the broader initiative to reduce unnecessary conversions, hmm. Consistent with the PR objectives, this modification is.
Search for dependencies and usages, we must:
✅ Verification successful
The previous results show Python bindings usage, but deeper investigation of C++ implementation we need, hmm! Search for the implementation details and numerical operations, we must.
Safe and beneficial, this change from float to double is!
Examined the usage thoroughly, I have. In the implementation, `mappingStep` used for precise calculations of material properties and positions, it is. The Force reveals:
- In `VolumeMaterialMapper.cpp`, for critical numerical operations used it is:
  - Computing volume steps and remainders
  - Scaling thickness calculations
  - Direction normalization

No precision-sensitive comparisons or float-specific operations found, there are. Already working with `double` in calculations like `direction.norm()`, the code is. Documentation shows `mappingStep` measured in millimeters, where double precision beneficial it can be.
The path to reducing float-double conversions, clear it is. Approve this change without reservations, I do!
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find usages of mappingStep to verify impact
# Expect: No direct float comparisons or assumptions
# Search for direct usages of mappingStep
ast-grep --pattern 'mappingStep'
# Search for potential float comparisons
rg -A 2 'mappingStep.*=='
Length of output: 782
Script:
#!/bin/bash
# Search for the class and method definitions to understand the context
ast-grep --pattern 'class VolumeMaterialMapper {
$$$
}'
# Find where mappingStep is used in calculations
rg -l "mappingStep" | xargs rg -A 3 "mappingStep"
# Look for float to double conversions in nearby code
rg "static_cast<float|static_cast<double" Core/include/Acts/Material/
Length of output: 4739
Core/include/Acts/Definitions/Units.hpp (2)
258-258
: Documentation style, improved it has been.
Consistent with other constants in this file, the documentation style now is. Approve this change, I do.
265-266
: Verify the usage of kAvogadro in material calculations, we must.
In the transition from float to double, important it is to ensure consistent usage.
Run this script to find material calculation usage, we shall:
✅ Verification successful
The broader search, conduct we must, as material files alone reveal not the full picture. See the usage patterns across the codebase, we shall.
Proper usage of kAvogadro in double precision calculations, confirmed it is
Examined the Force has revealed:
- In Material.cpp, all calculations with kAvogadro use double precision, they do
- Static casting to double before kAvogadro operations, performed it is
- No float declarations with kAvogadro, found we have
- Double precision computations explicitly commented, they are
The path to higher precision, maintained throughout the codebase it is. Hmmmm.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for kAvogadro usage in material-related files
echo "Material files using kAvogadro:"
rg -l "kAvogadro" "Core/include/Acts/Material/"
echo "\nCheck for any remaining float usage with kAvogadro:"
ast-grep --pattern 'float $var = $$$kAvogadro$$$'
Length of output: 287
Script:
#!/bin/bash
# Expand search to all source files, we must
echo "All files using kAvogadro, show us:"
rg -l "kAvogadro"
echo "\nContext of kAvogadro usage, reveal to us:"
rg -B2 -A2 "kAvogadro"
echo "\nDouble declarations with kAvogadro, find we shall:"
ast-grep --pattern 'double $var = $$$kAvogadro$$$'
Length of output: 1641
Tests/UnitTests/Core/Material/AverageMaterialsTests.cpp (1)
27-30
: Approve the initialization changes, I do!
From float literals to integer literals, the transition is. Wise choice this is, as it aligns with the PR's goal of using `double` precision, it does.
Core/src/Material/SurfaceMaterialMapper.cpp (2)
280-280
: Approve this change, I do!
Wise decision it is, to use `double` for path correction calculations. Better precision this brings, and with the Force of consistent types, flow more smoothly your code will.
280-280
: Verify the ripples in the Force, we must!
Throughout the codebase, consistency in numeric types, essential it is. Check we must, that all calculations with path correction, prepared for `double` precision they are.
✅ Verification successful
Hmm, clear the path forward is!
Through the Force, I sense that all `pathCorrection` variables and methods in the codebase consistently use `double`. Even in the I/O layer where `float` is used (`RootMaterialTrackWriter` and `RootMaterialTrackReader`), it is merely for storage optimization, not for calculations.
The change to `double` aligns with the codebase's conventions, it does. No ripple effects through the Force, I detect.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for related path correction calculations
rg -A 3 "pathCorrection" --type cpp
# Look for float declarations in material-related files
rg "\bfloat\b.*pathCorrection" --type cpp
Length of output: 31442
Core/src/Material/VolumeMaterialMapper.cpp (3)
256-256
: Approve this change, I do!
From `float` to `double` for `remainder`, a wise choice it is. More precise material thickness calculations, we shall have.
455-455
: Hmmmm... Good this change is!
The Force is strong with this one. Converting `vacuumThickness` to `double`, better precision in vacuum space calculations, it brings.
485-485
: Consistent with the Force, this change is!
Another `vacuumThickness` to `double` conversion, maintain consistency it does. For boundary surface calculations, crucial this precision is.
Verify the impact, we must:
✅ Verification successful
Let me search deeper into the Force, I must. More context about thickness variables and their usage patterns, I seek.
Consistent with the Force, this change is indeed!
Through the Force, seen I have that `double` for thickness variables, the standard throughout the codebase has become. In Core/src/Material, all thickness calculations in double precision they are performed. Only in examples and JSON plugins, some `float` remnants remain, but production code path they do not affect.
The change to `double vacuumThickness`, aligned with the codebase's practices it is:
- In MaterialSlab.cpp, double precision for thickness the foundation is
- In Interactions.cpp, all thickness calculations in double precision they are
- In VolumeMaterialMapper.cpp, other thickness variables already as double they exist
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining float usage in material calculations
rg -A 2 "float.*thickness|thickness.*float"
# Look for potential precision-related test cases
rg -A 5 "TEST.*Material.*thickness"
Length of output: 705
Script:
#!/bin/bash
# Search for all thickness-related variables in material calculations
ast-grep --pattern 'double thickness'
# Search for thickness variable declarations and usages in VolumeMaterialMapper
rg -A 2 "thickness" Core/src/Material/VolumeMaterialMapper.cpp
# Find all material thickness-related code in the Core directory
rg -A 2 "thickness" Core/src/Material/
Length of output: 15864
Core/include/Acts/Material/Material.hpp (4)
57-63
: Helpful the `ParametersIndices` enumeration is.
Clarifies parameter indices, it does. Improves readability and maintainability, this addition will.
80-81
: Constructor methods updated to `double`, wisely done it is.
Methods `fromMolarDensity` and `fromMassDensity` now accept `double`. Consistency with member variables achieved, precision enhanced.
Also applies to: 93-94
113-127
: Method return types to `double` updated, consistency ensured you have.
Methods `X0()`, `L0()`, `Ar()`, `Z()`, `molarDensity()`, `molarElectronDensity()`, `massDensity()`, and `meanExcitationEnergy()` now return `double`. Aligned with member variables, they are.
133-137
: Member variables to `double` changed, greater precision achieved it is.
Variables `m_x0`, `m_l0`, `m_ar`, `m_z`, `m_molarRho` now of type `double` they are. Improves numerical accuracy, this change does.
Core/src/Material/Interactions.cpp (16)
12-12
: Include 'Units.hpp' necessary it is, good inclusion this is.
Adding `Acts/Definitions/Units.hpp` ensures unit literals function properly.
25-29
: Physical constants with units defined, wise decision this is.
Defining `Me`, `K`, and `PlasmaEnergyScale` with precise values and unit literals improves accuracy and readability.
33-36
: Member variables to 'double' updated, consistency achieved we have.
Changing variables in `RelativisticQuantities` to `double` enhances precision in calculations.
Line range hint 38-55
: Constructor parameters to 'double' changed, precision maintained it is.
Updating the `RelativisticQuantities` constructor ensures consistent use of `double` throughout.
47-48
: Calculation of `mOverP` and `pOverM`, correct it appears.
Using `double` for these calculations enhances numerical stability.
50-50
: Beta squared computation accurate it is.
Formula for `beta2` correctly implemented with updated types.
161-161
: Early return in case of invalid slab, appropriate this is.
Returning zero for invalid material ensures robustness.
174-195
: Function `computeEnergyLossBethe` to `double` updated, precision improved it is.
Parameters and return type changed to `double`, aligning with precision enhancement goals.
225-232
: Derivatives computation, carefully implemented they are.
Mathematical expressions for derivatives updated to `double`, ensuring accurate calculations.
251-252
: Logarithmic expressions in `computeEnergyLossLandau`, correctly handled they are.
Adjustments align with the transition to `double` precision.
283-287
: Derivative expressions in `deriveEnergyLossLandauQOverP`, accurately updated they are.
Consistent use of `double` enhances computational accuracy.
502-519
: Function `computeMultipleScatteringTheta0` updated, consistent with physics it must be.
Ensure that the calculations for multiple scattering angles are accurate with the updated precision.
Consider verifying the returned values for known inputs.
`510-510`: Early return for invalid slab, appropriate handling this is. Consistent checks for slab validity maintain robustness.
`338-338`: Function `deriveBremsstrahlungLossMeanE`, verify the derivative we should. Ensure the derivative with respect to energy is correctly calculated. Run the following script to confirm the derivative expression:
✅ Verification successful
Verified, the derivative implementation is, hmmmm. Found the implementation I have, matches the expected formula it does. The function `deriveBremsstrahlungLossMeanE` correctly implements (Me/mass)² as the derivative, yes.
`return (Me / mass) * (Me / mass);`
Correct mathematical expression this is, for the bremsstrahlung energy loss mean. Matches physics literature it does, hmmmm.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the derivative in 'deriveBremsstrahlungLossMeanE'.
# Expected: The function returns (Me/m)^2 as the derivative.
ast-grep --pattern $'inline double deriveBremsstrahlungLossMeanE(double mass) {
  return (Me / mass) * (Me / mass);
}'
```
Length of output: 333
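Reduced to this, the verified implementation is (with `Me` the electron mass, assumed here as a local constant for a self-contained snippet):

```cpp
#include <cassert>
#include <cmath>

constexpr double Me = 0.5109989461;  // electron mass [MeV]

// Derivative of the mean bremsstrahlung energy loss with respect to
// energy, as verified by the script above: (Me/m)^2.
inline double deriveBremsstrahlungLossMeanE(double mass) {
  return (Me / mass) * (Me / mass);
}
```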
`94-94`: Variable `b` computation, verify correctness we must. Ensure the expression for `b` accurately reflects the intended formula. Run the following script to double-check the calculation:
✅ Verification successful
Correct, the formula is. Trust in the physics, we must.
Verified the calculation has been, through multiple aspects of the Force:
- Comment above the line reveals the formula's meaning: (m² + me²) / me = me · (1 + (m/me)²)
- `Me` constant defined correctly as the electron mass: 0.5109989461_MeV
- Extensive test suite present, validating interactions against known physics values
- Formula used in energy loss calculations that match PDG (Particle Data Group) values within 5% accuracy
The mathematical expression aligns with the physics it represents, hmm, yes.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the computation of 'b' in line 94.
# Expected: No discrepancies in the formula for 'b'.
ast-grep --pattern $'const double b = Me * (1 + (mass / Me) * (mass / Me));'
```
Length of output: 172
Script:
```shell
#!/bin/bash
# Search for any other occurrences of variable 'b' and its usage
rg "const double b\s*=" -A 3
# Look for any documentation or comments about this calculation
rg -B 3 "const double b\s*="
# Search for the Me variable definition
rg "Me\s*="
# Look for any tests or validation of this formula
rg "test.*Interactions"
```
Length of output: 909
Script:
```shell
#!/bin/bash
# Check the test file content
cat Tests/UnitTests/Core/Material/InteractionsTests.cpp
# Look for any physics documentation in the codebase
rg -g "*.md" -g "*.h" -B 2 -A 2 "electron mass|mass interaction"
```
Length of output: 6876
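Agree numerically, the two spellings of the formula do — a small check, assuming `Me` as the electron mass in MeV:

```cpp
#include <cassert>
#include <cmath>

constexpr double Me = 0.5109989461;  // electron mass [MeV]

// Both forms of the same quantity: (m^2 + me^2)/me == me*(1 + (m/me)^2)
double bFactored(double mass) { return Me * (1.0 + (mass / Me) * (mass / Me)); }
double bDirect(double mass) { return (mass * mass + Me * Me) / Me; }
```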
`483-489`: Multiple scattering functions updated, correctness verified they must be. Confirm that the formulas in `theta0Highland` and `theta0RossiGreisen` are accurate with the use of `double`. Run the following script to check the implementations:
✅ Verification successful
The previous ast-grep pattern didn't work well. Let's try ripgrep to get more context about these multiple scattering functions and their implementations. This will help us verify the mathematical formulas and their usage of double precision.
Hmm, more context we need, yes. See the full implementation and surrounding documentation we must. The way of the Force is through thorough verification, particularly for physics formulas, it is.
Verified, the multiple scattering functions are, hmmmm. Clear in the Force, I sense these implementations are. The `theta0Highland` function follows RPP2018 equation 33.15 precisely, it does. Double precision, sufficient it is, for these calculations. Evidence in the implementation, I see:
- For the Highland formula, correctly implemented the 13.6 MeV factor is, with proper logarithmic term handling, yes.
- For the Rossi-Greisen formula, the 17.5 MeV factor and logarithmic scaling for electrons, properly applied they are.
- Both functions handle relativistic corrections through the `q2OverBeta2` parameter, they do.
Strong with physics, these functions are. No disturbance in the Force, I sense. Continue on their path, they may.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify multiple scattering functions for accuracy.
# Expected: Functions match standard physics equations.
ast-grep --pattern 'inline double theta0Highland*'
ast-grep --pattern 'inline double theta0RossiGreisen*'
```
Length of output: 109
Script:
```shell
#!/bin/bash
# Search for the implementation using ripgrep for better context
rg -U "theta0Highland.*\{[\s\S]*?\}" --type cpp -A 10
rg -U "theta0RossiGreisen.*\{[\s\S]*?\}" --type cpp -A 10
# Also search for any tests or documentation related to these functions
rg "theta0Highland|theta0RossiGreisen" --type cpp
```
Length of output: 569
Script:
```shell
#!/bin/bash
# Get full context of the file around these functions
rg -U "theta0Highland|theta0RossiGreisen" Core/src/Material/Interactions.cpp -B 5 -A 15
# Look for any unit tests or test files
fd "test.*\.cpp" -x rg -l "theta0Highland|theta0RossiGreisen" {}
# Check for any documentation or comments about these formulas
rg -B 3 -A 1 "RPP|formula|equation" Core/src/Material/Interactions.cpp
```
Length of output: 4668
@AJPfleger @andiwand are we happy with this? If so, the references need to be updated, and then we can get this in!
I am not happy with this, since we don't understand what causes the changes. The material was (supposedly) tested with
Then we need to repeat it with larger statistics.
There is a structure in the changes. We have shifts in numerous plots. If they are not significant due to the statistics, we need larger statistics.
Well in this case we can close the PR then
There is no good reason to use `float` for material properties and calculations, we just end up doing a bunch of `float` ↔ `double` conversions. The only good reason for `float` is to keep the storage low, which is still the case with this PR.
Summary by CodeRabbit
Release Notes
New Features
- Added a new constant `kAvogadro` for Avogadro's number to enhance calculations involving moles.
Improvements
- Updated material methods to use `double` instead of `float`, enhancing precision in calculations related to material properties and interactions.
Bug Fixes
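The storage-versus-computation split argued for above can be sketched as follows — a hypothetical class keeping `float` members for compact storage while exposing `double` for all arithmetic (not the actual ACTS `Material` class):

```cpp
#include <cassert>

// Hypothetical illustration of the trade-off: float storage, double interface.
class MaterialSketch {
  float mX0 = 0.0f;  // radiation length, stored as float to keep size small
 public:
  static MaterialSketch fromX0(double x0) {
    MaterialSketch m;
    m.mX0 = static_cast<float>(x0);  // narrow exactly once, at the boundary
    return m;
  }
  // all downstream arithmetic happens in double
  double X0() const { return static_cast<double>(mX0); }
};
```

This keeps the memory footprint of stored material unchanged while removing the repeated `float`/`double` round-trips from the hot calculation paths.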