Flying focal point and cone-beam Ziegler correction #144
Can I perform forward projection view-by-view? When I run an optimization like demo_leaptorch/test_recon_NN.py with my own CBCT scan data, the projection resolution or volume size may be large. Randomly selecting a view, forward projecting it, and computing the loss would save memory, since I have 1200/2400 views or even more, rather than directly projecting all views at once.
LEAP does not support FFS directly, but as you suggested you can achieve this by defining multiple instances each with a different geometry. Note that centerRow and centerCol move the detector, not the source. There is no direct method to perform projections view-by-view. The best way to do this is to split up your data and define multiple instances each with its own geometry.
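A minimal sketch of this multiple-instance approach, assuming a flying focal spot that alternates between two positions on even/odd views. The geometry numbers and the ±0.25-pixel centerCol offsets are placeholders, and the set_conebeam argument order is taken from the LEAP demo scripts, so check it against your version of leapctype.py:

```python
# Hypothetical flying-focal-spot setup: two LEAP instances, one per focal-spot
# position, describing the same reconstruction volume. All geometry values are
# placeholders for illustration.
import numpy as np
from leapctype import tomographicModels

numAngles, numRows, numCols = 2400, 512, 768   # full scan (even + odd views)
pixelHeight = pixelWidth = 0.4
sod, sdd = 1000.0, 1500.0

leapct_even = tomographicModels()
leapct_odd = tomographicModels()

phis = leapct_even.setAngleArray(numAngles, 360.0)
for ct, sub_phis, colShift in [
        (leapct_even, np.ascontiguousarray(phis[0::2]), +0.25),   # focal spot A
        (leapct_odd,  np.ascontiguousarray(phis[1::2]), -0.25)]:  # focal spot B
    # NOTE: per the reply above, centerRow/centerCol shift the detector,
    # not the source, so this only approximates true focal-spot motion.
    ct.set_conebeam(len(sub_phis), numRows, numCols, pixelHeight, pixelWidth,
                    0.5*(numRows - 1), 0.5*(numCols - 1) + colShift,
                    sub_phis, sod, sdd)
    ct.set_default_volume()  # both instances use the same volume grid

f = leapct_even.allocateVolume()  # one shared volume array for both instances
```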
Thanks for your prompt reply. I just tried set_modularbeam, but it does not seem to work well. Should I use set_conebeam twice? But can two leapct instances reconstruct one volume together?
@kylechampley Hello Kyle, it seems that projecting view-by-view is about 10x slower than projecting all views at once. Do you have any idea why? I think I found the reason: you iterate over the views in CUDA, but I iterate over them in Python, which is why it is so slow.
Currently, volume masking does not improve the speed of forward and backprojection algorithms. I wonder if we could use something like empty space skipping to accelerate the FP and BP algorithms?
The modular-beam geometry does not model curved detectors. Only the cone-beam geometry does. Instead of projecting view-by-view, why don't you project all views with the same focal spot position together? This should be much faster. You could use the subsetParameters class to divide the projections amongst the different focal spot positions. There are examples of how to use this in leapctype.py, for example here.
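Continuing the sketch above (a hand-rolled alternative to the subsetParameters class), the idea is to make one projection/backprojection call per focal-spot position instead of 1200+ single-view calls from Python. Here g is assumed to be the measured sinogram with views alternating between the two focal spots, and the step size is a placeholder:

```python
# Split the measured data by focal-spot position (views assumed to alternate
# A, B, A, B, ...), then do one batched call per subset; each call loops over
# its views inside the CUDA kernels rather than in Python.
g_even = np.ascontiguousarray(g[0::2, :, :])
g_odd = np.ascontiguousarray(g[1::2, :, :])

Pf_even = leapct_even.allocateProjections()
Pf_odd = leapct_odd.allocateProjections()
leapct_even.project(Pf_even, f)   # forward project all even views at once
leapct_odd.project(Pf_odd, f)     # forward project all odd views at once

# Plain gradient-descent update on the least-squares residual (illustrative
# step size; a real reconstruction would use SART/RWLS or a line search).
grad_even = leapct_even.allocateVolume()
grad_odd = leapct_odd.allocateVolume()
leapct_even.backproject(Pf_even - g_even, grad_even)
leapct_odd.backproject(Pf_odd - g_odd, grad_odd)
f -= 1.0e-4 * (grad_even + grad_odd)
```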
Thanks for your reply; I will try it later. Another question: do you have any idea about cone-beam artifacts? When I use the optimization with leaptorch, it gives me cone-beam artifacts, as shown in the video, but the FBP result is fine and its cone-beam artifacts are limited.
Iterative reconstruction should give you reduced cone-beam artifacts, so maybe you are doing something wrong or not taking it to convergence.
My basic idea is the following: g_ = forward_projector(f_), compute loss(g, g_), then backward and update, and I do this view by view. Is there anything wrong with that?
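For reference, here is a minimal sketch of the optimization loop described above, but batched per focal-spot position rather than view-by-view. The names projector_even, projector_odd, g_even_t, g_odd_t, and the volume dimensions are hypothetical stand-ins; the projector modules would be leaptorch Projector instances configured as in demo_leaptorch/test_recon_NN.py, whose exact constructor arguments depend on the LEAP version, so they are not spelled out here:

```python
import torch

# Hypothetical stand-ins (not defined here):
#   projector_even, projector_odd : differentiable forward-projector modules,
#       one per focal-spot position (e.g., leaptorch Projector instances)
#   g_even_t, g_odd_t : measured view subsets as torch tensors on the GPU
numX = numY = 512
numZ = 400                                    # placeholder volume dimensions

f = torch.zeros(numZ, numY, numX, device='cuda', requires_grad=True)
optimizer = torch.optim.Adam([f], lr=1.0e-3)  # placeholder learning rate
loss_fn = torch.nn.MSELoss()

for iteration in range(200):
    optimizer.zero_grad()
    # One forward projection per focal-spot subset per iteration, instead of
    # one per view; the wrapper may expect a leading batch dimension.
    loss = loss_fn(projector_even(f), g_even_t) + \
           loss_fn(projector_odd(f), g_odd_t)
    loss.backward()
    optimizer.step()
```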
Those artifacts you are seeing in the SART reconstruction are not cone-beam artifacts; they are truncation artifacts from the fact that the patient extends past the field of view of the scanner. A description of and solution to this problem is in the d34_iterative_reconstruction_long_object.py demo script. The weighted backprojection algorithm does not mitigate cone-beam artifacts; what it does is use zeroth-order extrapolation off the top and bottom of the detector to fill in some of the missing information from some projections.
New feature: flying focal point. I have a curved-detector CBCT scan acquired with a flying focal point, meaning the focal spot alternates between odd and even frames. As a result, the centerRow and centerCol parameters passed to set_conebeam() need to be different for odd and even views. How should I achieve this?