Volume rendering #120
Chris: Let's talk in person. I'd love to pursue this and I am thrilled to see you working on this!
I'm also thrilled to see this! There are a couple of other use-cases and features I could add to the wishlist if you are interested (I don't want to be a distraction).
Steve: Go for it. This is all good.
Okay, here are some of my wishlist items:
It would be amazing if we could pool our efforts somehow. I've been actively working on putting some of this into Slicer.
@pieper for compositing segmented images, do MRIcroGL's mosaic, glass brain and clip plane styles match what you want? You can try these out with MRIcroGL by choosing the menu items "basic", "clip", and "mosaic" from the Scripting/Templates menu item. These show different ways that an overlay image can be superimposed on a background image. The shader will show overlays beneath the surface of the background image, but they become more transparent with depth beneath the surface. Below is an example of the "Clip" script. The "overlayDepth" slider allows the user to interactively choose the rate of transparency decay. If this is what you are looking for, it is easy to implement. If it is not what you want, can you provide bitmaps of what you are looking for?
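For illustration, a minimal sketch of that depth-weighted fading, assuming a hypothetical `overlayDepth` uniform and a `depthBelowSurface` value accumulated during the ray march (neither is MRIcroGL's actual code):

```glsl
// Illustrative names only: fade an overlay sample the deeper it
// lies beneath the background surface.
uniform float overlayDepth; // higher value = faster transparency decay

vec4 fadeOverlay(vec4 overlayColor, float depthBelowSurface) {
    // Exponential decay of opacity with depth beneath the surface.
    overlayColor.a *= exp(-overlayDepth * depthBelowSurface);
    return overlayColor;
}
```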
@neurolabusc please go ahead and send pull requests, I'd love to see what you have done and integrate it in.
Hi @neurolabusc - that looks close to what I mean, but it's not clear to me that any of those modes are exactly what I mean. I'm thinking of something like a freesurfer aseg file, where each voxel in the labelmap is a is the index into a color lookup table that provides RGBA. Every ray step would need to look at the neighborhood of voxels to calculate the gradient / surface normal based on the whether the neighbor shares the same index value. There's some discussion here. |
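For concreteness, the color step I have in mind would be a lookup like the following sketch. It assumes the labels are stored as normalized 8-bit values and the LUT is a 256-entry 2D texture (both assumptions, not existing code):

```glsl
uniform highp sampler3D labelVolume; // must use NEAREST filtering: labels cannot be interpolated
uniform sampler2D colorLUT;          // 256 x 1 RGBA lookup table

vec4 labelToColor(vec3 texCoord) {
    // Recover the integer index from a normalized 8-bit texture read.
    float label = texture(labelVolume, texCoord).r * 255.0;
    // Sample the center of the corresponding LUT texel.
    return texture(colorLUT, vec2((label + 0.5) / 256.0, 0.5));
}
```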
@pieper the MRIcroGL shaders use a single pass. While the results are the same from the user's perspective, I find one-pass shaders like Will Usher's and Philip Rideout's much simpler to implement than the two-pass method of PRISM. There is no need for an extra frame buffer: one can infer the back face of a unit cube if you know the front face and the ray direction.
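As a sketch of that idea (a standard slab-test ray/box intersection, not the exact code from any of these projects), the exit point can be computed analytically from the entry point and ray direction:

```glsl
// Given the ray origin and direction, return the near/far distances
// where the ray crosses the unit cube. Marching from tNear to tFar
// replaces the second render pass that writes back faces to a
// framebuffer. (Zero direction components yield infinities, which
// the min/max logic handles on IEEE-conformant GPUs.)
vec2 intersectBox(vec3 orig, vec3 dir) {
    const vec3 boxMin = vec3(0.0);
    const vec3 boxMax = vec3(1.0);
    vec3 invDir = 1.0 / dir;
    vec3 t0 = (boxMin - orig) * invDir;
    vec3 t1 = (boxMax - orig) * invDir;
    vec3 tmin = min(t0, t1);
    vec3 tmax = max(t0, t1);
    float tNear = max(max(tmin.x, tmin.y), tmin.z);
    float tFar  = min(min(tmax.x, tmax.y), tmax.z);
    return vec2(tNear, tFar);
}
```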
Yes, pre-computed gradients make sense. The colored cortex looks like just the thing, but I'm not seeing the labelmap case on your page - I would love to try to replicate that in VTK / Slicer.
Hi @neurolabusc - I agree with your point about indexed label maps and artifacts, so let me suggest an approach. First, let's stipulate that if a structural scan threshold defines the surface, you can use it as the alpha channel and the dilated label map to define the color; that may or may not be what you want. That's what I did for this microCT. But in addition, I'd like to be able to render a smoothed surface of the segmentation independent of the structural reference volume. What I'd like to see is effectively performing the isosurface generation as part of the ray casting in the shader, that is, fitting a local surface based on the neighboring voxel values. This would be effectively the same as what goes on in current surface mesh generation pipelines. I agree this requires extra texture fetches, but that is generally cheap and it is a constant overhead independent of surface complexity. The advantage would be that you could get realtime updates of complex geometries during interactive operations just by updating the texture (e.g. this would be useful in Slicer's Segment Editor, which currently runs a CPU surface generation pipeline that bogs down as the surface becomes complex). Do you know if such an implementation exists? I'm pretty sure I can implement this but would be happy if there were a starting point available.
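To make the idea concrete, here is a rough sketch of the kind of thing I mean: a normal computed during the march by central differences on a "shares my label" indicator. The names are hypothetical, and I don't know of an existing implementation:

```glsl
uniform highp sampler3D labelVolume; // NEAREST-filtered label texture
uniform vec3 texelSize;              // 1.0 / volume dimensions

// 1.0 if the voxel at p carries the target label, else 0.0.
float inSegment(vec3 p, float target) {
    float label = texture(labelVolume, p).r * 255.0;
    return abs(label - target) < 0.5 ? 1.0 : 0.0;
}

// Central-difference gradient of the label-membership indicator:
// an on-the-fly stand-in for a mesh surface normal. Six fetches
// per step, a constant cost independent of surface complexity.
vec3 segmentNormal(vec3 p, float target) {
    vec3 g;
    g.x = inSegment(p + vec3(texelSize.x, 0.0, 0.0), target)
        - inSegment(p - vec3(texelSize.x, 0.0, 0.0), target);
    g.y = inSegment(p + vec3(0.0, texelSize.y, 0.0), target)
        - inSegment(p - vec3(0.0, texelSize.y, 0.0), target);
    g.z = inSegment(p + vec3(0.0, 0.0, texelSize.z), target)
        - inSegment(p - vec3(0.0, 0.0, texelSize.z), target);
    return length(g) > 0.0 ? normalize(g) : vec3(0.0);
}
```

A higher-order local surface fit would replace the central differences, but the per-step fetch cost stays constant either way.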
First of all, a major advantage of pre-computing your gradients is that you can apply a smoothing pass prior to your Sobel filter. At some stage this also blurs out the surfaces, which you can see in the top right image above. I pre-compute my gradients using the GPU, not the CPU. A clever optimization is that GPU 3D texture reads compute a trilinear interpolation in hardware. Therefore, a carefully placed texture read samples 8 voxels with a chosen weighting between them. This extends the 1D (weighted sample of 2 pixels) and 2D (4 pixels) methods described here. You can see my WebGL blur shader here, which is run prior to the Sobel shader for gradient estimation. You can run my demo, changing the blur width (dX/dY/dZ). You can also use the anatomical scan for normals, but ignore voxels where there is no color in the atlas. You want to make sure to reuse the pre-computed normals. There is still an issue with the fact that integer atlases do not handle partial volume, so you get some staircase artifacts with surfaces that are parallel to the rays.

With regards to getting a nice isosurface, you might want to try the latest release of MRIcroGL. All the GLSL shaders are text files in the /Resources/Shader folder. You can interactively edit them with a text editor, which allows rapid prototyping of new effects. A preference line in the shader header will create a uniform named 'blend' with an initial value of 0.5, and the user can adjust the corresponding slider in the user interface between 0.0 and 1.0. The image below shows the included "tomography" shader, which is designed to deal with the fact that bone in CT is extremely bright, leading to stair-stepping with typical volume rendering. The two images below are the same shader, but with different slider settings. Here is the AICHA atlas with the Tomography shader and full surfaceHardness:
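As an illustration of the trilinear trick mentioned above, here is a sketch (illustrative names, not my actual shader; it assumes linear filtering is enabled on the volume texture):

```glsl
uniform highp sampler3D intensityVolume; // linear filtering enabled
uniform vec3 texelSize;                  // 1.0 / volume dimensions

// Hardware trilinear filtering makes one carefully placed read a
// weighted average of 8 voxels: at an offset of 0.5 texels the
// 2x2x2 neighborhood is weighted equally, while smaller offsets
// weight the nearest voxels more heavily.
vec4 blurredTap(vec3 p, float blurWidth) {
    vec3 off = texelSize * blurWidth; // e.g. blurWidth in 0.0 .. 0.5
    // Average two 8-voxel taps placed symmetrically about p.
    return 0.5 * (texture(intensityVolume, p + off)
                + texture(intensityVolume, p - off));
}
```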
Thanks for the info Chris, those shaders look great. I'll go off and see about getting the effect I want in Slicer/VTK, but also keep an eye on your work here with BIS. Let's keep thinking about how we might factor some of the code out to work in multiple environments.
Here would be my proposal for updating the shader, which is now very similar to FSLeyes.
@neurolabusc The whole project gets "built" using gulp, so bislib.js is created using webpack from all the component js files. I would be happy to describe the process for you -- but basically, once you install all the prerequisites, the development cycle boils down to typing gulp -- this runs a webserver plus webpack in watch mode that looks for changes to the js code and rebuilds the combined bislib.js file, which you access from localhost:8080. The volume rendering code is fed images from js/webcomponents/bisweb_orthogonalviewer.js, which calls code in js/coreweb/bis_3dVolume.js, which in turn calls the shaders. Making changes to the GUI to add three modes is actually fairly straightforward. I can do that. Xenios
The recent commits have started work on volume rendering, using the shader in bis_3dvolrenutils.js.
While volume rendering can be done with WebGL1, the 3D textures of WebGL2 make this much more efficient. This is timely, as Chrome, Edge and Firefox all support WebGL2, and the Safari Technology Preview reveals that WebGL2 will no longer be disabled by default.
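For example, with WebGL2 a volume sample is a single `texture()` call on a `sampler3D` (a minimal sketch, not the bisweb shader itself):

```glsl
#version 300 es
// Minimal WebGL2 (GLSL ES 3.00) fragment shader: true 3D textures
// make a volume sample one texture() call, instead of the 2D-atlas
// emulation that WebGL1 requires.
precision highp float;

uniform highp sampler3D volume;
in vec3 vTexCoord;   // interpolated 3D texture coordinate
out vec4 fragColor;

void main() {
    float intensity = texture(volume, vTexCoord).r;
    fragColor = vec4(vec3(intensity), 1.0);
}
```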
Can I suggest a couple of features that are illustrated here:
My suggestions are described in Engel et al.'s Real-Time Volume Graphics.
- There is no need to compute `view_ray` for each fragment. It can be done in the vertex shader (as in my example, just eight times) or as a uniform (once). I admit this probably has virtually no impact on performance.
- My example uses `wang_hash` to jitter the ray start and disguise banding artifacts. You can remove that to see the impact.
- `sample_3d_texture()` is nested in 6 `if` conditionals. GPU shaders are poor at conditionals, and these should be avoided in your inner loop. In my code, only positions within the texture are sampled, so there is no need for a conditional.
- The `add_lighting` function in your shader requires 6 expensive texture lookups and will yield low precision gradients. Pre-computed gradients will be better quality and require only a single texture lookup. My code shows how to create a 3D gradient texture, which you do once and retain for all subsequent renderings.
- Consider early ray termination: once the accumulated color is nearly opaque (`if (color.a >= 0.95)`), further samples cannot change the pixel and the loop can break; see the loop sketch after this list.
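Here is a sketch of an inner loop that combines these points: no bounds conditionals inside the march, one fetch into a pre-computed gradient texture, front-to-back compositing, and early termination. The helpers `transferFunction` and `shade` are assumed stand-ins, not code from either project:

```glsl
uniform highp sampler3D volume;    // intensity volume
uniform highp sampler3D gradients; // pre-computed, packed into 0..1

// Assumed stand-ins for the real transfer function and lighting:
vec4 transferFunction(float v) { return vec4(v, v, v, v * 0.05); }
vec3 shade(vec3 rgb, vec3 n)   { return rgb * max(dot(n, vec3(0.577)), 0.2); }

vec4 raymarch(vec3 orig, vec3 dir, float tNear, float tFar, float dt) {
    vec4 color = vec4(0.0);
    for (float t = tNear; t < tFar; t += dt) {
        vec3 p = orig + dir * t; // always inside the volume: no bounds ifs
        vec4 src = transferFunction(texture(volume, p).r);
        vec3 normal = texture(gradients, p).rgb * 2.0 - 1.0; // one fetch
        src.rgb = shade(src.rgb, normal);
        // Front-to-back "over" compositing.
        color.rgb += (1.0 - color.a) * src.a * src.rgb;
        color.a   += (1.0 - color.a) * src.a;
        if (color.a >= 0.95) break; // early ray termination
    }
    return color;
}
```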
I am happy to generate pull requests to improve your volume rendering methods if you wish. BioImage Suite is an outstanding project, and enhancing the volume rendering will have a great impact.