gl_common: set viewports and projection matrices when working with framebuffers and shaders #349
Conversation
gl_common: set viewports and projection matrices when working with framebuffers and shaders

When we are drawing a texture to a framebuffer, we must ensure that the framebuffer's width and height are greater than or equal to the texture's width and height, otherwise the texture will be cropped; so we set the viewport to the texture's width and height after binding a framebuffer. When we set the viewport to the texture's width and height and use a shader, we must also update the shader's projection matrix with the corresponding values (since the projection matrices of all shaders are initialized with the default framebuffer's width and height), otherwise the result of using the shader will look misplaced and/or scaled. Fixes at least #212.
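A rough sketch of what this boils down to (not the exact patch: the helper name and the `projection` uniform are made up for illustration, and libepoxy is assumed for GL loading):

```c
#include <epoxy/gl.h>

// Sketch only: draw `tex` into `fbo` with shader `prog`, making sure the
// viewport and the shader's projection matrix match the texture size.
static void draw_texture_to_fbo(GLuint fbo, GLuint tex, GLuint prog,
                                int tex_width, int tex_height) {
	glBindFramebuffer(GL_FRAMEBUFFER, fbo);

	// Without this, drawing is clipped to the previously set viewport
	// (usually the default framebuffer size) and the texture gets cropped.
	glViewport(0, 0, tex_width, tex_height);

	// The projection matrix was built for the default framebuffer size, so
	// rebuild it for the new viewport, or the result comes out scaled and/or
	// misplaced. Column-major orthographic projection mapping
	// (0..w, 0..h) to NDC (-1..1):
	GLfloat ortho[16] = {
	    2.0f / tex_width,  0.0f,               0.0f,  0.0f,
	    0.0f,              2.0f / tex_height,  0.0f,  0.0f,
	    0.0f,              0.0f,              -1.0f,  0.0f,
	   -1.0f,             -1.0f,               0.0f,  1.0f,
	};
	glUseProgram(prog);
	glUniformMatrix4fv(glGetUniformLocation(prog, "projection"), 1, GL_FALSE,
	                   ortho);

	(void)tex; // used by the elided draw code below
	// ... bind `tex`, set up vertex data, and issue the draw call here ...
}
```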
Codecov Report

```diff
@@            Coverage Diff             @@
##             next     #349      +/-   ##
==========================================
- Coverage   32.10%   31.58%   -0.52%
==========================================
  Files          46       46
  Lines        8516     8545      +29
==========================================
- Hits         2734     2699      -35
- Misses       5782     5846      +64
```
Thanks. I am aware of this problem too. But I don't think it's necessary to set the projection and viewport for every render. I am planning to set the viewport to a big value (say, 32768x32768) and just leave it there. I think that should work, but I haven't validated it yet.
i'm not a gl expert, but won't setting the viewport to a huge value cause any performance issues? doesn't it allocate video memory? also, you should probably set the viewport not just to a huge value, but to the maximum available value. should i keep this pull request open, or have you noted this solution and will fix it yourself someday?
No memory is allocated when you call `glViewport`.
oh, yeah, my bad, the viewport is about coordinate transformation. btw, i roughly tested your suggestion: i blindly commented out all viewport and projection matrix sets and initialized the viewport and projection matrices of all shaders in
The dual-filter kawase algorithm renders to differently sized fbos/textures. Commenting out the
@tryone144

input coordinates --(projection matrix)--> NDC --(viewport transformation)--> framebuffer coordinates

What I intend to do is to have the projection matrix and the viewport transformation "cancel" each other out (ignoring z and w), and use framebuffer coordinates directly as input coordinates. This should work no matter how big the framebuffer is, thus we never need to touch the projection matrix and the viewport size. The only potential problem is that things outside the framebuffer won't be clipped if we do this, so there might be a performance loss? (not sure) but I think we can carefully draw within the bounds of the framebuffer.
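Spelled out (a toy sketch with made-up names, just to show the arithmetic of the two transformations canceling out):

```c
#include <stdio.h>

// Orthographic projection built for a viewport of width vp_w: input -> NDC.
static float project_x(float x_in, float vp_w) {
	return 2.0f * x_in / vp_w - 1.0f;
}

// Viewport transformation for the same width: NDC -> framebuffer.
static float viewport_x(float x_ndc, float vp_w) {
	return (x_ndc + 1.0f) / 2.0f * vp_w;
}

int main(void) {
	const float vp_w = 32768.0f; // one big, fixed viewport width
	// A framebuffer coordinate goes in and the same coordinate comes out,
	// so neither the projection matrix nor glViewport() needs per-draw
	// updates as long as the viewport size never changes.
	printf("%f\n", viewport_x(project_x(123.0f, vp_w), vp_w)); // 123.000000
	return 0;
}
```

(The same applies to y; z and w are ignored here, as noted above.)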
I will try this approach and check if I get it to work by scaling the coordinates themselves.
I've got it working by setting the viewport in
However, I found a call to

Possible problems I see with this approach:
`GL_MAX_VIEWPORT_DIMS` is generally not that big, so we probably don't really need to worry about that?

I actually cannot quite find the specific sections in the opengl spec about this, but I think fragments outside of the framebuffer are probably not going to be generated by rasterization. I wouldn't worry too much about this, but I will keep digging.
For example, in the 3.30 core profile spec, section 3.4.1 Basic Point Rasterization, it says:

If (x_w, y_w) is outside of the framebuffer to begin with, then presumably no fragment will be produced, as there is no framebuffer pixel there.
After digging some more through different forum posts, it seems that fragments outside the target buffer dimensions will be dropped, regardless of whether it's an FBO or the backbuffer (link).
Should I split out the changes from #382 into their own PR and rebase on top of that? Without these changes, the new blur code has to update the projection matrix at least twice per window. With these changes, it suffices to update a single float uniform to scale the coordinates appropriately.
Yes, please.
- Query the maximum supported dimensions of `glViewport()` when initializing so we don't have to worry about differently sized textures when rendering (usually the same as the maximum supported texture size, but dependent on the driver).
- Set the projection matrix in all shaders at startup to the queried viewport dimensions. This allows using screen coordinates for all vertex positions without having to keep track of framebuffer dimensions.
- Follow recommendations and set `glViewport()` to the queried maximum dimensions for each draw call (`glDraw*()`, `glClear()`).

Related: yshui#349
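Roughly, that approach looks something like this (a sketch under assumptions, not the actual code of #382; libepoxy is assumed for GL loading, and the function and uniform names are made up):

```c
#include <epoxy/gl.h>

// Query the maximum viewport size once at startup and build a matching
// orthographic projection, so vertex positions can simply be screen
// coordinates regardless of which framebuffer or texture size is in use.
static void gl_setup_fixed_viewport(GLuint prog) {
	GLint max_dims[2];
	glGetIntegerv(GL_MAX_VIEWPORT_DIMS, max_dims); // driver-dependent maximum

	// Re-applied before each draw call per the notes above; shown once here.
	glViewport(0, 0, max_dims[0], max_dims[1]);

	// Column-major orthographic projection for the same dimensions, with the
	// same layout as in the earlier sketch.
	GLfloat ortho[16] = {
	    2.0f / max_dims[0],  0.0f,                0.0f,  0.0f,
	    0.0f,                2.0f / max_dims[1],  0.0f,  0.0f,
	    0.0f,                0.0f,               -1.0f,  0.0f,
	   -1.0f,               -1.0f,                0.0f,  1.0f,
	};
	glUseProgram(prog);
	glUniformMatrix4fv(glGetUniformLocation(prog, "projection"), 1, GL_FALSE,
	                   ortho);
	glUseProgram(0);
}
```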
i guess i can close this pull request since you guys have already figured out a better approach and @tryone144 made a better implementation. hope my work on this issue was helpful :)
let's be honest: this is my first somewhat serious pull request and i can easily mess something up since i'm not experienced with pull requests and git overall
this fixes at least one known issue caused by not setting viewports and not updating projection matrices: when using frame opacity, window contents are not drawn beyond the framebuffer size (the result of decoupling), and the transparent frame is not drawn beyond the framebuffer size (the result of applying alpha, if you fix only the decoupling)
basically, if you are working with framebuffers, you must ensure that the framebuffer's width and height are greater than or equal to the width and height of the texture you are working with, otherwise the texture will be cropped. that's why we set the viewport after binding a framebuffer.
but if we are changing the viewport and using a shader, we must also set the projection matrix for it (if the shader uses one), otherwise the result of using the shader will look scaled or misplaced, since the projection matrices of all shaders are initialized with the root texture's width and height
there is one caveat in _gl_compose if we are drawing on the back fbo: either we are drawing the root texture to it, which is the general case, or we are drawing a window texture onto the root texture that is already in the framebuffer. in the latter case we can't set the viewport to the texture size, it must be at least the root texture size (see the sketch below)
basically, the same thing is written in the commit description, and i also added a comment to explain this caveat
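A hypothetical illustration of that caveat (names and structure are made up, not picom's actual _gl_compose code; libepoxy assumed):

```c
#include <epoxy/gl.h>
#include <stdbool.h>

// Pick the viewport size depending on the render target. When compositing a
// window texture onto the back fbo, the target already holds the root-sized
// image and the window is placed at its position inside it, so the viewport
// (and the matching projection matrix) must cover the whole root, not just
// the window texture.
static void set_viewport_for_target(bool drawing_to_back_fbo,
                                    int tex_width, int tex_height,
                                    int root_width, int root_height) {
	if (drawing_to_back_fbo) {
		glViewport(0, 0, root_width, root_height);
	} else {
		// Offscreen framebuffer used only for this texture: the texture
		// size is enough.
		glViewport(0, 0, tex_width, tex_height);
	}
	// The active shader's projection matrix has to be rebuilt for the chosen
	// size as well (see the first sketch near the top of this thread).
}
```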