
gl_common: set viewports and projection matrices when working with framebuffers and shaders #349

Closed · wants to merge 1 commit

Conversation

@absolutelynothelix (Collaborator) commented Mar 22, 2020

Let's be honest: this is my first somewhat serious pull request, and I can easily mess something up since I'm not experienced with pull requests or git in general.

This fixes at least one known issue caused by not setting viewports and not updating projection matrices: when using frame opacity, window contents are not drawn beyond the framebuffer size (the result of decoupling), and a transparent frame is not drawn beyond the framebuffer size either (the result of applying alpha, if you fix only the decoupling).

Basically, if you are working with framebuffers, you must ensure that the framebuffer's width and height are greater than or equal to the width and height of the texture you are working with; otherwise the texture will be cropped. That's why we set the viewport after binding a framebuffer.

But if we change the viewport and use a shader, we must also set the shader's projection matrix (if the shader uses one); otherwise the result of using the shader will look scaled or misplaced, since the projection matrices of all shaders are initialized with the root texture's width and height.

There is one caveat in _gl_compose when drawing to the back FBO: either we are drawing the root texture to it (the general case), or we are drawing a window texture onto the root texture that is already in the framebuffer. In the latter case we can't let the framebuffer be texture-sized; it must be at least the size of the root texture.

Basically the same is written in the commit description, and I also added a comment to explain this caveat.

gl_common: set viewports and projection matrices when working with framebuffers and shaders

When we are drawing a texture to a framebuffer, we must ensure that the
framebuffer's width and height are greater than or equal to the texture's
width and height, otherwise the texture will be cropped; so we set the
viewport to the texture's width and height after binding a framebuffer.

When we set the viewport to the texture's width and height and use a
shader, we must update the shader's projection matrix with the
corresponding values (since the projection matrices of all shaders are
initialized with the framebuffer's default width and height), otherwise
the result of using the shader will look misplaced and/or scaled.

Fixes at least #212.
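As a concrete illustration of the pattern this commit message describes, here is a minimal sketch (not picom's actual code): the helper name, the program handle, the `projection` uniform name, and the libepoxy-style include are all assumptions.

```c
#include <epoxy/gl.h> // GL loader include, assumed here

// Bind a framebuffer, match the viewport to the target texture, and
// upload an orthographic projection for that size.
static void bind_texture_sized_target(GLuint fbo, GLuint prog,
                                      int width, int height) {
	glBindFramebuffer(GL_FRAMEBUFFER, fbo);
	// Without this, fragments beyond the previous viewport are cropped.
	glViewport(0, 0, width, height);

	// Column-major orthographic projection mapping (0,0)..(width,height)
	// onto the [-1,1] NDC range.
	GLfloat projection[16] = {
	    2.0f / width, 0.0f,          0.0f,  0.0f,
	    0.0f,         2.0f / height, 0.0f,  0.0f,
	    0.0f,         0.0f,          -1.0f, 0.0f,
	    -1.0f,        -1.0f,         0.0f,  1.0f,
	};
	glUseProgram(prog);
	glUniformMatrix4fv(glGetUniformLocation(prog, "projection"), 1,
	                   GL_FALSE, projection);
}
```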
@codecov bot commented Mar 22, 2020

Codecov Report

Merging #349 into next will decrease coverage by 0.51%.
The diff coverage is 0.00%.


```diff
@@            Coverage Diff             @@
##             next     #349      +/-   ##
==========================================
- Coverage   32.10%   31.58%   -0.52%
==========================================
  Files          46       46
  Lines        8516     8545      +29
==========================================
- Hits         2734     2699      -35
- Misses       5782     5846      +64
```
| Impacted Files | Coverage Δ |
| --- | --- |
| src/backend/gl/gl_common.c | 0.00% <0.00%> (ø) |
| src/opengl.h | 0.00% <0.00%> (-54.77%) ⬇️ |
| src/win.h | 0.00% <0.00%> (-41.67%) ⬇️ |
| src/x.h | 47.36% <0.00%> (-31.58%) ⬇️ |
| src/utils.h | 52.63% <0.00%> (-21.06%) ⬇️ |
| src/common.h | 4.76% <0.00%> (-9.53%) ⬇️ |
| src/picom.c | 62.78% <0.00%> (+0.09%) ⬆️ |
| src/log.h | 68.75% <0.00%> (+25.00%) ⬆️ |

@yshui (Owner) commented Mar 26, 2020

Thanks. I am aware of this problem too. But I don't think it's necessary to set the projection and viewport for every render.

I am planning to set the viewport to a big value (say, 32768x32768) and just leave it there. I think that should work, but I haven't validated it yet.

@absolutelynothelix (Collaborator, Author) commented Mar 26, 2020

I'm not a GL expert, but won't setting the viewport to a huge value cause performance issues? Is it perhaps about allocating video memory?

Also, you should probably set the viewport not just to a huge value but to the maximum available value (GL_MAX_VIEWPORT_DIMS).

Should I keep this pull request open, or have you noted this solution and will fix it yourself someday?
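For reference, querying that limit is a single glGetIntegerv() call (a minimal sketch):

```c
// GL_MAX_VIEWPORT_DIMS yields a width/height pair.
GLint max_dims[2];
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, max_dims);
// Pin the viewport to the largest size the driver supports.
glViewport(0, 0, max_dims[0], max_dims[1]);
```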

@yshui (Owner) commented Mar 26, 2020

> Is it perhaps about allocating video memory?

No memory is allocated when you call glViewport.

@absolutelynothelix (Collaborator, Author) commented Mar 26, 2020

Oh, yeah, my bad, the viewport is about coordinate transformation.

By the way, I roughly tested your suggestion: I blindly commented out all viewport and projection matrix updates and initialized the viewport and the projection matrices of all shaders in gl_resize with GL_MAX_VIEWPORT_DIMS. It seems to work well and fixes the issue, but I was testing on tryone's fork with dual-kawase blur, and this broke it. I haven't dug into it though; maybe it's my fault and I commented out something I shouldn't have.

@tryone144 (Collaborator) commented:

> I blindly commented out all viewport and projection matrix updates and initialized the viewport and the projection matrices of all shaders in gl_resize with GL_MAX_VIEWPORT_DIMS. It seems to work well and fixes the issue, but I was testing on tryone's fork with dual-kawase blur, and this broke it.

The dual-filter kawase algorithm renders to differently sized FBOs/textures. Commenting out the glViewport() calls in the blur routine will definitely mess up the target coordinates. If my understanding of glViewport() is correct, the same effect should be achievable by changing the projection matrix of the vertex shader in every pass.

@yshui (Owner) commented Apr 1, 2020

@tryone144
Correct me if I am wrong, but here is how I think coordinate transformation works:

input coordinates --(× projection matrix)--> NDC --(viewport transformation)--> framebuffer coordinates

What I intend to do is have the projection matrix and the viewport transformation "cancel" each other out (ignoring z and w), and use framebuffer coordinates directly as input coordinates. This should work no matter how big the framebuffer is, so we never need to touch the projection matrix or the viewport size.

The only potential problem is that things outside the framebuffer won't be clipped if we do this, so there might be a performance loss? (not sure) But I think we can carefully draw within the bounds of the framebuffer.
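A quick worked check of that cancellation (my own arithmetic, assuming a viewport of width $V$ anchored at the origin and a projection that simply scales by $2/V$):

$$x_{\text{ndc}} = \frac{2\,x_{\text{in}}}{V} - 1, \qquad x_w = \frac{x_{\text{ndc}} + 1}{2}\,V = x_{\text{in}}$$

so input x coordinates pass through to framebuffer coordinates unchanged (and likewise for y), no matter how large $V$ is.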

@tryone144 (Collaborator) commented:

> I blindly commented out all viewport and projection matrix updates and initialized the viewport and the projection matrices of all shaders in gl_resize with GL_MAX_VIEWPORT_DIMS; it seems to work well and fixes the issue.

I will try this approach and check whether I can get it to work by scaling the coordinates themselves.

@tryone144 (Collaborator) commented:

> I blindly commented out all viewport and projection matrix updates and initialized the viewport and the projection matrices of all shaders in gl_resize with GL_MAX_VIEWPORT_DIMS; it seems to work well and fixes the issue.

I've got it working by setting the viewport in gl_resize() to GL_MAX_VIEWPORT_DIMS and updating the projection matrix in all shaders accordingly.

  • No more calls to glViewport() or projection matrix updates are needed in the blur code.
  • Rendering/texture scaling for the dual-kawase down-/upsample passes is done with a new scale uniform (see the sketch after this list).

However, I found a call to glViewport() in gl_average_texture_color() which seems to be mandatory for the function's logic.

Possible problems I see with this approach:

  • All generated coordinates should be within the screen/FBO-texture dimensions. I'm not sure of the performance impact if we were to draw outside these dimensions.
  • Possible loss of accuracy when converting to NDC and back.
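For illustration, the scale-uniform idea could look roughly like this (hypothetical names, not the actual patch):

```c
// Vertex shader kept as a C string, as is common in GL code. The
// projection matrix is uploaded once at startup for the maximum viewport;
// "scale" rescales the screen-space coordinates per down-/upsample pass.
static const char *blur_vert_src =
	"#version 330\n"
	"uniform mat4 projection;\n"
	"uniform float scale;\n"
	"in vec2 coord;\n"
	"void main() {\n"
	"	gl_Position = projection * vec4(coord * scale, 0.0, 1.0);\n"
	"}\n";
```

Each pass would then only need a glUniform1f() update for scale instead of a glViewport() call plus a full projection matrix upload.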

@yshui (Owner) commented Apr 20, 2020

> Possible loss of accuracy when converting to NDC and back.

GL_MAX_VIEWPORT_DIMS is generally not that big, so we probably don't need to worry about that?
https://opengl.gpuinfo.org/displaycapability.php?name=GL_MAX_VIEWPORT_DIMS[0]

> All generated coordinates should be within the screen/FBO-texture dimensions. I'm not sure of the performance impact if we were to draw outside these dimensions.

I actually can't quite find the specific section in the OpenGL spec about this, but I think fragments outside of the framebuffer are probably not going to be generated by rasterization. I wouldn't worry too much about this, but I will keep digging.

@yshui (Owner) commented Apr 20, 2020

For example, the 3.30 core profile, section 3.4.1 "Basic Point Rasterization", says:

> Point rasterization produces a fragment for each framebuffer pixel whose center lies inside a square centered at the point's (x_w, y_w), with side length equal to the current point size.

If (x_w, y_w) is outside the framebuffer to begin with, then presumably no fragment will be produced, as there is no framebuffer pixel there.

@tryone144 (Collaborator) commented:

After digging some more through different forum posts, it seems that fragments outside the target buffer dimensions will be dropped, regardless of whether it's an FBO or the backbuffer (link).

glViewport() is just responsible for the NDC -> screen coordinate transformation and does no "clipping", so there should be no difference (link). Real clipping is only done for primitives with NDC outside the (-1, 1) range, which are influenced by the projection matrix. These can now be outside the visible screen for "offscreen" windows without being clipped, but for the number of triangles we actually render, this should have a negligible impact on performance.

Should I split the changes out of #382 into their own PR and rebase on top of that? Without these changes, the new blur code has to update the projection matrix at least twice per window. With these changes, it suffices to update a single float uniform to scale the coordinates appropriately.

@yshui (Owner) commented Apr 22, 2020

> Should I split the changes out of #382 into their own PR and rebase on top of that?

Yes, please.

tryone144 added a commit to tryone144/picom that referenced this pull request Apr 23, 2020
- Query the maximum supported dimensions of `glViewport()` when initializing,
  so we don't have to worry about differently sized textures when
  rendering (usually the same as the maximum supported texture size, but
  dependent on the driver).
- Set the projection matrix in all shaders at startup to the queried viewport
  dimensions. This allows using screen coordinates for all vertex positions
  without having to keep track of framebuffer dimensions.
- Follow recommendations and set `glViewport()` to the queried maximum
  dimensions for each draw call (`glDraw*()`, `glClear()`).

Related: yshui#349
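Read together, the init-time setup that commit describes would be roughly the following (my sketch of the idea, not the actual diff):

```c
// At backend init: query the driver's viewport limit once and pin the
// viewport there, so per-texture glViewport() updates become unnecessary.
GLint vp_dims[2];
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, vp_dims);
glViewport(0, 0, vp_dims[0], vp_dims[1]);
// Then upload an orthographic projection for vp_dims[0] x vp_dims[1] to
// every shader once, as in the earlier sketch, so vertex positions can be
// plain screen coordinates from here on.
```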
@absolutelynothelix (Collaborator, Author) commented:

I guess I can close this pull request since you've already figured out a better approach and @tryone144 implemented it. I hope my work on this issue was helpful :)

yshui pushed a commit to tryone144/picom that referenced this pull request Apr 23, 2020 (same commit message as above)
zappolowski pushed a commit to zappolowski/picom that referenced this pull request Oct 16, 2020 (same commit message as above)