
The background blur is allowed to sample things outside of the window #854

Open
LoganDark opened this issue Jul 19, 2022 · 15 comments
Labels
help wanted SOMEBODY PLEASE HELP

Comments

@LoganDark

LoganDark commented Jul 19, 2022

If you have a bright object in the background and your window is next to it, it'll show up in the window's background even though that makes no sense. Things that aren't actually under the window shouldn't influence the output of the blur.

This is a visual quality/fidelity bug, not any misconfiguration on my part, so I'm omitting the usual versioning and debugging info.

Experimental GLX backend with dual_kawase blur.

@yshui
Owner

yshui commented Jul 19, 2022

If blur is only allowed to sample things below the window, what happens at the edge of the window, where the sample area extends outside of it? What do other compositors do in this case?

@LoganDark
Author

LoganDark commented Jul 19, 2022

> If blur is only allowed to sample things below the window, what happens at the edge of the window, where the sample area extends outside of it? What do other compositors do in this case?

What currently happens at the edge of the desktop? The out of bounds area simply isn't sampled, right?

I have my own blur implementation that handles edges correctly, but of course it runs on the CPU and takes around 30ms, so it isn't acceptable for desktop compositing. I don't think any of its ideas are applicable to dual_kawase blur either, nor to a traditional convolution afaict.

@yshui
Owner

yshui commented Jul 19, 2022

The edge pixels are repeated, because it's the easiest thing to do. Unlike with a rolling sum, adjusting the convolution kernel (or the weights, in the case of dual_kawase) based on how many neighbouring pixels are in range would be more complicated, and would most likely slow down the shader in general, not just for edge pixels.

I guess they are technically possible though.
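For illustration only (this is a sketch, not picom code): the weight-adjustment idea amounts to a renormalized convolution, where out-of-range taps are skipped and the kernel is renormalized over the taps that remain. A minimal 1-D version:

```python
# Illustrative sketch, not picom code: a 1-D blur that skips out-of-range
# taps and renormalizes the kernel over the taps that remain, instead of
# repeating the edge pixels.
def blur_renormalized(pixels, kernel):
    r = len(kernel) // 2  # kernel radius; kernel length assumed odd
    out = []
    for i in range(len(pixels)):
        acc = 0.0
        weight = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(pixels):  # skip taps that fall outside the image
                acc += w * pixels[j]
                weight += w
        out.append(acc / weight)  # renormalize by the in-range weight sum
    return out
```

A constant image stays constant all the way to the border (no darkening, no extra weight on the edge line); the cost is the extra bounds check and division per pixel, which is the slowdown mentioned above.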

@LoganDark
Author

> The edge pixels are repeated.

:(

> Unlike with a rolling sum, adjusting the convolution kernel (or the weights, in the case of dual_kawase) based on how many neighbouring pixels are in range would be more complicated, and would most likely slow down the shader in general, not just for edge pixels.

Well, you could just clamp the edges like you do for the desktop. But I guess the reason you don't is that, since it's incorrect and causes edge bleeding, it looks much worse than sampling pixels outside the window like you do now.

I guess this is a wontfix? I have stackblur on the GPU as well, but it's not interesting, since it's not constant-time (it's just a naive "sample the surrounding area for every pixel"), so it may as well be a sad Gaussian blur.

@yshui
Owner

yshui commented Jul 19, 2022

> but I guess the reason you don't do that is that, since it's incorrect and causes edge bleeding, it looks much worse than just using pixels outside the window like you do now.

Exactly. I hope @tryone144 can chime in, since they know more about dual kawase than I do. But otherwise, yes, this is a compromise we decided to make.

@tryone144
Collaborator

I am pretty sure we had an issue about the exact same thing before, but I can't find it right now.

All blur algorithms repeat the edge pixels at the outer screen borders to prevent darkening around those edges. This is more or less to be expected.

In general, however, we take around blur-radius pixels outside the window into account for the background blur. Especially in the dual_kawase case, this is done to ensure visual consistency: if we were to repeat the pixels at the window edge instead, these would get more weight relative to the other pixels inside the window's bounds, and slight changes at those borders would then lead to significant differences in the resulting blur. For example, this causes visual flickering when moving windows around.

However, in the context of tiling WMs, where windows are mostly static, I can certainly see how the aforementioned "bleeding" might be more irritating and clamping to the edges preferred. In a similar manner, clamping the blur to a single screen in multi-screen environments might also be preferred by some.

Both of these require changes to the blur shaders and likely come with a slight(?) performance penalty.
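The extra weight mentioned above can be illustrated numerically (a sketch with a hypothetical helper name, not picom code): with clamp-to-edge sampling, every tap that falls outside the image reads the border pixel, so the border pixel's effective weight grows.

```python
# Sketch: effective weight of the border pixel (index 0) when blurring at
# index 0 with clamp-to-edge, for a 1-D kernel of odd length.
def effective_edge_weight(kernel):
    r = len(kernel) // 2
    # every tap at source index <= 0 is clamped onto pixel 0
    clamped = sum(w for k, w in enumerate(kernel) if k - r <= 0)
    return clamped / sum(kernel)
```

For the kernel [1, 2, 1], the border pixel ends up with effective weight 0.75 instead of the 0.5 an interior pixel gets, which is why small changes at the border swing the blurred result so visibly.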

@LoganDark
Author

> All blur algorithms repeat the edge pixels at the outer screen borders to prevent darkening around those edges. This is more or less to be expected.

Are you talking about all blur algorithms in general, or just the ones in picom?

Because I know for a fact that my blur algorithm doesn't do that; I explicitly designed it not to. It isn't dual kawase blur, and I'm not proposing that picom add support for it, but I feel it's a good example of what the "ideal" behavior would look like.

> If we were to repeat the pixels at the window edge instead, these would get more weight relative to the other pixels inside the window's bounds. In turn, slight changes at these borders would lead to significant differences in the resulting blur. For example, this leads to visual flickering when moving windows around.

This is what I refer to as "edge bleeding". Here is a video demonstrating the difference between incorrect and correct edge treatment:

(video attachment: KjK33N7k6W.mp4)

As you can see, the "incorrect" version suffers from severe edge bleeding (and incorrect sRGB handling, but picom does just fine on that front). The "correct" version is my stackblur. Again, just an example.

You may be surprised to discover, if you're used to clamping, that the "correct" version in that video does not sample any extra pixels. It simply avoids sampling anything outside the area being blurred; in fact, the surrounding pixels are not even offered to the blur.

> However, in the context of tiling WMs, where the windows are mostly static, I can certainly see how the aforementioned "bleeding" might be more irritating and clamping to the edges to be preferred. In a similar manner, clamping the blur to a single screen in multi-screen environments might also be preferred by some.

The bleeding is irritating in any environment. :)

I don't quite understand what you mean by "clamping to the edges". Just clamping the sampler will create the exact same bleeding effect: trying to sample past the edge will simply return the pixel on the edge, effectively extending it. I think you mean the "correct" implementation above, but please clarify.

@tryone144
Collaborator

> Are you talking about all blur algorithms in general, or just the ones in picom?

Their implementation in picom.

> I don't quite understand what you mean by "clamping to the edges". Just clamping the sampler will create the exact same bleeding effect, as trying to sample past the edge will just get the pixel on the edge, causing an effective extension. I think you mean the "correct" implementation above, but please clarify.

The "edge bleeding" mentioned above refers to the sampling of pixels outside the window extends (as in the outside bleeds into the window's background). The result looks like moving the window over a deterministic blur texture that does not care, where the window is currently positioned.

"Clamping to the edge" is exactly the incorrect behaviour shown in your example video — and you perfectly explained, why we are not doing this is picom (except for the actual screen borders). But this does get rid of the "edge bleeding" (outside bleeds into the blur-region).

The "correct" approach of adjusting the absolute pixel weight at the edges where fewer pixels are available should be possible with some changes to the convolution shader. With dual_kawase this needs fundamental changes to the algorithm itself, since it makes heavy use of the linear-interpolation sampling in hardware and would require special handling on all the "edge-cases" (which is detrimental to shader performance).

@LoganDark
Author

LoganDark commented Jul 19, 2022

The "edge bleeding" mentioned above refers to the sampling of pixels outside the window extends (as in the outside bleeds into the window's background).

Oh. My definition of "edge bleeding" was the very edges of the image bleeding far too much into the output of the blur, so it was closer to the "clamping to the edge" definition you put forth, where the edges are assigned too much weight.

It seems you're interpreting "edge bleeding" as something entirely different: this issue, where pixels outside the actual window are incorporated into the window's background. That's not what I intended, so I apologize for not clarifying it further earlier.

> The result looks like moving the window over a deterministic blur texture that does not care where the window is currently positioned.

Yes, that's what the issue is about. So you understood that correctly.

If you have 2 windows next to each other (not overlapping), the focused one will display the other in its background. This is annoying. I understand why it happens, but that's also why I opened the issue.

The "correct" approach of adjusting the absolute pixel weight at the edges where fewer pixels are available should be possible with some changes to the convolution shader. With dual_kawase this needs fundamental changes to the algorithm itself, since it makes heavy use of the linear-interpolation sampling in hardware and would require special handling on all the "edge-cases" (which is detrimental to shader performance).

Hmm, disappointing :(. It already doesn't perform too well: Windows is able to blur an entire screen in real time without even using my dedicated GPU, but picom has trouble blurring a single terminal window. Funnily enough, the performance actually seems to get worse over time even when I don't do anything. I can start picom and have buttery-smooth performance (with blur) for a few seconds, then it starts lagging. This is with a small terminal window in the center of a 4K screen. The lag never gets much worse than that, but it's definitely different from the first few seconds.

Anyway, that's outside the scope of this issue.

Solvable or wontfix?

@yshui
Owner

yshui commented Aug 16, 2022

@tryone144 I wonder what the result would look like if we didn't extend the blur area and used GL_MIRRORED_REPEAT instead of clamping to the edge?
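For illustration (a sketch with integer texel indices, not the actual normalized-coordinate GL behaviour): GL_MIRRORED_REPEAT maps out-of-range indices back into the image by reflecting at the borders, rather than repeating the border texel the way clamp-to-edge does.

```python
# Sketch: mirrored-repeat index wrapping for a row of n texels.
def mirror_index(i, n):
    period = 2 * n   # one full back-and-forth mirror period
    i = i % period   # Python's % always returns a non-negative result
    return i if i < n else period - 1 - i
```

Index -1 maps to 0, -2 maps to 1, n maps to n-1, and so on, so out-of-range taps get spread over the blur-radius edge pixels instead of piling onto the edge line.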

@yshui
Owner

yshui commented Aug 26, 2022

@tryone144 another thing: maybe we could sort windows into "layers", where each layer contains windows that don't overlap (e.g. for a tiling WM, all windows are in one "layer"), and draw shadows and blur per layer.

@LoganDark
Author

LoganDark commented Aug 26, 2022

> @tryone144 another thing, maybe we could sort windows by "layers". each layer contains windows that don't overlap (e.g. for tiling wm, all windows are in one "layer"). and draw shadows and blur per-layer.

This would indeed solve the problem of adjacent windows sampling each other, but it would cause large edge flickers when windows move between layers, and it would also invalidate damage tracking in a lot of cases (probably).
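The layering idea might look roughly like this greedy assignment over axis-aligned window rectangles (a sketch with hypothetical names, not picom's actual data model):

```python
# Sketch: greedily place each window into the first layer where it
# overlaps nothing; rects are (x, y, width, height).
def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # strict inequalities: windows that merely touch do not overlap
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def assign_layers(windows):
    layers = []
    for win in windows:  # windows assumed in stacking order, bottom first
        for layer in layers:
            if not any(overlaps(win, other) for other in layer):
                layer.append(win)
                break
        else:
            layers.append([win])  # no existing layer fits; start a new one
    return layers
```

With this, two tiled (non-overlapping) windows land in one layer and blur against the background only, while overlapping windows end up in separate layers, reproducing both the benefit and the layer-change flicker concern above.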

@tryone144
Collaborator

> @tryone144 I wonder what the result would look like if we don't extend the blur area and use GL_MIRRORED_REPEAT instead of clamping to edge?

I am against changing the screen bounds back to the old behavior (see #428). GL_MIRRORED_REPEAT could distribute the sampling error over the blur-radius edge pixels instead of giving the edge line an implicitly increased weight. If it doesn't look weirder than the current approach, I am not against changing that.

If I understand correctly, we are trying to solve different problems here.

  1. We could extend the blur shaders to not sample outside a window's region and adjust the pixel weights accordingly (with increased computational overhead for the current kernel and dual_kawase algorithms), at least for the gl backend. Xrender only has a fixed-kernel implementation as far as I know.
  2. We keep the current behavior of treating the window's region like a window onto a blurred version of the screen. Then we have to add some logic to either
    • only sample the desktop background (similar to transparent-clipping), based on rules, or
    • keep track of which windows overlap and only sample those (i.e. keep layers of non-overlapping windows). We'd have to check how badly this produces artifacts when windows suddenly move over each other.

Both approaches remedy the bleeding of outside windows into the blurred background, which is particularly noticeable for tiled windows and transparent panels.

The blur shaders don't know about the window geometry yet, or at least only through the vertex positions. We can't simply switch to another wrap method, since the intermediate textures have the same dimensions as the screen (or are scaled accordingly), and windows seldom align with the screen dimensions. This "clamping" has to be done in the fragment shader, unless we want to keep (and resize) textures for each window specifically (and permanently bind/unbind the shader), which is more expensive than the current shared textures.
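A per-fragment clamp of the sample coordinate to the window's rectangle inside the shared screen-sized texture might look like this (a sketch with hypothetical names; note it would still have the edge-weight problem discussed above):

```python
# Sketch: clamp a blur sample coordinate into the window's rect before the
# texture fetch, since the blur texture itself is screen-sized. The rect is
# (x0, y0, x1, y1) in the texture's coordinate space.
def clamp_sample(x, y, rect):
    x0, y0, x1, y1 = rect
    return (min(max(x, x0), x1), min(max(y, y0), y1))
```

In a real shader this would be a `clamp()` on the sample coordinates per tap, which is the per-fragment overhead mentioned above.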

If we were to add some type of clamping, I'd suggest also adding an option to clamp the blurred region to a specific screen (similar to the xinerama-shadow-crop option).

@GatoVuelta

Will this ever be resolved?

@yshui
Owner

yshui commented Oct 11, 2024

No guarantees; no one is working on this at the moment, but I am open to contributions.

@yshui yshui added the help wanted SOMEBODY PLEASE HELP label Oct 11, 2024