The background blur is allowed to sample things outside of the window #854
If blur is only allowed to sample things below the window, what happens at the edge of the window, where the sample area extends outside of the window? What do other compositors do in this case?
What currently happens at the edge of the desktop? The out-of-bounds area simply isn't sampled, right? I have my own blur implementation that handles edges correctly, but of course it runs on the CPU and takes around 30ms, so it isn't acceptable for desktop compositing. I don't think any of its ideas are applicable to dual_kawase blur, though. Neither are they applicable to a traditional convolution, afaict.
The edge pixels are repeated, because it's the easiest thing to do. Unlike with a rolling sum, adjusting the convolution kernel (or the weights, in the case of dual_kawase) based on how many neighbouring pixels are in range would be more complicated, and would most likely slow down the shader in general, not just for edge pixels. I guess it's technically possible, though.
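The "repeat the edge pixels" treatment can be sketched in a few lines. This is a hypothetical 1-D illustration in Python, not picom's actual shader code: out-of-range taps are clamped to the nearest valid index, so the kernel weight stays fixed everywhere and the inner loop needs no special edge branches.

```python
def clamped_box_blur(pixels, radius):
    """1-D box blur with clamp-to-edge: out-of-range taps repeat the edge pixel."""
    n = len(pixels)
    out = []
    for i in range(n):
        total = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), n - 1)  # clamp: the edge pixel is repeated
            total += pixels[j]
        out.append(total / (2 * radius + 1))  # fixed weight everywhere
    return out
```

Note how this over-weights the border: `clamped_box_blur([1.0, 0.0, 0.0], 1)` counts the first pixel twice at index 0, which is exactly the edge-darkening/brightening trade-off being discussed.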
:(
Well, you could just clamp the edges like you do for the desktop, but I guess the reason you don't is that it's incorrect and causes edge bleeding, so it looks much worse than just using pixels outside the window like you do now. I guess this is a wontfix? I have stackblur on the GPU as well, but it's not interesting since it's not constant-time (it's just a naive "sample the surrounding area for every pixel"), so it may as well be a sad Gaussian blur.
Exactly. I hope @tryone144 can chime in since they know more about dual kawase than me. But otherwise, yes, this is a compromise we decided to make.
I am pretty sure we had an issue about this exact same thing before, but I can't find it right now. All blur algorithms repeat the edge pixels at the outside screen borders to prevent a darkening around those edges. This is more or less to be expected. In the context of tiling WMs, however, where the windows are mostly static, I can certainly see how the aforementioned "bleeding" might be more irritating, and clamping to the edges preferred. In a similar manner, clamping the blur to a single screen in multi-screen environments might also be preferred by some. Both of these require changes to the blur shaders and likely come with a slight(?) performance penalty.
Are you talking about all blur algorithms in general, or just the ones in picom? I know for a fact that my blur algorithm doesn't do that, because I explicitly designed it not to. It isn't dual kawase blur, and I'm not proposing that picom add support for it, but I feel like it's a good example of what the "ideal" behavior would look like.
This is what I refer to as "edge bleeding". Here is a video demonstration showing the difference between incorrect and correct edge treatment: KjK33N7k6W.mp4. As you can see, the "incorrect" version suffers from horrible edge bleeding (and incorrect treatment of sRGB, but picom does just fine on that front). The "correct" version is my stackblur; again, just an example. You may be surprised to discover, if you're used to clamping, that the "correct" version from that video does not sample extra pixels. It just avoids sampling anything outside the area being blurred. In fact, the surrounding pixels are not even offered to the blur.
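The "correct" treatment described here, where nothing outside the blurred area is sampled, amounts to skipping out-of-range taps and renormalizing by the taps that were actually used. A hypothetical 1-D Python illustration (not the stackblur from the video):

```python
def in_bounds_blur(pixels, radius):
    """1-D box blur that never samples outside the region being blurred."""
    n = len(pixels)
    out = []
    for i in range(n):
        total = 0.0
        taps = 0
        for k in range(-radius, radius + 1):
            j = i + k
            if 0 <= j < n:          # skip taps outside the blurred region
                total += pixels[j]
                taps += 1
        out.append(total / taps)    # renormalize by taps actually used
    return out
```

Edge pixels get neither darkened (as with a zero border) nor over-weighted (as with clamp-to-edge); the cost is the per-tap bounds check and variable weight, which is what makes it awkward in a shader.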
The bleeding is irritating in any environment. :) I don't quite understand what you mean by "clamping to the edges". Just clamping the sampler will create the exact same bleeding effect, as trying to sample past the edge will just return the pixel on the edge, causing an effective extension. I think you mean the "correct" implementation above, but please clarify.
Their implementation in picom.
The "edge bleeding" mentioned above refers to the sampling of pixels outside the window's extents (as in: the outside bleeds into the window's background). The result looks like moving the window over a deterministic blur texture that does not care where the window is currently positioned. "Clamping to the edge" is exactly the incorrect behaviour shown in your example video, and you perfectly explained why we are not doing this in picom (except at the actual screen borders). But it does get rid of the "edge bleeding" (the outside bleeding into the blur region). The "correct" approach of adjusting the absolute pixel weight at edges where fewer pixels are available should be possible with some changes to the convolution shader. With dual_kawase this needs fundamental changes to the algorithm itself, since it makes heavy use of the hardware's linear-interpolation sampling and would require special handling for all the "edge cases" (which is detrimental to shader performance).
Oh. My definition of "edge bleeding" was the very edges of the image bleeding far too much into the output of the blur, so it was closer to the "clamping to the edge" definition you put forth, where the edges get assigned too much weight. It seems you're interpreting "edge bleeding" as something entirely different: this issue, where pixels outside the actual window are incorporated into the window's background. That's not what I intended, so I apologize for not clarifying it further earlier.
Yes, that's what the issue is about. So you understood that correctly. If you have 2 windows next to each other (not overlapping), the focused one will display the other one in its background. This is annoying. I understand why, but yeah, that's why I opened the issue.
Hmm, disappointing :(. It already doesn't perform too well: Windows is able to blur an entire screen in real time without even using my dedicated GPU, but picom has trouble blurring a single terminal window. Funnily enough, the performance actually seems to get worse over time, even when I don't do anything. I can start picom up and have buttery-smooth performance (with blur) for a few seconds, then it starts lagging. This is with a small terminal window in the center of a (4K) screen. The lag never gets much worse, but it's definitely different from the first few seconds. Anyway, that's outside the scope of this issue. Solvable or wontfix?
@tryone144 I wonder what the result would look like if we don't extend the blur area and use …
@tryone144 Another thing: maybe we could sort windows by "layers". Each layer contains windows that don't overlap (e.g. for a tiling WM, all windows are in one "layer"), and we draw shadows and blur per layer.
This would indeed solve the problem of adjacent windows sampling each other, but it would cause huge edge flickers when windows move between layers, and also invalidate damage tracking in a lot of cases (probably) |
I am against changing the screen bounds back to the old behavior (see #428). If I understand correctly, we are trying to solve different problems here.
Both approaches remedy the "bleeding of outside windows into the blurred background", which is particularly noticeable for tiled windows and transparent panels. The blur shaders don't know the window geometry yet, or at least only through the vertex positions. We cannot just switch to another wrap method, since the intermediate textures have the same dimensions as the screen (or are scaled accordingly), and the windows seldom align with the screen dimensions. This "clamping" has to be done in the fragment shader, unless we want to keep (and resize) textures for each window specifically (and permanently bind/unbind the shader), which is more expensive than the current shared textures. If we were to add some type of clamping, I'd suggest adding another option to clamp the blurred region to a specific screen as well (similar to the …
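A rough sketch of what per-window clamping in the fragment shader might look like, translated to Python. The window rectangle would presumably be passed in as a uniform; the function name and parameters here are assumptions, not picom's actual code:

```python
def clamp_to_window(x, y, win_x, win_y, win_w, win_h):
    """Clamp a sample coordinate to a window's rectangle in the shared,
    screen-sized texture, emulating per-window clamp-to-edge in a shader."""
    cx = min(max(x, win_x), win_x + win_w - 1)
    cy = min(max(y, win_y), win_y + win_h - 1)
    return cx, cy
```

Note that clamping coordinates like this reproduces the "clamp to edge" behaviour discussed earlier (edge pixels are repeated within the window), not the weight-renormalizing treatment; it only stops the blur from sampling other windows.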
Will this ever be resolved?
No guarantees; no one is working on this at this point, but I am open to contributions.
If you have a bright object in the background and your window is next to it, it'll show up in the window's background even though that makes no sense. Things that aren't actually under the window shouldn't influence the output of the blur.
This is a visual quality/fidelity bug, not any misconfiguration on my part, so I'm omitting the usual versioning and debugging info.
Experimental GLX backend with dual_kawase blur.