
Feasibility of "Composite Blending" filtering? #103

Open
Ybalrid opened this issue Apr 8, 2023 · 3 comments


Ybalrid commented Apr 8, 2023

Hi there!

I recently got myself an OSSC, and I am extremely happy with the results I am getting from an Amiga 500 over a SCART RGB cable and from a Sega Megadrive.

The Megadrive's RGB signal is surprisingly crisp, at least to me (despite it being a model 1 console). I could mistake the results I am getting on a modern 4K TV for the output of an emulator. Super sharp pixels!

Delving a bit into the FPGA technology used in retrogaming, I was looking at what the MiSTer project is doing (I know, a totally different thing: they run the systems themselves on an FPGA rather than doing extra-low-latency line doubling), and I got curious about an option the Genesis/MegaDrive core has called "Composite blending". I have no first-hand experience with that option myself, but I find the idea interesting.

A lot of games on that system exploit dithering patterns and alternating pixels on foreground sprites to simulate a wider color gamut and transparency, two things the hardware is not technically able to do.

I know the OSSC's goal is to digitize the signal and increase resolution by processing one line at a time, and that the filtering options available in that scenario are extremely limited (basically, I understand how the scanline and bob deinterlace methods can be done that way). I do not know how the filter is implemented on MiSTer, but I am wondering if, within a single line of pixels, there is a way to apply that sort of "signal degradation" and to dose it somewhere in between the raw and the fully blended signal.
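For illustration, here is a minimal software sketch of what blending adjacent samples within a single scanline could look like. The function name, the `strength` parameter, and the two-tap average are my own assumptions for the sake of the example, not MiSTer's actual filter:

```python
def composite_blend(line, strength=1.0):
    """Blend each sample with the average of itself and its left
    neighbor. strength = 0 leaves the line untouched; strength = 1
    is a full two-tap average (hypothetical model of the effect)."""
    out = [line[0]]  # first pixel has no left neighbor
    for i in range(1, len(line)):
        blended = (line[i] + line[i - 1]) / 2
        out.append((1 - strength) * line[i] + strength * blended)
    return out

# A 0/255 checkerboard dither line collapses to a near-uniform midtone:
# composite_blend([0, 255, 0, 255], 1.0) -> [0, 127.5, 127.5, 127.5]
```

With `strength = 1`, alternating dither pixels average out to an intermediate color, which is exactly the transparency/extra-color trick those games rely on; intermediate values would "dose" the effect as described above.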

Here's a video with a timecode that showcases some of the artifacts these low-quality video outputs naturally produce: https://youtu.be/x0weL5XDpPs?t=314

I legitimately have no idea if there's space on the OSSC's FPGA to add that sort of filtering after the ADC/sampling stage; I also do not know exactly how it could be implemented. I have a vague idea of how a post-processing shader would blend colors like that, but no clue how you would implement it with straight-up logic gates.

Any thoughts? 🤔

marqs85 (Owner) commented Apr 25, 2023

Most likely that wouldn't need additional block RAM and could just use data from a couple of previous samples stored in registers. The end result would probably depend on the sampling rate (i.e. lineX mode), though. I can check at some point how this is implemented on the MD core, but it might take a while to get it onto the OSSC, as the project is in its twilight years and thus this is not very high priority.
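A rough streaming model of the register-based approach described above might look like the following. The function name, the single-register depth, and the integer shift are my assumptions, not the MD core's actual implementation:

```python
def register_blend(samples, prev_init=0):
    """Streaming sketch: hold the previous sample in a 'register'
    and output the integer average of the current and previous
    sample each cycle, the way cheap FPGA logic could (one adder
    plus a 1-bit right shift; no block RAM needed)."""
    prev = prev_init
    out = []
    for cur in samples:
        out.append((cur + prev) >> 1)
        prev = cur
    return out
```

Because the blend only ever looks one sample back, a higher sampling rate (lineX mode) narrows the effective blur window, which is one way the end result could depend on the sampling rate.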

Ybalrid (Author) commented Apr 25, 2023

It was just a silly thought that passed through my head; I was like, "I feel like that sort of processing is doable in just one scanline?" It may run a bit counter to the goal of this project to try to "un-crisp" a signal because it is "too clean" 🙂

alanbork commented Apr 25, 2023 via email
