Reed Solomon FEC seems rather ineffective when using x26X compression #361
@alatteri Thanks for all the legwork on this. I am interested and invested in this as well: with our application, FEC is a much better solution than, say, SRT, as we are trying to keep glass-to-glass latency to a minimum. We use H.265 with RS and have had mixed results, as your logs show. I am going to spend some time this week with tc, simulate jitter and random loss, and see how LDGM performs.
First remark is that ... For further evaluation, I'd suggest fixing the input format to ... Commands used were then (both using 20% redundancy):
For 10% loss both were unable to reconstruct everything, but R-S successfully reconstructed 686/774 frames (+ another 51 incomplete frames) while LDGM reconstructed only 477/774 (taking the first ...). With 5% loss it was 759/774 for R-S and 722/744 for LDGM.
This is correct; actually, setting correct values for LDGM is a bit tricky. Because of that, the percent value was defined to simplify the setting for users by selecting an eligible "preset" for a given frame size. But, as far as I know, LDGM works well only if there is a big number of packets per frame, effectively working for uncompressed and JPEG but not being suitable for H.264/HEVC, therefore there are no presets for those.

Although the values for LDGM may look similar to those for R-S (apart from the semantic difference), it is actually not so. For R-S, the actual values do not matter much, just their ratio (redundancy). And it also works very well: if there is 25% redundancy per frame, it really manages to repair the frame if 80% of packets or more are received (on a per-frame basis; the iptables or netem 10% "dropper" will certainly drop 20% for some frames). For LDGM, neither is true. First, the actual numbers matter, e.g. 1024:256 can give you very different results than 256:64 (that is exactly why percents were introduced). And second, it doesn't give you a guarantee that 25% redundancy catches 20% loss (it works statistically for high packet counts but not for a low-bitrate stream).

TL;DR: Unless you use high-bitrate streams, for which R-S is slow, use R-S. In terms of correction strength, I believe that LDGM is at most as good as R-S, since R-S should be an optimal erasure code, and for lower-bitrate streams even that is a very optimistic assumption; it is also more elaborate to work with. I'll try to improve the wiki/help according to these pieces of information.
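To make the "25% redundancy catches 20% loss only statistically" point concrete, here is a minimal sketch (my own illustration with assumed packet counts, not UltraGrid code) that treats R-S as an ideal erasure code under independent random packet loss: a frame split into k data packets plus m parity packets is recoverable iff at least k of the k+m packets arrive.

```python
# Minimal sketch, not UltraGrid code: per-frame recovery probability of an
# ideal (MDS) erasure code under i.i.d. packet loss p. The tail probability
# concentrates for frames of many packets, but not for tiny compressed frames.
from math import comb

def frame_recovery_prob(k: int, m: int, p: float) -> float:
    """P(at least k of k+m packets arrive) under per-packet loss p."""
    n = k + m
    return sum(comb(n, r) * (1 - p) ** r * p ** (n - r) for r in range(k, n + 1))

loss = 0.10  # 10% random packet loss
# Uncompressed/JPEG-like frame: hundreds of packets, 25% redundancy (assumed counts).
print(frame_recovery_prob(k=400, m=100, p=loss))  # ~1.0
# H.264/HEVC-like frame: a handful of packets, the same 25% redundancy.
print(frame_recovery_prob(k=4, m=1, p=loss))      # ~0.92, i.e. roughly 8% of frames lost
```

With many packets per frame the 25% redundancy is essentially never exceeded by the actual loss, while with only a few packets per frame some frames inevitably lose more than the parity can repair.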
Hello Martin. I've tried 3 things.
Hi Alan, wishing you all the best in the new year. I was able to reproduce the behavior you described, but let me explain how FEC in UG works. To get the numbers below, run with ...
Currently, several of the generated FEC symbols are assembled into a single packet if they fit. So it basically means that losing one packet in cases 2 and 3 means losing the entire frame! It is worth noting that testcard content is artificial and the default pattern can be compressed quite easily. You'd get entirely different results with a real picture or, e.g., ...
Hi Martin and team, Happy New Year as well. Hope you got a relaxing break and nice time off with your families.

I don't totally understand the above response, but that is just my lack of technical knowledge. I have tested with real-world footage and find it still quite fragile, especially when the image fades to black during a transition, or anything with a lot of flat colors such as title cards and graphics. My assumption is that the content becomes highly compressible, which leads to small "frame sizes", and then for some reason the FEC doesn't work well. If I recall, in my tests uncompressed footage was highly resilient with FEC. Is there anything that can be done to improve this situation?

Currently I wrap the output of UG in SRT, and the minimum latency multiple is 3, with up to 4 recommended. When going distance, accounting for the RTT multiplier, the whole process can add several additional frames' worth of latency. Even here locally, with an RTT of around 15 ms, it adds an additional frame of latency @ 24 fps.

Thank you again for everything you do for the community. This project is such a great resource.

Alan
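For reference, the back-of-the-envelope arithmetic behind that last figure, using only the numbers cited above (15 ms RTT, the minimum SRT multiple of 3, 24 fps):

```python
# Rough check of the "one extra frame of latency" estimate above.
rtt_ms = 15            # local round-trip time cited above
srt_multiplier = 3     # minimum SRT latency multiple cited above
frame_ms = 1000 / 24   # one frame at 24 fps, ~41.7 ms

print(rtt_ms * srt_multiplier / frame_ms)  # ~1.08 -> about one additional frame
```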
The symbol size is printed only once (or, more precisely, a few times, because it is guarded by a thread-local variable and the sending may pick a different runner). This, however, doesn't give representative numbers when frame sizes differ (== compressed), because then FEC symbol sizes may differ as well, so print it unconditionally, at least with the debug2 log level.

refers to GH-361
Hi Alan,
It is not your fault; the thing is that the behavior is not so straightforward, and I am perhaps not good enough at describing it.
exactly
From the UG perspective it depends. I've created a commit b257d13 (a build including it can be taken here; hope you'll be permitted to download it) which duplicates the frame's first packet. It is just an ad hoc improvement, but I believe that it could improve the situation when there is a low number of packets per video frame. I don't currently want to put it into the main repository because we will soon make a new release and I don't want it to be included yet. Anyway, I'd be glad for eventual feedback.

Anyway, the situation for FEC is quite different in these scenarios: 1) low packet count per frame; 2) tens of packets; 3) hundreds of packets. So it will also be useful for us to know the exact scenario.
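As a rough way to tell which of the three scenarios a given stream falls into, here is a back-of-the-envelope estimate (my own sketch; the bitrates and the per-packet payload size are assumptions, not UG's exact packetization):

```python
# Roughly how many packets one video frame occupies, to pick the FEC scenario.
# The 1400-byte payload is an assumed value close to a standard Ethernet MTU.
def packets_per_frame(bitrate_mbps: float, fps: float, payload_bytes: int = 1400) -> float:
    frame_bytes = bitrate_mbps * 1_000_000 / 8 / fps
    return frame_bytes / payload_bytes

print(packets_per_frame(1, 24))    # ~4 packets/frame    -> scenario 1: low packet count
print(packets_per_frame(10, 24))   # ~37 packets/frame   -> scenario 2: tens of packets
print(packets_per_frame(800, 24))  # ~3000 packets/frame -> scenario 3: hundreds or more
```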
Hi Martin, Thank you. I will not be able to test this for at least 10 days, maybe a bit more. I will definitely give you feedback when I can. Thanks,
Hi Martin, I have not yet had an opportunity to test your test version, but I'm just thinking: would an easy solution be to use small values for -m or -l, so that there are more small packets to allow FEC recovery, instead of fewer big packets where even a single loss isn't recoverable? Seems like an inefficient way to transmit data, but maybe an easy one?
I just tried forcing the MTU to 512, and it does seem to make a difference (also re: the discussion I just opened about controlling how drops are rendered).
Hi Martin, I got a chance to test ... It is definitely much better. One thing that I found interesting, and it is counter to my post above: using a large MTU helped stability a great deal, even though my network is not configured for jumbo packets and all hosts use the standard Ethernet MTU. Both tests were run with ... See below.
Hi Martin... any thoughts on the above results from the FEC testing version you gave me?
I, too, am still very interested in any improvements in this area. Right now my wifi-based test rigs still lose maybe one GOP every two minutes. It's not fatal, as long as a putative audience is tolerant.
Duplicate the first packet to increase resiliency in cases when the traffic is low, usually a single packet of some inter-frame compression like H.264/HEVC. But it will similarly do the job when more packets per frame are used. The first packet is duplicated instead of the last one because the last packet can have fewer symbols than the first if there is more than one packet, e.g. `DDDD|DF` (D - primary data, F - FEC, | - packet boundary).

refers to GH-361
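As an illustration of the `DDDD|DF` layout above (my own sketch with assumed symbol and packet sizes, not the actual UltraGrid packetizer): the last packet may be only partly filled with symbols, so duplicating the first, fully packed packet protects more data.

```python
# 4 symbols fit into one packet, so 5 data symbols (D) + 1 FEC symbol (F)
# become the two packets DDDD | DF; the second one is only partly full.
def packetize(symbols, per_packet=4):
    return [symbols[i:i + per_packet] for i in range(0, len(symbols), per_packet)]

packets = packetize("DDDDDF")        # ['DDDD', 'DF']
resilient = [packets[0]] + packets   # duplicate the first (all-data) packet
print(resilient)                     # ['DDDD', 'DDDD', 'DF']
```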
Sorry for the delay, returning to it now. Well, I've added the proposed change to the current code tree. Unfortunately, it is a bit of a hack, but I don't have anything better now. The duplication is used whenever LDGM or Reed-Solomon is used as FEC. This can be disabled by ...

As @armelvil noted, reducing packet sizes may indeed increase the resiliency (the recovery capability) for lower-bitrate traffic.
Hello,
In my quest to lower latency, I am testing the various built-in FEC functions in UG. Currently I am wrapping the output of UG with srt-live-transmit, but that inherently adds 3x (or more) the RTT to the glass-to-glass latency. In my testing, Reed-Solomon seems to provide basically zero resilience against even less than 1% packet loss, while LDGM was highly resilient even at 10% loss.
Tested both x264 and x265, with x265 being much worse than x264, basically unusable.
Encoder - Ubuntu Server 23.10
Input is HD SDI via BMD
uv -t decklink:codec=R12L -c libavcodec:encoder=libx26X:crf=22
Client - Ubuntu Desktop 23.10
uv -d vulkan_sdl2
I am testing on a local LAN with an avg ping of about 0.8ms, and no packet loss.
I am simulating packet loss using
sudo iptables -A INPUT -i enp86s0 -m statistic --mode random --probability N -j DROP
(must do sudo ufw disable for this to work).

I've tried several permutations of RS FEC and all of them seem to provide no resiliency with even a 1% (N=0.01) loss of packets.
encoder log:
receiver log:
But if I use LDGM I can sustain 10% (N=0.1) loss reliably, with just a few hits every now and then. Of course 10% loss is an exaggerated scenario.
-f V:LDGM:200:250:5
encoder log:
receiver log:
I have not tested this with audio. I have not analyzed how much the additional FEC data adds to the total data stream.
I have also not tried to optimize the LDGM settings, so they could be wildly inefficient.
Using percentage-based coverage with LDGM does NOT seem to work very well either; smaller percentage amounts show the same errors.
-f V:LDGM:10%