Using ffmpeg filter_complex to split output to two (or more) rtsp streams #1546
Please show go2rtc WebUI > Add > FFmpeg devices page. And let me know the models of your cameras. |
I'm actually developing this on an RPi with one v2 camera:
The RPi with the three cameras (I haven't tried the filter_complex on that RPi as I thought I must have the rtsp output part wrong):
|
My ideal solution would allow the RPi to output H264 for each of the three cameras: a main stream at 2560x1440 and a sub-stream at 1280x720, both with a timestamp. I don't think the RPi is going to be powerful enough for that though, so a compromise of 1920x1080 for the main stream would be OK. |
You didn't say your camera models. I am looking for test cameras like yours myself. Maybe you can send a link to the shop. I don't think you should convert the stream in go2rtc. You should take the H264 codec directly from the camera and not change it. It's better to resize the picture on the way into Frigate. Frigate will have to convert the H264 codec to YUV anyway, and it can resize the picture as part of that step. |
They are USB webcams from AliExpress: https://www.aliexpress.com/item/1005004404049549.html. They have IR cut and can control IR LEDs (from a 12V PSU). Happy to share more details if you want.

I did think about using the non-compressed output of the camera, but its max resolution is 1920x1080. It is a possibility though. I tried using one full resolution H264 stream from each camera at the very beginning. It works, and the RPi CPU usage is almost nothing (as you'd expect), but my camera system will have 24+ cameras and I would need a very powerful PC for Frigate. So instead, I want to make use of the RPi processing power and create more of a distributed system.

Also, I made a mistake in the first post - the delay between main and sub-stream is >3 minutes (not seconds). Here is an example from this morning. This is when scrubbing with the mouse (the sub-stream is used in Frigate): Here is when watching the video (the main stream is used in Frigate): The time difference is 2 minutes and 49 seconds, so less than I said. Maybe it gets bigger the longer the cameras have been running? |
Thanks! Looks like there are 3 cameras in the link:
What's yours? This seems to be the link for "The RPi with the three cameras", because of the "2560*1440 CMOS" in the link name. And what about the model of the camera in "I'm actually developing this on an RPi with one v2 camera"? You can't avoid H264 to YUV transcoding on input to Frigate. It's going to happen in any case. |
The difference is in what lens you want. I got mine with no lenses and bought lenses to suit each camera (depending on how wide an angle you need). If you press View More and scroll down on the AliExpress website there are some examples and a graph showing the different lenses.

Yes, this is for the RPi with three cameras. The other RPi has the RPi Foundation high quality camera: https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/ (not v2, as that is something else, my bad).

The pipeline for using non-compressed would be: YUV from camera -> ffmpeg complex filter {add timestamp text -> h264 compress -> output 1 rtsp, reduce to 1280x720 -> add timestamp text -> h264 compress -> output 2 rtsp}. Output 1 is full resolution h264 for recording in Frigate, output 2 is lower resolution for display and detection in Frigate.

What I am trying now is: h264 from camera -> ffmpeg complex filter {no change to stream -> output 1 rtsp, decompress -> reduce to 1280x720 -> add timestamp text -> h264 compress -> output 2 rtsp}.

At the moment (with the two streams per camera) Frigate is using about 3-4% CPU per camera, so it is OK for 24+ cameras.
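For the non-compressed pipeline, the exec source would have roughly this shape (just a sketch, not a tested config - the device, resolutions, bitrates, stream names and the h264_v4l2m2m encoder are placeholders):

```yaml
streams:
  Camera1:
    # one decoded camera input, split into two branches, each branch timestamped
    # and encoded separately; h264_v4l2m2m is the Pi's hardware H264 encoder in
    # ffmpeg (libx264 also works, at a higher CPU cost), and drawtext may need a
    # fontfile= parameter depending on the ffmpeg build
    - "exec:ffmpeg -f v4l2 -input_format yuyv422 -video_size 1920x1080 -i /dev/video0
       -filter_complex [0:v]split[a][b];[a]drawtext=text='%{localtime}':x=10:y=10:fontcolor=white[main];[b]scale=1280:720,drawtext=text='%{localtime}':x=10:y=10:fontcolor=white[sub]
       -map [main] -c:v h264_v4l2m2m -b:v 4M -rtsp_transport tcp -f rtsp {output}
       -map [sub] -c:v h264_v4l2m2m -b:v 1M -rtsp_transport tcp -f rtsp rtsp://127.0.0.1:8554/Camera1_sub"
```

The split filter is what turns the single decoded input into the two branches, so the sub-stream only costs the scale plus a second encode. |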
How do you plan to use output 1 and output 2? |
Thank you. I ordered the camera from Ali. |
Let me know if you want to use the IR cut and add IR LEDs as it took me a while to work out how to wire them. I will experiment with YUV and the latest master. Before I do, can you explain how to set two rtsp outputs (this is from the original question I asked)? I found exec.go takes the md5 of the name of the camera stream to make the URL to send to. With the second output, is that what I am missing? At the moment I am trying:
Should "Camera1_medium" be the md5 of this string? i.e. 73e5073b7420fb75abecb3d48525683c |
No. Your version should work. |
Ahh. OK, thanks. I noticed there are some changes since v1.9.7, so I will upgrade and see if that fixes it. Unless you have any other ideas? |
Splitting a single input into multiple outputs is a complex task; it has not been explored yet. |
OK, probably best for me to leave multiple outputs for now then.
Output 1 will be used for record and output 2 for display and detection (in Frigate):
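In Frigate terms something like this (a sketch; the camera name, port and resolutions are placeholders):

```yaml
cameras:
  camera1:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/Camera1      # output 1: full resolution
          roles:
            - record
        - path: rtsp://127.0.0.1:8554/Camera1_sub  # output 2: 1280x720
          roles:
            - detect
    detect:
      width: 1280
      height: 720
```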
This means Frigate doesn't have to decode the full resolution for detection and reduces the CPU by quite a lot. |
I got the camera you recommended. V4L2 now supports a bunch of new codecs. |
Just looked at the latest commit. From what I understand you can now do:
...instead of:
Is this correct? As you said, the v4l2 source will perform better than the ffmpeg source; however, even with 3 USB cameras that was never an issue when there is no transcoding or timestamping happening.

I think for my situation I should investigate the named pipe way (which you have mentioned in other discussions, but I have not seen any examples of), and if that doesn't perform well enough, write a Python script using OpenCV which outputs to two named pipes (using Python/OpenCV it would be possible to really optimise things - e.g. the generation of the timestamp bitmap only needs doing once for all the cameras, and only once per second). If this sounds the best way to you, do you have any examples using named pipes?

I'm glad your USB camera works! I have had quite a few working for nearly a year, so they are reliable. One thing I have never got working is the audio - if you configure audio the camera will stop working after some time (minutes to hours). It fails on Windows too (when used as a webcam), so I am pretty sure it is the camera. I have not tried it with go2rtc though... |
This optimisation probably won't help in your case, but for other users it will avoid launching ffmpeg unnecessarily. |
I have been using the recommended way of generating sub-streams from a camera - sub-streams have the main stream as input, e.g.:
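It is roughly this shape (a simplified sketch rather than my exact config; the device string and the go2rtc ffmpeg-source options are placeholders and may need adjusting):

```yaml
streams:
  Camera1:
    # main stream taken straight from the USB camera
    - ffmpeg:device?video=/dev/video0&input_format=h264&video_size=2560x1440
  Camera1_sub:
    # sub-stream re-encoded by go2rtc, using the main stream above as its input
    - ffmpeg:Camera1#video=h264#hardware#width=1280#height=720
```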
This works but:
So, I am experimenting with combining all the streams into one "exec" command to see if this solves some of the issues.
The problem I have is that I am not sure how to set the outputs of the ffmpeg command. I read that I can use an empty stream but I can't find an example, so I tried this:
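Roughly along these lines (a simplified sketch with placeholder device and encoder settings, not the exact command I am running):

```yaml
streams:
  Camera1:
    # output 1: the camera's H264 passed through untouched to {output};
    # output 2: decoded, scaled to 720p, timestamped, re-encoded and
    # published to the Camera1_sub stream by name
    - "exec:ffmpeg -f v4l2 -input_format h264 -video_size 2560x1440 -i /dev/video0
       -filter_complex [0:v]scale=1280:720,drawtext=text='%{localtime}':x=10:y=10:fontcolor=white[sub]
       -map 0:v -c:v copy -rtsp_transport tcp -f rtsp {output}
       -map [sub] -c:v h264_v4l2m2m -b:v 1M -rtsp_transport tcp -f rtsp rtsp://127.0.0.1:8554/Camera1_sub"
  # an empty stream for the second output to publish to (my understanding of
  # the "empty stream" idea)
  Camera1_sub:
```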
The important bit is I am using "{output}" for the main stream output, and "rtsp://127.0.0.1:8554/Camera1_sub" for the sub-stream.
The log shows:
I think I must be close, but some advice or an example would be fantastic.
Obviously, any pointers to what I should be doing instead to solve the issues is also appreciated!