NNstreamer stream detection video over udp #4257
-
I can currently run an object-detection pipeline and display the results on the same device, where both the camera and the display are connected. Now I want to do the object-detection part on a first device, where the camera is connected, and stream the detection-result video to a second device, where the display is connected. Any idea how this can be achieved? Using UDP, I was able to send and receive only the raw video, not the detection results. This is the pipeline I use on the display device:

```
gst-launch-1.0 -v udpsrc port=4000 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! avdec_h264 ! autovideosink sync=false
```
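For reference, a sender-side counterpart would have to look something like the untested sketch below. It follows the standard NNStreamer object-detection example, with the local video sink replaced by an H.264/RTP/UDP tail; the model, label, and box-prior files and `<DISPLAY_IP>` are placeholders:

```
gst-launch-1.0 v4l2src ! videoconvert ! videoscale ! \
    video/x-raw,width=640,height=480,format=RGB ! tee name=t \
  t. ! queue ! videoscale ! video/x-raw,width=300,height=300 ! tensor_converter ! \
    tensor_transform mode=arithmetic option=typecast:float32,add:-127.5,div:127.5 ! \
    tensor_filter framework=tensorflow-lite model=ssd_mobilenet_v2_coco.tflite ! \
    tensor_decoder mode=bounding_boxes option1=mobilenet-ssd option2=coco_labels.txt \
      option3=box_priors.txt option4=640:480 option5=300:300 ! mix.sink_0 \
  t. ! queue ! mix.sink_1 \
  compositor name=mix sink_0::zorder=2 sink_1::zorder=1 ! videoconvert ! \
  x264enc tune=zerolatency ! rtph264pay ! udpsink host=<DISPLAY_IP> port=4000
```

The `tee` splits the camera stream: one branch runs inference and decodes the results into an RGBA overlay, the other carries the original video; the `compositor` blends them before encoding, so the detection boxes travel inside the encoded stream.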
-
If you do not need to use UDP, how about using edgesrc/edgesink? (They support TCP and MQTT protocols.)

Cam device pipeline: the same as before, with ximagesink replaced by edgesink. Display device pipeline: the receiving counterpart. A sketch of both follows.
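A rough, untested sketch of the two pipelines. Here `videotestsrc` stands in for the full detection pipeline, and the edgesrc/edgesink property names follow my reading of the nnstreamer-edge examples, so treat them as assumptions and check against your installed version:

```
# cam device: the detection pipeline, ending in edgesink instead of ximagesink
# (videotestsrc used here as a stand-in source)
gst-launch-1.0 videotestsrc ! videoconvert ! \
  video/x-raw,width=640,height=480,format=RGB,framerate=30/1 ! \
  edgesink host=<CAM_IP> port=5000 connect-type=TCP

# display device: edgesrc connects back to the cam device;
# the capsfilter must match the stream published by edgesink
gst-launch-1.0 edgesrc dest-host=<CAM_IP> dest-port=5000 connect-type=TCP ! \
  video/x-raw,width=640,height=480,format=RGB,framerate=30/1 ! \
  videoconvert ! autovideosink sync=false
```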
-
Hello. This question is similar to mine, so I decided to post here; please let me know if I should open a new discussion and I will.

The edgesrc solution works for situations that allow installing nnstreamer on the clients. In our case, however, backwards compatibility with other client solutions requires staying with standard GStreamer streaming elements such as the UDP elements. We are on the imx8mp board and are using the following pipeline:

I can use a similar pipeline (without the object detection, overlay, and compositing), with just video encoding and streaming to the same port and client, and everything works there with no issues. Please let me know if you can suggest what might be wrong with the pipeline.
-
Yes, as long as the same tf-lite binary is linked, it should run. There is no conversion required, because nnstreamer treats a neural network as a black box that is handled by the corresponding framework (tf-lite in this case). You really need to write a minimal pipeline that shows the same symptom.

As above, I'd suggest writing a pipeline without tf-lite/tensor_filter and with media streams only (you can create a stream from a static file, such as a .png file). If I were you, I'd start with a minimal pipeline having imxcompositor and synthesized inputs (e.g., from .png/.bmp/.jpg files or saved x-raw video frames), as sketched below.
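For example, a minimal test of that shape might look like the following (untested sketch; `imxcompositor_g2d` and `vpuenc_h264` are the usual i.MX 8 element names but may differ per BSP, and `<CLIENT_IP>` is a placeholder):

```
gst-launch-1.0 \
  videotestsrc pattern=smpte ! video/x-raw,width=640,height=480 ! mix.sink_0 \
  videotestsrc pattern=ball  ! video/x-raw,width=320,height=240 ! mix.sink_1 \
  imxcompositor_g2d name=mix sink_1::xpos=20 sink_1::ypos=20 sink_1::zorder=2 ! \
  video/x-raw,width=640,height=480 ! vpuenc_h264 ! h264parse ! \
  rtph264pay ! udpsink host=<CLIENT_IP> port=4000
```

Once this composites and streams cleanly, you could swap one `videotestsrc` branch for `filesrc location=frame.png ! pngdec ! imagefreeze` to test with a static image, and then reintroduce the tensor elements one at a time to find where the pipeline breaks.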