PC1 is connected to an RTSP source via Wi-Fi.
PC1 is also connected to a second network, to which PC2 is connected as well.
Both machines run Windows 10.
The RTSP source is the RC controller of a drone (over Wi-Fi).
On PC1 I use the following:
C:\gstreamer\1.0\msvc_x86_64\bin\gst-launch-1.0 rtspsrc location=rtsp://192.168.43.1:8554/fpv_stream latency=1 udp-reconnect=1 timeout=0 do-retransmission=false ! application/x-rtp ! rtph264depay ! h264parse ! queue ! avdec_h264 ! videoconvert ! video/x-raw,format=BGRx ! videoconvert ! autovideosink
to play the drone's feed.
How can I relay the feed from PC1 to PC2, which is not (and can't be) connected to the RC controller?
What pipeline can I use to play the feed on PC2? Can I use the same one?
I have no experience with GStreamer and I don't know what I'm doing.
Thanks in advance.
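In case a sketch helps: one option is to tee off the compressed H.264 on PC1 before it is decoded, re-payload it as RTP, and push it across the second network with udpsink; PC2 then picks it up with udpsrc. This is untested, and the PC2 address (192.168.1.20) and port (5000) are placeholders for your own values.
On PC1 (plays locally and relays):
C:\gstreamer\1.0\msvc_x86_64\bin\gst-launch-1.0 rtspsrc location=rtsp://192.168.43.1:8554/fpv_stream latency=1 ! rtph264depay ! h264parse ! tee name=t  t. ! queue ! avdec_h264 ! videoconvert ! autovideosink  t. ! queue ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.20 port=5000
On PC2 (receives and plays):
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink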
Related
I am developing an application in C++ that streams H.264 video with GStreamer using RTP and RTCP. The video stream is successfully received, and both the sender and the receiver are generating SR/RR RTCP packets. My next goal was to use TWCC (transport-wide congestion control) for bandwidth management, which should be supported in GStreamer since version 1.18. I cannot, however, figure out how to enable this feature. My pipeline looks similar to this:
appsrc ! videoconvert ! h264enc ! rtph264pay ! rtpbin ! udpsink -> udpsrc ! rtpbin ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink
The rtpbin also has the appropriate RTCP pad links with its own udpsrc and udpsink on both sender and receiver side (not shown here).
As I understand TWCC, I need to set the appropriate RTP header extension, but I cannot figure out how to do that in GStreamer. I am also unsure how to make the receiver side send back the correct RTCP packets so that I can read the twcc-stats on the sender side.
Does anyone have an example of how I would make my GStreamer pipeline start using TWCC?
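For what it's worth, a sketch of the approach I believe 1.18 expects (not verified end to end): TWCC is enabled by advertising the transport-wide-cc RTP header extension in the caps on the payloader output; rtpsession then writes the extension, the receiving session answers with the RTPFB/TWCC feedback packets, and the sender exposes a twcc-stats structure in its session stats. In launch notation the sender side would gain a capsfilter like this (the extension ID 5 is an arbitrary choice, and the exact caps may need adjusting):
... ! rtph264pay ! "application/x-rtp,media=video,encoding-name=H264,payload=96,extmap-5=(string)http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01" ! rtpbin ...
The receiver needs the same extmap-5 entry in the caps it feeds into its rtpbin so the extension is recognized; after that the feedback packets should be generated automatically, and the sender can read twcc-stats from the rtpsession's stats property.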
I'm developing an app that receives an H.264 video stream from an RTSP camera and both displays it and stores it to MP4 without transcoding. For the purpose of my current test, I record for 5 sec only.
My problem is that the MP4 is not playable. The resulting file varies in size from one run of the app to the next, which shows something is very wrong (unexpected, since the recording time is fixed).
Here are my pipelines:
rtspsrc location = rtsp://192.168.0.61:8554/quality_h264 latency=0 ! rtph264depay ! h264parse ! video/x-h264,stream-format=avc ! queue ! interpipesink name=cam1
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! decodebin ! autovideoconvert ! d3dvideosink sync=false
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! mp4mux ! filesink location=test.mp4
In a next step I will add more cameras and will need to be able to change which camera gets recorded to MP4 on the fly, as well as pause/resume the recording. For this reason I've opted to use interpipesink/interpipesrc, a set of GStreamer elements that allow communication between two independent pipelines: https://github.com/RidgeRun/gst-interpipe
A thread waits for 10 sec, then sends EOS on the 3rd pipeline (recording). Then, when the bus receives GST_MESSAGE_EOS, it sets the pipeline state to NULL. I have checked with a pad probe that the EOS event is indeed received on the sink pad of the filesink.
I send EOS using this code: gst_element_send_event(m_pipeline, gst_event_new_eos()); where m_pipeline is the 3rd pipeline.
Those exact pipelines produce a playable MP4 when run with gst-launch adding -e at the end.
If I replace mp4mux with matroskamux in my app, the MKV is playable and has the expected size. However, there's something wrong with the timestamps, as the player shows playback starting at 10 sec instead of 0. Do I need to edit the timestamps before passing the buffers to the mux (mp4mux or matroskamux)?
It looks to me as if the MP4 is not fully written, but I can't see what else I can do apart from sending EOS.
I'm open to suggestions to restructure the app, in case the use of the interpipe elements causes a problem (although I can't see why it would at the moment).
I'm using GStreamer 1.18.2 on Windows 10 (x64).
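For context on why EOS matters here: mp4mux only writes the moov atom (the index a player needs) when it receives EOS, so a recording torn down without EOS reaching the muxer comes out unplayable, with a size that depends on where it was cut off. That is also why the same pipelines work from gst-launch with -e, since -e injects EOS into the pipeline before shutting it down. A minimal command-line equivalent of the recording branch, assuming the same camera, would be:
gst-launch-1.0 -e rtspsrc location=rtsp://192.168.0.61:8554/quality_h264 latency=0 ! rtph264depay ! h264parse ! video/x-h264,stream-format=avc ! queue ! mp4mux ! filesink location=test.mp4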
I have a program that captures video from a USB camera, processes the frames, and streams them out as RTP over UDP. I am using OpenCV with GStreamer.
When I write the frames out from the main thread, I can receive the stream with no problem using gst-launch.
However, when I create another thread to do the writing, nothing arrives at gst-launch. I know the other thread is running because I am able to imshow the frames in that thread. I am also sure that the writer is open, since I checked it before writing.
Writer pipeline : appsrc ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5015
Receiver: gst-launch-1.0 udpsrc port=5015 ! queue ! "application/x-rtp, media=(string)video, encoding-name=(string)H264, framerate=30/1" ! rtph264depay ! decodebin ! videoconvert ! autovideosink
This is already solved and is not related to multi-threading at all. The problem was in the composition of the pipeline string: the "port" keyword was not added when the string was built with the ostream.
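For anyone hitting the same symptom: the writer string really does have to spell out the port, since udpsink otherwise falls back to its default port (5004, I believe) and nothing shows up on 5015. A sketch of the intended writer string (tune=zerolatency and config-interval are extra suggestions, not part of the original code):
appsrc ! videoconvert ! x264enc tune=zerolatency ! rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=5015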
I'm trying to transmit an H.264-encoded video over UDP using GStreamer.
It works fine, but only when I start the client before the server. I think it may be related to keyframes: it's possible that the client is waiting for a keyframe, and when the server is started first it only sends one at the very beginning.
Here is the server GStreamer command. Is there any parameter that indicates the number of frames between two keyframes?
gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw,width=1920,height=1080,format=(string)YV12,framerate=30/1" ! imxipuvideotransform ! "video/x-raw,width=1280,height=720,format=(string)I420,framerate=30/1" ! imxvpuenc_h264 idr-interval=0 ! rtph264pay pt=96 ! udpsink host=MULTICAST multicast-iface=eth0 force-ipv4=true port=5010 sync=false
Thanks a lot for the answers!
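In case it helps others, two parameters are usually involved here; the values below are illustrative, not tested on this hardware. If imxvpuenc_h264's idr-interval behaves like the similarly named properties on other encoders, setting it to e.g. 30 should force an IDR frame every 30 frames instead of only one at the start. Separately, config-interval on rtph264pay makes it re-send the SPS/PPS headers periodically (with every IDR for -1, or every N seconds for a positive value), so a client that joins late can start decoding. The relevant fragment of the server command would become something like:
... ! imxvpuenc_h264 idr-interval=30 ! rtph264pay config-interval=-1 pt=96 ! udpsink host=MULTICAST multicast-iface=eth0 force-ipv4=true port=5010 sync=false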
I'm using GStreamer to take a webcam feed, edit it with OpenCV, and stream it over the network. The pipeline I'm using isn't throwing any exceptions or errors, but it also isn't streaming. I haven't the faintest idea what could be wrong.
Here's a sample of the code:
res = sprintf(pipeline2_str, "appsrc name=\"%s\" ! ffmpegcolorspace ! x264enc ! rtph264pay ! queue ! udpsink port=9001", app_src_name);
The appsrc name comes back from OpenCV.
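A hedged guess at a variant worth trying (the host, the zerolatency tuning, and config-interval are assumptions, not taken from the original code): give udpsink an explicit host, keep x264enc's latency down so it emits buffers promptly instead of buffering frames for lookahead, and make sure the appsrc has caps set from the OpenCV side, since otherwise the pipeline can sit without negotiating and without reporting an error. The pipeline string would look roughly like:
appsrc name=<app_src_name> ! ffmpegcolorspace ! x264enc tune=zerolatency ! rtph264pay config-interval=1 ! queue ! udpsink host=127.0.0.1 port=9001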