I want to record video data coming from a camera (via RTSP, H.264). Can anybody help me with how to record an RTSP stream using GStreamer? (Please provide GStreamer command-line details.) The recording should be in MPEG-4 format.
Regards
Kiran
Either of these will play the stream and display it on your screen:
gst-launch rtspsrc location=rtsp://some.server/url ! decodebin ! xvimagesink
gst-launch uridecodebin uri=rtsp://some.server/url ! xvimagesink
To record the stream to disk, re-encoding and muxing it into an MPEG-TS file:
gst-launch rtspsrc location=rtsp://some.server/url ! decodebin ! ffmpegcolorspace ! x264enc ! mpegtsmux ! filesink location=file
See the rtspsrc entry in the GStreamer reference manual for more details.
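Since the camera already delivers H.264, the decode/re-encode step can also be skipped entirely and the stream simply remuxed into an MP4 container, which is closer to the MPEG-4 format asked for. A rough sketch for GStreamer 1.x (URL and output name are placeholders); the -e flag makes gst-launch send EOS on Ctrl-C so mp4mux can finalise the file:
gst-launch-1.0 -e rtspsrc location=rtsp://some.server/url ! rtph264depay ! h264parse ! mp4mux ! filesink location=record.mp4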
I'm writing a Qt 5.15 application that should play an RTP / MPEG-TS / H.264 video on Linux Ubuntu 20.04 (Focal Fossa).
I'm running GStreamer 1.16.3.
Since I'm new to GStreamer, I have built everything step by step starting from the official tutorials, and at this point I'm able to play an RTP / H.264 stream almost in real time.
Now the last step (adding MPEGTS support) seems to be the hardest.
My test source is an H.264 MP4 (QuickTime) file, which I stream over the network through gst-launch.
The working RTP / H.264 output pipeline is the following shell command:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000;
To test the input pipeline without messing up the Qt/C++ code, I use another shell command like this:
gst-launch-1.0 -v udpsrc port=5000 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
AFAIK, if the shell input pipeline works, it will work in my C++ code (of course the elements after avdec_h264 depend on my programming/running environment, but if someone needs them I can share without problem).
To add MPEG-TS support, I tried these pipelines (the last of a long series of attempts):
OUTPUT:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000;
INPUT:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
It works, but the video seems to stumble/bounce while playing.
What am I missing?
As a side question, I would like to avoid re-encoding the source video before sending it through RTP, i.e. remove these elements from the output pipeline:
avdec_h264 ! x264enc tune=zerolatency
I tried, but the result goes from nothing at all to scrambled video if I add the config-interval=-1 parameter to h264parse.
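For reference, a sketch of what such a passthrough output pipeline would look like (this is a reconstruction, not the exact command used, reusing the same file, host and port as above):
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse config-interval=-1 ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000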
Please note that I would like to keep the latency as low as possible.
--- UPDATE ---
I tried putting a queue element between rtpmp2tdepay and tsparse, and this makes the video play smoothly, but latency grows to several seconds, whereas when playing plain RTP / H.264 it's nearly real time.
Since MPEGTS is only a transport protocol, why should it add more delay than actual encoding?
Is there a way to shorten this delay? No matter if it changes the whole pipeline as long as protocols and encoding are kept the same.
BTW, I tried tuning the queue's max-size-buffers property, but values under 150 make playback stumble.
--- UPDATE ---
If I use VLC to create the output stream using the same file, things get even worse:
:sout=#rtp{dst=127.0.0.1,port=5000,mux=ts} :no-sout-all :sout-keep
I get the same stumbling, scrambled video, with no way to fix it.
I found a partial fix for the latency problem and for compatibility with VLC:
! autovideosink sync=false
Disabling clock synchronisation shortens the delay, and the VLC output stream is now received correctly by GStreamer as well.
This also makes the queue element unnecessary (probably not in the general case, though), and AFAIK tsparse is redundant as well.
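Putting that together, the simplified input pipeline looks roughly like this (a sketch built only from the elements above, with queue and tsparse removed and synchronisation disabled on the sink):
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false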
Anyway, I still need to understand why I need to re-encode H.264 video (in the output pipeline).
I have a program that captures video from a USB camera, processes it, and streams it over RTP/UDP. I am using OpenCV with GStreamer.
When I use the main thread to write out the frames, I can capture it with no problem using gst-launch.
However, when I create another thread to do the frame writing, nothing arrives at gst-launch. I know the other thread is running because I am able to imshow the frames in that thread. Also, I am sure that the writer is open, since I checked it before writing.
Writer pipeline : appsrc ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5015
Receiver: gst-launch-1.0 udpsrc port=5015 ! queue ! "application/x-rtp, media=(string)video, encoding-name=(string)H264, framerate=30/1" ! rtph264depay ! decodebin ! videoconvert ! autovideosink
This is already solved, and it is not related to multi-threading at all. The problem was in the composition of the pipeline: the "port" keyword was missing from the ostream that built the string.
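For anyone hitting the same issue, a minimal sketch of a writer setup with the port included (frame size and frame rate here are placeholders, not the values from the original program):
#include <sstream>
#include <opencv2/videoio.hpp>

// Build the pipeline string; the port= field is the part that had been missing.
std::ostringstream oss;
oss << "appsrc ! videoconvert ! x264enc ! rtph264pay ! "
    << "udpsink host=127.0.0.1 port=" << 5015;

// Placeholder size/fps; use whatever the capture actually delivers.
cv::VideoWriter writer(oss.str(), cv::CAP_GSTREAMER, 0, 30.0, cv::Size(640, 480), true);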
I'm trying to transmit an H.264-encoded video over UDP using GStreamer.
It works fine, but only when I start the client before the server. I think it may be related to keyframes: it's possible that the client is waiting for one, and when the server is started first it only sends a single keyframe.
Here is the server GStreamer command; is there any parameter that indicates the number of frames between two keyframes?
gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw,width=1920,height=1080,format=(string)YV12,framerate=30/1" ! imxipuvideotransform ! "video/x-raw,width=1280,height=720,format=(string)I420,framerate=30/1" ! imxvpuenc_h264 idr-interval=0 ! rtph264pay pt=96 ! udpsink host=MULTICAST multicast-iface=eth0 force-ipv4=true port=5010 sync=false
Thanks a lot for the answers!
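Not a definitive answer, but the encoder in the pipeline above already exposes what looks like the relevant knob: idr-interval (assuming it counts the frames between IDR frames, with 0 meaning only the first frame is an IDR). In addition, rtph264pay has a config-interval property to repeat SPS/PPS so a late-joining client can sync. A sketch with assumed values of 30 and 1:
gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw,width=1920,height=1080,format=(string)YV12,framerate=30/1" ! imxipuvideotransform ! "video/x-raw,width=1280,height=720,format=(string)I420,framerate=30/1" ! imxvpuenc_h264 idr-interval=30 ! rtph264pay pt=96 config-interval=1 ! udpsink host=MULTICAST multicast-iface=eth0 force-ipv4=true port=5010 sync=false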
I'm trying to figure out how to create a pipeline in GStreamer (1.4.4) beyond the very simple playbin one. I have a stream being fed into a GTK+ DrawingArea widget but it's currently letter-boxing it whereas I want to experiment with the video stream expanded to fit the entire widget.
To that end, I've played with the gst-launch-1.0 app but I'm finding that a fakesink at the end seems to work but an autovideosink doesn't. The two pipelines are (X being an rtspt:// URI for an IP camera):
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! fakesink
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! autovideosink
In other words, the only difference is the sink itself. It appears that, no matter where I place the sink (even if it's just an rtspsrc location=X ! sink), the problem still occurs, and that problem manifests itself as:
rtspsrc gstrtspsrc.c:5074:gst_rtspsrc_loop<rtspsrc0> error: Internal data flow error
rtspsrc gstrtspsrc.c:5074:gst_rtspsrc_loop<rtspsrc0> streaming task paused, reason not-linked (-1)
I've tried running at higher debug levels but the output doesn't seem to have any useful information beyond the warnings already given.
Note that both the following commands work okay:
gst-play-1.0 X
gst-launch-1.0 playbin uri=X
But, as discussed, I don't really want a playbin, since I want to install my own video scaler in the pipeline.
My (albeit limited) understanding is that the rtph264depay removes the unnecessary RTSP protocol stuff, h264parse decodes the H.264 data, decodebin auto-magically selects the correct decoder and the autovideosink selects the correct sink for displaying the stream.
I'm not entirely certain how changing something at stage five of the pipeline would affect how stage one works.
So why is it that a fake sink works but the automatic selection one does not?
Adding videoconvert before autovideosink will make it work.
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink
The reason is that the sink element does not support the format output by your decoder, which causes the error "streaming task paused, reason not-linked".
fakesink is different: it simply drops the data and does not care about the format, so it does not hit this error.
playbin can play the stream because it automatically adds a convert element when needed.
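Regarding the original goal of expanding the video to fill the widget, a videoscale element can be inserted after videoconvert; a sketch with placeholder output dimensions (add-borders=false asks videoscale to stretch rather than letterbox):
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! videoconvert ! videoscale add-borders=false ! "video/x-raw,width=1280,height=720" ! autovideosink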
I'm using GStreamer to take a webcam feed, edit it through OpenCV, and stream it to a network. The pipeline I'm using isn't throwing any exceptions or errors, but it also isn't streaming. I haven't the faintest idea what could be wrong.
Here's a sample of the code:
res = sprintf(pipeline2_str, "appsrc name=\"%s\" ! ffmpegcolorspace ! x264enc ! rtph264pay ! queue ! udpsink port=9001", app_src_name);
The appsrc name comes from OpenCV.
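A receiving pipeline to test with, sketched along the lines of the other examples here (caps assume H.264 RTP with the default payload type 96):
gst-launch-1.0 -v udpsrc port=9001 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink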