For the sake of testing, I'd like to construct a pipeline that encodes and then decodes live audio. I have tried MP3 and AAC encoding, and I can certainly do it if the source is non-live:
$ gst-launch-1.0 audiotestsrc ! lamemp3enc ! mpegaudioparse ! mad ! alsasink
$ gst-launch-1.0 audiotestsrc ! faac ! audio/mpeg, stream-format=raw ! aacparse ! faad ! alsasink
In the above cases, the pipeline is constructed and I can hear the audio playing back. However, if the source is live, the pipeline doesn't fail, but no audio is played back.
I'm sure I'm missing some key concept, but can't see what!
Could it be the live source you are using that's causing the issue? It may have additional latency that causes the audio sink to drop all samples.
How about this pipeline:
$ gst-launch-1.0 audiotestsrc is-live=true ! faac ! aacparse ! faad ! autoaudiosink
Here the audiotestsrc acts as if it were a live source. Also note that it is advised to add parsers after encoder elements: "aacparse" for AAC audio and "mpegaudioparse" for MP3 audio.
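For completeness, the MP3 variant of the same live sketch would be (untested, reusing the elements from the non-live example above):
$ gst-launch-1.0 audiotestsrc is-live=true ! lamemp3enc ! mpegaudioparse ! mad ! autoaudiosink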
I'm writing a Qt 5.15 application that should play an RTP / MPEGTS / H.264 video on Linux Ubuntu 20.04 (Focal Fossa).
I'm running GStreamer 1.16.3.
Since I'm new to GStreamer, I did everything step by step starting from the official tutorials... at this moment I'm able to play an RTP / H.264 stream almost in real time.
Now the last step (adding MPEGTS support) seems to be the hardest.
My test source is an MP4 H.264 QuickTime file, which I stream over the network through gst-launch.
The working RTP / H.264 output pipeline is the following shell command:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000;
To test the input pipeline without messing up the Qt/C++ code, I use another shell command like this:
gst-launch-1.0 -v udpsrc port=5000 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
AFAIK, if the shell input pipeline works, it will work in my C++ code (of course the elements after avdec_h264 depend on my programming/running environment, but if someone needs them I can share without problem).
To add mpegts support, I tried with these lines (the last of a long sequence of trials):
OUTPUT:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000;
INPUT:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
It works, but the video seems to stumble/bounce while playing.
What am I missing?
As a side question, I would like to avoid re-encoding the source video before sending it through RTP. I would like to remove these elements from the output pipeline:
avdec_h264 ! x264enc tune=zerolatency
I tried, but the result goes from nothing at all to a scrambled picture, the latter if I add the config-interval=-1 parameter to h264parse.
Please note that I would like to keep the latency as low as possible.
--- UPDATE ---
I tried putting a queue element between rtpmp2tdepay and tsparse, and this makes the video play fluidly, but latency grows to several seconds, whereas with plain RTP / H.264 it's nearly real time.
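For reference, this is the input pipeline with the queue added:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! queue ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;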
Since MPEGTS is only a transport protocol, why should it add more delay than the actual encoding?
Is there a way to shorten this delay? No matter if it changes the whole pipeline as long as protocols and encoding are kept the same.
BTW, I tried tuning the queue's max-size-buffers property, but using values under 150 causes playback to stumble.
--- UPDATE ---
If I use VLC to create the output stream using the same file, things get even worse:
:sout=#rtp{dst=127.0.0.1,port=5000,mux=ts} :no-sout-all :sout-keep
I get the same stumbling and scrambled video, with no chance to fix it.
I found a partial fix for the latency problem and for compatibility with VLC:
! autovideosink sync=false
Disabling clock synchronisation shortens the delays, and the VLC output stream is now also received correctly by GStreamer.
This also makes the queue element unnecessary (though probably not in the general case), and AFAIK tsparse is redundant too.
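For reference, the reduced input pipeline now looks roughly like this (queue and tsparse dropped, sync=false on the sink):
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false;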
Anyway, I still need to understand why I have to re-encode the H.264 video in the output pipeline.
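For reference, this is the pass-through variant I tried (without config-interval=-1 on h264parse it produces nothing, with it a scrambled picture):
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse config-interval=-1 ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000;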
I'm trying to transmit an H.264-encoded video over UDP using GStreamer.
It works fine, but only when I start the client before the server. I think it may be related to keyframes: it's possible that the client is waiting for one, and when the server is started first it only sends a single keyframe at the start.
Here is the server GStreamer command; is there a parameter that indicates the number of frames between two keyframes?
gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw,width=1920,height=1080,format=(string)YV12,framerate=30/1" ! imxipuvideotransform ! "video/x-raw,width=1280,height=720,format=(string)I420,framerate=30/1" ! imxvpuenc_h264 idr-interval=0 ! rtph264pay pt=96 ! udpsink host=MULTICAST multicast-iface=eth0 force-ipv4=true port=5010 sync=false
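For example, I wonder whether forcing periodic IDR frames along these lines would help (untested; assuming imxvpuenc_h264's idr-interval property sets the number of frames between IDR frames, with 0 meaning only the very first frame is an IDR):
gst-launch-1.0 v4l2src device=/dev/video0 ! "video/x-raw,width=1920,height=1080,format=(string)YV12,framerate=30/1" ! imxipuvideotransform ! "video/x-raw,width=1280,height=720,format=(string)I420,framerate=30/1" ! imxvpuenc_h264 idr-interval=30 ! rtph264pay pt=96 ! udpsink host=MULTICAST multicast-iface=eth0 force-ipv4=true port=5010 sync=false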
Thanks a lot for the answers!
I'm trying to figure out how to create a pipeline in GStreamer (1.4.4) beyond the very simple playbin one. I have a stream being fed into a GTK+ DrawingArea widget but it's currently letter-boxing it whereas I want to experiment with the video stream expanded to fit the entire widget.
To that end, I've played with the gst-launch-1.0 app but I'm finding that a fakesink at the end seems to work but an autovideosink doesn't. The two pipelines are (X being an rtspt:// URI for an IP camera):
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! fakesink
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! autovideosink
In other words, the only difference is the sink itself. It appears that, no matter where I place the sink (even if it's just an rtspsrc location=X ! sink), the problem still occurs, and that problem manifests itself as:
rtspsrc gstrtspsrc.c:5074:gst_rtspsrc_loop<rtspsrc0> error: Internal data flow error
rtspsrc gstrtspsrc.c:5074:gst_rtspsrc_loop<rtspsrc0> streaming task paused, reason not-linked (-1)
I've tried running at higher debug levels but the output doesn't seem to have any useful information beyond the warnings already given.
Note that both the following commands work okay:
gst-play-1.0 X
gst-launch-1.0 playbin uri=X
But, as discussed, I don't really want a playbin since I want to install my own video scaler in the pipeline.
My (albeit limited) understanding is that rtph264depay removes the RTP protocol wrapping, h264parse parses the H.264 stream, decodebin auto-magically selects the correct decoder, and autovideosink selects the correct sink for displaying the stream.
I'm not entirely certain how changing something at stage five of the pipeline would affect how stage one works.
So why is it that a fake sink works but the automatic selection one does not?
Adding videoconvert before autovideosink will make it work.
gst-launch-1.0 rtspsrc location=X ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink
The reason is that the sink element does not support the format output by your decoder, which causes the error "streaming task paused, reason not-linked".
fakesink is different: it simply drops the data and doesn't care about the format, so it doesn't hit this error.
playbin can play the stream because it automatically adds a convert element when needed.
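You can check which formats a sink actually accepts by inspecting its sink pad caps; autovideosink is a bin, so inspect the concrete sink it would pick on your system, for example:
gst-inspect-1.0 xvimagesink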
I'm just trying to save the dummy video to my directory.
I end up with the error below, so I know something is wrong in the pipeline.
Am I missing any parameters here?
gst-launch -v videotestsrc ! ximagesink ! filesink location=~/cupcake.mp4
WARNING: erroneous pipeline: could not link ximagesink0 to filesink0
I just want to record only the video.
ximagesink is a sink element and as such doesn't have an output (source pad).
This command will tell you about the details of an element:
gst-inspect-1.0 ximagesink
Notice that ximagesink has only a sink pad and no source pads, so it doesn't generate any output.
You can dump the video directly to file by using:
gst-launch-1.0 videotestsrc ! filesink location=~/cupcake.raw
Unfortunately, this is still not what you want, as videotestsrc generates raw video, not video encoded or muxed into mp4. If you want mp4 you need to feed the data into an mp4mux, which will organize what it receives into the mp4 container. It is also recommended to encode the video to reduce its size. Let's assume you want to use H.264 as your codec: you can use the element x264enc to encode to H.264.
gst-launch-1.0 -e videotestsrc ! x264enc ! mp4mux ! filesink location=~/cupcake.mp4
Notice that I also added the "-e" parameter, which makes gst-launch-1.0 send an EOS event and wait for the EOS message indicating that the elements have finished working. Without the flag, the pipeline is simply interrupted and aborted.
In any case I'd recommend going back to the manuals for application development: http://gstreamer.freedesktop.org/documentation/
The manpage for gst-launch-1.0 is also useful.
Disclaimer: you are using GStreamer 0.10, which has been unmaintained for 3 years and is obsolete; please upgrade to 1.0. (This answer is aimed at 1.0, but it can easily be applied to 0.10 by changing the commands to their 0.10 versions.)
I'm currently trying to save a video stream into files using gst-launch while simultaneously watching the video itself (using v4l2src). So far I've worked around this by saving the images to files with multifilesink and having a Tcl script that automatically shows the newest file in a folder in an X window.
This works, but of course with a bit of a delay that I would like to reduce.
Is there a way to do this using only gst-launch? I'm unfortunately not very experienced with GStreamer. Could it be done by saving the files with multifilesink while showing them using multifilesrc? Or is it impossible with only gst-launch?
It is possible: there is the 'tee' element, which replicates the stream on its source pads.
So, for example:
gst-launch-1.0 v4l2src ! tee name=t ! queue ! videoconvert ! autovideosink t. ! queue ! videoconvert ! jpegenc ! multifilesink location=image_%06d.jpg
This should have it displaying and saving to jpg with multifilesink.
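If you would rather record into a single MP4 file instead of individual JPEGs, the same tee pattern works with an encoder and muxer on the file branch (a sketch, untested; note the -e flag so that mp4mux can finalize the file when you interrupt the pipeline):
gst-launch-1.0 -e v4l2src ! tee name=t ! queue ! videoconvert ! autovideosink t. ! queue ! videoconvert ! x264enc tune=zerolatency ! mp4mux ! filesink location=video.mp4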
Also, it seems that you are using GStreamer 0.10; it is (2 years?) obsolete and unmaintained. Please move to 1.x.