Dear GStreamer community,
I am running GStreamer (1.20.3) on Ubuntu 22.04 LTS with DeckLink drivers (12.4).
After building the pipeline below (and playing around with GStreamer, watching tutorials, etc.) I am able to decode a high-quality HD SRT stream (UDP streaming) and output it to SDI (as 1080i50), and it works very well.
gst-launch-1.0 -v srtsrc uri=srt://x.x.x.x:xxxx latency=200 ! tsdemux name=demux demux. ! h264parse ! video/x-h264 ! avdec_h264 ! queue ! videoconvert ! video/x-raw,format=UYVY ! decklinkvideosink mode=1080i50 sync=false demux. ! avdec_aac ! queue ! audioconvert ! audio/x-raw, format=S32LE, channels=2 ! decklinkaudiosink
Audio-to-video sync is stable for hours (I didn't test for days), but when testing end to end from encoder to decoder on my GStreamer pipeline, the audio comes in a little too early (about 60 ms).
I tried changing only the buffer size in the audio part of the pipeline to correct the timing on the audio side, e.g.
queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=60000000
but the audio-to-video offset didn't change, even after trying several different min-threshold-time values.
For decklinkaudiosink there is no ts-offset property I could find to change the timing, and changing the buffer-time property didn't change anything either.
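For reference, what I am effectively trying to do is delay only the audio branch by a fixed ~60 ms. Sinks derived from GstBaseSink usually inherit a ts-offset property (in nanoseconds, where a positive value delays rendering), but I could not find it on decklinkaudiosink, so treat that as an assumption and check what the sinks actually expose:
gst-inspect-1.0 decklinkaudiosink
gst-inspect-1.0 decklinkvideosink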
Can anybody please help me correct the audio timing/latency so that it accurately matches the decoded video in my pipeline?
Thanks!
Related
I would like to store a file containing AAC audio frames.
For that I used the pipeline below:
gst-launch-1.0 filesrc location=Test_44100Hz_2ch_s16le.wav ! "audio/x-raw,rate=44100,format=s16le,channels=2" ! audioparse format=raw raw-format=s16le rate=44100 channels=2 ! faac ! aacparse ! queue ! filesink location=a1
While reading that file back to pulsesink using the pipeline below,
gst-launch-1.0 filesrc location=a1 ! aacparse ! faad ! audioconvert ! audioresample ! pulsesink
I receive the error below. I used GST_DEBUG=3, but I am not able to find the solution.
0:00:00.031924804 3379 0x2231d60 WARN basesrc gstbasesrc.c:3483:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
0:00:00.033044700 3379 0x2231050 WARN baseparse gstbaseparse.c:3255:gst_base_parse_loop:<aacparse0> error: No valid frames found before end of stream
ERROR: from element /GstPipeline:pipeline0/GstAacParse:aacparse0: No valid frames found before end of stream
Additional debug info:
gstbaseparse.c(3255): gst_base_parse_loop (): /GstPipeline:pipeline0/GstAacParse:aacparse0
ERROR: pipeline doesn't want to preroll.
Can anybody help me solve this? I need to store AAC audio frames and then stream that file as an AAC audio stream.
This is it, tested working:
gst-launch-1.0 filesrc location=WAV_44_16bit.wav ! decodebin ! audioconvert ! queue ! voaacenc ! aacparse ! queue ! mp4mux ! filesink location=aac.mp4
gst-launch-1.0 filesrc location=aac.mp4 ! decodebin ! audioconvert ! audioresample ! alsasink
The container stores metadata; without it the decoder does not know how to process the data.
AAC audio streams require a container in order to be useful within GStreamer.
For decoder initialization it is necessary to know the sampling frequency and the Audio Object Type. In GStreamer we are unable to pass this metadata directly to the parser or the decoder; instead, the parser collects it from the MP4 header, and the decoder then inherits the frame structure/size and sample rate. So this is a deficiency in either aacparse (the parser) or avdec_aac/faad (the decoders), none of which expose parameters to specify the frame size of a raw file, i.e. the aforementioned metadata. That said, I haven't found a compelling reason why anyone would need to do this; I found myself trying to before I discovered that the AAC simply needed to be muxed into MP4 (mp4mux) or another container to work and be portable. The container/framing only adds a small amount of data to the stream.
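If a container really must be avoided, one alternative that is often suggested (untested here, and it assumes your encoder/parser build can negotiate stream-format=adts) is to store the AAC with ADTS framing instead of raw: each ADTS header carries the sample rate and channel configuration in-band, so aacparse/faad can recover the metadata without an MP4 header. A sketch with placeholder file names:
gst-launch-1.0 filesrc location=WAV_44_16bit.wav ! decodebin ! audioconvert ! voaacenc ! aacparse ! audio/mpeg,mpegversion=4,stream-format=adts ! filesink location=audio.aac
gst-launch-1.0 filesrc location=audio.aac ! aacparse ! faad ! audioconvert ! audioresample ! pulsesink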
I have two AXIS IP cameras streaming H.264 over RTSP/RTP. Both cameras are set to synchronize with the same NTP server, so I assume both cameras have exactly the same clock (maybe a minor difference of a few ms).
In my application both cameras point at the same view, and I need to process images from both cameras captured at the same time. Thus, I want to synchronize the image capture using GStreamer.
I have tried invoking the two pipelines separately in different command prompts, but the videos are 2-3 seconds apart.
gst-launch rtspsrc location=rtsp://192.168.16.136:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam1_video_%d.mp4
gst-launch rtspsrc location=rtsp://192.168.16.186:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam2_video_%d.mp4
Can someone suggest a GStreamer pipeline to synchronize both H.264 streams and record them into separate video files?
Thanks!
ARM
I am able to launch a single pipeline using gst-launch as shown below. It shows a good improvement in captured-frame synchronization compared to launching two pipelines; most of the time they differ by 0-500 ms. However, I still want to synchronize them to better than 150 ms accuracy.
rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4
I would appreciate it if someone could point out other ideas!
~Arm
What do you mean by synchronize? If you record to separate video files you do not need any synchronization, as this is going to totally separate them; each RT(S)P stream will contain different timestamps. If you want to align them somehow to the same time (I mean real human time, like "both should start from 15:00"), then you have to configure the cameras that way somehow (this is just an idea).
Also, you did not tell us what is inside those RTP/RTSP streams (MPEG-TS, a plain elementary stream, etc.), so I will give an example with MPEG-TS-encapsulated RTP streams.
We will go step by step:
Suppose this is one camera, just to demonstrate how it may look:
gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888
Then this would be the receiver (it must use rtpmp2tdepay; we are encapsulating the metadata inside the MPEG container):
gst-launch-1.0 udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! decodebin ! videoconvert ! autovideosink
If you test this with your camera, the autovideosink means that a new window will pop up displaying your camera feed.
Then you can try to store it in a file; we will use mp4mux.
So for the same camera input we do:
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=test.mp4
Explanation: we do not decode and re-encode (a waste of processing power); I just demux the MPEG-TS stream and then, instead of decoding the H.264, parse it for mp4mux, which accepts video/x-h264.
Now you could use the same pipeline for each camera, or you can just copy-paste all the elements into one pipeline, as sketched below.
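A rough sketch of such a combined pipeline (the second port and the file names are placeholders; -e makes mp4mux finalize the files on Ctrl+C):
gst-launch-1.0 -e \
  udpsrc port=8888 caps="application/x-rtp, media=(string)video, encoding-name=(string)MP2T" ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam1.mp4 \
  udpsrc port=8889 caps="application/x-rtp, media=(string)video, encoding-name=(string)MP2T" ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam2.mp4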
Now, as you did not provide any (at least partial) attempt to make something out of this, that part is going to be your homework :). Or make the synchronization requirement clearer, as I do not understand it.
UPDATE
After your update to the question this answer is not very useful, but I will keep it here as a reference. I have no idea how to synchronize that.
Another piece of advice: try looking at the timestamps after udpsrc; maybe they are synchronized already. In that case you can use streamsynchronizer to synchronize the two streams, or maybe a video/audio mixer:
gst-launch-1.0 -v udpsrc port=8888 ! identity silent=false ! fakesink
This should print the timestamps (PTS, DTS, Duration ..):
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (1328 bytes, dts: 0:00:02.707033598, pts:0:00:02.707033598, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7f57dc016400
Compare the PTS of each stream. Maybe you could combine the two udpsrc elements in one pipeline and put an identity (each with a different name, e.g. name=something1) after each udpsrc to make them start receiving together; a sketch follows below.
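A sketch of that comparison pipeline (the identity names and the second port are placeholders); each identity prints its own last-message, so the PTS of the two streams can be compared side by side:
gst-launch-1.0 -v \
  udpsrc port=8888 ! identity name=cam1 silent=false ! fakesink \
  udpsrc port=8889 ! identity name=cam2 silent=false ! fakesink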
HTH
Files recorded with gstreamer-0.10 at 25 fps in FourCIF format play back in fast-forward mode, and sometimes the recordings skip 3-4 seconds. Any solution would be appreciated.
The pipeline I'm attempting to use is:
gst-launch v4l2src device=/dev/video2 ! \
  'video/x-raw-yuv,width=704,height=576,framerate=25/1' ! \
  tee name=liveTee ! queue ! mfw_isink \
  liveTee. ! queue ! vpuenc ! avimux ! filesink location=/home/Recording.avi
I'm gonna take a rough stab at it and re-format your question a bit. This is mostly a GStreamer and Freescale question, not so much Qt.
gst-launch-1.0 -e videotestsrc pattern=ball do-timestamp=true is-live=true ! \
  timeoverlay ! 'video/x-raw,width=704,height=576,framerate=25/1' ! \
  tee name=liveTee ! queue leaky=downstream ! videoconvert ! ximagesink async=false \
  liveTee. ! queue leaky=downstream ! videoconvert ! queue ! x264enc ! \
  avimux ! filesink location=/tmp/test.avi
The thing to keep in mind is that your encoder has to keep up with the live playback. So your pipeline needs to handle the case where the encoder falls out of sync. On the queue elements behind the tee, use the leaky attribute.
Then you also want to be careful about your video source and what it's supplying. It looks like in your case you want live video, but if your source was an existing video file the pipeline would probably need some more tweaking.
NOTE: it may be even simpler than that; just adding async=false to the video sink appears to be very important.
I am trying to use the latest GStreamer and x265enc together. I saw that someone has already posted some commits at http://cgit.freedesktop.org/gstreamer/gst-plugins-bad/log/ext/x265/gstx265enc.c
Can anyone please give an example pipeline that is known to work (a gst-launch-1.0 pipeline example would be very helpful)?
1)
What is the current status of the x265enc plugin for GStreamer? Does it really work?
Which branch of GStreamer do I need to use to build x265enc? I want to build the whole GStreamer source tree so that it is compatible with the x265enc plugin.
What are the system requirements for x265enc, and how do I build it? Any wiki/basic instructions would be very helpful.
My goal is to broadcast my IP cameras (H.264 streams) as an H.265 stream on vaughnlive.tv.
Currently, I am using the following pipeline to broadcast in H.264 format:
GST_DEBUG=2 gst-launch-1.0 flvmux name=mux streamable=true ! \
  rtmpsink sync=true location="rtmp://xxxxxxxxxxxx" \
  rtspsrc location="rtsp://xxxxxxx" caps="application/x-rtp, media=(string)audio, clock-rate=(int)90000, encoding-name=(string)MPA, payload=(int)96" ! \
  rtpmpadepay ! mpegaudioparse ! queue ! mad ! audioconvert ! queue ! \
  voaacenc bitrate=128000 ! aacparse ! audio/mpeg,mpegversion=4,stream-format=raw ! mux. \
  rtspsrc location="rtsp://xxxxxxx" caps="application/x-rtp,media=(string)video,clock-rate=(int)90000, encoding-name=(string)H264" ! \
  rtph264depay ! video/x-h264,stream-format=avc,alignment=au,byte-stream=false ! queue ! \
  decodebin ! queue ! videorate ! "video/x-raw,framerate=30/1" ! queue ! \
  x264enc threads=4 speed-preset=ultrafast bitrate=3072 ! mux.
2)
Can anyone please suggest how I should change this pipeline to broadcast in H.265 format using the x265enc element?
A little late, but maybe some people will find this question when looking for info about H.265 support in GStreamer nowadays. This is with GStreamer 1.6.1 compiled from source on Ubuntu 15.10, which has packages ready for libx265.
1,
Encoder
There is x265enc, which will be enabled when the libx265-dev library is available.
The encoder is inside gst-plugins-bad, so after running autogen.sh you should see x265enc enabled.
You may also need h265parse and rtph265pay/rtph265depay.
Decoder
I see two decoders and don't know which one works; I guess libde265dec, and there is also avdec_h265.
mux
For muxing x264 I was using mpegtsmux, but it does not support video/x-h265; some work has to be done there. matroskamux should work when using filesink etc.
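For example, a minimal test encode into Matroska and playback, assuming x265enc, h265parse and avdec_h265 were all built (gst-inspect-1.0 x265enc will tell you):
gst-launch-1.0 -e videotestsrc num-buffers=300 ! videoconvert ! x265enc ! h265parse ! matroskamux ! filesink location=test_h265.mkv
gst-launch-1.0 filesrc location=test_h265.mkv ! matroskademux ! h265parse ! avdec_h265 ! videoconvert ! autovideosink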
[16:39] hi, which container is suitable for x265enc, for x264enc I was using mpegtsmux?
[16:54] otopolsky: mpegts would work if you add support for h265 there, not very difficult
[16:55] slomo_: so we need to just add the caps compatibility?
[16:55] otopolsky: otherwise, matroskamux supports it. mp4mux/qtmux could get support for it relatively easily too
[16:55] otopolsky: a bit more than that. look at what tsdemux does for h265
[16:56] otopolsky: and check the gst_mpegts_descriptor_from_registration related code in tsmux
[17:00] slomo_: thanks
2,
The flvmux you asked about also does not support H.265, only H.264.
matroskamux cannot be used for streaming, so the only way is to patch mpegtsmux or flvmux, etc.
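(For readers finding this later: newer GStreamer releases have since added H.265 support to mpegtsmux, in which case a streaming sketch could look roughly like this, with a placeholder host/port:)
gst-launch-1.0 videotestsrc is-live=true ! videoconvert ! x265enc speed-preset=ultrafast ! h265parse ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888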
I'm trying to get UPnP streaming to work. Rygel runs fine; however, all I get is a mono stream, even if the input is stereo. Doing some debugging, I replicated Rygel's GStreamer pipeline with
gst-launch-1.0 pulsesrc device=upnp.monitor num-buffers=100 ! audioconvert ! lamemp3enc target=quality quality=6 ! filesink location=test.mp3
where the problem is also apparent:
mp3info -x test.mp3
...
Media Type: MPEG 1.0 Layer III
Audio: Variable kbps, 44 kHz (mono)
...
Where does this pipeline lose the second channel? How can I debug this?
You never ask for stereo:
gst-launch-1.0 pulsesrc device=upnp.monitor num-buffers=100 ! "audio/x-raw,channels=2" ! audioconvert ! lamemp3enc target=quality quality=6 ! filesink location=test.mp3
Add a -v to the launch-line to see all the caps negotiated on all pads of the pipeline. Look for "channels" and see where it goes from 2 to 1.
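For example, the same launch line as above with -v added:
gst-launch-1.0 -v pulsesrc device=upnp.monitor num-buffers=100 ! "audio/x-raw,channels=2" ! audioconvert ! lamemp3enc target=quality quality=6 ! filesink location=test.mp3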