gstreamer: pass frame PTS in command line API

Currently I have a setup like this.
my-app | gst-launch-1.0 -e fdsrc ! \
videoparse format=GST_VIDEO_FORMAT_BGR width=640 height=480 ! \
videoconvert ! 'video/x-raw, format=I420' ! x265enc ! h265parse ! \
matroskamux ! filesink location=my.mkv
From my-app I am streaming raw BGR frame buffers to gst. How can I also pass presentation timestamps (PTS) for those frames? I have fairly full control over my-app, and I can open other pipes from it to gst.
I know I have the option to use the GStreamer C/C++ API or write a GStreamer plugin, but I was trying to avoid that.

I guess you can set a framerate on the videoparse element. You can also try do-timestamp=true on the fdsrc; maybe it requires a combination of both.
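For instance, a minimal variant of your pipeline with both suggestions applied might look like this (just a sketch; the 30/1 frame rate is an assumption, use whatever rate my-app actually produces):
my-app | gst-launch-1.0 -e fdsrc do-timestamp=true ! \
videoparse format=GST_VIDEO_FORMAT_BGR width=640 height=480 framerate=30/1 ! \
videoconvert ! 'video/x-raw, format=I420' ! x265enc ! h265parse ! \
matroskamux ! filesink location=my.mkv
Note that this only stamps frames with a fixed rate or with their arrival time on the pipe; it does not carry your own PTS values across.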
If you have the PTS in my-app, you would probably need to wrap each frame and its PTS in a real GstBuffer and use gdppay and gdpdepay as the payload format across the pipe.
For example, if your my-app dumped the images in the following format:
https://github.com/GStreamer/gstreamer/blob/master/docs/random/gdp
(not sure how up to date this document is)
You could receive the data with the following pipeline:
fdsrc ! gdpdepay ! videoconvert ! ..
No need to specify the resolution and format either, as they are part of the protocol too. And you will get the PTS as well, if it was set.
If you can use the GStreamer library in my-app, you could use some pipeline like this:
appsrc ! gdppay ! fakesink dump=true
And you would push your image buffers, with PTS set, into the appsrc; a sketch of that follows below.
See https://github.com/GStreamer/gst-plugins-bad/tree/master/gst/gdp for some examples of how GDP is used as a protocol.
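As a rough illustration (not from the original answer): a minimal Python/PyGObject sketch of the my-app side, pushing BGR frames with explicit PTS through appsrc and gdppay to stdout. The frame source get_next_frame() is hypothetical, and the resolution, frame rate and caps are assumptions to adapt:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# appsrc -> gdppay -> fdsink fd=1: GDP carries caps and buffer timestamps over stdout.
pipeline = Gst.parse_launch("appsrc name=src ! gdppay ! fdsink fd=1")
appsrc = pipeline.get_by_name("src")
appsrc.set_property("format", Gst.Format.TIME)
appsrc.set_property("is-live", True)
appsrc.set_property("caps", Gst.Caps.from_string(
    "video/x-raw,format=BGR,width=640,height=480,framerate=30/1"))  # assumed geometry/rate
pipeline.set_state(Gst.State.PLAYING)

for frame_bytes, pts_ns in get_next_frame():    # hypothetical: yields (bytes, PTS in nanoseconds)
    buf = Gst.Buffer.new_wrapped(frame_bytes)   # wrap the raw BGR frame
    buf.pts = pts_ns                            # presentation timestamp in nanoseconds
    appsrc.emit("push-buffer", buf)

appsrc.emit("end-of-stream")
pipeline.set_state(Gst.State.NULL)

The receiving side is then the command-line pipeline from above, e.g. my-app | gst-launch-1.0 -e fdsrc ! gdpdepay ! videoconvert ! x265enc ! h265parse ! matroskamux ! filesink location=my.mkv, and the PTS set on each buffer travels with it.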

Related

GStreamer: preserve timestamps when encoding TS segments

I have a series of TS files (H265) which are part of an m3u8 manifest and are fed into the pipeline through fdsrc. I use the following pipeline to transcode them to H264 so they can be played with hls.js in a web browser.
cat 2021-06-30T00-55-41Z_2000000.ts | gst-launch-1.0 -q mpegtsmux name=mux ! fdsink fd=1 fdsrc ! tsdemux name=demux demux. ! queue ! h265parse ! nvh265dec ! videoconvert ! videoscale ! video/x-raw,width=640,height=360 ! nvh264enc ! mux.
The individual ts segments are transcoded successfully and can be played.
However, the DTS is out of alignment, and when these TS segments are played as part of the HLS manifest, playback fails because the DTS is out of order.
[mpegts # 0x7fb69100a400] DTS 6496420096 < 6496446847 out of order
[hls # 0x7fb69580ea00] DTS 6496420096 < 6496446847 out of order
In FFmpeg we have copyts to preserve the timestamps.
Is there something similar in GStreamer to preserve the timestamps? Or at least to generate timestamps from the current time so that the player doesn't complain?
I tried fdsrc do-timestamp=true but that didn't work.
I appreciate any help in this.
Best

I want to create an HLS (HTTP Live Streaming) stream using GStreamer, but audio only

What I want to do is create an m3u8 file out of an ALSA soundcard input.
Like:
arecord hw:1,0 -d 10 test.wav | gst-launch-1.0 ....
I tried this for testing:
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! hlssink
but it doesn't work.
Thank you for helping.
You can't create HLS transport segments (.ts) directly from a raw audio source. You need to encode it with some encoder and then mux it before sending it to the hlssink plugin.
One of the problems you'll encounter is that the hlssink plugin won't split the segments when there is only an audio stream, so you are going to need something like keyunitsscheduler to split the stream correctly and create the files.
An example pipeline using voaacenc to encode the audio and mpegtsmux to mux it would be as follows:
gst-launch-1.0 audiotestsrc is-live=true ! audioconvert ! voaacenc bitrate=128000 ! aacparse ! audio/mpeg ! queue ! mpegtsmux ! keyunitsscheduler interval=5000000000 ! hlssink playlist-length=5 max-files=10 target-duration=5 playlist-root="http://localhost/hls/" playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%05d.ts"

gstreamer shmsrc and shmsink with h264 data

I am trying to share H264-encoded data from GStreamer with two other processes (both also based on GStreamer). After some research, the only way I found is to use the shm plugin.
This is what I am trying to do:
gstreamer--->h264 encoder--->shmsink
shmsrc--->process1
shmsrc--->process2
I was able to get raw data from videotestsrc and a webcam working, but for H264-encoded data it doesn't work.
This is my test pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,width=640,height=480,format=YUY2 ! x264enc ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false shm-size=10000000
gst-launch-1.0 shmsrc socket-path=/tmp/foo ! avdec_h264 ! video/x-raw,width=640,height=480,framerate=25/1,format=YUY2 ! autovideosink
Has anyone tried the shm plugins with H264-encoded data? Please help.
I am not aware of the capabilities of the sink picked by autovideosink, but to my knowledge you either need videoconvert, if the formats supported by the sink (like kmssink or ximagesink) differ from the one provided by the source (in your case YUY2), or videoparse, if the camera format is supported by the sink. You may check the supported formats using gst-inspect-1.0.
Anyway, I am able to run your pipeline in my setup with some modifications, using videoconvert:
./gst-launch-1.0 videotestsrc ! x264enc ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false shm-size=10000000
./gst-launch-1.0 shmsrc socket-path=/tmp/foo ! h264parse ! avdec_h264 ! videoconvert ! ximagesink
You may modify it for the resolution you want.
Kindly let me know if you face any issues with the above.

Synchronize two RTSP/RTP H264 video streams capture using GStreamer

I have two AXIS IP cameras streaming an H264 stream over RTSP/RTP. Both cameras are set to synchronize with the same NTP server, so I assume both cameras will have the exact same clock (maybe a minor difference of a few ms).
In my application, both cameras point at the same view and it is required to process images from both cameras captured at the same time. Thus, I want to synchronize the image capture using GStreamer.
I have tried invoking the two pipelines separately in different command prompts, but the videos are 2-3 seconds apart.
gst-launch rtspsrc location=rtsp://192.168.16.136:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam1_video_%d.mp4
gst-launch rtspsrc location=rtsp://192.168.16.186:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam2_video_%d.mp4
Can someone suggest a gstreamer pipeline to synchronize both H264 streams and record them into separate video files?
Thanks!
ARM
I am able to launch a pipeline using gst-launch as shown below. It shows a good improvement in captured-frame synchronization compared to launching two pipelines; most of the time they differ by 0-500 ms. Still, I want to synchronize them to better than 150 ms accuracy.
rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4
I'd appreciate it if someone could point out other ideas!
~Arm
What do you mean by synchronize? If you record to separate video files, you do not need any synchronization, as this is going to totally separate them. Each RT(S)P stream will contain different timestamps; if you want to align them somehow to the same time (I mean real human time, like "both should start from 15:00"), then you have to configure the cameras that way somehow (this is just an idea).
Also, you did not tell us what is inside those RTP/RTSP streams (is it MPEG TS or pure IP, etc.). So I will give an example for MPEG-TS-encapsulated RTP streams.
We will go step by step:
Suppose this is one camera, just to demonstrate how it may look:
gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888
Then this would be the receiver (it must use rtpmp2tdepay; we are encapsulating the metadata inside an MPEG-TS container):
gst-launch-1.0 udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! decodebin ! videoconvert ! autovideosink
If you test this with your camera, the autovideosink means that a new window will pop up displaying your camera.
Then you can try to store it in a file; we will use mp4mux.
So for the same camera input we do:
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=test.mp4
Explanation: we do not decode and re-encode (a waste of processing power), so I just demux the MPEG-TS stream and then, instead of decoding the H264, parse it for mp4mux, which accepts video/x-h264.
Now you could use the same pipeline for each camera, or you can just copy-paste all the elements into one pipeline.
As you did not provide any attempt, even a partial one, to make something out of this, that part is going to be your homework :) Or make yourself clearer about the synchronization, as I do not understand it.
UPDATE
After your update to the question this answer is not very useful, but I will keep it here as a reference. I have no idea how to synchronize that.
Another piece of advice: try looking at the timestamps after udpsrc; maybe they are synchronized already. In that case you can use streamsynchronizer to synchronize the two streams, or maybe a video/audio mixer:
gst-launch-1.0 -v udpsrc port=8888 ! identity silent=false ! fakesink
This should print the timestamps (PTS, DTS, Duration ..):
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (1328 bytes, dts: 0:00:02.707033598, pts:0:00:02.707033598, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7f57dc016400
Compare the PTS of each stream. Maybe you could combine the two udpsrc elements in one pipeline and put an identity after each udpsrc (each with a different name, e.g. name=something1) to make them start receiving together; see the sketch below.
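A rough sketch of such a combined pipeline (the second port number and the identity names are assumptions, adjust to your cameras):
gst-launch-1.0 -v \
udpsrc port=8888 ! identity name=something1 silent=false ! fakesink \
udpsrc port=8889 ! identity name=something2 silent=false ! fakesink
With -v, each identity prints its own last-message line, so you can compare the PTS of the two streams side by side.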
HTH

play encoded stream in gstreamer

I used the following GStreamer pipeline to store my encoded stream in a binary file:
gst-launch v4l2src ! videorate ! video/x-raw-yuv, framerate=\(fraction\)10/1 \
! videoscale ! video/x-raw-yuv, format=\(fourcc\)I420, width=640, height=480\
! ffmpegcolorspace ! x264enc ! fdsink > vid.bin
Now I want to play the previously recorded file in GStreamer using the following pipeline:
cat vid.bin | gst-launch fdsrc ! ffdec_h264 ! autovideosink
But then it gives the following error:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/ffdec_h264:ffdec_h2640: Internal GStreamer error: negotiation problem. Please file a bug at http://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer.
Additional debug info:
gstffmpegdec.c(2804): gst_ffmpegdec_chain (): /GstPipeline:pipeline0/ffdec_h264:ffdec_h2640:
ffdec_h264: input format was not set before data start
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
I know that the best way to capture video is to use a muxer, but is there any way to play back my previously recorded files?
Thanks
I'm not sure your pipeline is right.
If you want to write to a file, why not simply use filesink and filesrc?
fdsink > vid.bin will not work well, because the messages gst-launch prints will also go into the file. [Just open vid.bin in a text editor and you will see what I mean.]
Also, for an x264 stream to be stored without a muxer, you need to set byte-stream=1 on your x264enc so that it is stored in Annex B format and is decodable; a possible capture pipeline is sketched below.
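For example, a capture pipeline with filesink and byte-stream enabled could look like this (a sketch based on your original command, 0.10 syntax; the output file name is arbitrary):
gst-launch v4l2src ! videorate ! video/x-raw-yuv, framerate=\(fraction\)10/1 \
! videoscale ! video/x-raw-yuv, format=\(fourcc\)I420, width=640, height=480 \
! ffmpegcolorspace ! x264enc byte-stream=1 ! filesink location=vid.h264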
To play back the raw x264 stream you need a color space converter before the video sink:
gst-launch filesrc location=inputfile ! legacyh264parse ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
plays just fine here at my end
Or, to play back a raw H264 file with GStreamer 1.0:
gst-launch-1.0 filesrc location=/tmp/video.h264 ! h264parse ! avdec_h264 ! autovideosink