Parallel processing of the same data in separate branches - GStreamer

I'm trying to split the work on frames across a few parallel pipeline branches. Let's say we have this pipeline:
some_merger_frames_by_ts ! autovideosink
v4l2src device=/dev/video2 ! tee name=t
t. ! queue ! /*some work with received frame*/ ! some_merger_frames_by_ts.
t. ! queue ! /*some work with received frame*/ ! some_merger_frames_by_ts.
The same kind of work is done on every branch. I couldn't find anything like some_merger_frames_by_ts with the needed functionality; I've searched a lot but found nothing suitable.
I want to filter frames in each branch by timestamp, then merge them in some_merger_frames_by_ts to get a single aligned stream.
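For concreteness, here is a runnable approximation of what I mean, with videoflip standing in for the per-branch work and compositor as a stand-in merger (compositor overlays its inputs rather than interleaving frames, but it does align them by running time):
gst-launch-1.0 compositor name=comp ! videoconvert ! autovideosink \
v4l2src device=/dev/video2 ! videoconvert ! tee name=t \
t. ! queue ! videoflip method=none ! comp.sink_0 \
t. ! queue ! videoflip method=horizontal-flip ! comp.sink_1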
Has really no one tried this?
Is this possible? Any help or alternative approaches would be much appreciated.
Thanks a lot in advance.

Related

GStreamer element videorate not working with video/x-raw(memory:NVMM)

I'm trying to reduce the frame rate of a branch in my pipeline using videorate. The real application is a C project running on a Jetson Xavier AGX, but I can demonstrate with these launch scripts.
The goal is to get a pipeline like this to work
nvv4l2camerasrc device=/dev/v4l/by-path/platform-15c10000.vi-video-index0
! video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=UYVY,interlace-mode=progressive
! videorate drop-only=true ! video/x-raw(memory:NVMM),framerate=10/1
! filesink location=test.I420
This launches ok and a test.I420 file is created but it seems no data is written to it.
Adding an nvvidconv element to convert the buffers into CPU memory (I think that is what it's doing?) works. Be careful: this creates a massive file very quickly.
nvv4l2camerasrc device=/dev/v4l/by-path/platform-15c10000.vi-video-index0
! video/x-raw(memory:NVMM),width=3840,height=2160,framerate=30/1,format=UYVY,interlace-mode=progressive
! nvvidconv ! video/x-raw
! videorate drop-only=true ! video/x-raw,framerate=10/1
! filesink location=test_raw.I420
I've seen other examples online where people seem to be able to use video/x-raw(memory:NVMM) with videorate, but with the above it's not working for me.
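A possible workaround (an untested sketch) is to drop to CPU memory just for the videorate step and convert back to NVMM afterwards for an NVMM-capable consumer; nvv4l2h264enc is assumed here, since feeding raw NVMM buffers straight to filesink is unlikely to produce a useful file anyway:
nvv4l2camerasrc device=/dev/v4l/by-path/platform-15c10000.vi-video-index0 \
! video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=UYVY,interlace-mode=progressive \
! nvvidconv ! video/x-raw \
! videorate drop-only=true ! video/x-raw,framerate=10/1 \
! nvvidconv ! video/x-raw(memory:NVMM),format=I420 \
! nvv4l2h264enc ! h264parse ! filesink location=test.h264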

x264 enc v4l2 /dev/video2 stream

I have this working, but I have been unable to get video from my Magewell to integrate and could use help with the correct pipeline.
gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! x264enc bitrate=700 ! video/x-h264,width=848,height=480,framerate=25/1,stream-format=byte-stream,profile=baseline ! tee name=t \
t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
t. ! queue ! tcpclientsink host=172.18.0.4 port=8000
I do not see the receiver-side pipeline in the question description; it is needed to verify that there are no issues on the receiving side. Based on your current pipeline, I have the following suggestions:
You don't need to set the caps again after x264enc, because its output is already of type video/x-h264. What you do need is an h264parse after x264enc. You also need an h264parse before the decoder you are using on the receiver side.
The bitrate set for x264enc is also very low. The units are kbit/s, and for video this may be too little. It's best to leave it at the default setting if you do not have strict resource constraints; otherwise, try a higher value.
Also, is there any reason why you are using TCP? UDP might be a better idea for video, as long as some data/packet loss is not an issue.
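Putting these suggestions together, the sender pipeline might look like this (hosts and ports kept from your pipeline, the caps filter after the encoder dropped, and the bitrate left at its default):
gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! x264enc ! h264parse ! tee name=t \
t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
t. ! queue ! tcpclientsink host=172.18.0.4 port=8000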

How to record a pipeline in GStreamer even if the sender doesn't send data

I'm a newbie to GStreamer, so I would appreciate it if you could help me.
I'm trying to listen to a pipeline and record frames to a file.
I have tried the following pipeline:
gst-launch-1.0 udpsrc port=5600 do-timestamp=true ! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! clockoverlay ! jpegenc ! avimux ! filesink location=stream.avi
I want to record the whole timeline even if the sender doesn't provide any frame data.
By default, the recorder only appends frames when the pipeline receives valid ones, but I want to see black frames while the sender isn't sending data.
I experimented a bit, and I don't think you'll be able to do this with a plain gst-launch command. Unfortunately, it would probably involve writing an application that detects when packets/buffers stop coming in and then modifies the pipeline. If you want to give it a go, I'd suggest the input-selector element in something like this:
gst-launch-1.0 videotestsrc pattern=black ! video/x-raw ! input-selector name=selector ! clockoverlay ! jpegenc ! avimux ! filesink location=stream.avi
Then I'd create a method to attach the stream to the input-selector:
udpsrc port=5600 do-timestamp=true ! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! identity name=buffer-checker
To detect that no packets are coming in, you can listen for the handoff signal on the identity element; when it times out, remove the stream and switch over to the black test pattern from the videotestsrc by setting the active-pad property on the input-selector.
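Switching the active-pad still has to be done from application code, but for reference, both inputs wired into the input-selector in a single pipeline might look like this (an untested sketch; the caps on the test source are assumed values):
gst-launch-1.0 input-selector name=selector ! clockoverlay ! jpegenc ! avimux ! filesink location=stream.avi \
videotestsrc pattern=black is-live=true ! video/x-raw,width=640,height=480,framerate=25/1 ! selector.sink_0 \
udpsrc port=5600 do-timestamp=true ! application/x-rtp,payload=96 ! rtph264depay ! avdec_h264 ! videoconvert ! identity name=buffer-checker ! selector.sink_1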
Using the videomixer element almost works, but I don't believe it will handle multiple stops and starts of the stream.
Anyway, I hope someone else comes up with a better idea. You could also re-analyze your top-level approach and see whether you could work with multiple video clips instead of a single one.

Synchronize two RTSP/RTP H264 video streams capture using GStreamer

I have two AXIS IP cameras streaming H264 over RTSP/RTP. Both cameras are set to synchronize with the same NTP server, so I assume both have the same clock (maybe a minor difference in ms).
In my application, both cameras point at the same view, and images taken at the same time must be processed together. Thus, I want to synchronize the image capture using GStreamer.
I have tried invoking the two pipelines separately in different command prompts, but the videos are 2-3 seconds apart.
gst-launch rtspsrc location=rtsp://192.168.16.136:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam1_video_%d.mp4
gst-launch rtspsrc location=rtsp://192.168.16.186:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam2_video_%d.mp4
Can someone suggest a gstreamer pipeline to synchronize both H264 streams and record them into separate video files?
Thanks!
ARM
I am able to launch a pipeline using gst-launch as shown below. It shows a good improvement in captured-frame synchronization compared to launching two pipelines; most of the time they differ by 0-500 ms. Still, I want to synchronize them to better than 150 ms accuracy.
rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4
I'd appreciate it if someone can point out other ideas!
~Arm
What do you mean by synchronize? If you record to separate video files, you do not need any synchronization, as that totally separates them; each RT(S)P stream will contain different timestamps. If you want to align them to the same wall-clock time (e.g. "both should start at 15:00"), then you have to configure the cameras that way somehow (this is just an idea).
Also, you did not tell us what is inside those RTP/RTSP streams (MPEG-TS or something else), so I will give an example with MPEG-TS-encapsulated RTP streams.
We will go step by step:
Suppose this is one camera, just to demonstrate how it may look:
gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888
Then this would be the receiver (it must use rtpmp2tdepay; we are encapsulating the stream inside an MPEG-TS container):
gst-launch-1.0 udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! decodebin ! videoconvert ! autovideosink
If you test this with your camera, the autovideosink means a new window will pop up displaying it.
Then you can try to store it in a file; we will use mp4mux.
So for the same camera input we do:
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=test.mp4
Explanation: we do not decode and re-encode (a waste of processing power); we just demux the MPEG-TS stream and, instead of decoding the H264, merely parse it for mp4mux, which accepts video/x-h264.
Now you could use the same pipeline for each camera, or you can just copy-paste all the elements into one pipeline.
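For example, both receivers merged into one pipeline (the second port and the file names are illustrative):
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam1.mp4 \
udpsrc port=8889 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam2.mp4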
Now, as you did not provide any (even partial) attempt to build something out of this, that part is going to be your homework :) Or make the synchronization requirement clearer, as I do not understand it.
UPDATE
After your update to the question this answer is not very useful, but I will keep it here as a reference. I have no idea how to synchronize that.
One more piece of advice: look at the timestamps after udpsrc; maybe they are already synchronized. In that case you can use streamsynchronizer to synchronize the two streams, or maybe a video/audio mixer:
gst-launch-1.0 -v udpsrc port=8888 ! identity silent=false ! fakesink
This should print the timestamps (PTS, DTS, Duration ..):
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (1328 bytes, dts: 0:00:02.707033598, pts:0:00:02.707033598, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7f57dc016400
Compare the PTS of each stream. Maybe you could also combine the two udpsrc elements in one pipeline and put an identity (each with a different name) after each udpsrc to make them start receiving together; see the sketch below.
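A minimal sketch (the second port is an assumed value):
gst-launch-1.0 -v udpsrc port=8888 ! identity name=cam1 silent=false ! fakesink \
udpsrc port=8889 ! identity name=cam2 silent=false ! fakesink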
HTH

In GStreamer, how do I simultaneously play back and record an H264 AVI file from a v4l2src?

Files recorded with gstreamer-0.10 at 25 FPS in FourCIF format play back in fast-forward mode, and the recordings sometimes skip 3-4 seconds. Any solution would be appreciated.
The pipeline I'm attempting to use is:
gst-launch v4l2src device=/dev/video2 \
! 'video/x-raw-yuv,width=704,height=576,framerate=25/1' \
! tee name=liveTee ! queue ! mfw_isink \
liveTee. ! queue ! vpuenc ! avimux ! filesink location=/home/Recording.avi
I'm going to take a rough stab at it and re-format your question a bit. This is mostly a GStreamer and Freescale question, not so much Qt.
gst-launch-1.0 -e videotestsrc pattern=ball do-timestamp=true is-live=true \
! timeoverlay ! 'video/x-raw,width=704,height=576,framerate=25/1' \
! tee name=liveTee ! queue leaky=downstream ! videoconvert ! ximagesink async=false \
liveTee. ! queue leaky=downstream ! videoconvert ! queue ! x264enc ! avimux ! filesink location=/tmp/test.avi
The thing to keep in mind is that your encoder has to keep up with the live playback, so your pipeline needs to handle the case where the encoder falls behind. On the queue elements behind the tee, use the leaky property.
You also want to be careful about your video source and what it supplies. It looks like you want live video in your case, but if your source were an existing video file, the pipeline would probably need some more tweaking.
NOTE: It may be even simpler than that; just adding async=false to the video sink appears to be very important.
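For the original v4l2src case, the same structure might look like this (an untested sketch; device and caps taken from the question, with x264enc standing in for vpuenc):
gst-launch-1.0 -e v4l2src device=/dev/video2 do-timestamp=true \
! 'video/x-raw,width=704,height=576,framerate=25/1' \
! tee name=liveTee ! queue leaky=downstream ! videoconvert ! ximagesink async=false \
liveTee. ! queue leaky=downstream ! videoconvert ! x264enc ! avimux ! filesink location=/home/Recording.avi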