GStreamer: recording multiple segments of an RTP stream to file - C++

I'm writing a C++ application with GStreamer and am trying to achieve the following: connect to an RTP audio stream (Opus), write one copy of the entire stream to an audio file, and additionally, based on user-triggered events, create a separate series of audio files consisting of segments of the RTP stream (think of a start/stop record toggle button).
Currently I'm using udpsrc -> rtpbin -> rtpopusdepay -> queue -> tee (the pipeline splits here):
tee_stream_1 -> queue -> webmmux -> filesink
tee_stream_2 -> queue -> webmmux -> filesink
tee_stream_1 should be active for the entire duration of the pipeline; tee_stream_2 should generate multiple files based on user toggle events.
An example scenario:
The pipeline receives the RTP audio stream; tee_stream_1 begins writing audio to full_stream.webm.
2 seconds into the stream, the user toggles "start recording"; tee_stream_2 begins writing audio to stream_segment_1.webm.
5 seconds in, the user toggles "stop recording"; tee_stream_2 finishes writing stream_segment_1.webm and closes the file.
8 seconds in, the user toggles "start recording"; tee_stream_2 begins writing audio to stream_segment_2.webm.
9 seconds in, the user toggles "stop recording"; tee_stream_2 finishes writing stream_segment_2.webm and closes the file.
10 seconds in, the stream ends; full_stream.webm finishes writing and closes.
The end result is 3 audio files: full_stream.webm with 10 seconds of audio, stream_segment_1.webm with 3 seconds, and stream_segment_2.webm with 1 second.
My attempts so far have been met with difficulty: the muxers seem to require an EOS event to finish writing the stream_segment files properly, but that EOS is propagated to the other elements of the pipeline, which has the undesired effect of ending all of the recordings. Any ideas on how best to accomplish this? I can provide code if it would be helpful.
Thank you for any and all assistance!
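For reference, the standard pure-GStreamer pattern for this kind of per-branch shutdown (described in the upstream dynamic-pipelines documentation) is to block the tee's src pad with a probe, unlink the stopping branch, and send an EOS into that branch only, so the muxer can finalize without the rest of the pipeline ever seeing the event. A minimal, untested C++ sketch (element pointers and names are assumed, not taken from the question):
// Stop the tee_stream_2 branch (queue2 ! webmmux ! filesink2) without
// ending the rest of the pipeline.
#include <gst/gst.h>
static GstPadProbeReturn
branch_eos_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    if (GST_EVENT_TYPE(GST_PAD_PROBE_INFO_EVENT(info)) != GST_EVENT_EOS)
        return GST_PAD_PROBE_OK;
    // webmmux has now finalized the segment file. Schedule the branch's
    // elements for set_state(NULL) + gst_bin_remove() from the main thread.
    return GST_PAD_PROBE_DROP;  // keep this EOS local to the branch
}
static GstPadProbeReturn
tee_pad_blocked(GstPad *tee_srcpad, GstPadProbeInfo *info, gpointer user_data)
{
    GstElement *queue2 = GST_ELEMENT(user_data);  // first element of branch 2
    GstPad *sinkpad = gst_element_get_static_pad(queue2, "sink");
    gst_pad_unlink(tee_srcpad, sinkpad);
    // The EOS travels only down the detached branch; tee_stream_1 keeps going.
    gst_pad_send_event(sinkpad, gst_event_new_eos());
    gst_object_unref(sinkpad);
    return GST_PAD_PROBE_REMOVE;
}
// Called on the "stop recording" toggle; queue2/filesink2 belong to branch 2.
void stop_segment(GstPad *tee_srcpad, GstElement *queue2, GstElement *filesink2)
{
    GstPad *fsinkpad = gst_element_get_static_pad(filesink2, "sink");
    gst_pad_add_probe(fsinkpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
                      branch_eos_probe, NULL, NULL);
    gst_object_unref(fsinkpad);
    gst_pad_add_probe(tee_srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
                      tee_pad_blocked, queue2, NULL);
}
On the next "start recording" toggle, you would request a new pad from the tee and link a fresh queue ! webmmux ! filesink branch to it.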

For such a case, I'd suggest giving a try to RidgeRun's open-source gstd and interpipe plugins, which provide high-level control of dynamic pipelines.
You can install them with something like:
# Some required packages to install; not an exhaustive list...
# If these are not enough you will see errors and can figure out any other missing packages
sudo apt install libsoup2.4-dev libjson-glib-dev libdaemon-dev libjansson-dev libreadline-dev gtk-doc-tools python3-pip
# Get gstd sources from github
git clone --recursive https://github.com/RidgeRun/gstd-1.x.git
# Configure, build and install (meson may be better, but here using autogen/configure)
cd gstd-1.x
./autogen.sh
./configure
make -j $(nproc)
sudo make install
cd ..
# Get gst-interpipe sources from github
git clone --recursive https://github.com/RidgeRun/gst-interpipe.git
# Configure, build and install (meson may be better, but here using autogen/configure)
cd gst-interpipe
./autogen.sh
./configure
make -j $(nproc)
sudo make install
cd ..
# Tell gstreamer about the new plugins interpipesink and interpipesrc
# First clear gstreamer cache (here using arm64, you would adapt for your arch)
rm ~/.cache/gstreamer-1.0/registry.aarch64.bin
# add new plugins path
export GST_PLUGIN_PATH=/usr/local/lib/gstreamer-1.0
# now any gstreamer command would rebuild the cache, so if ok this should work
gst-inspect-1.0 interpipesink
interpipe needs a managing daemon (gstd), so start it in a first terminal; it will display operations, and errors if any:
gstd
Now in a second terminal you can try this script (recording into directory /home/user/tmp/tmp2; adjust for your case):
#!/bin/sh
gstd-client pipeline_create rtpopussrc udpsrc port=5004 ! application/x-rtp,media=audio,encoding-name=OPUS,clock-rate=48000 ! queue ! rtpbin ! rtpopusdepay ! opusparse ! audio/x-opus ! interpipesink name=opussrc
gstd-client pipeline_create audio_record_full interpipesrc name=audiofull listen-to=opussrc is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! audio/x-opus ! opusparse ! webmmux ! filesink location=/home/user/tmp/tmp2/full_stream.webm
gstd-client pipeline_play rtpopussrc
gstd-client pipeline_play audio_record_full
sleep 2
gstd-client pipeline_create audio_record_1 interpipesrc name=audio_rec1 listen-to=opussrc is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! audio/x-opus ! opusparse ! webmmux ! filesink location=/home/user/tmp/tmp2/stream_segment_1.webm
gstd-client pipeline_play audio_record_1
sleep 3
gstd-client pipeline_stop audio_record_1
gstd-client pipeline_delete audio_record_1
sleep 3
gstd-client pipeline_create audio_record_2 interpipesrc name=audio_rec2 listen-to=opussrc is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! audio/x-opus ! opusparse ! webmmux ! filesink location=/home/user/tmp/tmp2/stream_segment_2.webm
gstd-client pipeline_play audio_record_2
sleep 1
gstd-client pipeline_stop audio_record_2
gstd-client pipeline_delete audio_record_2
sleep 1
gstd-client pipeline_stop audio_record_full
gstd-client pipeline_delete audio_record_full
gstd-client pipeline_stop rtpopussrc
gstd-client pipeline_delete rtpopussrc
echo 'Done'
and check the resulting files.
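Since the application is C++, the same gstd-client calls can be driven from code; the simplest (if blunt) way is shelling out, as in this hedged sketch (gstd already running; names and paths made up for illustration):
// Drive gstd from the C++ app by shelling out to gstd-client.
#include <cstdlib>
#include <string>
// Hypothetical helper that forwards one command line to gstd-client.
static int gstd(const std::string& args)
{
    return std::system(("gstd-client " + args).c_str());
}
void start_segment(int n)
{
    std::string name = "audio_record_" + std::to_string(n);
    gstd("pipeline_create " + name +
         " interpipesrc listen-to=opussrc is-live=true"
         " stream-sync=compensate-ts ! queue ! opusparse ! webmmux"
         " ! filesink location=stream_segment_" + std::to_string(n) + ".webm");
    gstd("pipeline_play " + name);
}
void stop_segment(int n)
{
    std::string name = "audio_record_" + std::to_string(n);
    gstd("pipeline_stop " + name);
    gstd("pipeline_delete " + name);
}
If spawning a process per command is undesirable, gstd also exposes programmatic client APIs.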

Related

Send EOS to pipeline containing webrtcbin and appsrc

I have a pipeline that receives a video stream from an application via appsrc and streams it to a WebRTC client. At the same time, the pipeline attempts to save the video to a file (as MP4, but it could also be Matroska etc.) in a separate branch using the tee element. The pipeline is created programmatically using gst_parse_launch and looks as follows:
webrtcbin bundle-policy=max-bundle name=myserver stun-server=stun://global.stun.twilio.com:3478?transport=udp
appsrc name=TextureSource-1 ! videoconvert ! video/x-raw,format=I420 ! x264enc name=VideoEncoder-1 tune=zerolatency
speed-preset=superfast ! tee name=t ! queue ! video/x-h264,stream-format=byte-stream !
filesink location=capture.mp4 t. ! queue ! rtph264pay !
application/x-rtp,media=video,encoding-name=H264,payload=96 ! myserver.
I can receive the stream without issues on my WebRTC client, but the problem is that the saved MP4 file is mostly unplayable. Somehow, I can only play the file using ffplay, which plays it at approximately twice the capture rate. After some web searching, I found out that I need to send an EOS event to the pipeline (so that the MP4 header can be written properly and the file becomes playable) through a command like gst_element_send_event(m_pPipeline, gst_event_new_eos()); where m_pPipeline is a pointer to my pipeline element.
However, on my bus I never get a message of type GST_MESSAGE_EOS, which, in my understanding, means that the EOS event somehow does not travel downstream to my sinks. I tried adding the message-forward parameter using g_object_set (G_OBJECT(m_pPipeline), "message-forward", true, nullptr); but I observed the same behaviour.
What am I doing wrong here? Should the EOS event not be sent directly to the pipeline but rather to the individual sinks (here filesink and webrtcbin)?
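Two hedged notes. First, as written, the recording branch dumps an H.264 byte-stream straight into capture.mp4 with no mp4mux/qtmux, so even with a clean EOS the file would be a raw .h264 stream rather than MP4, which would match the ffplay-only playback. Second, you can verify whether the EOS ever reaches the recording branch with a downstream event probe on the filesink; a sketch, assuming (hypothetically) the filesink in the launch string was given name=mysink:
#include <gst/gst.h>
static GstPadProbeReturn
eos_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    if (GST_EVENT_TYPE(GST_PAD_PROBE_INFO_EVENT(info)) == GST_EVENT_EOS)
        g_print("EOS reached the filesink\n");
    return GST_PAD_PROBE_OK;
}
void watch_for_branch_eos(GstElement *pipeline)
{
    // assumes the launch string contains "filesink name=mysink" (hypothetical)
    GstElement *fsink = gst_bin_get_by_name(GST_BIN(pipeline), "mysink");
    GstPad *pad = gst_element_get_static_pad(fsink, "sink");
    gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
                      eos_probe, NULL, NULL);
    gst_object_unref(pad);
    gst_object_unref(fsink);
}
If the probe fires but GST_MESSAGE_EOS never appears on the bus, the pipeline is most likely waiting for its other sink (webrtcbin) to post EOS, since a bin only aggregates an EOS message once all of its sinks have received EOS.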

How to wait for x264enc to encode buffered frames on end-of-stream

I have a Python GStreamer application that uses appsrc to record mp4 files.
The issue is that despite specifying tune=zerolatency for x264enc, there is latency, and the output video is truncated when an EOS is sent to the pipeline. Depending on the machine, the latency is substantial, resulting in a much shorter than expected output file.
If I change the pipeline to save the video as an AVI file, it is not truncated. Unfortunately, the resulting file is approximately 2 GB per minute versus 12 MB per minute with H.264.
Here is the x264enc pipeline:
appsrc name=appsrc format=time is-live=true caps=video/x-raw,format=(string)BGR appsrc. ! videoconvert ! x264enc tune=zerolatency ! qtmux ! filesink location=out.mp4
When the application is finished, it sends end-of-stream to the appsrc and to the pipeline:
if self._appsrc.emit("end-of-stream") == Gst.FlowReturn.OK:
    self._sink_pipeline.send_event(Gst.Event.new_eos())
Is there a way for my application to wait while x264enc processes its buffer? A message, perhaps? I don't care how long it takes to finish. What's important is that all frames pushed to the appsrc are written to the output video file.
You will actually have to wait for that end-of-stream event to pass through the pipeline before you stop it. An end-of-stream message will be sent to the pipeline's bus once all sinks have received the end-of-stream event.
Something like this:
# <send EOS event>
self._sink_pipeline.get_bus().timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS)
# <stop pipeline>
Following Florian Zwoch's answer, below is a complete example that creates a video file and gracefully terminates the pipeline with an EOS event.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject
from time import sleep
Gst.init(None)
pipe_str = ' videotestsrc name=src do-timestamp=true is-live=true !' \
' x264enc !' \
' h264parse !' \
' matroskamux !' \
' filesink name=f location=test.mp4 '  # note: matroskamux writes Matroska despite the .mp4 extension
pipeline = Gst.parse_launch(pipe_str)
bus = pipeline.get_bus()
print("Entering Playing state...")
pipeline.set_state(Gst.State.PLAYING)
sleep(5)
print("Sending an EOS event to the pipeline")
pipeline.send_event(Gst.Event.new_eos())
print("Waiting for the EOS message on the bus")
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS)
print("Stopping pipeline")
pipeline.set_state(Gst.State.NULL)
Note:
Tested on Ubuntu 16.04, GStreamer 1.8.3

Kurento can't receive RTP from GStreamer correctly

I installed Kurento Media Server and ran the Kurento Java tutorial (RTP receiver). Kurento offers a GStreamer pipeline:
PEER_V=23490 PEER_IP=10.0.176.127 SELF_V=5004 SELF_VSSRC=112233
bash -c 'gst-launch-1.0 -t \
rtpbin name=r \
v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency \
! rtph264pay ! "application/x-rtp,payload=(int)103,clock-rate=(int)90000,ssrc=(uint)$SELF_VSSRC" \
! r.send_rtp_sink_1 \
r.send_rtp_src_1 ! udpsink host=$PEER_IP port=$PEER_V bind-port=$SELF_V \
'
This is the pipeline I simplified from the official one, and it runs successfully.
But there is a problem when I implement this pipeline in C or C++ code:
Kurento can't receive the RTP stream, although I can receive it with an RTP receiver of my own that I wrote in C++.
The Kurento Media Server log shows warnings (screenshot not reproduced here).
It looks as if Kurento is processing an audio stream rather than a video stream, yet I never send an audio stream.
So I want to know how to change the C code to suit Kurento, so that my video stream reaches it. (My code was linked externally.)
Yes! After a few days of struggling, I figured this problem out today.
PEER_V=23490 PEER_IP=10.0.176.127 SELF_V=5004 SELF_VSSRC=112233
bash -c 'gst-launch-1.0 -t \
rtpbin name=r \
v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency \
! rtph264pay ! "application/x-rtp,payload=(int)103,clock-rate=(int)90000,ssrc=(uint)$SELF_VSSRC" \
! r.send_rtp_sink_1 \
r.send_rtp_src_1 ! udpsink host=$PEER_IP port=$PEER_V bind-port=$SELF_V \
'
With this pipeline, if you change the payload to 96, the Kurento media server reports the same warning as in the screenshot in the question.
So I think it was my payload setting that was in error.
I then added a pad probe to inspect the pad's caps, and that confirmed it.
But I don't know why setting the caps was not effective, so I set the "pt" property of rtph264pay instead, and it runs successfully. (The full code was linked externally.)
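For completeness, a hedged C sketch of the fix described above (the "pay" pointer stands for whatever rtph264pay instance the pipeline uses): set the "pt" property on the payloader directly, and use a pad probe to verify the caps actually on the wire:
#include <gst/gst.h>
static GstPadProbeReturn
print_caps_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstCaps *caps = gst_pad_get_current_caps(pad);
    if (caps) {
        gchar *s = gst_caps_to_string(caps);
        g_print("rtph264pay src caps: %s\n", s);  // check for payload=(int)103
        g_free(s);
        gst_caps_unref(caps);
    }
    return GST_PAD_PROBE_OK;
}
void configure_payloader(GstElement *pay /* the rtph264pay instance */)
{
    g_object_set(pay, "pt", 103, NULL);  // match the pt Kurento negotiated
    GstPad *srcpad = gst_element_get_static_pad(pay, "src");
    gst_pad_add_probe(srcpad, GST_PAD_PROBE_TYPE_BUFFER,
                      print_caps_probe, NULL, NULL);
    gst_object_unref(srcpad);
}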

Can I save jpegenc images with a timestamp using multifilesink?

I am trying to take a snapshot from an analog camera every 10 seconds, and it works fine.
command:
gst-launch-1.0 v4l2src device=/dev/video0 ! queue ! vspmfilter !
video/x-raw,width=640,height=480 ! videorate !
video/x-raw,width=640,height=480,framerate=1/10 ! jpegenc quality=30 !
multifilesink location=/home/root/images/image_%d.jpg
I am getting images in the specified directory, like:
image_0.jpg
image_1.jpg
...
But I want the images to be saved with a timestamp, image_yymmddhhmmss, for example:
image_20180817104333.jpg
image_20180817104343.jpg
....
How can I achieve this with the above command?
I guess you could use a pad probe on the multifilesink: whenever a buffer is received, set a new location property on the sink. I'm not sure, though, whether that property can be set while in the PLAYING state.
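A hedged sketch of that suggestion (untested, and as noted it is unclear whether multifilesink accepts a new location while PLAYING): a buffer probe on the multifilesink sink pad that rewrites the location property with a timestamped filename before each snapshot:
#include <gst/gst.h>
#include <ctime>
static GstPadProbeReturn
set_timestamped_location(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstElement *sink = GST_ELEMENT(user_data);  // the multifilesink
    char name[64];
    std::time_t now = std::time(nullptr);
    std::strftime(name, sizeof(name),
                  "/home/root/images/image_%Y%m%d%H%M%S.jpg",
                  std::localtime(&now));
    g_object_set(sink, "location", name, NULL);
    return GST_PAD_PROBE_OK;
}
// Attach it once after creating the pipeline:
// GstPad *pad = gst_element_get_static_pad(multifilesink, "sink");
// gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER,
//                   set_timestamped_location, multifilesink, NULL);
// gst_object_unref(pad);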

GStreamer: Pausing/resuming video in RTP streams

I'm constructing a GStreamer pipeline that receives two RTP streams from a networked source:
ILBC Audio stream + corresponding RTCP stream
H263 Video stream + corresponding RTCP stream
Everything is put into one GStreamer pipeline so that it will use the RTCP from both streams to synchronize audio/video. So far I've come up with this (using gst-launch for prototyping):
gst-launch -vvv gstrtpbin name=rtpbin
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-2000" port=40000 ! rtpbin.recv_rtp_sink_0
rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink
udpsrc port=40001 ! rtpbin.recv_rtcp_sink_0
rtpbin.send_rtcp_src_0 ! udpsink port=40002 sync=false async=false
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,encoding-params=(string)1,octet-align=(string)1" port=60000 rtpbin.recv_rtp_sink_1
rtpbin. ! rtppcmudepay ! autoaudiosink
udpsrc port=60001 ! rtpbin.recv_rtcp_sink_1
rtpbin.send_rtcp_src_1 ! udpsink port=60002 sync=false async=false
This pipeline works well if the networked source starts out with sending both video and audio. If the videostream is paused later on, gstreamer will still playback audio and even will start playing back the video when the networked source resumes the video stream.
My problem is however that if the networked source starts out with only an audio stream (video might be added later on), the pipeline seems to pause/freeze until the video stream starts as well.
Since video is optional in my application (and can be added/removed at will by the user), is there any way I can hook up, for instance, a videotestsrc that provides some kind of fallback video data to keep the pipeline running when there is no networked video data?
I've tried experimenting with 'videotestsrc' and a thing called 'videomixer' but I think that mixer still requires both streams to be alive. Any feedback is greatly appreciated!
I present a simple function for pause/resume by swapping bins. The following example provides the logic to change the destination bin on the fly, dynamically; this does not completely stop the pipeline, which I believe is what you seek. Similar logic could be used for source bins: you could remove your network source bin and its related decoder/demux bins and add videotestsrc bins.
private static void dynamic_bin_replacement(Pipeline pipe, Element src_bin, Element dst_bin_new, Element dst_bin_old) {
    pipe.pause();                      // pause dataflow while relinking
    src_bin.unlink(dst_bin_old);       // detach the old destination bin
    pipe.remove(dst_bin_old);
    pipe.add(dst_bin_new);
    dst_bin_new.syncStateWithParent(); // bring the new bin up to the pipeline's state
    src_bin.link(dst_bin_new);
    pipe.ready();
    pipe.play();
}
The other logic you may want to try is pad blocking. Please take a look at the following posts:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
and
http://web.archiveorange.com/archive/v/8yxpz7FmOlGqxVYtkPb4
and
Adding and removing audio sources to/from GStreamer pipeline on-the-go
UPDATE
Try the output-selector and input-selector elements, as they seem to be a better alternative; I found them the most reliable and have had immense luck with them. I use fakesink or fakesrc, respectively, as the other end of the selector.
The valve element is another alternative that I found doesn't even need fakesink or fakesrc. It is also extremely reliable.
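A hedged C-API sketch of the valve approach (the selector approach is analogous, switching the selector's "active-pad" property instead):
#include <gst/gst.h>
// While drop=TRUE, valve discards incoming buffers, so the branch is
// effectively paused while the rest of the pipeline keeps running.
void set_video_branch_paused(GstElement *valve, gboolean paused)
{
    // valve would sit in the branch, e.g. ... ! valve name=vidvalve ! rtph263pdepay ! ...
    g_object_set(valve, "drop", paused, NULL);
}
// With input-selector instead, switching to a videotestsrc fallback looks like:
//   g_object_set(selector, "active-pad", fallback_sinkpad, NULL);
// where fallback_sinkpad is the selector request pad linked to videotestsrc.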
Also, the correct state transition order for a media file source is:
NULL -> READY -> PAUSED -> PLAYING (Upwards)
PLAYING -> PAUSED -> READY -> NULL (Downwards)
My order in the example above should be corrected: ready() should come before pause(). Also, I would tend to think un-linking should be performed after the null() state, not after pause(). I haven't tried these changes, but theoretically they should work.
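Translated to the C API with that corrected order, the replacement function might look like this (untested sketch; parameters mirror the Java example above):
#include <gst/gst.h>
// Take the old bin all the way down to NULL before unlinking and removing it,
// then bring the replacement up inside the running pipeline.
static void dynamic_bin_replacement(GstElement *pipe, GstElement *src_bin,
                                    GstElement *dst_bin_new, GstElement *dst_bin_old)
{
    gst_element_set_state(dst_bin_old, GST_STATE_NULL);
    gst_element_unlink(src_bin, dst_bin_old);
    gst_bin_remove(GST_BIN(pipe), dst_bin_old);  // gst_bin_remove drops the bin's ref
    gst_bin_add(GST_BIN(pipe), dst_bin_new);
    gst_element_link(src_bin, dst_bin_new);
    gst_element_sync_state_with_parent(dst_bin_new);
}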
See the following link for detailed info
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-states.txt?h=BRANCH-RELEASE-0_10_19