I have a pipeline that receives a video stream from an application via appsrc and sends it to a WebRTC client. At the same time, the pipeline attempts to save the video to a file (as MP4, but it could also be Matroska etc.) in a separate branch created with the tee element. The pipeline is built programmatically using gst_parse_launch and looks as follows:
webrtcbin bundle-policy=max-bundle name=myserver stun-server=stun://global.stun.twilio.com:3478?transport=udp
appsrc name=TextureSource-1 ! videoconvert ! video/x-raw,format=I420 !
x264enc name=VideoEncoder-1 tune=zerolatency speed-preset=superfast ! tee name=t !
queue ! video/x-h264,stream-format=byte-stream ! filesink location=capture.mp4
t. ! queue ! rtph264pay ! application/x-rtp,media=video,encoding-name=H264,payload=96 ! myserver.
I can receive the stream without issues on my WebRTC client, but the problem is that the saved MP4 file is mostly unplayable. The only player that opens it is ffplay, which plays it back at approximately twice the capture rate. After some web searching, I found out that I need to send an EOS event to the pipeline (so that the MP4 header gets written properly and the file becomes playable) with a call like gst_element_send_event(m_pPipeline, gst_event_new_eos()); where m_pPipeline is a pointer to my pipeline element.
However, I never get a message of type GST_MESSAGE_EOS on my bus which, as I understand it, means that the EOS event somehow does not travel downstream to my sinks. I tried setting the message-forward property with g_object_set(G_OBJECT(m_pPipeline), "message-forward", true, nullptr); but I observed the same behaviour.
What am I doing wrong here? Should the EOS event not be sent to the pipeline directly but to the individual sinks (here filesink and webrtcbin) instead?
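For reference, my shutdown sequence looks roughly like this (a simplified sketch; the timeout value and error handling shown here are illustrative, not my exact code):

/* Ask the whole pipeline to finish so the file can be finalised. */
gst_element_send_event(m_pPipeline, gst_event_new_eos());

/* Wait for the EOS to reach the sinks and be posted on the bus. */
GstBus *bus = gst_element_get_bus(m_pPipeline);
GstMessage *msg = gst_bus_timed_pop_filtered(bus, 10 * GST_SECOND,
    (GstMessageType)(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
if (msg != nullptr)
    gst_message_unref(msg);   /* in my case the wait times out and msg stays null */
gst_object_unref(bus);

/* Only then drop the pipeline to NULL. */
gst_element_set_state(m_pPipeline, GST_STATE_NULL);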
I'm using this GStreamer pipeline to serve an RTSP stream from a camera:
./gst-rtsp-launch --port 8554 "( v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1,width=640,height=480 ! rtpvrawpay name=pay0 pt=96 )"
I want to use playbin so that I don't have to specify the type of video coming from the RTSP stream. If I use this pipeline, I can get a single image from the camera:
gst-launch-1.0 playbin uri=rtsp://(ip-of-camera):8554 video-sink="jpegenc ! filesink location=capture1.jpeg"
But if I try this pipeline to save the stream to a file:
gst-launch-1.0 playbin uri=rtsp://(ip-of-camera):8554 video-sink="videoconvert ! video/x-h264,width=320,height=240 ! mp4mux ! filesink location=test.mp4"
I get this error:
ERROR: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source/GstUDPSrc:udpsrc1: Internal data stream error.
Additional information for debugging:
../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source/GstUDPSrc:udpsrc1:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.094968753
Setting pipeline to NULL ...
Freeing pipeline ...
Note: I translated the last two lines.
Is there a problem with the pipeline I'm using to save the stream to a file?
I copied your pipeline and it worked.
I only added -e after gst-launch-1.0, so that GStreamer writes all the necessary information to the file when Ctrl-C is pressed:
gst-launch-1.0 -e playbin uri=(rtsp url) video-sink="videoconvert ! video/x-h264,width=320,height=240 ! mp4mux ! filesink location=test.mp4"
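For completeness, the same behaviour can be reproduced when the pipeline is built in application code: catch Ctrl-C yourself, send an EOS event into the pipeline, and only tear it down once the EOS shows up on the bus. A rough sketch (videotestsrc stands in for the RTSP source; the names and the GLib main-loop plumbing are illustrative):

#include <gst/gst.h>
#include <glib-unix.h>
#include <signal.h>

static GstElement *pipeline;

/* Ctrl-C handler: inject EOS so the muxer can finalise the file before exit. */
static gboolean on_sigint(gpointer user_data)
{
    gst_element_send_event(pipeline, gst_event_new_eos());
    return G_SOURCE_REMOVE;
}

/* Quit the main loop once the EOS (or an error) reaches the bus. */
static gboolean on_bus_message(GstBus *bus, GstMessage *msg, gpointer user_data)
{
    GMainLoop *loop = user_data;
    if (GST_MESSAGE_TYPE(msg) == GST_MESSAGE_EOS || GST_MESSAGE_TYPE(msg) == GST_MESSAGE_ERROR)
        g_main_loop_quit(loop);
    return TRUE;
}

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    /* Illustrative pipeline; the point is the EOS handling, not the source. */
    pipeline = gst_parse_launch(
        "videotestsrc is-live=true ! x264enc ! mp4mux ! filesink location=test.mp4", NULL);

    GstBus *bus = gst_element_get_bus(pipeline);
    gst_bus_add_watch(bus, on_bus_message, loop);
    gst_object_unref(bus);

    g_unix_signal_add(SIGINT, on_sigint, NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(loop);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    g_main_loop_unref(loop);
    return 0;
}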
Maybe you are using an older version of GStreamer?
I am using GStreamer 1.20.0
I'm having an issue with pulling audio and video from an RTSP stream using gstreamer.
The command I am using to test is as follows:
gst-launch-1.0 rtspsrc location=rtsp://192.168.50.160/whp name=src src. ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! x264enc bitrate=10000 ! rtph264pay ! udpsink host=192.168.50.164 port=8004 src. ! queue ! fakesink
The result is that the pipeline works for the first (video) stream. The second stream, however, is untouched and seems to stay inside the rtspsrc element.
I found this by looking at the resulting dot file.
If I am reading it right, the first queue connects correctly to rtpsession0, but rtpsession1 seems to be ignored and the second queue doesn't connect to anything, so the audio from my stream is dropped completely.
Am I reading this incorrectly? If not, am I missing something in my pipeline command that would fix this?
I am happy to provide any more information necessary
Thanks
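A sketch of how the two streams are usually told apart when the same pipeline is built from code (not from the original thread; the callback and variable names are illustrative): rtspsrc exposes one dynamic pad per stream, and the "media" field of each pad's RTP caps says whether it carries audio or video, so each branch has to be linked to the matching pad.

#include <gst/gst.h>

static void on_pad_added(GstElement *rtspsrc, GstPad *new_pad, gpointer user_data)
{
    GstElement **branches = user_data;   /* branches[0] = video depay chain, branches[1] = audio depay chain */
    GstCaps *caps = gst_pad_get_current_caps(new_pad);
    const GstStructure *s = gst_caps_get_structure(caps, 0);
    const gchar *media = gst_structure_get_string(s, "media");   /* "video" or "audio" */

    GstElement *target = (g_strcmp0(media, "video") == 0) ? branches[0] : branches[1];
    GstPad *sinkpad = gst_element_get_static_pad(target, "sink");
    if (!gst_pad_is_linked(sinkpad))
        gst_pad_link(new_pad, sinkpad);

    gst_object_unref(sinkpad);
    gst_caps_unref(caps);
}

/* g_signal_connect(rtspsrc, "pad-added", G_CALLBACK(on_pad_added), branches); */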
I would like to store a file containing AAC audio frames.
For that I used the pipeline below:
gst-launch-1.0 filesrc location=Test_44100Hz_2ch_s16le.wav ! "audio/x-raw,rate=44100,format=s16le,channels=2" ! audioparse format=raw raw-format=s16le rate=44100 channels=2 ! faac ! aacparse ! queue ! filesink location=a1
When reading that file back through pulsesink using the pipeline below,
gst-launch-1.0 filesrc location=a1 ! aacparse ! faad ! audioconvert ! audioresample ! pulsesink
I receive the error below. I used GST_DEBUG=3, but I am not able to find the solution.
0:00:00.031924804 3379 0x2231d60 WARN basesrc gstbasesrc.c:3483:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
0:00:00.033044700 3379 0x2231050 WARN baseparse gstbaseparse.c:3255:gst_base_parse_loop:<aacparse0> error: No valid frames found before end of stream
ERROR: from element /GstPipeline:pipeline0/GstAacParse:aacparse0: No valid frames found before end of stream
Additional debug info:
gstbaseparse.c(3255): gst_base_parse_loop (): /GstPipeline:pipeline0/GstAacParse:aacparse0
ERROR: pipeline doesn't want to preroll.
Can anybody help me solve this? I need to store AAC audio frames and then stream that file as an AAC audio stream.
This is it, tested working:
gst-launch-1.0 filesrc location=WAV_44_16bit.wav ! decodebin ! audioconvert ! queue ! voaacenc ! aacparse ! queue ! mp4mux ! filesink location=aac.mp4
gst-launch-1.0 filesrc location=aac.mp4 ! decodebin ! audioconvert ! audioresample ! alsasink
The container stores metadata; without it the decoder does not know how to process the data.
AAC audio streams require a container in order to be useful within GStreamer.
For decoder initialization it is necessary to know the sampling frequency and the Audio Object Type. In GStreamer we are unable to pass this metadata directly to the parser or the decoder; the parser collects it from the MP4 header instead, and the decoder then inherits the frame structure/size and sample rate. So this is a deficiency in either aacparse (parser) or avdec_aac/faad (decoder), none of which expose parameters to specify the frame size or the aforementioned metadata for a raw file. That being said, I haven't found a compelling reason why anyone would need to do this. I found myself trying to do it before I discovered that the AAC simply needed to be muxed into an MP4 (mp4mux) or another container to work and be portable. The container/framing only adds a small amount of data to the stream.
I am trying to create a GStreamer pipeline (v1.0) in order to record and play back a special file format.
For recording I use the following pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw-yuv, format=\(fourcc\)I420, width=640, height=480 ! videoconvert ! x264enc byte-stream=1 ! queue ! appsink
In appsink (in the new_sample() callback) I apply a compression method to the H264 stream and finally store it in an output file.
I use the following pipeline to play the recorded file:
gst-launch-1.0 appsrc ! video/x-h264 ! avdec_h264 ! autovideosink
On the appsrc side I decompress the H264 stream and push it into appsrc (using push-buffer). The size of each buffer is 4095 bytes.
Unfortunately, after pushing 2 buffers, GStreamer prints the following debug message:
Error: Internal data flow error.
Is there any way to fix the problem?
Add legacyh264parse or h264parse (depending on your version of gst components) before your decoder. You need to be able to send full frames to the decoder.
After avdec_h264 it would be nice to have a ffmpegcolorspace (videoconvert in GStreamer 1.0) to convert the video format to whatever your display requires.
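Applied to the playback pipeline from the question, the suggestion amounts to something like the sketch below (built with gst_parse_launch; the appsrc name and caps are illustrative, and the buffer-pushing loop is omitted):

#include <gst/gst.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    /* h264parse reassembles the arbitrary 4095-byte chunks pushed into appsrc
     * into complete frames before they reach the decoder. */
    GstElement *pipeline = gst_parse_launch(
        "appsrc name=mysrc caps=\"video/x-h264,stream-format=byte-stream\" "
        "! h264parse ! avdec_h264 ! videoconvert ! autovideosink", NULL);

    GstElement *appsrc = gst_bin_get_by_name(GST_BIN(pipeline), "mysrc");
    /* Decompress the stored data and feed it here with gst_app_src_push_buffer(),
     * then call gst_app_src_end_of_stream() when the input is exhausted. */

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    /* ... run a main loop while buffers are being pushed ... */
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(appsrc);
    gst_object_unref(pipeline);
    return 0;
}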
I'm constructing a GStreamer pipeline that receives two RTP streams from a networked source:
ILBC Audio stream + corresponding RTCP stream
H263 Video stream + corresponding RTCP stream
Everything is put into one GStreamer pipeline so that it can use the RTCP from both streams to synchronize audio and video. So far I've come up with this (using gst-launch for prototyping):
gst-launch -vvv gstrtpbin name=rtpbin
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-2000" port=40000 ! rtpbin.recv_rtp_sink_0
rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink
udpsrc port=40001 ! rtpbin.recv_rtcp_sink_0
rtpbin.send_rtcp_src_0 ! udpsink port=40002 sync=false async=false
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,encoding-params=(string)1,octet-align=(string)1" port=60000 ! rtpbin.recv_rtp_sink_1
rtpbin. ! rtppcmudepay ! autoaudiosink
udpsrc port=60001 ! rtpbin.recv_rtcp_sink_1
rtpbin.send_rtcp_src_1 ! udpsink port=60002 sync=false async=false
This pipeline works well if the networked source starts out sending both video and audio. If the video stream is paused later on, GStreamer will still play back the audio and will even resume playing the video when the networked source resumes the video stream.
My problem, however, is that if the networked source starts out with only an audio stream (video might be added later on), the pipeline seems to pause/freeze until the video stream starts as well.
Since video is optional in my application (and can be added/removed at will by the user), is there any way I can hook up, for instance, a 'videotestsrc' that provides some kind of fallback video data to keep the pipeline running when there is no networked video data?
I've tried experimenting with 'videotestsrc' and a thing called 'videomixer' but I think that mixer still requires both streams to be alive. Any feedback is greatly appreciated!
Here is a simple function for pause/resume by swapping bins. The following example shows the logic to change the destination bin dynamically, on the fly. This does not completely stop the pipeline, which is what you are looking for, I believe. The same logic could be used for source bins: you could remove your network source bin and the related decoder/demux bins and add videotestsrc bins instead.
private static void dynamic_bin_replacement(Pipeline pipe, Element src_bin, Element dst_bin_new, Element dst_bin_old) {
    pipe.pause();                        // stop data flow while rewiring
    src_bin.unlink(dst_bin_old);         // detach the old destination bin
    pipe.remove(dst_bin_old);
    pipe.add(dst_bin_new);
    dst_bin_new.syncStateWithParent();   // bring the new bin to the pipeline's state
    src_bin.link(dst_bin_new);
    pipe.ready();
    pipe.play();                         // resume playback with the new bin in place
}
The other technique you may want to try is pad blocking ("PADLOCKING"). Please take a look at the following posts (a rough sketch of the idea follows the links below):
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
and
http://web.archiveorange.com/archive/v/8yxpz7FmOlGqxVYtkPb4
and
Adding and removing audio sources to/from GStreamer pipeline on-the-go
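For what it's worth, with the current GStreamer 1.x API the pad-blocking idea looks roughly like this (a sketch only; the design doc linked above describes the original 0.10 mechanism):

#include <gst/gst.h>

/* Block the source pad that feeds the element you want to swap out; once the
 * callback fires, no data is flowing through that pad and it is safe to
 * unlink/relink downstream. */
static GstPadProbeReturn on_blocked(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    /* ... unlink the old branch, add and link the new one, sync its state ... */
    return GST_PAD_PROBE_REMOVE;   /* removing the probe unblocks the pad */
}

/* gst_pad_add_probe(srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, on_blocked, NULL, NULL); */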
UPDATE
Try the output-selector and input-selector elements, as they seem to be a better alternative. I found them the most reliable and have had immense luck with them. I use fakesink or fakesrc, respectively, as the other end of the selector.
The valve element is another alternative that, I found, doesn't even need fakesink or fakesrc. It is also extremely reliable.
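As a quick illustration of valve (a sketch; the variable name is illustrative):

GstElement *valve = gst_element_factory_make("valve", NULL);
/* While "drop" is TRUE the element silently discards buffers, so upstream keeps
 * running and downstream simply stops receiving data until it is re-opened. */
g_object_set(valve, "drop", TRUE, NULL);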
Also, the correct state transition order for a media file source is:
NULL -> READY -> PAUSED -> PLAYING (Upwards)
PLAYING -> PAUSED -> READY -> NULL (Downwards)
The order in my example above should be corrected so that ready() comes before pause(). I would also tend to think that unlinking should be performed after the NULL state and not after pause(). I haven't tried these changes, but theoretically they should work.
See the following link for detailed info
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-states.txt?h=BRANCH-RELEASE-0_10_19