I'm trying to write an RTSP stream into shared memory, and then write it to an .mkv file.
I use this command to write the stream to an .mkv file directly:
gst-launch-1.0 rtspsrc location=rtsp://admin:admin#192.168.88.248:554/h264 ! rtph264depay ! h264parse ! matroskamux ! filesink location=file.mkv
It works.
Now I add shared memory:
gst-launch-1.0 rtspsrc location=rtsp://admin:admin#192.168.88.248:554/h264 ! shmsink socket-path=/tmp/foo shm-size=2000000
And
gst-launch-1.0 shmsrc socket-path=/tmp/foo ! rtph264depay ! h264parse ! matroskamux ! filesink location=file.mkv
And I get the message:
Input buffers need to have RTP caps set on them.
Ok, I write
gst-launch-1.0 rtspsrc location=rtsp://admin:admin#192.168.88.248:554/h264 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" ! shmsink socket-path=/tmp/foo shm-size=2000000
And I get this message again.
What am I doing wrong?
You need to set the caps after shmsrc. For example, the following is my pipeline that writes the RTSP stream into shared memory:
gst-launch-1.0 -v rtspsrc location=rtsp://192.168.1.150:8554/VBoxVideo ! shmsink socket-path=/tmp/foo shm-size=2000000 wait-for-connection=false
You have to note down the caps printed for the shmsink above; the following are my caps for shmsink:
/GstPipeline:pipeline0/GstShmSink:shmsink0.GstPad:sink: caps =
"application/x-rtp\,\ media\=(string)video\,\ payload\=(int)96\,\
clock-rate\=(int)90000\,\ encoding-name\=(string)H264\,\
packetization-mode\=(string)1\,\
profile-level-id\=(string)64002a\,\
sprop-parameter-sets\=(string)\"J2QAKqwbKgHgCJ+WEAAAPoAADqYOAAEZABGQve6wgA\\=\\=\\,KP4Briw\\=\"\,\
a-tool\=(string)GStreamer\,\ a-type\=(string)broadcast\,\
a-framerate\=(string)30\,\ a-ts-refclk\=(string)local\,\
a-mediaclk\=(string)sender\,\ ssrc\=(uint)4083957277\,\
clock-base\=(uint)1018840792\,\ seqnum-base\=(uint)13685\,\
npt-start\=(guint64)0\,\ play-speed\=(double)1\,\
play-scale\=(double)1"
Now, to use shmsrc,
gst-launch-1.0 -vm shmsrc socket-path=/tmp/foo do-timestamp=true is-live=true num-buffers=1000 ! "application/x-rtp,media=(string)video,payload=(int)96,packetization-mode=(string)1" ! rtph264depay ! h264parse ! mp4mux ! filesink location=file.mp4
Note: I have set the caps from the output above. Also note that I have set num-buffers=1000 because I am using mp4mux and need to send an EOS for the file to be playable.
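If you prefer not to fix the buffer count, a sketch of an alternative (same caps string assumed): pass -e (--eos-on-shutdown) so that an EOS is sent when you interrupt the pipeline, which lets mp4mux finalize the file:
gst-launch-1.0 -e shmsrc socket-path=/tmp/foo do-timestamp=true is-live=true ! "application/x-rtp,media=(string)video,payload=(int)96,packetization-mode=(string)1" ! rtph264depay ! h264parse ! mp4mux ! filesink location=file.mp4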
So in your case:
gst-launch-1.0 -v rtspsrc location=rtsp://admin:admin#192.168.88.248:554/h264 ! shmsink socket-path=/tmp/foo shm-size=2000000
Note down the caps printed for shmsink0, and later use them in your pipeline in place of the placeholder below:
gst-launch-1.0 shmsrc socket-path=/tmp/foo is-live=true num-buffers=1000 ! "<caps from shmsink0>" ! rtph264depay ! h264parse ! mp4mux ! filesink location=file.mp4
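For instance, a minimal sketch of your receiving side with the .mkv output you originally wanted (the caps string below is an assumption based on the ones you specified earlier, with payload=96 guessed; use the exact string printed for your shmsink):
gst-launch-1.0 -e shmsrc socket-path=/tmp/foo is-live=true do-timestamp=true ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! matroskamux ! filesink location=file.mkv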
I wanted to create an RTP stream of an MP4 file with GStreamer.
I am using gstreamer 1.18.4 on debian bullseye.
To create an MP4 file, I recorded an RTSP stream from my webcam using the following command:
gst-launch-1.0 -e rtspsrc location="rtsp://192.168.111.146/axis-media/media.amp" port-range=28000-38000 buffer-mode=0 latency=80 ! rtph264depay ! h264parse ! mp4mux ! filesink location=filename.mp4
After recording the file filename.mp4 I tried to stream it using RTP:
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc ! h264parse ! rtph264pay ! udpsink port=50000 host=127.0.0.1
And the playback of the stream could be started using the following command on the same machine:
gst-launch-1.0 udpsrc address=127.0.0.1 port=50000 auto-multicast=true ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! avdec_h264 ! autovideosink
Everything works as expected!
But since I don't want to transcode the file, I just wanted to skip the decoding and encoding part. Therefore, I created the following pipelines:
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! rtph264pay ! udpsink port=50000 host=127.0.0.1
and
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! rtph264pay ! udpsink port=50000 host=127.0.0.1
However, if I retry the playback pipeline (the one with udpsrc) against either of these pipelines, the stream is not displayed.
Interestingly, nload shows network traffic on lo.
What is wrong with the streaming pipelines?
Did I miss some magic plugin in between?
Meanwhile I found an answer to my question.
Changing the stream-server-pipeline from
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! rtph264pay ! udpsink port=50000 host=127.0.0.1
to
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse config-interval=-1 ! rtph264pay ! udpsink port=50000 host=127.0.0.1
solves the issue.
Thus, the difference is setting the parameter config-interval=-1 for h264parse.
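The likely reason: in an MP4 file the SPS/PPS parameter sets are stored in the container header rather than in the byte stream, so after qtdemux a receiver of the raw RTP stream never sees them; config-interval=-1 makes h264parse re-insert them before every IDR frame. A sketch of a periodic variant (re-sending the parameter sets roughly once per second) should also work:
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse config-interval=1 ! rtph264pay ! udpsink port=50000 host=127.0.0.1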
Using the following two commands, I can stream a videotestsrc source over SRT.
gst-launch-1.0 -v videotestsrc ! queue ! x264enc ! queue ! mpegtsmux alignment=7 ! identity silent=false ! queue leaky=downstream ! srtsink uri="srt://:8888" sync=false async=false
gst-launch-1.0 -v srtsrc uri="srt://127.0.0.1:8888" ! identity silent=false ! fakesink async=false
And play it in this way:
gst-play-1.0 srt://127.0.0.1:8888
Now I want to stream an RTSP source, which I receive in the following way:
gst-launch-1.0 rtspsrc location=rtsp://localhost:8554/main latency=100 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! srtsink uri="srt://:8888" sync=false async=false
gst-launch-1.0 -v srtsrc uri="srt://127.0.0.1:8888" ! identity silent=false ! fakesink async=false
However, when I want to play it back, I get this error:
gst-play-1.0 srt://127.0.0.1:8888
Press 'k' to see a list of keyboard shortcuts.
Now playing srt://127.0.0.1:8888
Pipeline is live.
ERROR Could not determine type of stream. for srt://127.0.0.1:8888
ERROR debug information: ../subprojects/gstreamer/plugins/elements/gsttypefindelement.c(999): gst_type_find_element_chain_do_typefinding (): /GstPlayBin:playbin/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind
Reached end of play list.
How can I solve it?
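A likely cause, comparing with the working videotestsrc pipeline above: the RTSP pipeline pushes raw video/x-raw frames into srtsink with no encoder or container, so gst-play-1.0 cannot typefind the stream. A sketch that keeps the decode/scale stage but re-encodes and muxes the way the working test pipeline does (the x264enc settings here are assumptions):
gst-launch-1.0 rtspsrc location=rtsp://localhost:8554/main latency=100 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! x264enc tune=zerolatency ! queue ! mpegtsmux alignment=7 ! queue leaky=downstream ! srtsink uri="srt://:8888" sync=false async=false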
I have an issue sending H264 streams to Linphone.
When I play the following pipeline:
gst-launch-1.0 -v filesrc location=C:/test.mp4 ! qtdemux ! avdec_h264 ! x264enc bitrate=192 ! rtph264pay ! udpsink host=127.0.0.1 port=9078
Everything is OK and the video plays on the Linphone screen.
But what I want to do is first save the video stream into a file, then send this file to Linphone. What I did is the following:
Saving into a file
gst-launch-1.0 -v filesrc location=C:/test.mp4 ! qtdemux ! avdec_h264 ! x264enc bitrate=192 ! filesink location=C:/videosample
Send to Linphone:
gst-launch-1.0 -v filesrc location=C:/videosample ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=9078
The packets are received, but Linphone plays a black screen.
I want to know what I am missing in my pipelines or if there is a specific parameter to set.
Note that it works when I play it with a GStreamer receiver:
gst-launch-1.0 -v udpsrc port=9078 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! avdec_h264 ! autovideosink
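One thing worth checking, in line with the config-interval fix from the earlier answer: the raw H264 file written by x264enc ! filesink has no container, and if the SPS/PPS parameter sets only appear once at the very start, a receiver that misses them typically shows a black screen. A sketch of the sending pipeline with the parameter sets re-inserted in-band (this is an assumption about the cause, not a confirmed fix):
gst-launch-1.0 -v filesrc location=C:/videosample ! h264parse config-interval=-1 ! rtph264pay ! udpsink host=127.0.0.1 port=9078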
I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and therefore can't verify that the stream works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding and depayloading as possible, since that is the purpose of the application.
I therefore have to separate the audio and video streams with a demuxer, do the parsing, decoding, etc., and then mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to Matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to Matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last one?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux and link it. You could also specify a pad name manually, like mux.src. So, syntactically, you are missing another element/pad there to link to the other element (the filesink).
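Applied to the failing single-rtspsrc pipeline above, a sketch of the corrected command (only the mux./filesink linking changes):
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv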
I'm using the following pipeline (SIMPLIFIED) in GStreamer OSS Build 0.10.7 on Win 7 x64:
udpsrc ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 !
gstrtpjitterbuffer latency=200 ! rtph264depay ! tee name=h264Tee
h264Tee. ! queue ! h264parse ! mux.
matroskamux name=mux ! filesink location=rec.mkv sync=false // same for avimux/mp4/qt
h264Tee. ! queue ! ffdec_h264 ! tee name=videoTee
//.videoTee ! queue ! dx9videosink
//.videoTee ! queue ! appsink
//udpsrc ! queue ! directsoundsink
audiotestsrc ! mux. //only for testing, should be connected to udpsrc
The pipeline is launched via Gstreamer-Sharp.
Here's the console output of the pipeline:
WARN default xoverlay.c:354:gst_x_overlay_set_xwindow_id:<videoSink> Using deprecated gst_x_overlay_set_xwindow_id()
ERROR d3dvideosink d3dvideosink.c:2204:gst_d3dvideosink_release_swap_chain: Direct3D device has not been initialized
WARN bin gstbin.c:2378:gst_bin_do_latency_func:<pipeline0> failed to query latency
WARN matroskamux matroska-mux.c:970:gst_matroska_mux_video_pad_setcaps:<mux> pad video_0 refused caps 05370C40
Both video and audio play just fine as long as I leave out the muxer. When I include the muxer in the pipeline, the video freezes immediately and no sound can be heard. What's wrong? Why does the muxer refuse the caps?
OK, solved it myself:
The video caps above don't contain sprop-parameter-sets, which aren't needed for playback. For muxing, however, they are needed, since various properties of the stream are encoded within them:
udpsrc !
application/x-rtp, media=(string)video, clock-rate=(int)90000,
encoding-name=(string)H264,
sprop-parameter-sets=(string)\"Z0LADdkBQfsBEAAAAwAQAAADAyjxQqSA\\,aMuMTIA\\=\",
payload=(int)96,
ssrc=(uint)2332354585,
clock-base=(uint)1158355497,
seqnum-base=(uint)10049 !
gstrtpjitterbuffer latency=200 ! rtph264depay ! tee name=h264Tee
...