I'm using the following pipeline (simplified) with GStreamer OSSBuild 0.10.7 on Windows 7 x64:
udpsrc ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 !
gstrtpjitterbuffer latency=200 ! rtph264depay ! tee name=h264Tee
h264Tee. ! queue ! h264parse ! mux.
matroskamux name=mux ! filesink location=rec.mkv sync=false // same for avimux/mp4/qt
h264Tee. ! queue ! ffdec_h264 ! tee name=videoTee
//.videoTee ! queue ! dx9videosink
//.videoTee ! queue ! appsink
//udpsrc ! queue ! directsoundsink
audiotestsrc ! mux. //only for testing, should be connected to udpsrc
The pipeline is launched via GStreamer-Sharp.
Here's the console output of the pipeline:
WARN default xoverlay.c:354:gst_x_overlay_set_xwindow_id:<videoSink> Using deprecated gst_x_overlay_set_xwindow_id()
ERROR d3dvideosink d3dvideosink.c:2204:gst_d3dvideosink_release_swap_chain: Direct3D device has not been initialized
WARN bin gstbin.c:2378:gst_bin_do_latency_func:<pipeline0> failed to query latency
WARN matroskamux matroska-mux.c:970:gst_matroska_mux_video_pad_setcaps:<mux> pad video_0 refused caps 05370C40
Both video and audio play just fine as long as I leave out the muxer. When I include the muxer in the pipeline, the video freezes immediately and no sound can be heard. What's wrong? Why does the muxer refuse the caps?
OK, solved it myself:
The video caps above don't contain sprop-parameter-sets, which aren't needed for playback. For muxing, however, they are needed, since various properties of the stream are encoded within them:
udpsrc !
application/x-rtp, media=(string)video, clock-rate=(int)90000,
encoding-name=(string)H264,
sprop-parameter-sets= (string)\"Z0LADdkBQfsBEAAAAwAQAAADAyjxQqSA\\,aMuMTIA\\=\",
payload=(int)96,
ssrc=(uint)2332354585,
clock-base=(uint)1158355497,
seqnum-base=(uint)10049 !
gstrtpjitterbuffer latency=200 ! rtph264depay ! tee name=h264Tee
...
I wanted to create an RTP stream of an mp4 file with GStreamer.
I am using GStreamer 1.18.4 on Debian Bullseye.
To create an mp4 file, I recorded an RTSP stream from my webcam using the following command:
gst-launch-1.0 -e rtspsrc location="rtsp://192.168.111.146/axis-media/media.amp" port-range=28000-38000 buffer-mode=0 latency=80 ! rtph264depay ! h264parse ! mp4mux ! filesink location=filename.mp4
After recording the file filename.mp4 I tried to stream it using RTP:
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc ! h264parse ! rtph264pay ! udpsink port=50000 host=127.0.0.1
And the playback of the stream could be started using the following command on the same machine:
gst-launch-1.0 udpsrc address=127.0.0.1 port=50000 auto-multicast=true ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! avdec_h264 ! autovideosink
Everything works as expected!
But since I don't want to transcode the file, I just wanted to skip the decoding and encoding part. Therefore, I created the following pipelines:
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! rtph264pay ! udpsink port=50000 host=127.0.0.1
and
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! rtph264pay ! udpsink port=50000 host=127.0.0.1
However, if I retry the playback pipeline (the pipeline with udpsrc) against either of these streaming pipelines, the stream is not displayed.
Interestingly, nload shows network traffic on lo.
What is wrong with the streaming pipelines?
Did I miss some magic-plugin in between?
Meanwhile I found an answer to my question.
Changing the stream-server-pipeline from
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! rtph264pay ! udpsink port=50000 host=127.0.0.1
to
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse config-interval=-1 ! rtph264pay ! udpsink port=50000 host=127.0.0.1
solves the issue.
Thus, the difference is setting the parameter config-interval=-1 for h264parse.
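With config-interval=-1, h264parse re-inserts the SPS/PPS parameter sets into the byte-stream before every IDR frame, so a receiver that joins mid-stream gets the configuration it needs to start decoding. As an untested alternative sketch, rtph264pay exposes a config-interval property as well, so configuring the payloader instead of the parser should behave similarly:
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! rtph264pay config-interval=-1 ! udpsink port=50000 host=127.0.0.1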
I'm trying to write an RTSP stream into shared memory and then write it to an .mkv file.
I use this command to write the stream to an .mkv file directly:
gst-launch-1.0 rtspsrc location=rtsp://admin:admin@192.168.88.248:554/h264 ! rtph264depay ! h264parse ! matroskamux ! filesink location=file.mkv
It works.
Now I add shared memory:
gst-launch-1.0 rtspsrc location=rtsp://admin:admin@192.168.88.248:554/h264 ! shmsink socket-path=/tmp/foo shm-size=2000000
And
gst-launch-1.0 shmsrc socket-path=/tmp/foo ! rtph264depay ! h264parse ! matroskamux ! filesink location=file.mkv
And I get the message:
Input buffers need to have RTP caps set on them.
OK, so I write:
gst-launch-1.0 rtspsrc location=rtsp://admin:admin@192.168.88.248:554/h264 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" ! shmsink socket-path=/tmp/foo shm-size=2000000
And I get this message again.
What am I doing wrong?
You need to set the caps after shmsrc. For example, the following is my receiving pipeline:
gst-launch-1.0 -v rtspsrc
location=rtsp://192.168.1.150:8554/VBoxVideo ! shmsink
socket-path=/tmp/foo shm-size=2000000 wait-for-connection=false
You have to note down the caps from the shmsink above; the following are my caps for shmsink:
/GstPipeline:pipeline0/GstShmSink:shmsink0.GstPad:sink: caps =
"application/x-rtp\,\ media\=(string)video\,\ payload\=(int)96\,\
clock-rate\=(int)90000\,\ encoding-name\=(string)H264\,\
packetization-mode\=(string)1\,\
profile-level-id\=(string)64002a\,\
sprop-parameter-sets\=(string)\"J2QAKqwbKgHgCJ+WEAAAPoAADqYOAAEZABGQve6wgA\\=\\=\\,KP4Briw\\=\"\,\
a-tool\=(string)GStreamer\,\ a-type\=(string)broadcast\,\
a-framerate\=(string)30\,\ a-ts-refclk\=(string)local\,\
a-mediaclk\=(string)sender\,\ ssrc\=(uint)4083957277\,\
clock-base\=(uint)1018840792\,\ seqnum-base\=(uint)13685\,\
npt-start\=(guint64)0\,\ play-speed\=(double)1\,\
play-scale\=(double)1"
Now, to use shmsrc,
gst-launch-1.0 -vm shmsrc socket-path=/tmp/foo do-timestamp=true is-live=true
num-buffers=1000 !
"application/x-rtp,media=(string)video,payload=(int)96,packetization-mode=(string)1" ! rtph264depay ! h264parse ! mp4mux ! filesink location=file.mp4
Note: I have set the caps from the above; also note that I have set num-buffers=1000, as I am using mp4mux and I need to send an EOS for the file to be playable.
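If you would rather stop the capture manually than rely on num-buffers, gst-launch-1.0's -e (--eos-on-shutdown) flag forces an EOS on the sources when you interrupt the pipeline, which should also let mp4mux finalize the file; an untested variant:
gst-launch-1.0 -e -v shmsrc socket-path=/tmp/foo do-timestamp=true is-live=true !
"application/x-rtp,media=(string)video,payload=(int)96,packetization-mode=(string)1" ! rtph264depay ! h264parse ! mp4mux ! filesink location=file.mp4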
So in your case:
gst-launch-1.0 -v rtspsrc location=rtsp://admin:admin@192.168.88.248:554/h264 ! shmsink socket-path=/tmp/foo shm-size=2000000
Note down the caps from the pipeline for shmsink0, and later use it in your pipeline:
gst-launch-1.0 shmsrc socket-path=/tmp/foo is-live=true num-buffers=1000 ! caps ! rtph264depay ! h264parse ! mp4mux ! filesink location=file.mp4
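Purely for illustration, if the caps reported for your shmsink0 happened to match the ones shown above (they will differ for your camera, so copy your own output), the receiving pipeline might look roughly like this:
gst-launch-1.0 shmsrc socket-path=/tmp/foo do-timestamp=true is-live=true num-buffers=1000 !
"application/x-rtp,media=(string)video,clock-rate=(int)90000,payload=(int)96,encoding-name=(string)H264,packetization-mode=(string)1" !
rtph264depay ! h264parse ! mp4mux ! filesink location=file.mp4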
I'm trying to record audio and video from the internal webcam and mic to segmented files with GStreamer.
It works for a single file by doing:
gst-launch-1.0 -e avfvideosrc !
video/x-raw ! vtenc_h264 ! h264parse ! queue !
mpegtsmux name=mux ! filesink location=test.mp4 osxaudiosrc !
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc !
video/x-raw ! vtenc_h264 ! h264parse ! queue !
splitmuxsink
muxer=mpegtsmux
location=test%04d.mp4
max-size-time=1000000000
name=mux osxaudiosrc !
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
saying erroneous pipeline: could not link queue1 to mux
I'm using GStreamer 1.12.3 on macOS Sierra.
Note: The H264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine with the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw !
videoconvert ! queue !
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux
location=video%04d.avi autoaudiosrc !
decodebin ! audioconvert ! queue ! mux.
Just like on Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing out on some 'pipeline' logic, so any clarification that can push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw !
videoconvert ! queue !
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux
location=video%04d.avi autoaudiosrc !
decodebin ! audioconvert ! queue ! mux.audio_0
This happens when the documentation should be clear, but you're missing some basic knowledge of how to interpret it.
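For completeness, the same fix should presumably carry over to the original macOS pipeline as well, i.e. requesting splitmuxsink's audio_0 pad explicitly (untested sketch):
gst-launch-1.0 -e avfvideosrc !
video/x-raw ! vtenc_h264 ! h264parse ! queue !
splitmuxsink max-size-time=1000000000 muxer=mpegtsmux name=mux
location=test%04d.mp4 osxaudiosrc !
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.audio_0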
I am trying to construct an RTSP pipeline on the client side to receive audio and video streams on the Android platform.
A video-only pipeline works fine:
data->pipeline = gst_parse_launch("rtspsrc location=rtsp://192.168.1.100:8554/ss ! gstrtpjitterbuffer ! rtph264depay ! h264parse ! amcviddec-omxtiducati1videodecoder ! ffmpegcolorspace ! autovideosink",&error);
I need to receive the audio stream as well, so I tried the pipeline below:
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
GStreamer throws an error saying no element "demux".
Please let me know the proper RTSP pipeline to receive audio and video streams on Android.
Please try this (tested):
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss name=demux demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
The key difference is that rtspsrc is now given name=demux, so the demux. references later in the pipeline can resolve to it.
I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and thereby can't verify that it works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding and depayloading as possible, since that is the purpose of the application.
I therefore have to separate the audio and video streams with a demuxer, do the parsing, decoding, etc., and thereafter mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last one?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux and link it. You could also specify a pad name manually, like mux.src. So, syntactically, you are missing another element/pad there to link to the other element.
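Applied to the failing single-rtspsrc pipeline from the question, that would give something along these lines (untested):
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv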