Using the following two commands, I can stream a videotestsrc source over SRT.
gst-launch-1.0 -v videotestsrc ! queue ! x264enc ! queue ! mpegtsmux alignment=7 ! identity silent=false ! queue leaky=downstream ! srtsink uri="srt://:8888" sync=false async=false
gst-launch-1.0 -v srtsrc uri="srt://127.0.0.1:8888" ! identity silent=false ! fakesink async=false
And play it in this way:
gst-play-1.0 srt://127.0.0.1:8888
Now I want to stream an RTSP source, which I do in the following way:
gst-launch-1.0 rtspsrc location=rtsp://localhost:8554/main latency=100 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! srtsink uri="srt://:8888" sync=false async=false
gst-launch-1.0 -v srtsrc uri="srt://127.0.0.1:8888" ! identity silent=false ! fakesink async=false
However, when I try to play it back I get this error:
gst-play-1.0 srt://127.0.0.1:8888
Press 'k' to see a list of keyboard shortcuts.
Now playing srt://127.0.0.1:8888
Pipeline is live.
ERROR Could not determine type of stream. for srt://127.0.0.1:8888
ERROR debug information: ../subprojects/gstreamer/plugins/elements/gsttypefindelement.c(999): gst_type_find_element_chain_do_typefinding (): /GstPlayBin:playbin/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind
Reached end of play list.
How can I solve it?
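One difference that stands out: in the working videotestsrc pipeline the video is encoded and wrapped in MPEG-TS (x264enc ! mpegtsmux) before srtsink, whereas the RTSP pipeline pushes raw decoded frames into srtsink, so the receiving side has no container it can typefind. A minimal sketch that re-adds the encoder and muxer (untested; the tune=zerolatency setting is just a guess for a live source) would look like:
gst-launch-1.0 rtspsrc location=rtsp://localhost:8554/main latency=100 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! x264enc tune=zerolatency ! queue ! mpegtsmux alignment=7 ! queue leaky=downstream ! srtsink uri="srt://:8888" sync=false async=false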
In the below example pipeline,
gst-launch-1.0 -v videotestsrc num-buffers=100 ! "video/x-raw, width=1920, height=1080, framerate=30/1, format=NV12" ! videoscale ! "video/x-raw, width=1280, height=720, format=NV12" ! tee name=t ! queue ! fakesink t. ! queue ! x264enc rc-lookahead=5 ! fakesink
The pipeline does not complete preroll, and the async-done message never reaches gst-launch-1.0.
It works only with "async=false". But why is this required? Why does it not work without "async=false"?
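For reference, here is a sketch of the pipeline with "async=false" applied; where it goes is my assumption (on both fakesinks), since the question does not show the modified command:
gst-launch-1.0 -v videotestsrc num-buffers=100 ! "video/x-raw, width=1920, height=1080, framerate=30/1, format=NV12" ! videoscale ! "video/x-raw, width=1280, height=720, format=NV12" ! tee name=t ! queue ! fakesink async=false t. ! queue ! x264enc rc-lookahead=5 ! fakesink async=false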
I wanted to create an RTP stream of an MP4 file with GStreamer.
I am using GStreamer 1.18.4 on Debian Bullseye.
To create the MP4 file I recorded an RTSP stream from my webcam using the following command:
gst-launch-1.0 -e rtspsrc location="rtsp://192.168.111.146/axis-media/media.amp" port-range=28000-38000 buffer-mode=0 latency=80 ! rtph264depay ! h264parse ! mp4mux ! filesink location=filename.mp4
After recording the file filename.mp4 I tried to stream it using RTP:
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc ! h264parse ! rtph264pay ! udpsink port=50000 host=127.0.0.1
And the playback of the stream could be started using the following command on the same machine:
gst-launch-1.0 udpsrc address=127.0.0.1 port=50000 auto-multicast=true ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! avdec_h264 ! autovideosink
Everything works as expected!
But since I don't want to transcode the file, I just wanted to skip the decoding and encoding part. Therefore, I created the following pipelines:
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! rtph264pay ! udpsink port=50000 host=127.0.0.1
and
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! rtph264pay ! udpsink port=50000 host=127.0.0.1
However, when I retry the playback pipeline (the one with udpsrc) against either of these sender pipelines, no video is displayed.
Interestingly, nload shows network traffic on lo.
What is wrong with the streaming pipelines?
Did I miss some magic-plugin in between?
Meanwhile I found an answer to my question.
Changing the stream-server-pipeline from
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse ! rtph264pay ! udpsink port=50000 host=127.0.0.1
to
gst-launch-1.0 filesrc location=filename.mp4 ! qtdemux ! h264parse config-interval=-1 ! rtph264pay ! udpsink port=50000 host=127.0.0.1
solves the issue.
Thus, the difference is setting config-interval=-1 on h264parse. With that setting, h264parse re-inserts the SPS/PPS headers before every IDR frame, so a receiver that joins the stream mid-way can initialize its decoder.
I'm trying to record audio and video from the internal webcam and mic to segmented files with GStreamer.
It works when recording to a single file:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
mpegtsmux name=mux ! filesink location=test.mp4 osxaudiosrc ! \
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc ! \
video/x-raw ! vtenc_h264 ! h264parse ! queue ! \
splitmuxsink \
muxer=mpegtsmux \
location=test%04d.mp4 \
max-size-time=1000000000 \
name=mux osxaudiosrc ! \
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
saying: erroneous pipeline: could not link queue1 to mux
I'm using GStreamer 1.12.3 on macOS Sierra.
Note: The H264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine and get the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! \
videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux \
location=video%04d.avi autoaudiosrc ! \
decodebin ! audioconvert ! queue ! mux.
Just like on Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing out on some 'pipeline' logic, so any clarification that can push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw ! \
videoconvert ! queue ! \
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux \
location=video%04d.avi autoaudiosrc ! \
decodebin ! audioconvert ! queue ! mux.audio_0
This is one of those cases where the documentation should be clear enough, but you're missing some basic knowledge of how to interpret it.
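Applying the same idea back to the original macOS pipeline would look roughly like this (untested sketch; it assumes splitmuxsink exposes the same audio_%u request pad there as on Windows):
gst-launch-1.0 -e avfvideosrc ! video/x-raw ! vtenc_h264 ! h264parse ! queue ! splitmuxsink muxer=mpegtsmux location=test%04d.mp4 max-size-time=1000000000 name=mux osxaudiosrc ! decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.audio_0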
I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and therefore can't verify that the stream works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding, and depayloading as possible, since that is the purpose of the application.
I therefore have to separate the audio and video streams with a demuxer, do the parsing, decoding, etc., and thereafter mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux. and link it. You could also manually specify a pad name, like mux.src. So, syntactically, you are missing another element/pad there to link to the other element.
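For completeness, here is a sketch of the failing single-rtspsrc pipeline with that fix applied, laid out the same way as the working two-rtspsrc example (untested):
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv d. ! queue ! rtph264depay ! h264parse ! queue ! mux.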
I would like to feed a video file to my virtual video device using gstreamer and v4l2loopback.
Using videotestsrc, something like this works (i.e. I can open my virtual device from VLC):
gst-launch -v videotestsrc ! queue ! decodebin2 name=dec ! queue ! ffmpegcolorspace ! v4l2sink device=/dev/video0
However, the exact same code does not work with my video file:
gst-launch filesrc location=~/Documents/my_video.ogv ! queue ! decodebin2 name=dec ! queue ! ffmpegcolorspace ! v4l2sink device=/dev/video0
It actually gets stuck in the "PREROLLING" phase:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Can anybody see why? Am I missing some conversion between filesrc and decodebin2?
I don't know exactly why, but I was missing the ! videoscale ! step. And the ! queue ! elements are apparently not necessary.
Here is the working line:
gst-launch filesrc location=~/Documents/my_video.ogv ! decodebin2 ! ffmpegcolorspace ! videoscale ! ffmpegcolorspace ! v4l2sink device=/dev/video0