What is the use of "mux." in a pipeline command - GStreamer

This pipeline encodes a test audio and video stream and muxes both into an FLV file.
gst-launch-1.0 -v flvmux name=mux ! filesink location=test.flv audiotestsrc samplesperbuffer=44100 num-buffers=10 ! faac ! mux. videotestsrc num-buffers=250 ! video/x-raw,framerate=25/1 ! x264enc ! mux.
I am not able to figure out what mux. actually does, or what the dot in the pipeline command does during muxing. Can someone please explain, or point me to a doc for reference?

The dot . at the end of a string refers to a named element. In your case, mux. means "the element with the name mux". In your pipeline you have flvmux name=mux - here the flvmux element instance gets the name mux assigned to it (via name=mux). The mux. later refers to that specific instance of flvmux (and not to a new muxer instance, for example).
https://gstreamer.freedesktop.org/documentation/tools/gst-launch.html?gi-language=c#pipeline-description
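For illustration, the same linking can be written with the muxer's request pads named explicitly (flvmux exposes request sink pads named "audio" and "video"); this is equivalent to letting mux. pick a compatible pad automatically:
gst-launch-1.0 -v flvmux name=mux ! filesink location=test.flv audiotestsrc samplesperbuffer=44100 num-buffers=10 ! faac ! mux.audio videotestsrc num-buffers=250 ! video/x-raw,framerate=25/1 ! x264enc ! mux.video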

Related

Separate RTSP payloads from gst-rtsp-server

I have an RTSP video source (h265) which I can display using VLC. I would like to split the stream into two, one at native resolution (encoded with h265) and the other at a new, lower resolution (encoded with h264). Both of the new streams should also be RTSP streams, viewable with VLC.
Due to bandwidth considerations, I can only connect a single client to the primary source.
So far, I have a working gst-rtsp-server setup, with a single media factory running this gst-launch string:
rtspsrc location=... ! rtph265depay ! h265parse ! tee name=t !
  queue ! rtph265pay name=pay1 pt=96
t. ! queue ! decodebin ! videoscale ! videorate !
  video/x-raw,framerate=30/1,width=640,height=480 !
  x264enc bitrate=500 speed-preset=superfast tune=zerolatency !
  h264parse ! rtph264pay name=pay0 pt=96
I set up a mount point for the media factory and can connect with VLC, e.g. "rtsp://127.0.0.1:8550/test". With this, I can only get whichever substream is pay0 in VLC. I can see that both substreams work by changing which one is pay0. But how can I have VLC show my pay1?
Otherwise, how can I tee the original video source and then have two different media factories (with different gst-launch strings...) use the tee's branches as their own sources?
Both streams are being sent to you at the same time.
Usually, pay0 & pay1 would be used for sending video & audio. For your case, where you want 2 separate video streams, you will need to modify code.
A simple example of what you want to achieve can be done by modifying the file at gst-rtsp-server/examples/test-launch.c
factory = gst_rtsp_media_factory_new ();
gst_rtsp_media_factory_set_launch (factory, argv[1]);
gst_rtsp_media_factory_set_shared (factory, TRUE);
gst_rtsp_mount_points_add_factory (mounts, "/stream1", factory);

/* a second factory is needed; reusing the first would overwrite its launch string */
factory2 = gst_rtsp_media_factory_new ();
gst_rtsp_media_factory_set_launch (factory2, argv[2]);
gst_rtsp_media_factory_set_shared (factory2, TRUE);
gst_rtsp_mount_points_add_factory (mounts, "/stream2", factory2);
Then start with ./test-launch "rtspsrc location=... ! rtph265depay ! h265parse ! rtph265pay name=pay1 pt=96" "rtspsrc location=... ! rtph265depay ! h265parse ! decodebin ! videoscale ! videorate ! video/x-raw,framerate=30/1,width=640,height=480 ! x264enc bitrate=500 speed-preset=superfast tune=zerolatency ! h264parse ! rtph264pay name=pay0 pt=96"
You would then have 2 consumers on your camera though.
If you prefer to only consume once, it would be up to you to tee the stream & make it available as the source for your gst_rtsp_media_factory_set_launch pipeline, as sketched below.
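One possible sketch of that approach, assuming the shm elements from gst-plugins-bad are available; the socket path and the caps on the shmsrc side are placeholders you would adapt to your camera (shmsink/shmsrc do not carry caps across the socket):
# consume the camera once and republish the parsed H265 over shared memory
gst-launch-1.0 rtspsrc location=... ! rtph265depay ! h265parse ! shmsink socket-path=/tmp/h265.sock wait-for-connection=false
# each media factory then reads from the socket, e.g.:
./test-launch "shmsrc socket-path=/tmp/h265.sock is-live=true do-timestamp=true ! video/x-h265,stream-format=byte-stream,alignment=au ! h265parse ! rtph265pay name=pay0 pt=96"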

Save H264 encoded stream without re-encoding

I have a gstreamer pipeline that streams using:
v4l2src ! x264enc ! rtph264pay pt=96 ! udpsink host=ip port=8554
And this pipeline that receives the stream:
                                                       / queue ! avdec_h264 ! appsink
udpsrc ! capsfilter ! rtpjitterbuffer ! rtph264depay ! tee
                                                       \ queue ! h264parse ! mp4mux ! filesink
The simplified receiver pipeline without the tee is:
gst-launch-1.0 udpsrc port=8080 caps="lots-of-caps" ! rtpjitterbuffer ! rtph264depay ! h264parse ! mp4mux ! filesink location=/home/rish/Desktop/recorded.264 -e
Question:
Is there a way to save the H264 encoded stream received from udpsrc without having to re-encode it? How do I correctly close the filesink?
What I've tried so far: the discussion from this thread suggests the pipeline I've tried above, but the file is still corrupt (not correctly closed).
This question asks something similar. However, I do not want to decode and re-encode. Another answer in the thread suggests using the matroskamux element instead of mp4mux. This works, but I'd prefer to use mp4mux (no particular reason, but I'd like to know why matroskamux works and mp4mux doesn't).
Your pipeline is already muxing without re-encoding; there is no encoder in your pipeline. h264parse is just a parser.
As for why matroskamux works where mp4mux doesn't: mp4mux can only write its index (the moov atom) when it receives EOS, so a file from a pipeline killed without EOS is left unplayable, whereas a truncated Matroska file remains readable.
You've already got an answer on how to close the stream here: Sending EoS to filesink while removing branch from tee
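For a plain gst-launch test, the -e flag does this for you: on Ctrl-C it sends EOS through the pipeline so mp4mux can write its moov atom before the process exits. A minimal sketch (the caps string is a placeholder; use the full caps your sender produces):
gst-launch-1.0 -e udpsrc port=8080 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtpjitterbuffer ! rtph264depay ! h264parse ! mp4mux ! filesink location=recorded.mp4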

How to demux audio and video from rtspsrc and then save to file using matroska mux?

I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera, and thereby can't verify that the stream works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding, and depayloading as possible, since that is the purpose of the application.
I thereby have to separate the audio and video streams with a demuxer, do the parsing, decoding, etc., and thereafter mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass@192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux and link it. You could also specify a pad name manually, like mux.src. So, syntactically, you are missing another element/pad there to link to the next element.
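Applied to your failing command, the fix looks like this (a sketch; only the mux./filesink linking changes):
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass@192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv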

gstreamer output-selector does not allow saving to file

I want to listen to an audio stream from alsasrc continuously, and at the same time be able to save snippets to file. I will push a button to 'record' or 'stop'.
I think I need the following gst-launch-0.10 command to work:
gst-launch-0.10 alsasrc do-timestamp=true ! tee name=t ! queue ! alsasink t. ! queue ! audioconvert ! wavenc ! output-selector name=s s. ! filesink location=test1.wav s. ! fakesink
I will program the output-selector to switch between the filesink and the fakesink when I push the record/stop button. I know the gst-plugins-bad/tests/icles/output-selector-test.c example, and want to hack that a bit.
Now the problem arises in the output-selector: it creates the file test1.wav but does not write to it. To focus on this problem I created:
gst-launch-0.10 audiotestsrc is-live=true do-timestamp=true ! wavenc ! output-selector name=s s. ! filesink location=test1.wav s. ! filesink location=test2.wav
and this also does not work (while gst-launch-0.10 audiotestsrc is-live=true do-timestamp=true ! wavenc ! filesink location=test1.wav works as expected). The 2 files are created but not written to. Can anybody point me in the right direction?
In the posting "[gst-devel] how to link multiple filesinks to an output-selector before playing pipeline" I read that the 2nd sink is blocking on preroll. That is why the example in output-selector-test.c uses a live source as a trick; I also do that with audiotestsrc, but it does not do the trick for me.
Prerolling is not your friend here, so set async=0 on both filesink and fakesink:
gst-launch-0.10 alsasrc do-timestamp=true ! tee name=t ! queue ! alsasink t. ! queue ! audioconvert ! wavenc ! output-selector name=s s. ! filesink location=test1.wav async=0 s. ! fakesink async=0
What about putting the wavenc element into the s. ! filesink location=test1.wav branch? wavenc does some work when the stream starts and stops, and output-selector can only ensure that this work happens if the element sits behind it.
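That rearrangement would look something like this (a sketch of the suggestion above; wavenc now sits behind the selector, in the recording branch):
gst-launch-0.10 alsasrc do-timestamp=true ! tee name=t ! queue ! alsasink t. ! queue ! audioconvert ! output-selector name=s s. ! wavenc ! filesink location=test1.wav async=0 s. ! fakesink async=0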

Recording audio+video from webcam with gstreamer

I'm having a problem trying to record audio+video from my webcam to a file. If I use videotestsrc and autoaudiosrc I get everything right (read: I get a file with audio recorded from the webcam's mic and a test-video image), but as soon as I replace videotestsrc with v4l2src (or autovideosrc) I get Error starting streaming on device '/dev/video0'.
The command I'm using:
gst-launch-0.10 videotestsrc ! queue ! ffmpegcolorspace ! theoraenc ! queue ! oggmux name=mux autoaudiosrc ! queue ! audioconvert ! vorbisenc ! queue ! mux. mux. ! queue ! filesink location=test.ogg
Why is that happening? What am I doing wrong?
EDIT:
In fact, something as simple as
gst-launch-0.10 autovideosrc ! autovideosink autoaudiosrc ! autoaudiosink
is failing with the same error (Error starting streaming on device '/dev/video0')
Replacing autovideosrc with videotestsrc gives me test image + real audio.
Replacing autoaudiosrc with audiotestsrc gives me real image + test audio.
I'm starting to think that this is some kind of limitation of my webcam. Is that possible?
EDIT:
GST_DEBUG=2 log here: http://pastie.org/4755009
EDIT 2:
GST_DEBUG="v4l2*:5" (gstreamer 0.10): http://pastie.org/4810519
GST_DEBUG="v4l2*:5" (gstreamer 1.0): http://pastie.org/4810502
Please do a
gst-launch-1.0 v4l2src ! videoscale ! videoconvert ! autovideosink
Does that run? If not, repeat as
GST_DEBUG="v4l2*:5" GST_DEBUG_NO_COLOR=1 gst-launch-1.0 2>debug.log ...
and check the log for errors. You also might want to run v4l-info (from the v4l-conf package under Debian/Ubuntu) and report what formats your camera supports.
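Alternatively, if you have the v4l-utils package installed, v4l2-ctl can list the supported formats as well:
v4l2-ctl --device=/dev/video0 --list-formats-ext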