I am trying to construct an RTSP pipeline on the client side to receive audio and video streams on the Android platform.
Only the video pipeline works fine:
data->pipeline = gst_parse_launch("rtspsrc location=rtsp://192.168.1.100:8554/ss ! gstrtpjitterbuffer ! rtph264depay ! h264parse ! amcviddec-omxtiducati1videodecoder ! ffmpegcolorspace ! autovideosink",&error);
I need to receive the audio stream as well, so I tried the pipeline below:
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss demux. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
GStreamer throws an error saying no element "demux".
Please let me know the proper RTSP pipeline to receive audio and video streams on Android.
Please try this (tested):
gst-launch rtspsrc location=rtsp://192.168.1.100:8554/ss name=demux ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! audioconvert ! autoaudiosink
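The same launch string can be dropped into the gst_parse_launch call from the question for the Android client. A minimal C sketch, keeping the question's hardware video decoder branch and reusing its data->pipeline field (the element names are taken from the snippets above, not verified here):

GError *error = NULL;

/* One rtspsrc named "demux": the video branch links directly, the audio
 * branch is attached through the demux. reference. */
data->pipeline = gst_parse_launch (
    "rtspsrc location=rtsp://192.168.1.100:8554/ss name=demux ! queue ! "
    "rtph264depay ! h264parse ! amcviddec-omxtiducati1videodecoder ! "
    "ffmpegcolorspace ! autovideosink "
    "demux. ! queue ! rtpmp4gdepay ! aacparse ! ffdec_aac ! "
    "audioconvert ! autoaudiosink",
    &error);

if (error != NULL) {
  g_printerr ("Failed to build pipeline: %s\n", error->message);
  g_clear_error (&error);
}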
Related
I can't save audio from the stream; I only get video in the file. I suspect that I don't need two filesinks in the pipeline, or that there is some problem with the two different muxers.
I tried using autoaudiosink and autovideosink and they work successfully.
autoaudiosink and autovideosink pipeline:
gst-launch-1.0 rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov latency=0 drop-on-latency=1 name=rtp_source ! queue ! rtph264depay ! decodebin ! videoconvert ! autovideosink rtp_source. ! queue ! decodebin ! autoaudiosink
Save to file (filesink) pipeline:
gst-launch-1.0 rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov latency=0 drop-on-latency=1 name=rtp_source ! queue ! rtph264depay ! decodebin ! vp8enc ! webmmux ! filesink location=BigBuckBunny_115k.webm rtp_source. ! "application/x-rtp, media=(string)audio" ! queue ! decodebin ! vorbisenc ! oggmux ! filesink location=BigBuckBunny_115k.webm
I want to get audio in the resulting file as well.
You just reuse the existing mux, so that the Vorbis stream is put into the webmmux too:
gst-launch-1.0 rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov \
latency=0 drop-on-latency=1 name=rtp_source ! queue ! rtph264depay ! decodebin ! \
vp8enc ! webmmux name=mux ! filesink location=BigBuckBunny_115k.webm rtp_source. ! \
"application/x-rtp, media=(string)audio" ! queue ! decodebin ! vorbisenc ! mux.
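If the pipeline is built in code rather than on the command line, ending the audio branch with "! mux." corresponds to requesting an audio pad on the named muxer and linking the encoder into it. A minimal C sketch, assuming the launch string above was passed to gst_parse_launch and the encoder was also given a name (vorbisenc name=aenc is an addition for illustration):

#include <gst/gst.h>

/* Link a named Vorbis encoder into a named webmmux after the pipeline has
 * been created with gst_parse_launch(). */
static gboolean
link_vorbis_into_mux (GstElement *pipeline)
{
  GstElement *mux = gst_bin_get_by_name (GST_BIN (pipeline), "mux");
  GstElement *aenc = gst_bin_get_by_name (GST_BIN (pipeline), "aenc");

  /* webmmux (a matroskamux subclass) exposes audio_%u request pads. */
  GstPad *mux_pad = gst_element_get_request_pad (mux, "audio_%u");
  GstPad *enc_pad = gst_element_get_static_pad (aenc, "src");
  gboolean ok = gst_pad_link (enc_pad, mux_pad) == GST_PAD_LINK_OK;

  gst_object_unref (enc_pad);
  gst_object_unref (mux_pad);
  gst_object_unref (aenc);
  gst_object_unref (mux);
  return ok;
}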
I'm trying to record audio and video from the internal webcam and mic to segmented files with GStreamer.
Recording to a single file works:
gst-launch-1.0 -e avfvideosrc !
video/x-raw ! vtenc_h264 ! h264parse ! queue !
mpegtsmux name=mux ! filesink location=test.mp4 osxaudiosrc !
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It doesn't work when doing:
gst-launch-1.0 -e avfvideosrc !
video/x-raw ! vtenc_h264 ! h264parse ! queue !
splitmuxsink
muxer=mpegtsmux
location=test%04d.mp4
max-size-time=1000000000
name=mux osxaudiosrc !
decodebin ! audioconvert ! faac ! aacparse ! queue ! mux.
It fails with: erroneous pipeline: could not link queue1 to mux
I'm using GStreamer 1.12.3 on macOS Sierra.
Note: The H264/AAC encoding isn't necessary for what I want to achieve, so if there are solutions that only work with e.g. avimux, for whatever reason, that's fine.
EDIT: I've tried this on a Windows machine and got the same error.
gst-launch-1.0 -ev ksvideosrc ! video/x-raw !
videoconvert ! queue !
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux
location=video%04d.avi autoaudiosrc !
decodebin ! audioconvert ! queue ! mux.
Just like on Mac, replacing splitmuxsink with avimux ! filesink works. I'm sure I'm just missing out on some 'pipeline' logic, so any clarification that can push me in the right direction would be helpful.
I needed to send the audio stream to the audio track of the muxer like so: mux.audio_0
gst-launch-1.0 -ev ksvideosrc ! video/x-raw !
videoconvert ! queue !
splitmuxsink max-size-time=1000000000 muxer=avimux name=mux
location=video%04d.avi autoaudiosrc !
decodebin ! audioconvert ! queue ! mux.audio_0
This is one of those cases where the documentation is clear, but you're missing some basic knowledge of how to interpret it.
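For reference, the same link can be made explicitly in code by requesting the audio pad from splitmuxsink. A minimal C sketch, assuming the audio branch already ends in a queue element (the function and variable names are illustrative):

#include <gst/gst.h>

/* splitmuxsink exposes a "video" request pad and "audio_%u" request pads;
 * the audio branch has to be linked into an audio_%u pad explicitly. */
static gboolean
link_audio_to_splitmux (GstElement *audio_queue, GstElement *splitmux)
{
  GstPad *mux_pad = gst_element_get_request_pad (splitmux, "audio_%u");
  GstPad *src_pad = gst_element_get_static_pad (audio_queue, "src");
  gboolean ok = gst_pad_link (src_pad, mux_pad) == GST_PAD_LINK_OK;

  gst_object_unref (src_pad);
  gst_object_unref (mux_pad);
  return ok;
}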
I use GStreamer to decode H264. When I use a pipeline like this:
gst-launch-1.0 udpsrc uri=udp://0.0.0.0:15550 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,payload=(int)33,encoding-name=(string)MP2T" ! .recv_rtp_sink_0 rtpbin latency=800 ! rtpmp2tdepay ! tsdemux name=demux demux. ! h264parse ! queue ! omxh264dec ! vspfilter ! video/x-raw,width=800,height=480 ! waylandsink sync=false max-lateness=-1 demux. ! aacparse ! queue max-size-buffers=8192000 max-size-time=2000000000 ! faad ! alsasink device=media
there is only about 200 ms of delay.
And when I set sync=true, like this:
gst-launch-1.0 udpsrc uri=udp://0.0.0.0:15550 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,payload=(int)33,encoding-name=(string)MP2T" ! .recv_rtp_sink_0 rtpbin latency=800 ! rtpmp2tdepay ! tsdemux name=demux demux. ! h264parse ! queue ! omxh264dec ! vspfilter ! video/x-raw,width=800,height=480 ! waylandsink sync=true max-lateness=-1 demux. ! aacparse ! queue max-size-buffers=8192000 max-size-time=2000000000 ! faad ! alsasink device=media
the delay reaches 1200 ms.
I have no idea why.
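For what it's worth, the only difference between the two commands is the sink's sync property, which can also be flipped from code without rebuilding the launch string. A minimal C sketch, assuming the pipeline was created with gst_parse_launch and the sink was given a name (waylandsink name=vsink is an addition, not in the commands above):

/* With sync=TRUE the sink waits for each buffer's running time (which
 * includes the configured pipeline latency) before rendering; with
 * sync=FALSE buffers are rendered as soon as they arrive. */
GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "vsink");
g_object_set (sink, "sync", TRUE, "max-lateness", (gint64) -1, NULL);
gst_object_unref (sink);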
I'm working with GStreamer. Is there any way to store the output of the mpegtsdemux element in a pipeline to a file? I'm interested in separating the audio and video TS packets into different files.
You can separate the video and audio tracks after mpegtsdemux. I hope this example will help you:
gst-launch filesrc location="source.ts" ! mpegtsdemux name=demux ! queue max-size-buffers=400000000 ! decodebin ! videorate ! videoscale ! ffenc_mpeg4 ! matroskamux ! filesink location="your_video_file.mkv" demux. ! queue max-size-buffers=400000000 ! decodebin ! audioconvert ! wavenc ! filesink location="your_audio_file.wav"
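When the same branching is built in application code, the demuxer's pads only appear after the stream has been parsed, so the two branches are hooked up from a pad-added callback. A minimal C sketch of that pattern using the GStreamer 1.x API (the callback and the queues array are illustrative, not part of the pipeline above):

#include <gst/gst.h>

/* Route the demuxer's dynamically created pads by inspecting their caps:
 * audio/* pads go to the audio branch, everything else to the video branch. */
static void
on_pad_added (GstElement *demux, GstPad *pad, gpointer user_data)
{
  GstElement **queues = user_data;  /* queues[0] = video queue, queues[1] = audio queue */
  GstCaps *caps;
  const gchar *media;
  GstElement *target;
  GstPad *sinkpad;

  caps = gst_pad_get_current_caps (pad);
  if (caps == NULL)
    caps = gst_pad_query_caps (pad, NULL);

  media = gst_structure_get_name (gst_caps_get_structure (caps, 0));
  target = g_str_has_prefix (media, "audio/") ? queues[1] : queues[0];
  sinkpad = gst_element_get_static_pad (target, "sink");

  if (!gst_pad_is_linked (sinkpad))
    gst_pad_link (pad, sinkpad);

  gst_object_unref (sinkpad);
  gst_caps_unref (caps);
}

/* connect with: g_signal_connect (demux, "pad-added", G_CALLBACK (on_pad_added), queues); */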
I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and therefore can't verify that the stream works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding, and depayloading as possible, since that is the purpose of the application.
I therefore have to separate the audio and video streams with a demuxer, do the parsing, decoding, etc., and then mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically select a pad from mux and link it. You could also specify a pad name manually, like mux.src. So syntactically you are missing another element/pad there to link to the other element.
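Put differently, queue ! mux. mux. ! filesink is just two independent links around the muxer: one into it and one out of it. A minimal C sketch of the same structure, assuming the elements already exist in a pipeline (the variable names are illustrative):

#include <gst/gst.h>

/* "queue ! mux." links the branch into the muxer (matroskamux hands out a
 * compatible request pad automatically); "mux. ! filesink" links the
 * muxer's output to the file sink. */
static gboolean
link_around_mux (GstElement *video_queue, GstElement *mux, GstElement *filesink)
{
  if (!gst_element_link (video_queue, mux))
    return FALSE;
  return gst_element_link (mux, filesink);
}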