I am trying to create a pipeline that transcodes a JPEG stream into an H.263-encoded stream sent over RTP. When I execute:
gst-launch -v \
souphttpsrc \
location=http://192.168.1.54:8080 \
do-timestamp=true \
! multipartdemux ! image/jpeg,width=352,height=288 \
! ffmpegcolorspace ! video/x-raw-yuv,framerate=15/1 \
! videoscale \
! ffenc_h263 ! rtph263pay \
! udpsink host=192.168.1.31 port=1234
GStreamer reports:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2: caps = image/jpeg, width=(int)352, height=(int)288
ERROR: from element /GstPipeline:pipeline0/GstSoupHTTPSrc:souphttpsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2507): gst_base_src_loop (): /GstPipeline:pipeline0/GstSoupHTTPSrc:souphttpsrc0:
streaming task paused, reason not-linked (-1)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
/GstPipeline:pipeline0/GstMultipartDemux:multipartdemux0.GstPad:src_0: caps = NULL
Freeing pipeline ...
I've checked that the elements exist: I've run gst-inspect on ffenc_h263, ffmpegcolorspace, and the rest of the elements in this command, and gst-inspect does not report any errors.
Is there something I'm missing?
You need a jpegdec after multipartdemux to decode the JPEG stream into raw video.
You don't need ffmpegcolorspace, because jpegdec already converts to video/x-raw-yuv.
videoscale is useless here, because you do not specify a width/height for the outgoing stream.
Try this:
gst-launch -v \
souphttpsrc \
location=http://192.168.1.54:8080 \
do-timestamp=true \
! multipartdemux \
! image/jpeg,width=352,height=288,framerate=15/1 \
! jpegdec ! ffenc_h263 ! rtph263pay \
! udpsink host=192.168.1.31 port=1234
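For testing on the receiving host, a matching receiver might look like this (a sketch in the same 0.10 syntax; the RTP caps, in particular clock-rate=90000 and encoding-name=H263, are assumptions that should be checked against what rtph263pay actually advertises):
gst-launch -v \
udpsrc port=1234 \
caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263" \
! rtph263depay ! ffdec_h263 ! ffmpegcolorspace ! autovideosink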
I have one camera, and I would like to clone it so I can use it in two different apps.
The following two pipelines work OK on their own, but I'm not able to combine them:
Read from /dev/video0 and clone to /dev/video1 and /dev/video2
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! \
queue ! v4l2sink device=/dev/video2
Read from /dev/video0 and rescale it and output to /dev/video1
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
videoscale ! video/x-raw,width=178,height=100 ! videoconvert ! \
v4l2sink device=/dev/video1
But the following (reading -> rescaling -> cloning) does not work:
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
videoscale ! video/x-raw,width=178,height=100 ! videoconvert ! \
tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! \
queue ! v4l2sink device=/dev/video2
It fails with the following error:
ERROR: from element /GstPipeline:pipeline0/GstVideoScale:videoscale0: Failed to configure the buffer pool
Additional debug info:
gstbasetransform.c(904): gst_base_transform_default_decide_allocation (): /GstPipeline:pipeline0/GstVideoScale:videoscale0:
Configuration is most likely invalid, please report this issue.
Thanks!
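One workaround worth trying (an untested sketch): move the videoconvert out of the shared part of the pipeline and give each tee branch its own copy, so the two sinks' buffer-pool proposals are not funneled back through the tee into videoscale:
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
videoscale ! video/x-raw,width=178,height=100 ! tee name=t \
t. ! queue ! videoconvert ! v4l2sink device=/dev/video1 \
t. ! queue ! videoconvert ! v4l2sink device=/dev/video2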
How to mux h264 video with audio from a webcam with GStreamer?
gst-launch-1.0 -v v4l2src device=/dev/video2 ! video/x-h264,framerate=30/1,width=1920,height=1080 \
! queue ! mux. \
alsasrc device=hw:1 ! queue ! audioconvert ! fdkaacenc \
! mux. matroskamux name=mux ! filesink location=video.mkv
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.001309727
Setting pipeline to NULL ...
Freeing pipeline ...
Preview works
gst-launch-1.0 -v \
v4l2src device=/dev/video2 ! video/x-h264,framerate=30/1,width=1920,height=1080 ! decodebin ! autovideosink
Audio works
gst-launch-1.0 -v alsasrc device=hw:1 ! queue ! audioconvert ! fdkaacenc ! fdkaacdec ! autoaudiosink
h264parse is needed before the muxer: matroskamux expects parsed H.264 (caps with stream-format and alignment set), and h264parse provides that:
gst-launch-1.0 -v \
v4l2src device=/dev/video2 ! queue ! video/x-h264,framerate=30/1,width=1920,height=1080 \
! h264parse ! mux. \
alsasrc device=hw:1 ! queue ! audioconvert ! fdkaacenc ! mux. \
matroskamux name=mux ! filesink location=video.mkv
I am trying to re-encode the audio part of an MKV file that contains some video/x-h264 and some audio/x-raw. I can't manage to just demux the MKV and remux it. Even simply:
gst-launch-1.0 filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_00 ! mux.video_00 \
demux.audio_00 ! mux.audio_00
fails miserably with:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
WARNING: from element /GstPipeline:pipeline0/GstMatroskaDemux:demux: Delayed linking failed.
Additional debug info:
../gstreamer/gst/parse/grammar.y(506): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstMatroskaDemux:demux:
failed delayed linking pad video_00 of GstMatroskaDemux named demux to pad video_00 of GstMatroskaMux named mux
WARNING: from element /GstPipeline:pipeline0/GstMatroskaDemux:demux: Delayed linking failed.
Additional debug info:
../gstreamer/gst/parse/grammar.y(506): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstMatroskaDemux:demux:
failed delayed linking pad audio_00 of GstMatroskaDemux named demux to pad audio_00 of GstMatroskaMux named mux
ERROR: from element /GstPipeline:pipeline0/GstMatroskaDemux:demux: Internal data stream error.
Additional debug info:
../gst-plugins-good/gst/matroska/matroska-demux.c(5715): gst_matroska_demux_loop (): /GstPipeline:pipeline0/GstMatroskaDemux:demux:
streaming stopped, reason not-linked (-1)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
My best attempt at the transcoding mentioned above goes:
gst-launch-1.0 -v filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_00 ! queue ! 'video/x-h264' ! h264parse ! mux. \
demux.audio_00 ! queue ! rawaudioparse ! audioconvert ! audioresample ! avenc_aac ! mux.
with the same result. Removing the pad name audio_00 leaves gst-launch stuck at PREROLLING.
I have seen a few people facing similar problems:
http://gstreamer-devel.966125.n4.nabble.com/Putting-h264-file-inside-a-container-td4668158.html
http://gstreamer-devel.966125.n4.nabble.com/Changing-the-container-format-td3576914.html
As in those threads, keeping only video or only audio works.
I think the rawaudioparse should not be there. I tried your pipeline and had trouble with it too. I just put something together the way I would have done it, and it seemed to work:
filesrc location=test.mkv ! matroskademux \
matroskademux0. ! queue ! audioconvert ! avenc_aac ! matroskamux ! filesink location=test2.mkv \
matroskademux0. ! queue ! h264parse ! matroskamux0.
Audio in my case was:
Stream #0:0(eng): Audio: pcm_f32le, 44100 Hz, 2 channels, flt, 2822 kb/s (default)
Another format may require additional transformations.
The problem is that the pads video_00 and audio_00 have been renamed to video_0 and audio_0. This can be seen with gst-inspect-1.0 matroskademux, which shows that the pad template now reads video_%u. Note that some GStreamer documentation pages have not been updated to reflect this.
The first command, MKV to MKV should read:
gst-launch-1.0 filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_0 ! queue ! mux.video_0 \
demux.audio_0 ! queue ! mux.audio_0
(Note the added queues)
The second command, MKV to MKV reencoding audio should read:
gst-launch-1.0 -v filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux.video_0 ! queue ! 'video/x-h264' ! h264parse ! mux. \
demux.audio_0 ! queue ! rawaudioparse ! audioconvert ! audioresample ! avenc_aac ! mux.
The same result could have been achieved by not specifying the pads at all, adding caps filters only if needed; see the sketch below.
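For instance, a sketch that lets gst-launch resolve the pads on its own (untested; add caps filters on the branches only if the streams need disambiguating):
gst-launch-1.0 filesrc location=test.mkv ! matroskademux name=demux \
matroskamux name=mux ! filesink location=test2.mkv \
demux. ! queue ! mux. \
demux. ! queue ! mux.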
Thanks go to user Florian Zwoch for providing a working pipeline.
I'm trying to make a server and client application that sends and receives a raw video stream using rtpbin. In order to send an uncompressed video stream, I'm using rtpgstpay and rtpgstdepay to payload the data.
The server application can successfully send the video stream with the following pipeline:
gst-launch-1.0 -vvv rtpbin name=rtpbin \
videotestsrc ! \
rtpgstpay ! application/x-rtp,media=application,payload=96,encoding-name=X-GST ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5000 host=127.0.0.1 name=vrtpsink \
rtpbin.send_rtcp_src_0 ! udpsink port=5002 host=127.0.0.1 sync=false async=false name=vrtcpsink
The client pipeline looks like this:
gst-launch-1.0 -vvv rtpbin name=rtpbin \
udpsrc caps="application/x-rtp,payload=96,media=application,encoding-name=X-GST" port=5000 ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! rtpgstdepay ! videoconvert ! autovideosink \
udpsrc port=5002 ! rtpbin.recv_rtcp_sink_0
rtpbin successfully creates a sink and links to the udpsrc, but no stream comes out of the RTP source pad.
The same pipeline without rtpbin can successfully display the stream:
gst-launch-1.0 -vvv \
udpsrc caps="application/x-rtp,payload=96,media=application,encoding-name=X-GST" port=5000 ! \
rtpgstdepay ! videoconvert ! autovideosink
What am I doing wrong that makes rtpbin refuse to output the stream?
I also tried replacing the rtp_source part of the client with a fakesink to see if it would output anything, but still nothing comes out of rtpbin.
I found the solution to my problem. If anyone comes across the same problem, this is how to fix it:
First of all, rtpbin needs the clock-rate to be specified in the caps.
When using rtpgst(de)pay, you also need to specify the caps event string in your caps filter at the receiver. You can find this string by printing the caps of the rtpgstpay element at the transmitter, e.g.:
application/x-rtp, media=(string)application, clock-rate=(int)90000, encoding-name=(string)X-GST, caps=(string)"dmlkZW8veC1yYXcsIGZvcm1hdD0oc3RyaW5nKUdSQVk4LCB3aWR0aD0oaW50KTY0MCwgaGVpZ2h0PShpbnQpNDYwLCBpbnRlcmxhY2UtbW9kZT0oc3RyaW5nKXByb2dyZXNzaXZlLCBwaXhlbC1hc3BlY3QtcmF0aW89KGZyYWN0aW9uKTEvMSwgY29sb3JpbWV0cnk9KHN0cmluZykxOjQ6MDowLCBmcmFtZXJhdGU9KGZyYWN0aW9uKTI1LzE\=", capsversion=(string)0, payload=(int)96, ssrc=(uint)2501988797, timestamp-offset=(uint)1970605309, seqnum-offset=(uint)2428, a-framerate=(string)25
So here the caps event string is
dmlkZW8veC1yYXcsIGZvcm1hdD0oc3RyaW5nKUdSQVk4LCB3aWR0aD0oaW50KTY0MCwgaGVpZ2h0PShpbnQpNDYwLCBpbnRlcmxhY2UtbW9kZT0oc3RyaW5nKXByb2dyZXNzaXZlLCBwaXhlbC1hc3BlY3QtcmF0aW89KGZyYWN0aW9uKTEvMSwgY29sb3JpbWV0cnk9KHN0cmluZykxOjQ6MDowLCBmcmFtZXJhdGU9KGZyYWN0aW9uKTI1LzE\=
When adding this string to the caps at the receiver, you have to append a null terminator ( \0 ) to the end of the string.
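Putting the pieces together, the client's udpsrc caps might look like this (a sketch reusing the exact base64 string printed above; the shell quoting of the nested caps string is finicky and may need adjusting, and since the trailing \0 cannot really be expressed on the gst-launch command line, setting the caps from application code is more reliable):
gst-launch-1.0 -vvv rtpbin name=rtpbin \
udpsrc port=5000 caps='application/x-rtp,media=(string)application,clock-rate=(int)90000,encoding-name=(string)X-GST,payload=(int)96,caps=(string)"dmlkZW8veC1yYXcsIGZvcm1hdD0oc3RyaW5nKUdSQVk4LCB3aWR0aD0oaW50KTY0MCwgaGVpZ2h0PShpbnQpNDYwLCBpbnRlcmxhY2UtbW9kZT0oc3RyaW5nKXByb2dyZXNzaXZlLCBwaXhlbC1hc3BlY3QtcmF0aW89KGZyYWN0aW9uKTEvMSwgY29sb3JpbWV0cnk9KHN0cmluZykxOjQ6MDowLCBmcmFtZXJhdGU9KGZyYWN0aW9uKTI1LzE\="' ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! rtpgstdepay ! videoconvert ! autovideosink \
udpsrc port=5002 ! rtpbin.recv_rtcp_sink_0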
I'm trying to use GStreamer to send a sample .avi file over a network. The command that I'm using to build my pipeline is the following:
gst-launch-1.0 -v rtpbin name=rtpbin latency=200 \
filesrc location=/home/enry/drop.avi ! decodebin name=dec \
dec. ! queue ! x264enc byte-stream=false bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5000 host=127.0.0.1 ts-offset=0 name=vrtpsink \
rtpbin.send_rtcp_src_0 ! udpsink port=5001 host=127.0.0.1 sync=false async=false \
name=vrtcpsink udpsrc port=5005 name=vrtpsrc ! rtpbin.recv_rtcp_sink_0
When I try to execute this command I'm getting this error:
gstavidemux.c(5383): gst_avi_demux_loop ():
/GstPipeline:pipeline0/GstDecodeBin:dec/GstAviDemux:avidemux0:
streaming stopped, reason not-linked
Execution ended after 0:00:00.032906515
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Can you help me to solve this?
I think you need a demux element before the decodebin, since an AVI file contains both an audio and a video stream, but you are using just one decodebin to decode them.
Take a look at the avidemux element here. You need it to get the video stream out of the file.
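A sketch along those lines (untested; it assumes the AVI's video stream comes out on the avidemux pad video_0):
gst-launch-1.0 -v rtpbin name=rtpbin latency=200 \
filesrc location=/home/enry/drop.avi ! avidemux name=demux \
demux.video_0 ! queue ! decodebin ! videoconvert ! x264enc byte-stream=false bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5000 host=127.0.0.1 ts-offset=0 name=vrtpsink \
rtpbin.send_rtcp_src_0 ! udpsink port=5001 host=127.0.0.1 sync=false async=false name=vrtcpsink \
udpsrc port=5005 name=vrtpsrc ! rtpbin.recv_rtcp_sink_0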