So, I'm starting to play around with gstreamer and I'm able to run very simple pipelines such as
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=10/1 ! videoconvert ! autovideosink
Now, as my USB webcam (which is video1, video0 being the computer's built-in camera) supports h264 (I have checked using lsusb), I would like to try to get the h264 feed directly. I understand that this feed is muxed into the mjpeg one, but from looking around on the web it seems that gstreamer is able to get it nonetheless.
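Aside: a more direct check than lsusb is to ask the V4L2 driver itself which formats it exposes. Assuming the v4l-utils package is installed:
# Lists every pixel format, resolution and frame rate the driver offers;
# an H264 entry should appear if the camera exposes it on this node.
v4l2-ctl --device=/dev/video1 --list-formats-ext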
Since my end goal is to stream it from a Beaglebone, I made an attempt using the solution given in this post (adding a listener from a different terminal):
#sender
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-264,width=320,height=90,framerate=10/1 ! tcpserversink host=192.168.148.112 port=9999
But this yields the following error:
WARNING: erroneous pipeline: could not link v4l2src0 to tcpserversink0
I also tried something similar to my first command, changing the source caps from raw to h264 (based on that post; trying the full command given there produces the same error message):
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-h264,width=640,height=480,framerate=10/1 ! h264parse ! avdec_h264 ! autovideosink
But this did not work either:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.036309961
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I admit this is driving me pretty crazy: looking on SO or elsewhere on the web, there seem to be plenty of people who made it work with exactly the same webcam as mine (a Logitech C920), but I keep running into one issue after another.
What would be an example of a correct pipeline to extract the h264 feed from that webcam?
You definitely need to use a payloader before it hits the wire, for example rtph264pay. Here is an example that I cannot test as I don't have your hardware available. I have working udp examples from alternate sources if this doesn't steer you in the right direction.
server
gst-launch v4l2src device=/dev/video1 \
! video/x-raw-yuv,width=320,height=90,framerate=10/1 \
! x264enc \
! queue \
! rtph264pay config-interval=3 pt=96 mtu=1500 \
! queue \
! tcpserversink host=127.0.0.1 port=9002
client
gst-launch tcpclientsrc host=127.0.0.1 port=9002 \
! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 \
! rtph264depay \
! video/x-h264 \
! queue \
! ffdec_h264 \
! queue \
! xvimagesink
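For GStreamer 1.0, here is a sketch I cannot test either; it assumes the C920 negotiates the video/x-h264 caps from the question on /dev/video1. It takes the camera's native H264, payloads it with rtph264pay, and sends it over UDP instead of raw TCP:
#sender (untested sketch)
gst-launch-1.0 -v v4l2src device=/dev/video1 \
! video/x-h264,width=640,height=480,framerate=10/1 \
! h264parse \
! rtph264pay config-interval=1 pt=96 \
! udpsink host=192.168.148.112 port=9999
#receiver (untested sketch)
gst-launch-1.0 udpsrc port=9999 \
caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,payload=(int)96" \
! rtph264depay ! h264parse ! avdec_h264 ! autovideosink
Note this skips x264enc entirely: since the camera already outputs H264, re-encoding would only waste CPU on the Beaglebone.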
Related
I have a GStreamer pipeline that grabs an mjpeg webcam stream from 3 separate cameras and saves 2 frames from each. I'm executing these commands on the USB 3.0 bus of an Odroid XU4 with Ubuntu 18.04. When doing this, I discovered I would occasionally get a garbled image in the collection of output images.
It wouldn't always happen, but maybe one in five executions produced an image like that.
I then discovered that if I decode the jpeg and then re-encode it, this never happens. See the pipeline below:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=2 ! \
image/jpeg,width=3840,height=2160 ! queue ! jpegdec ! queue ! \
jpegenc ! multifilesink location=/root/cam0_%02d.jpg v4l2src \
device=/dev/video1 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
queue ! jpegdec ! queue ! jpegenc ! multifilesink \
location=/root/cam1_%02d.jpg v4l2src device=/dev/video2 num-buffers=2 ! \
image/jpeg,width=3840,height=2160 ! queue ! jpegdec ! queue ! jpegenc ! \
multifilesink location=/root/cam2_%02d.jpg
Now, when I run this pipeline I have a 1/5 chance of getting this error:
/GstPipeline:pipeline0/GstJpegDec:jpegdec0: No valid frames decoded before end of stream
Is there a way to make gstreamer wait for the frames instead of simply failing? I attempted this by adding the ! queue ! elements in the above pipeline, with no success.
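One hedged, untested idea: insert a jpegparse element (from gst-plugins-bad) before each jpegdec, so the decoder only ever sees complete JPEG frames. Shown here for a single camera with the same caps as above:
# Untested sketch; jpegparse assembles whole JPEG frames before jpegdec,
# which may avoid the "No valid frames decoded" failure on a truncated frame.
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=2 ! \
image/jpeg,width=3840,height=2160 ! jpegparse ! queue ! jpegdec ! \
queue ! jpegenc ! multifilesink location=/root/cam0_%02d.jpg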
I have two AXIS IP cameras streaming an H264 stream over RTSP/RTP. Both cameras are set to synchronize with the same NTP server, so I assume both cameras have the exact same clock (perhaps differing by a few ms).
In my application, both cameras are pointed at the same view, and I need to process images from both cameras captured at the same time. Thus, I want to synchronize the image capture using GStreamer.
I have tried invoking the two pipelines separately in different command prompts, but the videos are 2-3 seconds apart.
gst-launch rtspsrc location=rtsp://192.168.16.136:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam1_video_%d.mp4
gst-launch rtspsrc location=rtsp://192.168.16.186:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam2_video_%d.mp4
Can someone suggest a gstreamer pipeline to synchronize both H264 streams and record them into separate video files?
Thanks!
ARM
I am able to launch a pipeline using gst-launch as shown below. It shows a good improvement in captured-frame synchronization compared to launching two pipelines. Most times they differ by 0-500 msec. However, I would still like to synchronize them to better than 150 msec accuracy.
rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4
I'd appreciate it if someone can point out other ideas!
~Arm
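Since both cameras already synchronize to the same NTP server, one untested idea is to let rtspsrc slave playout to the senders' NTP time via its ntp-sync property (this assumes the cameras send RTCP sender reports):
# Untested sketch: ntp-sync=true synchronizes received streams to NTP time.
gst-launch-1.0 \
rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 ntp-sync=true \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 ntp-sync=true \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4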
What do you mean by synchronize? If you record to separate video files, you do not need any synchronization, as this totally separates them. Each RT(S)P stream will contain different timestamps. If you want to align them somehow to the same time (I mean real human time, like "both should start from 15:00"), then you have to configure the cameras that way somehow (this is just an idea).
Also, you did not tell us what is inside those rtp/rtsp streams (is it MPEG ts or pure IP, etc.). So I will give an example with MPEG-ts-encapsulated rtp streams.
We will go step by step:
Suppose this is one camera, just to demonstrate how it may look:
gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888
Then this would be the receiver (it must use rtpmp2tdepay; we are encapsulating the metadata inside an MPEG container):
gst-launch-1.0 udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! decodebin ! videoconvert ! autovideosink
If you test this with your camera, the autovideosink means that a new window will pop up displaying your camera feed.
Then you can try to store it in a file; we will use mp4mux.
So for the same camera input we do:
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=test.mp4
Explanation: We do not decode and re-encode (a waste of processing power), so I just demux the MPEG ts stream and then, instead of decoding the H264, parse it for the mp4mux, which accepts video/x-h264.
Now you could use the same pipeline for each camera, or you can just copy-paste all the elements into one pipeline (see the sketch below).
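For example, both receivers in a single process might look like this (the second port number is an assumption):
# Untested sketch: two independent receive branches in one pipeline.
gst-launch-1.0 -e \
udpsrc port=8888 caps="application/x-rtp,media=(string)video,encoding-name=(string)MP2T" ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam1.mp4 \
udpsrc port=8889 caps="application/x-rtp,media=(string)video,encoding-name=(string)MP2T" ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam2.mp4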
Now, as you did not provide any (at least partial) attempt to make something of this, it is going to be your homework :) Or make yourself clearer about the synchronization, as I do not understand it.
UPDATE
After your update to the question this answer is not very useful, but I will keep it here as a reference. I have no idea how to synchronize that.
Another piece of advice: try to look at the timestamps after udpsrc; maybe they are already synchronized. In that case you can use streamsynchronizer to synchronize the two streams, or maybe a video/audio mixer:
gst-launch-1.0 -v udpsrc port=8888 ! identity silent=false ! fakesink
This should print the timestamps (PTS, DTS, Duration ..):
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (1328 bytes, dts: 0:00:02.707033598, pts:0:00:02.707033598, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7f57dc016400
Compare the PTS of each stream. Maybe you could combine the two udpsrc elements in one pipeline, putting an identity (each with a different name) after each udpsrc, to make them start receiving together:
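An untested sketch of that idea, with distinct names so the printed timestamps can be told apart:
gst-launch-1.0 -v \
udpsrc port=8888 ! identity name=cam1 silent=false ! fakesink \
udpsrc port=8889 ! identity name=cam2 silent=false ! fakesink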
HTH
I'm really running out of ideas. Here is my problem: I need to stream mp4 (H264) on demand over the network. I'm new to gstreamer, and after a lot of tries with versions > 1.0 I decided to use 0.10 because it seems the most promising so far.
The command below works perfectly (I see a window with my movie):
gst-launch filesrc location=/home/zuko/sintel_trailer-368p.mp4 ! decodebin2 name=dec ! queue ! ffmpegcolorspace ! autovideosink dec. ! queue ! audioconvert ! audioresample ! autoaudiosink
Now I'm trying to build a TCP stream using the commands below (so far on localhost only):
Server side:
gst-launch filesrc location=/home/zuko/sintel_trailer-368p.mp4 ! decodebin2 name=dec ! tcpserversink host=127.0.0.1 port=5000
Client side:
gst-launch tcpclientsrc host=127.0.0.1 port=5000 ! decodebin2 name=dec ! queue ! ffmpegcolorspace ! autovideosink dec. ! queue ! audioconvert ! audioresample ! autoaudiosink
But the response from the client-side command is the following:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin2:dec/GstTypeFindElement:typefind: Could not determine type of stream.
Additional debug info:
gsttypefindelement.c(813): gst_type_find_element_chain_do_typefinding (): /GstPipeline:pipeline0/GstDecodeBin2:dec/GstTypeFindElement:typefind
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
What is missing, or what am I doing wrong?
I'm testing on: VirtualBox 4.3.12 with Ubuntu 14.04, kernel 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Full error log (with GST_DEBUG_NO_COLOR=1 GST_DEBUG=*:3) attached here:
https://app.box.com/s/4ntyk6am2ibg0pohtg9h
First off, using 0.10 is a really bad idea; you should stick to 1.0, for which you will have community support.
Second, to your problem itself: you are trying to stream the decoded stream over the network ("decodebin2 ! tcpserversink") and to decode it again on the other side ("tcpclientsrc ! decodebin2"). Not only is this very wasteful in terms of bandwidth, it also straight up won't work.
I'd advise you to have a look at the rtp plugins provided by gstreamer.
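As an untested sketch of that direction (assuming the mp4 actually carries H264, so it can be demuxed with qtdemux and payloaded without re-encoding):
#server
gst-launch-1.0 -v filesrc location=/home/zuko/sintel_trailer-368p.mp4 \
! qtdemux ! h264parse ! rtph264pay pt=96 config-interval=1 \
! udpsink host=127.0.0.1 port=5000
#client
gst-launch-1.0 udpsrc port=5000 \
caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,payload=(int)96" \
! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink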
Using gstreamer 1.0, the server side can share h264 streams with:
gst-launch-1.0 filesrc location="C:\\Videos\\videotestsrc.avi" ! decodebin ! x264enc ! mpegtsmux ! queue ! tcpserversink host=127.0.0.1 port=8080
While the client side receives with:
gst-launch-1.0 tcpclientsrc host=127.0.0.1 port=8080 ! decodebin ! videoconvert ! autovideosink sync=false
Alternatively, the client could be simulated with VLC through:
Media >> Open Network Stream >> tcp://127.0.0.1:8080 >> Play
I'm trying to use gstreamer to send a sample file .avi over a network. The code that I'm using to build my pipeline is the following:
gst-launch-1.0 -v rtpbin name=rtpbin latency=200 \
filesrc location=/home/enry/drop.avi ! decodebin name=dec \
dec. ! queue ! x264enc byte-stream=false bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5000 host=127.0.0.1 ts-offset=0 name=vrtpsink \
rtpbin.send_rtcp_src_0 ! udpsink port=5001 host=127.0.0.1 sync=false async=false \
name=vrtcpsink udpsrc port=5005 name=vrtpsrc ! rtpbin.recv_rtcp_sink_0
When I try to execute this command I'm getting this error:
gstavidemux.c(5383): gst_avi_demux_loop ():
/GstPipeline:pipeline0/GstDecodeBin:dec/GstAviDemux:avidemux0:
streaming stopped, reason not-linked
Execution ended after 0:00:00.032906515
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Can you help me to solve this?
I think you need a demux element before the decodebin, since an avi file contains both audio and video, but you are using just one decodebin to decode them.
Take a look at the avidemux element here. You need it to get the video stream from the file.
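An untested sketch along those lines (the pad name video_0 is an assumption about how avidemux names its video pad):
# Demux the AVI first, then decode/encode only the video branch.
gst-launch-1.0 -v filesrc location=/home/enry/drop.avi \
! avidemux name=demux \
demux.video_0 ! queue ! decodebin ! videoconvert \
! x264enc byte-stream=false bitrate=300 ! rtph264pay pt=96 \
! udpsink host=127.0.0.1 port=5000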
I would like to stream my webcam. I tried with vlc, but I'm getting a 10-15 s delay between the server and the client on the same network:
vlc v4l2:// :v4l2-dev=/dev/video0 :v4l2-width=640 :v4l2-height=480 --sout="#transcode{vcodec=h264,vb=800,scale=1,acodec=mp4a,ab=128,channels=2,samplerate=44100}:rtp{sdp=rtsp://:8554/live.ts}" -I dummy
Now I would like to test gstreamer, but I couldn't find any examples. How can I set up a live webcam stream (rtsp or http) using gstreamer?
To create a YouTube live event, one needs an RTMP stream containing x264+aac:
gst-launch -v videotestsrc \
! video/x-raw-yuv,width=640,height=480,framerate=30/1 \
! x264enc key-int-max=60 \
! h264parse \
! flvmux name=mux \
audiotestsrc ! queue ! audioconvert ! ffenc_aac ! aacparse ! mux. \
mux. ! rtmpsink location="rtmp://<stream-server-url>/"
Key frames in a live feed must appear at least every 2 seconds, hence key-int-max=<2 × framerate> (here 60 for 30 fps).
Note that RTMP works over TCP, so on a bad channel it will suffer significant delays.
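A rough GStreamer 1.0 translation of the same idea (untested; voaacenc stands in for ffenc_aac, and the raw caps become video/x-raw):
gst-launch-1.0 -v videotestsrc is-live=true \
! video/x-raw,width=640,height=480,framerate=30/1 \
! x264enc key-int-max=60 \
! h264parse \
! flvmux name=mux streamable=true \
audiotestsrc is-live=true ! queue ! audioconvert ! voaacenc ! aacparse ! mux. \
mux. ! rtmpsink location="rtmp://<stream-server-url>/"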
Take a look at the rtsp-server examples in
http://cgit.freedesktop.org/gstreamer/gst-rtsp-server/tree/examples
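For instance, the test-launch example from that repository serves any gst-launch-style description whose payloader is named pay0. A sketch for a webcam (device and encoder settings are assumptions):
./test-launch "( v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )"
The stream is then available at rtsp://<host>:8554/test.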