I am fairly new to GStreamer, and without any error messages I am lost trying to debug my problem. I am trying to send an H.264 stream as RTP packets over UDP:
gst-launch-1.0 v4l2src ! 'video/x-raw, width=1280, height=720, framerate=30/1' ! videoconvert ! x264enc ! rtph264pay ! udpsink host=192.168.88.1 port=5004
Output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
When I listen with netcat on 192.168.88.1, I see nothing:
nc -ul 5004
And neither VLC nor Janus Gateway is picking up any stream. I am honestly shooting in the dark because I'm not very familiar with GStreamer or video streaming in general. Any guidance would be appreciated.
Thanks!
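For reference, a receiving pipeline run on 192.168.88.1 that should pick this stream up looks something like the sketch below (an assumption: the caps match rtph264pay's defaults of payload 96 and a 90 kHz clock, and avdec_h264/autovideosink are available):
gst-launch-1.0 udpsrc port=5004 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtph264depay ! avdec_h264 ! autovideosink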
Related
I have a camera and I am streaming its video using GStreamer, with the pipeline below.
gst-launch-1.0 -e camerasrc ! video/x-h264,width=1920,height=1080,framerate=30/1 ! h264parse config-interval=-1 ! rtph264pay pt=96 ! udpsink host=127.0.0.1 port=8554
Now I would like to make the stream ONVIF compliant. How can I do that with GStreamer?
GStreamer has support for ONVIF. Unfortunately it is not as easy as running a pipeline with gst-launch; you need to implement an RTSP server using gst-rtsp-server.
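As a minimal sketch (assuming the test-launch example binary that ships with gst-rtsp-server is built), the pipeline from the question could be served over RTSP like this; note that test-launch requires the payloader to be named pay0:
./test-launch "camerasrc ! video/x-h264,width=1920,height=1080,framerate=30/1 ! h264parse config-interval=-1 ! rtph264pay name=pay0 pt=96"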
I'm working on this project:
https://www.hackster.io/jonmendenhall/jetson-nano-search-and-rescue-ai-uav-9ca547
At some point I will need to mount my camera (Waveshare IMX219-77IR) on top of the drone, and I would like to use VLC on Windows or Linux, outside of NoMachine, to display what the camera sees while the drone is flying (I have installed the NoMachine server on the Nano and the client on Windows, and the Nano will run headless). For this reason I'm trying to configure GStreamer with RTSP to start a streaming server on the Ubuntu 18.04 that I have installed on the Jetson Nano.
Below you can see the commands I have issued:
$ ./test-launch "videotestsrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96"
And on the same Jetson Nano, I opened another console where I ran this pipeline to decode the RTSP stream:
gst-launch-1.0 uridecodebin uri=rtsp://127.0.0.1:8554/test ! nvoverlaysink
I see the test pattern produced by the videotestsrc plugin. I would like to replace videotestsrc with my actual video source, but I don't know how to do that.
I tried these combinations, but none of them worked:
./test-launch "v4l2src device=/dev/video0 ! nvvidconv ! nvv4l2h264enc ! h264parse ! queue ! rtph264pay name=pay0 pt=96"
./test-launch "device=/dev/video0 ! nvvidconv ! nvv4l2h264enc ! h264parse ! queue ! rtph264pay name=pay0 pt=96"
but the error is still the same:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://127.0.0.1:8554/test
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
ERROR: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source: Could not read from resource.
Additional debug info:
gstrtspsrc.c(5917): gst_rtsp_src_receive_response (): /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source:
Could not receive message. (Timeout while waiting for server response)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
But why? I know for sure that my camera (Waveshare IMX219-77IR) created a device called /dev/video0, and I know that it works, because this command is able to show my face on the screen:
DISPLAY=:0.0 gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3280, height=2464, format=(string)NV12, framerate=(fraction)20/1' ! nvoverlaysink -e
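Since that working command uses nvarguscamerasrc (the IMX219 is a CSI camera, not a plain V4L2 capture device from GStreamer's point of view), a sketch of the server command built on it might look like the following; the resolution and framerate here are illustrative, not verified:
./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1,format=NV12 ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96"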
I want multiple clients on different computers to be able to view the video from an IP camera's stream URL. Because the IP camera limits the number of connected clients, I want to set up a restreamer for this purpose. I googled and tried GStreamer with different command-line options, but have not been successful yet.
Here is a test command line:
gst-launch-1.0 rtspsrc \
  location="rtsp://root:root@192.168.1.1/axis-media/media.amp?videocodec=h264&resolution=320x240&fps=10&compression=50" \
  latency=10 ! rtph264depay ! h264parse ! tcpserversink \
  host=127.0.0.1 port=5100 -e
But when I test it with VLC, nothing plays. Is it related to SDP? Can GStreamer restream the SDP from the source?
After finding the correct command line, I want to integrate it into a C# application to automate this process.
Any help is welcome.
You need gst-rtsp-server, and to use it you have to write a small C/C++ application; an example ships with the project.
Update: if your RTSP source provides an H.264 video stream, you can use the following pipeline to restream it without transcoding:
rtspsrc location=rtsp://example.com ! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96
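For example, wrapped in the test-launch example from gst-rtsp-server (a sketch; rtsp://example.com stands in for the real camera URL):
./test-launch "rtspsrc location=rtsp://example.com ! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96"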
To restream H.264 video from an IP camera, below is the GStreamer pipeline (this works for me):
rtspsrc location=rtsp://IP_CAMERA_URL ! rtph264depay ! video/x-h264, stream-format=byte-stream ! h264parse ! rtph264pay ! application/x-rtp,media=video,encoding-name=H264,payload=96 ! yoursink
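For example, with udpsink as the sink (a sketch; the host and port here are placeholders):
gst-launch-1.0 rtspsrc location=rtsp://IP_CAMERA_URL ! rtph264depay ! video/x-h264, stream-format=byte-stream ! h264parse ! rtph264pay ! application/x-rtp,media=video,encoding-name=H264,payload=96 ! udpsink host=192.168.1.50 port=5004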
Output of gst-launch-1.0 --version:
gst-launch-1.0 version 1.14.5
GStreamer 1.14.5
I am trying to stream ADPCM (G.726) 32 kbps audio from a host to a client over RTP.
I have tried the following commands on the client (receiver) and the host (sender). Both boards are connected over IP.
I am getting an "internal data flow error" at the receiver once I run the commands on the sender:
RECEIVER:
gst-launch -v udpsrc port=3004 caps="application/x-rtp" ! rtpg726depay ! ffdec_g726 ! alsasink
SENDER:
gst-launch -v autoaudiosrc ! audioconvert ! audioresample ! ffenc_g726 bitrate=32000 ! rtpg726pay ! udpsink host=192.168.1.104 port=3004
If I try the same with PCM (A-law encoder and decoder), streaming works fine: I can hear live audio on the receiver's speaker when I speak into the sender's microphone.
The commands I am running in this case:
RECEIVER:
gst-launch udpsrc port=3004 caps="application/x-rtp" ! rtppcmadepay ! alawdec ! alsasink
SENDER:
gst-launch autoaudiosrc ! audioconvert ! audioresample ! alawenc ! rtppcmapay ! udpsink host=192.168.1.104 port=3004
Is this an issue in the pipeline?
Here is the full error I am getting at the receiver:
root@am335x-evm:~/EVM4# gst-launch udpsrc port=3004 caps="application/x-rtp" ! rtpg726depay ! ffdec_g726 ! alsasink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstUDPSrc:udpsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2625): gst_base_src_loop (): /GstPipeline:pipeline0/GstUDPSrc:udpsrc0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 20828810227 ns.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
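For reference, one plausible cause of the not-negotiated error (an assumption, not verified on this board) is that the bare application/x-rtp caps give rtpg726depay no clock rate or encoding name to negotiate with; spelling out the full RTP caps on the receiver would look like:
gst-launch -v udpsrc port=3004 caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)G726" ! rtpg726depay ! ffdec_g726 ! alsasink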
I need to set up a live audio streaming server with GStreamer. The server should send live audio to the client, and on the client side the VLC player should be used to play the incoming stream. I am using the following code:
VIDEO_CAPS="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264"
gst-launch -v udpsrc caps=$VIDEO_CAPS port=4444 \
! gstrtpbin .recv_rtp_sink_0 \
! rtph264depay ! ffdec_h264 ! xvimagesink
Then GStreamer reports:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Please help me with the steps for setting up a server and a client for live streaming using GStreamer.
Try reading the manual on streaming with VLC.
Or just:
cvlc rtp://@:4444
Update:
Due to my bad reading skills I slightly misunderstood the question.
Here is how to set up a server:
gst-launch -v pulsesrc ! audioconvert ! audioresample \
! speexenc ! rtpspeexpay \
! udpsink host=224.1.1.1 port=4444 auto-multicast=true
or use multiudpsink to send to multiple clients.
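For example (a sketch; the two client addresses are hypothetical):
gst-launch -v pulsesrc ! audioconvert ! audioresample ! speexenc ! rtpspeexpay ! multiudpsink clients=192.168.1.10:4444,192.168.1.11:4444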