Hey Guys
I have an Ubuntu machine with GStreamer 1.8.3 and an ARM device with GStreamer 1.4.4. If I try to use the rtspsrc proxy setting in a gst-launch pipeline, I get the same memory allocation error on both devices.
I want to test whether it is possible to play the Axis camera stream over an HTTP tunnel, which the Axis camera manual describes as follows:
RTSP can be tunnelled over HTTP. This might prove necessary in order
to pass firewalls etc. To tunnel RTSP over HTTP, two sessions are set
up; one GET (for command replies and stream data) and one POST (for
commands). RTSP commands sent on the POST connection are base64
encoded, but the replies on the GET connection are in plain text. To
bind the two sessions together the Axis product needs a unique ID
(conveyed in the x-sessioncookie header). The GET and POST requests
are accepted on both the HTTP port (default 80) and the RTSP server
port (default 554).
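Just to make sure I understand the manual, the encoding half of that tunnel would look something like this (a plain-Python sketch; the OPTIONS request text is made up for illustration, the real requests are built by the RTSP client):

```python
import base64

def encode_rtsp_for_post(rtsp_request: str) -> str:
    """Base64-encode an RTSP request, as required on the POST (command) connection."""
    return base64.b64encode(rtsp_request.encode("ascii")).decode("ascii")

# Hypothetical OPTIONS request, just to show the shape of what gets encoded.
request = (
    "OPTIONS rtsp://192.168.1.211/axis-media/media.amp RTSP/1.0\r\n"
    "CSeq: 1\r\n"
    "\r\n"
)
print(encode_rtsp_for_post(request))
```

The replies on the GET connection would come back as plain text, per the manual.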
I see that rtspsrc has a proxy property for HTTP tunneling; I don't know whether it works, or whether I am on the wrong track.
To move forward on this I wanted to test the proxy property, but when I start gst-launch I get this memory allocation error.
Pipeline:
gst-launch-1.0 rtspsrc location="rtsp://root:1qay2wsx#192.168.1.211/axis-media/media.amp" proxy="http://root:1qay2wsx#192.168.1.211/axis-media/media.amp" ! rtph264depay ! h264parse ! decodebin ! autovideosink
Error:
(gst-launch-1.0:15450): GLib-ERROR **: /build/glib2.0-prJhLS/glib2.0-2.48.2/./glib/gmem.c:100: failed to allocate 18446744073709551614 bytes
I hope somebody can help me. Thanks for your help, guys.
BR Christoph
Related
I can't figure out how to call RTSP methods with headers. For context:
I have an RTSP player in Qt, and I want to add RTSP playback-speed functionality. Many vendors do this by sending a PLAY request with a Speed header,
but I don't understand how to send such a request from GStreamer.
You set the playback rate for the pipeline via a GstSegment, e.g. when issuing a seek command. See https://gstreamer.freedesktop.org/documentation/gstreamer/gstsegment.html#GstSegment.
rtspsrc should then take care of sending the required header info to the RTSP server when initiating the session.
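For reference, the request such vendors expect on the wire looks roughly like this hand-built sketch (the URL and session ID are placeholders; rtspsrc constructs the real request for you):

```python
def build_play_request(url: str, session: str, cseq: int, speed: float) -> str:
    """Sketch of an RTSP PLAY request carrying a vendor-style Speed header."""
    return (
        f"PLAY {url} RTSP/1.0\r\n"
        f"CSeq: {cseq}\r\n"
        f"Session: {session}\r\n"
        f"Speed: {speed}\r\n"
        "\r\n"
    )

print(build_play_request("rtsp://camera.example/stream", "12345678", 4, 2.0))
```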
I'm using GStreamer to receive an RTSP stream. When I use it with an internet connection on my LAN, it's OK. But when I use it without an internet connection, my whole program just freezes for 20 s.
I have already tried changing every timeout variable; it doesn't work!
Pipeline: s = "rtspsrc protocols=tcp location=" + s + " latency=0 tcp-timeout=1 buffer-mode=1 ! queue ! rtph264depay ! h264parse ! decodebin ! videoconvert ! videorate ! video/x-raw,framerate=25/1 ! appsink";
What should i do?
Even if the source of your RTSP stream is hosted on your LAN, the source itself may require an internet connection to operate. Some of the cheaper IP cameras need a constant connection to the internet.
Your gstreamer pipeline looks okay.
You can try the following commands to rule out your pipeline as the problem:
gst-discoverer-1.0 <source_uri>
and
gst-play-1.0 <source_uri>
gst-discoverer-1.0 will print information about the source. Run it while you are connected to the internet, then disconnect and run it again to see whether anything changes.
gst-play-1.0 will automagically create the correct pipeline and display the video to your monitor.
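One more thing worth double-checking: if I remember correctly, rtspsrc's tcp-timeout property is in microseconds (default 20000000, i.e. 20 seconds, which matches your freeze), so tcp-timeout=1 means one microsecond, not one second. Independently of that, you can probe the RTSP port yourself with a short timeout before building the pipeline, so your program never blocks; a sketch (host and port are placeholders):

```python
import socket

def rtsp_reachable(host: str, port: int = 554, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the RTSP port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, unreachable hosts, and DNS failures.
        return False
```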
I want to be able to request my RTSP server (gst-rtsp-server) for sending an h264 keyframe from the client side (rtspsrc). From the docs it should be possible but I couldn't get it to work.
Can anyone share a short snippet for how it's done?
Thank you
I'm relatively new to this topic so there might be some fundamental gap in my knowledge, but I am trying to use GStreamer to send an MPEG2-TS stream to an embedded device using IPv6 (on Windows 10). The embedded device is connected via a USB-Ethernet adapter to a media converter (RJ45 -to- BroadR-Reach).
If I use IPv4 to broadcast (e.g. 192.168.1.255), everything works fine. I can receive the stream on the device without any problems. A sample command that works:
gst-launch-1.0.exe -v filesrc location=d:/video.ts do-timestamp=false ! \
"video/mpegts, systemstream=(boolean)true, packetsize=(int)188" ! \
tsparse set-timestamps=true ! rtpmp2tpay mtu=1200 pt=127 ! \
udpsink host=192.168.1.255 port=5001
Now I need to do this over IPv6 multicast, and I can't figure out how!
Assume the IPv6 address of the embedded device is fe80::1:2:3 and the IPv6 address of the Ethernet interface on the PC is fe80::1:2:4. Which address do I use as the multicast group? I already tried ff0x::1:2:4 and ff1x::1:2:4 (where x = 1, 2, 3), but the data is transmitted over my computer's main network interface (e.g. the WiFi interface; I checked this with Wireshark).
If I try to add the multicast-iface option, GStreamer gives the following error:
Could not join multicast group: Error joining multicast group: The
requested address is not valid in its context.
OK, so after posting similar questions to various mailing lists and forums, I've learned that you can't bind to an interface this way, and additionally that multicast traffic is always routed through the interface with the lowest metric. So the only ways to achieve what I wanted are to:
Play around with the metrics of the interfaces in question
Add a route for the required address range
Somehow force all traffic from GStreamer through the required interface (e.g. ForceBindIP)
Since I couldn't make any permanent changes to the Windows machine's network routes/metrics, I went with a modified version of the third option: a VirtualBox virtual machine running GStreamer on Linux, with the USB-Ethernet adapter set up as the only active network interface.
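For anyone curious about the OS-level mechanism behind all this: an IPv6 multicast join always names a specific interface index, which is what multicast-iface ultimately has to map to on the socket. A minimal Python sketch (group address and interface name are placeholders):

```python
import socket
import struct

def join_ipv6_multicast(group: str, port: int, ifname: str) -> socket.socket:
    """Bind a UDP socket and join an IPv6 multicast group on one specific interface."""
    ifindex = socket.if_nametoindex(ifname)
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("::", port))
    # struct ipv6_mreq: 16-byte group address followed by 4-byte interface index.
    mreq = socket.inet_pton(socket.AF_INET6, group) + struct.pack("@I", ifindex)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)
    return sock
```

If the join raises "address not valid in its context", the group scope and the interface index usually don't match, which is exactly the error Windows reported above.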
I would like to stream webcam video to an HTTP web page. I know how to read from the webcam and archive it to a file.
But how do I stream it over the web? What is the pipeline for that?
Use the hlssink element from gst-plugins-bad:
gst-launch-1.0 videotestsrc is-live=true ! x264enc ! mpegtsmux ! hlssink
It will generate playlist and segment files. You need to provide HTTP access to these files; you can use any web server, for example nginx or Apache.
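For quick local testing (not production), Python's built-in HTTP server is enough. A sketch that serves whatever directory hlssink writes into (the directory path is yours to fill in; hlssink's default playlist name is playlist.m3u8, if I remember correctly):

```python
import functools
import http.server
import threading

def serve_hls_directory(directory: str, port: int = 0) -> http.server.ThreadingHTTPServer:
    """Serve `directory` (playlist + segments) over HTTP in a background thread.

    port=0 picks a free port; read the chosen one back from server.server_address.
    """
    handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    server = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Then point the player at http://127.0.0.1:&lt;port&gt;/playlist.m3u8 while the gst-launch pipeline keeps updating the files.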
You can tweak hlssink's parameters to set the target location, segment count, etc. All options can be listed with:
gst-inspect-1.0 hlssink
If you need finer low-level control, you can create your own web server with libsoup, split the MPEG-TS into segments manually, and serve your own playlist endpoint.