How to stream audio from microphone using GStreamer over a network

I want to stream the audio input from the mic on my system over a network. I know how to record audio from the mic, but I am stuck on streaming it over a network.
Here is my pipeline for recording from the microphone:
gst-launch -v alsasrc device=hw:0 ! audioconvert ! vorbisenc ! oggmux ! filesink location=alsasrc1112.ogg
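To go from recording to streaming, one option is to replace the filesink with an RTP payloader and a udpsink. A sketch, assuming the receiver is reachable over UDP (the host and port are placeholders):
Sender:
gst-launch -v alsasrc device=hw:0 ! audioconvert ! vorbisenc ! rtpvorbispay ! udpsink host=<receiver-ip> port=5000
Receiver (Vorbis carries its codec headers in the RTP caps, so copy the application/x-rtp caps printed by the sender's -v output into udpsrc):
gst-launch udpsrc port=5000 caps="<application/x-rtp caps from the sender's -v output>" ! rtpvorbisdepay ! vorbisdec ! audioconvert ! autoaudiosink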

Related

Gstreamer last frame freeze when no internet connection

I'm using GStreamer to read an RTSP stream. When I use it with an internet connection and my LAN it's OK, even when I lose the LAN connection. But when I use it without an internet connection, my whole program just freezes for 20 s.
I have already tried changing all the timeout variables; it doesn't work!
Pipeline: s = "rtspsrc protocols=tcp location=" + s + " latency=0 tcp-timeout=1 buffer-mode=1 ! queue ! rtph264depay ! h264parse ! decodebin ! videoconvert ! videorate ! video/x-raw,framerate=25/1 ! appsink";
What should I do?
If the source of your RTSP stream is hosted on your LAN, then your RTSP source may require an internet connection to operate. Some of the cheaper IP cameras require a constant connection to the internet to operate.
Your GStreamer pipeline looks okay.
You can try the following commands to rule out that your pipeline is the problem:
gst-discoverer-1.0 <source_uri>
and
gst-play-1.0 <source_uri>
gst-discoverer-1.0 will tell you information about the source. You should run it while you are connected to the internet, then disconnect from the internet and run it again to see if there are any changes.
gst-play-1.0 will automagically create the correct pipeline and display the video on your monitor.
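One more thing worth checking on the timeout front: rtspsrc's tcp-timeout property is expressed in microseconds, and its default of 20000000 (20 s) matches the length of the freeze you describe, while tcp-timeout=1 asks for a timeout of 1 microsecond. A sketch with a 3 s timeout (the camera URI is a placeholder):
gst-launch-1.0 rtspsrc location="rtsp://<camera-uri>" protocols=tcp tcp-timeout=3000000 latency=0 ! queue ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink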

How to add buffer for RTSP server?

I’m trying to create the following pipeline:
On Jetson:
1.1) camera -> … - > udpsink
1.2) udpsrc -> rtspserver
On Host PC
2.1) rtspsrc -> jitterbuffer -> detection -> tracker -> analytics
The main question is
My Jetson is connected to the Host PC over a local WiFi network. I've chosen the Tenda Nova MW3 mesh WiFi. When the Jetson reconnects from one WiFi access point to another, I lose some frames (from 0.5 to 10 seconds of stream). I understand that we can't get an ideal seamless WiFi network, and the system will "lose some frames" during reconnection.
I've tried setting a buffer on udpsrc and udpsink, and I've tried setting do-retransmission on rtspsrc, but it didn't work, or maybe I did it wrong.
How do I set up a buffer in the RTSP server so that frames are kept on the Jetson while it reconnects to another WiFi access point, and it then continues sending frames from the last point to the Host PC?
Should I set the buffer on udpsink, udpsrc, or the rtspserver?
How do I configure rtspsrc to receive the frames from the "lost time"?
I lose some frames (from 0.5 to 10 seconds of stream)
Maybe that's because the decoder in the Host pipeline misses an I-frame and has to wait for the next one; the 0.5~10 s depends on where the first frame after the reconnect sits in the GOP (the GOP can be changed via an encoder property).
You can use an RTMP server instead of RTSP; SRS is a good choice, and it lets you enable the GOP cache (see the SRS config below). The pipeline would look like camera -> encoder -> flvmux -> rtmpsink.
vhost __defaultVhost__ {
gop_cache on;
}
The latency will increase while the GOP cache is enabled, so the encoder's GOP should not be that large; maybe 2 s is good.
The pipeline on the Host PC may receive video data that it had already processed before the WiFi reconnect, and the latency is bigger compared with the GOP cache disabled. If either of these matters, you should drop the outdated frames after decoding.
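A sketch of that camera -> encoder -> flvmux -> rtmpsink chain on the Jetson (v4l2src, the SRS host, and the stream name are assumptions; key-int-max=60 gives roughly a 2 s GOP at 30 fps):
gst-launch-1.0 v4l2src ! videoconvert ! x264enc tune=zerolatency key-int-max=60 ! h264parse ! flvmux streamable=true ! rtmpsink location="rtmp://<srs-host>/live/stream"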

Gstreamer rtspsrc proxy mem alloc error

Hey guys,
I have one Ubuntu machine with GStreamer version 1.8.3 and one ARM device with GStreamer version 1.4.4. If I try to use the rtspsrc proxy setting in a gst-launch, I get the same memory allocation error on both devices.
I want to test whether it is possible to play the Axis camera stream over an HTTP tunnel, which is described in the Axis camera manual as:
RTSP can be tunnelled over HTTP. This might prove necessary in order
to pass firewalls etc. To tunnel RTSP over HTTP, two sessions are set
up; one GET (for command replies and stream data) and one POST (for
commands). RTSP commands sent on the POST connection are base64
encoded, but the replies on the GET connection are in plain text. To
bind the two sessions together the Axis product needs a unique ID
(conveyed in the x-sessioncookie header). The GET and POST requests
are accepted on both the HTTP port (default 80) and the RTSP server
port (default 554).
I see that rtspsrc has a proxy setting for HTTP tunneling; I don't know if it works, or if I am on the wrong track.
To move forward on this task I wanted to test this proxy property, but when I start gst-launch I get this memory allocation error.
Pipeline:
gst-launch-1.0 rtspsrc location="rtsp://root:1qay2wsx#192.168.1.211/axis-media/media.amp" proxy="http://root:1qay2wsx#192.168.1.211/axis-media/media.amp" ! rtph264depay ! h264parse ! decodebin ! autovideosink
Error:
(gst-launch-1.0:15450): GLib-ERROR **: /build/glib2.0-prJhLS/glib2.0-2.48.2/./glib/gmem.c:100: failed to allocate 18446744073709551614 bytes
I hope somebody can help me. Thanks for your help, guys.
BR Christoph
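Two observations, not a full answer: rtspsrc's proxy property expects an HTTP proxy server (roughly [http://][user:passwd@]host[:port]), not the media URL, so passing the full stream URL there may be what produces that absurd allocation size. Also, credentials in a URL are normally separated from the host with @ rather than #. If the goal is tunnelling RTSP over HTTP to the camera itself, the protocols property can request that; a sketch against your camera's address:
gst-launch-1.0 rtspsrc location="rtsp://root:1qay2wsx@192.168.1.211/axis-media/media.amp" protocols=http ! rtph264depay ! h264parse ! decodebin ! autovideosink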

I want to perform HLS (HTTP Live Streaming) using Gstreamer

I would like to stream webcam video to an HTTP web page. I know how to read from the webcam and archive it to a file.
But how do I stream it over the web? What is the pipeline for that?
Use the hlssink element from gst-plugins-bad:
gst-launch-1.0 videotestsrc is-live=true ! x264enc ! mpegtsmux ! hlssink
It will generate the playlist and segment files. You need to provide HTTP access to these files; you can use any webserver, for example nginx or Apache.
You can tweak hlssink's parameters to specify the target location, segment count, etc. All options can be listed with:
gst-inspect-1.0 hlssink
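For example, a sketch that serves a webcam (v4l2src, the segment paths under /var/www/html, and the public URL are assumptions):
gst-launch-1.0 v4l2src ! videoconvert ! x264enc tune=zerolatency ! mpegtsmux ! hlssink max-files=5 target-duration=5 location=/var/www/html/segment%05d.ts playlist-location=/var/www/html/playlist.m3u8 playlist-root=http://<server>/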
If you need better low-level control, you could create your own web server with libsoup, manually split the MPEG-TS into segments, and serve your own playlist endpoint.

Compress mpeg stream and send through network gstreamer

I want to build a pipeline that sends an MPEG file over the network with GStreamer 1.0. I've tried some pipelines and examples, but either the element was not known or it could not link two elements.
Can somebody show me an example pipeline, for instance using the UDP protocol, with sender and receiver? Or give me some hints?
I'm currently using Ubuntu 14.04.
It's always better to add the code/script you've tried and the error messages you get. The following works for me, for example.
Sender side: Get raw video from the video file, encode it in H.264, package it in RTP, and dump it to UDP port 5000:
gst-launch-1.0 uridecodebin uri=file://`pwd`/sample.mpg ! x264enc ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5000
Receiver side: Read RTP packets from UDP port 5000, extract the video data (depay in GStreamer terminology), decode into raw video, and play:
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp, encoding-name=H264,payload=96 ! rtph264depay ! decodebin ! autovideosink
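One gotcha: if the receiver starts after the sender, the decoder may sit waiting for the in-band H.264 parameter sets (SPS/PPS). Asking rtph264pay to re-send them periodically via its config-interval property avoids that; for example, on the sender side:
gst-launch-1.0 uridecodebin uri=file://`pwd`/sample.mpg ! x264enc ! h264parse ! rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=5000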