I have an audio processing server, and I'd like to be able to connect to it via WebRTC.
The native library from Google seems suitable for that (from looking at the peerconnection example): https://webrtc.org/native-code/native-apis/
But the library relies too heavily on the audio devices: it opens them behind the scenes. I've managed to grab the incoming audio by attaching my own AudioTrackSinkInterface to the stream, but I haven't yet found how to inject audio into the outbound stream. And these hacks don't avoid opening the devices anyway.
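For reference, the receive-side hack looks roughly like this (a minimal sketch against the native API; the class and names below are my own, and the include path varies between library revisions):

```cpp
#include "api/media_stream_interface.h"  // "api/mediastreaminterface.h" in older revisions

// Receives the decoded PCM of a remote audio track. In my case OnData
// would forward the samples to the audio processing server.
class PcmCapture : public webrtc::AudioTrackSinkInterface {
 public:
  void OnData(const void* audio_data, int bits_per_sample, int sample_rate,
              size_t number_of_channels, size_t number_of_frames) override {
    // audio_data holds number_of_frames * number_of_channels samples.
  }
};

// Given a remote track (webrtc::AudioTrackInterface*):
//   static PcmCapture capture;
//   track->AddSink(&capture);
```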
How to do it cleanly?
Related
I managed to run the WebRTC peerconnection example, but it is not running in the browser.
I'm trying to find a way to stream both video and audio from browser to my native program.
Is there any way?
It can be done. WebRTC is designed to work in a peer-to-peer manner between two WebRTC agents (typically a Web Browser). Your native program needs to become the second peer.
If you need to rely on open source components, a good starting point is:
OpenSSL for the DTLS key exchange.
libsrtp to encrypt the RTP packets.
ffmpeg to decode the browser's audio to PCM (libvpx if you need to do video).
You'll also need to handle the ICE negotiation, which requires processing STUN messages, and extract the media payloads from the RTP packets. All of these steps come after you've settled on a signalling method to exchange the SDP offer and answer between your app and the browser.
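To give a flavour of one of those pieces, here is a hedged sketch of the libsrtp step, assuming the 30-byte AES_CM_128_HMAC_SHA1_80 keying material has already been exported from the DTLS handshake (e.g. via OpenSSL's keying material exporter); the function name is illustrative:

```cpp
#include <cstdint>
#include <srtp2/srtp.h>

// Decrypt one SRTP packet in place. `key` is the keying material from the
// DTLS-SRTP exporter (assumption); `pkt`/`len` hold the encrypted packet.
// A real implementation would keep the session around instead of creating
// one per packet.
bool unprotect_srtp(uint8_t* key, uint8_t* pkt, int* len) {
  static const bool initialized = (srtp_init() == srtp_err_status_ok);
  if (!initialized) return false;

  srtp_policy_t policy{};
  srtp_crypto_policy_set_rtp_default(&policy.rtp);
  srtp_crypto_policy_set_rtcp_default(&policy.rtcp);
  policy.ssrc.type = ssrc_any_inbound;  // accept any inbound SSRC
  policy.key = key;

  srtp_t session = nullptr;
  if (srtp_create(&session, &policy) != srtp_err_status_ok) return false;

  // On success, pkt holds the plain RTP packet and *len its new length;
  // the payload can then go to the depacketiser and on to ffmpeg.
  const bool ok = (srtp_unprotect(session, pkt, len) == srtp_err_status_ok);
  srtp_dealloc(session);
  return ok;
}
```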
As you've probably realised, starting from scratch is a major task. There are probably some commercial libraries that will do the job and save you a lot of pain.
If that doesn't scare you and you still want to attempt it with open source components, this example may help. The sample does the reverse of what you've asked: it sends a video stream to Chrome rather than receiving an audio stream. The useful part is the connection negotiation; the sample program is able to get RTP packets flowing, which is often the main problem.
The example also uses Windows Media Foundation, which is Windows specific, and it takes lots of shortcuts, particularly in the RTP and STUN packet processing.
Summary: How do I stream high quality video using WebRTC native?
I have an h264 stream that's 1920x1080 at about 30fps. I can currently stream this from a server on localhost to a native client on localhost just fine.
I wrote a WebRTC server using Google's WebRTC native library. I've written a VideoEncoder and VideoEncoderFactory that take frames consisting of already-encoded data and broadcast them over a video track. Using this I can send my h264 stream to the WebRTC server over a pipe and see the video stream in a browser.
However, any time something moves, the video gets corrupted. It continues to play but is full of artifacts. Eventually I discovered that WebRTC is dropping some of my frames. When I attach a sequentially increasing ID to each frame before passing it to rtc::AdaptedVideoTrackSource::OnFrame, and log the same ID in webrtc::VideoEncoder::Encode, I can see that some of my frames simply disappear.
This kind of makes sense: I'm trying to stream high-quality video over something meant for video chat, and lowering my framerate fixes the corruption. However, I'm not asking the WebRTC library to do much; it's just forwarding already-encoded data to a client on localhost. I have a native app that does this fine, and I've seen one browser WebRTC client that can do it. Is there a field in the SDP or some configuration change that will allow me to stream my video?
The solution was in How to control bandwidth in WebRTC video call?.
I had heard about changing the offer SDP but dismissed it, because I was told that the browser accepts unlimited bandwidth by default and that you'd only need to do this if you wanted to limit bandwidth. However, adding "b=AS:<high number>" has fixed all of my problems.
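In native code, that amounts to editing the SDP string between CreateOffer and SetLocalDescription. A minimal sketch, assuming the offer uses CRLF line endings and that the chosen value (10000 kbit/s here is arbitrary) is high enough for the stream:

```cpp
#include <string>

// Insert a "b=AS:<kbps>" bandwidth line into the video media section of
// an SDP blob. Strictly, b= belongs right after any c= line in that
// section, but placing it directly after the m=video line works with the
// browsers I know of.
std::string raise_video_bandwidth(std::string sdp, int kbps = 10000) {
  const size_t m = sdp.find("m=video");
  if (m == std::string::npos) return sdp;  // no video section
  const size_t eol = sdp.find("\r\n", m);
  if (eol == std::string::npos) return sdp;
  sdp.insert(eol + 2, "b=AS:" + std::to_string(kbps) + "\r\n");
  return sdp;
}
```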
So I have, on one hand, an embedded device with a camera running OpenCV, and on the other hand a C++ (Qt) GUI. I would like to connect the two, i.e.:
"stream" all the output image frames/video from OpenCV to my remote C++ GUI
send commands from my C++ GUI to the embedded device
How can I do this, and what possibilities do I have? I was thinking about sockets, but I don't know whether that is the easiest way to stream the image frames from OpenCV to my Qt GUI.
Thank you
You should give us more details about what you're trying to achieve.
You say "stream [...] to my remote C++ GUI": do you mean sending the data over a cabled connection? over a LAN network? over the Internet?
Depending on the answer, this changes your system's architecture quite a bit, especially if you want to stream the data over the Internet. If your use case implies a LAN, you can easily set up a peer-to-peer connection between the embedded device and the C++ app to send data. However, it's much more complicated if you want to send data over the Internet, because it is difficult to create a peer-to-peer connection if you don't have static IPs (which I'm assuming you do not have). You would need a server (which can be written with Qt as well) to work as a relay for sending data from the device to your C++ app.
Do you need actual video streaming (at 25 fps), or is a low refresh rate (0.5-1 fps) sufficient?
(I'm making the assumption you want to send data over a network)
Because if a low image rate is sufficient, using WebSockets to send images on a regular basis might just do the trick.
Otherwise, you'll need to set up a UDP connection with a video buffer.
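For the low-rate WebSocket option, a minimal sketch of the device side using Qt and OpenCV (the function name and JPEG quality are my own choices; it assumes `socket` is already connected to a WebSocket server in the GUI):

```cpp
#include <QtWebSockets/QWebSocket>
#include <opencv2/imgcodecs.hpp>
#include <vector>

// JPEG-encode one OpenCV frame and push it over an already-connected
// QWebSocket as a binary message. The GUI side can decode it with
// QImage::fromData or cv::imdecode.
void sendFrame(QWebSocket& socket, const cv::Mat& frame) {
  std::vector<uchar> jpeg;
  cv::imencode(".jpg", frame, jpeg, {cv::IMWRITE_JPEG_QUALITY, 80});
  socket.sendBinaryMessage(
      QByteArray(reinterpret_cast<const char*>(jpeg.data()),
                 static_cast<int>(jpeg.size())));
}
```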
Hope this helps!
I have a small computer (something like an Arduino or a Raspberry Pi) with Linux, a camera, and gstreamer installed on it.
I need to stream h264 video from this device to a browser using WebRTC technology. Also, I use NodeJS as the signaling server.
In simple words, I need to make my device act as a WebRTC client. What is the best way to do this? Can I use the WebRTC Native API for this goal? How can I install it on my small device? Or maybe I just need to play with my gstreamer setup and install some WebRTC plugins for it?
Since you will have to use a signalling server anyway, I would say you should use Janus-Gateway. You mention CentOS for your signalling server; I am not 100% sure it will run on CentOS specifically, but I have run it successfully on a Debian Jessie build with just a few dependency installations.
Janus handles the entire call setup with the gateway (signalling and everything). So, some port forwarding will probably have to be done so that the SDP exchange can occur (which you would have to worry about with any signalling server).
Install the gateway; there are a few dependencies, but all were simple installations.
Take a look at the janus_streaming plugin. It has a gstreamer example that will stream from a gstreamer pipeline. Also, look at the streamingtest demo page to see how the JavaScript API works for that plugin.
The plugin listens on the ports given in the configuration file and will accept traffic from any IP address. So, I expect you can run a gstreamer pipeline on a different machine on the same network and send it to the plugin, along the lines of the sketch below.
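A hedged sketch of such a pipeline using the GStreamer C API (the host and port are assumptions; the port must match the videoport in your janus.plugin.streaming configuration):

```cpp
#include <gst/gst.h>

// Capture from a V4L2 camera, H.264-encode, and send RTP to the port the
// Janus streaming plugin is listening on (5004 matches the sample config,
// if I recall correctly; check your own config file).
int main(int argc, char** argv) {
  gst_init(&argc, &argv);
  GError* err = nullptr;
  GstElement* pipeline = gst_parse_launch(
      "v4l2src ! videoconvert ! x264enc tune=zerolatency "
      "! rtph264pay config-interval=1 pt=96 "
      "! udpsink host=127.0.0.1 port=5004",  // replace host with the Janus box
      &err);
  if (pipeline == nullptr) {
    g_printerr("Failed to build pipeline: %s\n", err->message);
    return 1;
  }
  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  g_main_loop_run(g_main_loop_new(nullptr, FALSE));  // run until killed
  return 0;
}
```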
NOTE: You will have to modify the SDP that the JavaScript sends to the gateway so that it includes H264 (and probably get rid of all the other codecs as well, just to force negotiation). You can do this by accessing the SDP through the jsep object passed to the success case of the createOffer function in the Janus JavaScript API (jsep.sdp).
Another possibility for you is the Kurento Media Server (KMS), which has been written on top of GStreamer. I see two possibilities:
You install KMS on an Ubuntu 14.04 box and bridge it with your device, so that the device generates the video stream and sends it to the KMS box. From there, you can transcode it to VP9 and distribute it as a WebRTC stream quite easily using the Kurento client APIs (which may be used from Node.js). The application doing the transcoding will require an RtpEndpoint (receiving video from the device as RTP/H.264) connected to a WebRtcEndpoint (capable of sending the video stream through WebRTC). This option is quite simple to implement because it's the standard way of using KMS. However, you will need to generate the RTP/H.264 stream on the device, plus an appropriate SDP for it (this can be done using standard GStreamer elements).
You try to install KMS on your box directly. This might be more complex because it requires compiling KMS for the specific device, which may require some time investment. In addition, performing the transcoding on the device might be too expensive and could starve its CPU.
Disclaimer: I'm a member of the Kurento development team.
You mentioned that you use a NodeJS signaling server. Ericsson recently released an open source WebRTC GStreamer element, http://www.openwebrtc.io/, and along with the release they also published a WebRTC demo using Node.js: http://demo.openwebrtc.io:38080/; the code is here: https://github.com/EricssonResearch/openwebrtc-examples/tree/master/server.
For WebRTC on a Raspberry Pi 2 you may want to consider UV4L. It allows you to stream live audio and video from the RPi to any browser on a PC (HTML5).
I am implementing a client/server application where video streaming occurs between two computers (in one direction). I would like the server to publish an SDP file when it starts streaming. The client would then be able to download this SDP file and use it to receive the stream. To implement this, it seems I need to include an RTSP server in my server application.
I am planning to use either libVLC or GStreamer for the client. Both are able to get incoming video streams using the info from an SDP file.
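For the client side, a minimal libVLC sketch (against the 3.x C API; "session.sdp" is a placeholder for the downloaded SDP file):

```cpp
#include <vlc/vlc.h>
#include <unistd.h>

// Open a local SDP file and play the stream it describes.
int main() {
  libvlc_instance_t* vlc = libvlc_new(0, nullptr);
  libvlc_media_t* media = libvlc_media_new_path(vlc, "session.sdp");
  libvlc_media_player_t* player = libvlc_media_player_new_from_media(media);
  libvlc_media_release(media);

  libvlc_media_player_play(player);
  sleep(60);  // keep playing for a minute in this toy example

  libvlc_media_player_stop(player);
  libvlc_media_player_release(player);
  libvlc_release(vlc);
  return 0;
}
```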
Server-side I don't really know where to start. Can anyone recommend a good C++ library that would allow me to create a small RTSP server?
Use the LGPL Live555 library, or, for fun, read the RFC and implement one yourself :-)
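A minimal sketch of a Live555 RTSP server, modelled on the library's testOnDemandRTSPServer example (the stream name, port, and "test.264" file are placeholders; in your case the source would be your encoder's output instead of a file):

```cpp
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>

// Serve an H.264 elementary-stream file at rtsp://<host>:8554/stream.
int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  RTSPServer* server = RTSPServer::createNew(*env, 8554);
  if (server == nullptr) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    return 1;
  }

  ServerMediaSession* sms = ServerMediaSession::createNew(
      *env, "stream", "stream", "Session streamed by this server");
  sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(
      *env, "test.264", /*reuseFirstSource=*/True));
  server->addServerMediaSession(sms);

  env->taskScheduler().doEventLoop();  // does not return
  return 0;
}
```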
The libcurl library offers a simple example that can be useful for the server side.
Take a look at: https://curl.haxx.se/libcurl/c/rtsp.html