I don’t know if I should apologize for asking, but I’ve spent more than a week looking for a solution without success. I have a Jetson Nano, and with OpenCV I capture and process an image at 4 fps. I need to send this video to a web server so that clients connected to the server can get the video. Everything needs to be written in C++.
Because I need low latency, I tested GStreamer and WebRTC without success. I don’t have any web server ready yet, so I can use any implementation.
Does anyone know where I can find an example implementation of this scheme?
You can use mediasoup to send the stream to the server, which can then forward it over RTP to another endpoint such as GStreamer or FFmpeg.
Here is a recording project where data is sent from the browser -> server -> GStreamer -> file.
mediasoup is written in C++ and has a wrapper for JS.
I had a similar problem and used this example from the official GStreamer WebRTC repo. It's written in Python for Janus Gateway video rooms, but I think it can easily be rewritten in C++ as you need.
In the OpenCV code, I used v4l2loopback as a virtual output device, which then serves as the input for the GStreamer WebRTC example (a sketch of that step follows below).
I hope this approach helps you.
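For illustration, the OpenCV side of that setup could look roughly like this, assuming OpenCV was built with GStreamer support and that /dev/video1 is the v4l2loopback device (both are assumptions; adjust to your setup):

```cpp
// Sketch: write processed OpenCV frames into a v4l2loopback device so that
// a separate GStreamer WebRTC pipeline can read them like a real camera.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);  // the real camera
    if (!cap.isOpened()) return 1;

    const int w = 640, h = 480;  // placeholder frame size
    // fourcc is 0 because the GStreamer backend takes a pipeline, not a codec.
    cv::VideoWriter out(
        "appsrc ! videoconvert ! video/x-raw,format=YUY2 ! v4l2sink device=/dev/video1",
        cv::CAP_GSTREAMER, 0, 4.0 /* ~4 fps */, cv::Size(w, h), true);
    if (!out.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::resize(frame, frame, cv::Size(w, h));
        // ... your per-frame processing goes here ...
        out.write(frame);  // lands in /dev/video1 for the WebRTC example to consume
    }
    return 0;
}
```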
I think there is no need to send it to a web server. In the GStreamer examples [https://github.com/GStreamer/gst-examples], the sendonly example serves a video to a web client using WebRTC. You can modify it to send an OpenCV Mat.
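As a rough sketch of that modification, assuming BGR frames of a fixed size and an appsrc element spliced into the example's pipeline (the helper names below are illustrative, not taken from the example itself):

```cpp
// Sketch: push OpenCV BGR frames into a GStreamer appsrc; downstream you
// would keep the example's videoconvert/encoder/webrtcbin elements.
// Build (assumption): link against gstreamer-1.0, gstreamer-app-1.0 and OpenCV.
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <opencv2/opencv.hpp>

// Call once after creating the pipeline, before pushing any frames.
static void configure_appsrc(GstElement *appsrc, int w, int h, int fps) {
    GstCaps *caps = gst_caps_new_simple("video/x-raw",
        "format", G_TYPE_STRING, "BGR",
        "width", G_TYPE_INT, w,
        "height", G_TYPE_INT, h,
        "framerate", GST_TYPE_FRACTION, fps, 1, NULL);
    gst_app_src_set_caps(GST_APP_SRC(appsrc), caps);
    gst_caps_unref(caps);
}

// Call once per processed frame; advance pts by GST_SECOND/fps each call.
static void push_frame(GstElement *appsrc, const cv::Mat &bgr,
                       GstClockTime pts, int fps) {
    gsize size = bgr.total() * bgr.elemSize();
    GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
    gst_buffer_fill(buf, 0, bgr.data, size);            // copy the Mat's pixels
    GST_BUFFER_PTS(buf) = pts;
    GST_BUFFER_DURATION(buf) = gst_util_uint64_scale(1, GST_SECOND, fps);
    gst_app_src_push_buffer(GST_APP_SRC(appsrc), buf);  // takes ownership of buf
}
```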
Related
I would like to write an RTSP streaming server in C++. Multiple clients will connect to this server to receive the streamed data.
What I understand is that I need to do socket programming in C++ for the client-server architecture.
I know FFmpeg has command-line support for streaming audio/video, but my requirement is to write a client-server socket model in C++.
I had a look at https://www.medialan.de/usecase0001.html
I am also looking at this: https://www.youtube.com/watch?v=MEMzo59CPr8, but I am not sure if it will help me.
For streaming the audio/video data, do I need to use the FFmpeg APIs? If yes, which FFmpeg libraries do I need to use?
I think I will use the GStreamer RTSP server.
GStreamer is easy to use.
I tried a sample example and was able to stream a video over RTSP; a minimal sketch follows below.
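For reference, here is a minimal sketch of such a server using the gst-rtsp-server library; the test-pattern pipeline, mount point, and default port 8554 are placeholders:

```cpp
// Sketch of an RTSP server using gst-rtsp-server.
// Build (assumption): g++ server.cpp $(pkg-config --cflags --libs gstreamer-rtsp-server-1.0)
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    GstRTSPServer *server = gst_rtsp_server_new();  // listens on 8554 by default
    GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);

    // Serve a test pattern encoded as H.264; replace the launch line with
    // your own source (e.g. v4l2src or filesrc) as needed.
    GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();
    gst_rtsp_media_factory_set_launch(factory,
        "( videotestsrc is-live=true ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )");
    gst_rtsp_media_factory_set_shared(factory, TRUE);  // one pipeline shared by all clients

    gst_rtsp_mount_points_add_factory(mounts, "/stream", factory);
    g_object_unref(mounts);

    gst_rtsp_server_attach(server, NULL);
    g_print("Stream ready at rtsp://127.0.0.1:8554/stream\n");
    g_main_loop_run(loop);
    return 0;
}
```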
No, you don’t need ffmpeg to write an RTSP server.
I managed to run the WebRTC peerconnection example, but it does not run in the browser.
I'm trying to find a way to stream both video and audio from the browser to my native program.
Is there any way?
It can be done. WebRTC is designed to work in a peer-to-peer manner between two WebRTC agents (typically a Web Browser). Your native program needs to become the second peer.
If you need to rely on open source components a good starting point is:
OpenSSL for the DTLS key exchange.
libsrtp to encrypt the RTP packets.
ffmpeg to decode the PCM audio from the browser (libvpx if you need to do video).
You'll also need to handle the ICE negotiation, which requires processing STUN messages, and extract the media payloads from the RTP packets. All these steps come after you've settled on a signalling method to exchange the SDP offer and answer between your app and the browser.
As you've probably realised, starting from scratch is a major task. There are probably some commercial libraries that will do the job and save you a lot of pain.
If that doesn't scare you and you still want to make an attempt using open source components, this example "may" help. The sample does the reverse of what you've asked: it sends a video stream to Chrome rather than receiving an audio stream. The useful aspect is the connection negotiation; the sample program is able to get RTP packets flowing, which is often the main problem.
The example also uses Windows Media Foundation, which is Windows-specific, and it takes lots of shortcuts, particularly with the RTP and STUN packet processing.
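To make one of those pieces concrete, here is a hedged sketch of decrypting inbound SRTP with libsrtp2; the helper names are mine, and the 30-byte key+salt is a placeholder that would in practice come out of the DTLS-SRTP handshake (e.g. via OpenSSL's keying-material exporter):

```cpp
// Sketch: minimal libsrtp2 receive-side session for browser media.
#include <srtp2/srtp.h>
#include <cstring>

// key_with_salt: 30 bytes (16-byte AES-128 key + 14-byte salt) from DTLS.
srtp_t create_inbound_session(unsigned char *key_with_salt) {
    srtp_init();  // once per process

    srtp_policy_t policy;
    std::memset(&policy, 0, sizeof(policy));
    srtp_crypto_policy_set_rtp_default(&policy.rtp);    // AES-128-ICM + HMAC-SHA1
    srtp_crypto_policy_set_rtcp_default(&policy.rtcp);
    policy.ssrc.type = ssrc_any_inbound;  // accept whatever SSRC the browser picks
    policy.key = key_with_salt;
    policy.next = NULL;

    srtp_t session = NULL;
    if (srtp_create(&session, &policy) != srtp_err_status_ok) return NULL;
    return session;
}

// Decrypts in place; on success *len shrinks by the auth tag and buf holds
// a plain RTP packet whose payload you can then hand to the decoder.
bool unprotect(srtp_t session, unsigned char *buf, int *len) {
    return srtp_unprotect(session, buf, len) == srtp_err_status_ok;
}
```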
I have a small computer (something like an Arduino or a Raspberry Pi) with Linux, a camera, and GStreamer installed on it.
I need to stream H.264 video from this device to a browser using WebRTC. I also use Node.js as the signaling server.
In simple words, I need to make a WebRTC client out of my device. What is the best way to do this? Can I use the WebRTC Native API for this goal? How can I install it on my small device? Or maybe I just need to play with GStreamer and install some WebRTC plugins for it?
Since you will have to use a signalling server anyway, I would say you should use Janus-Gateway. You mention CentOS for your signalling server; I am not 100% sure whether it will run on CentOS specifically, but I have run it successfully on a Debian Jessie build with just a few dependency installations.
Janus handles the entire call setup with the gateway (signalling and everything), so some port forwarding will probably have to be done so that the SDP exchange can occur (which you would have to worry about with any signalling server).
Install the gateway; there are a few dependencies, but all were simple installations.
Take a look at the janus_streaming plugin. It has a GStreamer example that will stream from a GStreamer pipeline. Also check the streamingtest demo page to see how the JavaScript API works for that plugin.
The plugin listens on the ports given in the configuration file and will accept traffic from any IP address, so I expect you can run a GStreamer pipeline on a different machine on the same network and send it to the plugin (see the sketch after the note below).
NOTE: You will have to modify the SDP that the JavaScript sends to the gateway so that it includes H264 (and probably get rid of all other codecs as well, just to force negotiation). You can do this by accessing the SDP through the jsep object passed to the success case of the createOffer function in the Janus JavaScript API (jsep.sdp).
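As a sketch of feeding the plugin from another machine, a small C++ program can launch the kind of pipeline the plugin expects; the host, port, and payload type below are placeholders that must match the videoport/videopt entries in your streaming-plugin configuration:

```cpp
// Sketch: send RTP/H.264 to the janus_streaming plugin's listening port.
// Build (assumption): g++ feed.cpp $(pkg-config --cflags --libs gstreamer-1.0)
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc is-live=true ! videoconvert ! "
        "x264enc tune=zerolatency ! rtph264pay config-interval=1 pt=96 ! "
        "udpsink host=127.0.0.1 port=5004",  // Janus host + videoport (placeholders)
        &err);
    if (!pipeline) {
        g_printerr("pipeline error: %s\n", err->message);
        g_clear_error(&err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);
    return 0;
}
```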
Another possibility for you is to use the Kurento Media Server (KMS), which has been written on top of GStreamer. I see two possibilities:
You install KMS on an Ubuntu 14.04 box and bridge it with your device, so that the device generates the video stream and sends it to the KMS box. From there, you can transcode it to VP9 and distribute it as a WebRTC stream quite easily using the Kurento client APIs (which may be used from Node.js). The application performing the transcoding will require an RtpEndpoint (receiving video from the device in RTP/H.264) connected to a WebRtcEndpoint (capable of sending the video stream through WebRTC). This option is quite simple to implement because it's the standard way of using KMS. However, you will need to generate the RTP/H.264 stream on the device and an appropriate SDP for it; this can be done using standard GStreamer elements (a sample SDP follows after the disclaimer below).
You try to install KMS on your box directly. This might be more complex because it requires compiling KMS for the specific device, which may require some time investment. In addition, performing the transcoding on the device might be too expensive and could starve its CPU.
Disclaimer: I'm a member of the Kurento development team.
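For illustration, the "appropriate SDP" in the first option could look roughly like this for a send-only RTP/H.264 stream; the IP address, port, and payload type are placeholders to be matched against the RtpEndpoint's negotiated answer:

```
v=0
o=- 0 0 IN IP4 192.0.2.10
s=Device H264 stream
c=IN IP4 192.0.2.10
t=0 0
m=video 5004 RTP/AVP 96
a=rtpmap:96 H264/90000
a=sendonly
```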
You mentioned that you use a Node.js signaling server. Ericsson recently released an open source WebRTC GStreamer element: http://www.openwebrtc.io/, and along with the release they also published a WebRTC demo using Node.js: http://demo.openwebrtc.io:38080/; the code is here: https://github.com/EricssonResearch/openwebrtc-examples/tree/master/server.
For WebRTC on the Raspberry Pi 2 you may want to consider UV4L. It allows you to stream live audio & video from the RPi to any browser on a PC (HTML5).
I would like to be able to stream media content originating from, e.g., a file to a Flash player using RTMP.
I have considered librtmp, though it seems FFmpeg supports RTMP more as a client than as a server; that is, it implements the push/pull models without a server model.
With 'ffserver' in mind: does it support RTMP in the manner described above? Is it possible to expose H.264/AAC content via RTMP using ffserver?
Any help will be appreciated.
Nadav at Sophin
Have you looked into Red5? http://www.red5.org/
I have used CRTMP-Server and have to say it's amazing, and it's C/C++.
http://www.rtmpd.com/
It worked great for me. I used it to send an MPEG-TS stream to a Flash client for a live desktop-capture application.
Basically, I had a DirectShow filter that captured the desktop area, fed it to an H.264 encoder filter, then wrapped the output in a TS container and sent it via TCP to rtmpd. It worked pretty well.
I am implementing a client/server application where video streaming occurs between two computers (in one direction). I would like the server to publish an SDP file when it starts streaming. The client would then download this SDP file and use it to receive the stream. To implement this, it seems I need to include an RTSP server in my server application.
I am planning to use either libVLC or GStreamer for the client. Both are able to get incoming video streams using the info from an SDP file.
Server-side I don't really know where to start. Can anyone recommend a good C++ library that would allow me to create a small RTSP server?
Use the Live555 LGPL library or, for fun, read the RFC and implement one yourself :-)
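If you go the Live555 route, a minimal server modeled on its testOnDemandRTSPServer demo looks roughly like this; the port, mount name, and test.264 input file are placeholders (link against liveMedia, groupsock, BasicUsageEnvironment, and UsageEnvironment):

```cpp
// Sketch: minimal Live555 RTSP server serving a raw H.264 elementary stream.
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>

int main() {
    TaskScheduler *scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment *env = BasicUsageEnvironment::createNew(*scheduler);

    RTSPServer *server = RTSPServer::createNew(*env, 8554);
    if (server == NULL) {
        *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
        return 1;
    }

    // Clients connect to rtsp://<host>:8554/stream
    ServerMediaSession *sms = ServerMediaSession::createNew(
        *env, "stream", "stream", "Session streamed by Live555");
    sms->addSubsession(
        H264VideoFileServerMediaSubsession::createNew(*env, "test.264", False));
    server->addServerMediaSession(sms);

    env->taskScheduler().doEventLoop();  // does not return
    return 0;
}
```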
libcurl offers a simple RTSP example that can be useful here (note that it exercises the client side, which is handy for testing a server).
Take a look at: https://curl.haxx.se/libcurl/c/rtsp.html
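A trimmed sketch of the example's core calls, sending a DESCRIBE to fetch the stream's SDP (the URL is a placeholder):

```cpp
// Sketch: issue an RTSP DESCRIBE with libcurl; the SDP body is written to
// stdout by libcurl's default write handler.
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_ALL);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    const char *uri = "rtsp://127.0.0.1:8554/stream";  // placeholder
    curl_easy_setopt(curl, CURLOPT_URL, uri);
    curl_easy_setopt(curl, CURLOPT_RTSP_STREAM_URI, uri);
    curl_easy_setopt(curl, CURLOPT_RTSP_REQUEST, (long)CURL_RTSPREQ_DESCRIBE);
    CURLcode rc = curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```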