RTMP Streaming using ffserver - C++

I would like to be able to stream media content originating from e.g. a file to a Flash player using RTMP.
I have considered librtmp, though it seems FFmpeg supports RTMP more as a client than as a server; that is, it implements the push/pull models without a server model.
With ffserver in mind: does it support RTMP in the above-mentioned manner? Is it possible to expose H.264/AAC content via RTMP using ffserver?
Any help will be appreciated.
Nadav at Sophin

Have you looked into Red5? http://www.red5.org/
I have used CRTMP-Server and have to say it's amazing, and it's C/C++:
http://www.rtmpd.com/
It worked great for me. I used it to send an MPEG-TS stream to a Flash client, for a live desktop-capture application.
Basically, I had a DirectShow filter that captured the desktop area, fed it to an H.264 encoder filter, wrapped the output in a TS container, and fed that via TCP to rtmpd. It worked pretty well.
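For reference, here is a minimal sketch of the last leg of that chain: pushing an MPEG-TS byte stream to rtmpd over plain TCP. It assumes rtmpd/crtmpserver has been configured with an inbound TCP MPEG-TS acceptor (check your rtmpd config for the actual protocol name and port); 127.0.0.1:9999 and capture.ts are placeholders for your setup.

```cpp
// Sketch: feed an MPEG-TS stream to rtmpd over TCP.
// Port, address and file name are placeholders.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9999);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    FILE* ts = fopen("capture.ts", "rb");  // in the real app this came straight from the TS muxer
    if (!ts) { perror("fopen"); return 1; }

    unsigned char pkt[188 * 7];            // TS packets are 188 bytes; send a few per write
    size_t n;
    while ((n = fread(pkt, 1, sizeof pkt, ts)) > 0) {
        if (send(sock, pkt, n, 0) < 0) { perror("send"); break; }
        usleep(10000);                     // crude pacing; a real sender paces by PCR
    }
    fclose(ts);
    close(sock);
    return 0;
}
```

In the live application the read loop is replaced by whatever hands you muxed TS data from the encoder.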

Related

GStreamer with WebRTC, OpenCV-Server-Client

I don't know if I can say "I'm sorry for asking", but I have spent more than a week looking for a solution without success. I have a Jetson Nano, and with OpenCV I capture and process an image at 4 fps. I need to send this video to a web server so that clients connected to the server can get the video. Everything needs to be written in C++.
Because I need low latency, I ran tests with GStreamer and WebRTC, without success. I don't have any web server ready, so I can use any implementation.
Does anyone know where I can find some example implementation with this scheme?
You can use mediasoup to send data to the server, and then forward the stream over RTP to another endpoint like GStreamer or FFmpeg.
Here is a recording project where data is sent from the browser -> server -> GStreamer -> file.
mediasoup is written in C++ and has a wrapper for JS.
I had a similar problem and used an example from the official GStreamer WebRTC repo. It's written in Python for Janus Gateway video rooms, but I think it can easily be rewritten in C++ as you need.
In the OpenCV code, I used v4l2loopback as a virtual output device, which then serves as the input for the GStreamer WebRTC example.
I hope this approach helps you.
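For the OpenCV-to-loopback half, a minimal sketch of what I mean is below. It assumes a v4l2loopback device already exists at /dev/video1 (created e.g. with `modprobe v4l2loopback video_nr=1`); the device path, resolution, and capture source are placeholders.

```cpp
// Sketch: push processed OpenCV frames into a v4l2loopback device,
// which GStreamer can then read like a normal camera.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <opencv2/opencv.hpp>

int main() {
    const int width = 640, height = 480;

    int fd = open("/dev/video1", O_WRONLY);
    if (fd < 0) { perror("open /dev/video1"); return 1; }

    // Announce the format we will be writing (planar YUV 4:2:0 / I420).
    v4l2_format fmt{};
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUV420;
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    fmt.fmt.pix.bytesperline = width;
    fmt.fmt.pix.sizeimage = width * height * 3 / 2;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    cv::VideoCapture cap(0);          // your real camera / frame source
    cv::Mat bgr, i420;
    while (cap.read(bgr)) {
        cv::resize(bgr, bgr, cv::Size(width, height));
        // ... your OpenCV processing on `bgr` goes here ...
        cv::cvtColor(bgr, i420, cv::COLOR_BGR2YUV_I420);
        write(fd, i420.data, width * height * 3 / 2);
    }
    close(fd);
    return 0;
}
```

The WebRTC example can then consume the device with an ordinary v4l2src element (e.g. `v4l2src device=/dev/video1`).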
I think there is no need to send it to a web server. In the GStreamer examples [https://github.com/GStreamer/gst-examples], the sendonly example sends video to a web client using WebRTC. You can modify it to send an OpenCV Mat, as in the sketch below.
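As a sketch of that modification (resolutions and the capture source are placeholders, and the webrtcbin plumbing from the demo is left out so the appsrc part is testable on its own), you can push each Mat into the pipeline through an appsrc:

```cpp
// Sketch: feed OpenCV frames into a GStreamer pipeline via appsrc.
// The autovideosink stands in for the encoder + webrtcbin branch of
// the sendonly demo; link against gstreamer-app-1.0.
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    gst_init(&argc, &argv);

    GstElement* pipeline = gst_parse_launch(
        "appsrc name=src ! videoconvert ! autovideosink", nullptr);
    GstElement* src = gst_bin_get_by_name(GST_BIN(pipeline), "src");

    GstCaps* caps = gst_caps_new_simple("video/x-raw",
        "format", G_TYPE_STRING, "BGR",        // matches cv::Mat's memory layout
        "width", G_TYPE_INT, 640,
        "height", G_TYPE_INT, 480,
        "framerate", GST_TYPE_FRACTION, 4, 1,  // the 4 fps from the question
        nullptr);
    gst_app_src_set_caps(GST_APP_SRC(src), caps);
    gst_caps_unref(caps);
    g_object_set(src, "is-live", TRUE, "format", GST_FORMAT_TIME, nullptr);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    cv::VideoCapture cap(0);                   // replace with your processing loop
    cv::Mat frame;
    guint64 n = 0;
    while (cap.read(frame)) {
        cv::resize(frame, frame, cv::Size(640, 480));
        gsize size = frame.total() * frame.elemSize();
        GstBuffer* buf = gst_buffer_new_allocate(nullptr, size, nullptr);
        gst_buffer_fill(buf, 0, frame.data, size);
        GST_BUFFER_PTS(buf) = gst_util_uint64_scale(n, GST_SECOND, 4);
        GST_BUFFER_DURATION(buf) = gst_util_uint64_scale(1, GST_SECOND, 4);
        ++n;
        if (gst_app_src_push_buffer(GST_APP_SRC(src), buf) != GST_FLOW_OK)
            break;                             // push_buffer takes ownership of buf
    }
    gst_app_src_end_of_stream(GST_APP_SRC(src));
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(src);
    gst_object_unref(pipeline);
    return 0;
}
```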

RTSP Streaming Server C++

I would like to write an RTSP streaming server in C++. Multiple clients will connect to this server to receive the streamed data.
What I understand is that I need to do socket programming in C++ for the client-server architecture.
I know FFmpeg has command-line support for streaming audio/video, but my requirement is to write a client-server socket model in C++.
I had a look at https://www.medialan.de/usecase0001.html
I am also looking at this: https://www.youtube.com/watch?v=MEMzo59CPr8
but I am not sure if this will help me.
For streaming the audio/video data, do I need to use the FFmpeg APIs? If yes, which FFmpeg libraries do I need to use?
I think I will use the GStreamer RTSP server.
GStreamer is easy to use.
I tried a sample example and was able to stream a video over RTSP.
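For reference, here is a minimal sketch along the lines of gst-rtsp-server's test-launch example; the pipeline description and mount point are placeholders, and you build against gstreamer-rtsp-server-1.0.

```cpp
// Minimal GStreamer RTSP server sketch (based on the test-launch example).
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char** argv) {
    gst_init(&argc, &argv);
    GMainLoop* loop = g_main_loop_new(nullptr, FALSE);

    GstRTSPServer* server = gst_rtsp_server_new();       // serves on port 8554 by default
    GstRTSPMountPoints* mounts = gst_rtsp_server_get_mount_points(server);

    // Every client that connects gets a pipeline built from this string;
    // the RTP payloader must be named pay0.
    GstRTSPMediaFactory* factory = gst_rtsp_media_factory_new();
    gst_rtsp_media_factory_set_launch(factory,
        "( videotestsrc ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )");
    gst_rtsp_media_factory_set_shared(factory, TRUE);    // share one pipeline between clients

    gst_rtsp_mount_points_add_factory(mounts, "/test", factory);
    g_object_unref(mounts);

    gst_rtsp_server_attach(server, nullptr);
    g_print("Stream ready at rtsp://127.0.0.1:8554/test\n");
    g_main_loop_run(loop);
    return 0;
}
```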
No, you don’t need ffmpeg to write an RTSP server.

Live streaming from webcam in a browser

I am working on a live-streaming prototype. I have been reading a lot about how live streaming works and about many different approaches, but I still can't find a live-streaming stack that suits my needs...
These are the requirements for my prototype:
1) The video/audio recording must come from a web browser using the webcam. The idea is that the client preferably shouldn't need to install plugins or do anything complicated (maybe installing the Flash Player plugin is acceptable, but only for recording; the viewers should be able to view the stream without plugins).
2) It can't be peer-to-peer, since I also need to store the entire video on my server (or on Amazon S3, for example) for later viewing.
3) The viewers should also be able to watch the stream from their web browsers without installing anything, say Chrome and Firefox for example. We want to use the HTML5 video tag if possible.
4) The prototype should preferably be built without spending money. I have seen that AWS CloudFront and Wowza offer free trials, so we are thinking about using these two services.
5) The prototype only needs to maintain one live stream at a time with two viewers, so there are no restrictions in this regard.
Any suggestions?
I am especially stuck/confused with the uploading/encoding part of the architecture (I am new to streaming, and all the formats/codecs/protocols/technologies are making it really hard to digest).
As of right now, I have come across WebRTC, which apparently allows me to do what I want: record and encode video from the browser using the webcam. But this API only works on HTTPS sites. Are there any alternatives that work on HTTP sites?
The other part I am not completely sure about is the need for an encoding server, for example Wowza Streaming Engine. Why do I need it? Isn't it enough if I use, for example, WebRTC to encode the video and then just send it to the distribution service (AWS CloudFront, for example)? I do understand that an encoding server would allow me to support many different devices, since it would create lots of different encodings and serve many different HTTP protocols, but do I need it for this prototype? I just want to make a single-format (MP4, for example) live stream that can be viewed in two web browsers, that's all; I don't need a variety of formats nor support for different bandwidths or devices.
Based on your requirements, WebRTC is a good way to go.
"This API only works with HTTPS sites. Are there any alternatives that work with HTTP sites?"
No. Currently Firefox is the only browser that allows WebRTC on HTTP, but eventually it will require HTTPS as well.
For this prototype you can go with Wowza WebRTC.
With Wowza, all the streams are delivered from Wowza itself, so it becomes routed WebRTC.
Install Wowza - https://www.wowza.com/docs/how-to-install-and-configure-wowza-streaming-engine
Enable WebRTC - https://www.wowza.com/docs/how-to-use-webrtc-with-wowza-streaming-engine
Download and configure StreamLock, or use a self-signed JKS file - https://www.wowza.com/docs/how-to-request-an-ssl-certificate-from-a-certificate-authority
Download the sample WebRTC pages - https://www.wowza.com/_private/webrtc/
Publish a stream using the publish HTML page and play it through the play HTML page (Chrome, Firefox & Opera are supported).
For MP4 files with WebRTC: you need to enable the transcoder with H.264 & AAC. You also need to enable the option to record all incoming streams in the properties of the application you create for WebRTC (not the DVR). Using the file writer module, save all recorded files to a custom location. Then, using a custom script (Bash, Python), move the transcoded files to an S3 bucket and deliver them through CloudFront.

How to make a streaming relay using GStreamer?

I would like to make some sort of streaming server. I would like it to receive RTSP streams over the network from live sources (e.g. webcam, IP cam, etc.) and then rebroadcast the same stream on my local network under a different URL. I know GStreamer can do this quite well, but I don't know how; I'm quite confused by the way the documentation is written. Can somebody help me?
If you would like to retransmit the video streams using RTSP as well, you can use GStreamer's RTSP server. There are a lot of examples on the Internet of how to use it. The best source of examples is the gst-rtsp-server examples directory:
http://cgit.freedesktop.org/gstreamer/gst-rtsp-server/tree/examples
Since you want to retransmit existing RTSP streams, you'll need the rtspsrc element to receive the remote streams, as in the sketch below.
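Here is a sketch of such a relay built on gst-rtsp-server. The camera URL is a placeholder, and it assumes the source sends H.264, so the RTP payload is simply depayloaded and re-payloaded with no transcoding.

```cpp
// Sketch: receive a remote RTSP stream with rtspsrc and re-serve it locally.
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char** argv) {
    gst_init(&argc, &argv);
    GMainLoop* loop = g_main_loop_new(nullptr, FALSE);

    GstRTSPServer* server = gst_rtsp_server_new();    // port 8554 by default
    GstRTSPMountPoints* mounts = gst_rtsp_server_get_mount_points(server);

    GstRTSPMediaFactory* factory = gst_rtsp_media_factory_new();
    // rtsp://192.168.0.10/stream is a placeholder for your source camera.
    gst_rtsp_media_factory_set_launch(factory,
        "( rtspsrc location=rtsp://192.168.0.10/stream latency=100 "
        "! rtph264depay ! rtph264pay name=pay0 pt=96 )");
    gst_rtsp_media_factory_set_shared(factory, TRUE); // one source pipeline for all clients

    gst_rtsp_mount_points_add_factory(mounts, "/relay", factory);
    g_object_unref(mounts);

    gst_rtsp_server_attach(server, nullptr);
    g_print("Relaying at rtsp://<this-host>:8554/relay\n");
    g_main_loop_run(loop);
    return 0;
}
```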
I think you are looking for something like this: https://github.com/jayridge/rtsprelay. It configures one RTSP server that receives clients on two URLs: a record URL and a play URL.
This example uses a dynamic form:
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1454
rtsp://server/path?uri=encoded-URI
You URL-encode the destination URI and add a path under which the camera should be registered. The first time you connect it will take some time; after that, the sessions are reused.
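For illustration, a small sketch of building such a URL with GLib's escaping helper; the server path and camera URI below are placeholders.

```cpp
// Sketch: constructing the dynamic mount URL for the relay above.
#include <glib.h>

int main() {
    const char* camera = "rtsp://192.168.0.10:554/h264";
    gchar* escaped = g_uri_escape_string(camera, nullptr, FALSE);
    gchar* url = g_strdup_printf("rtsp://server/path?uri=%s", escaped);
    g_print("%s\n", url);   // hand this URL to your RTSP client
    g_free(url);
    g_free(escaped);
    return 0;
}
```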

How to implement a tiny RTSP server?

I am implementing a client/server application where video is streamed between two computers (in one direction). I would like the server to publish an SDP file when it starts streaming. The client would then download this SDP file and use it to receive the stream. To implement this, it seems I need to include an RTSP server in my server application.
I am planning to use either libVLC or GStreamer for the client. Both can receive incoming video streams using the info from an SDP file.
Server-side, I don't really know where to start. Can anyone recommend a good C++ library that would allow me to create a small RTSP server?
Use the Live555 LGPL library, or, for fun, read the RFC and implement one yourself :-)
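To give an idea of the scale involved, here is a minimal sketch modeled on Live555's testOnDemandRTSPServer example, serving a raw H.264 elementary-stream file; the port number and file name are placeholders. Note that with RTSP the client obtains the SDP via DESCRIBE, so you don't need to publish a separate SDP file.

```cpp
// Minimal Live555 RTSP server sketch. Link against liveMedia, groupsock,
// BasicUsageEnvironment and UsageEnvironment.
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>

int main() {
    TaskScheduler* scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

    RTSPServer* server = RTSPServer::createNew(*env, 8554);
    if (server == nullptr) {
        *env << "Failed to create server: " << env->getResultMsg() << "\n";
        return 1;
    }

    // One stream, reachable as rtsp://<host>:8554/cam
    ServerMediaSession* sms = ServerMediaSession::createNew(
        *env, "cam", "cam", "Session streamed from a raw H.264 file");
    sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(
        *env, "test.264", False /*reuseFirstSource*/));
    server->addServerMediaSession(sms);

    char* url = server->rtspURL(sms);
    *env << "Play this stream using the URL: " << url << "\n";
    delete[] url;

    env->taskScheduler().doEventLoop();  // does not return
    return 0;
}
```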
libcurl ships a simple RTSP client example that can be useful for exercising the server side.
Take a look at: https://curl.haxx.se/libcurl/c/rtsp.html