RTSP server for more than one stream (gstreamer) - gstreamer

I'm trying to set up an RTSP server using GStreamer, and I could use some help defining the server.
The concept of the project is:
We have several camera modules (let's say 'cam0' and 'cam1'), and each of these has some video channels (HD and SD) and some audio channels (language0 and language1). The user (RTSP client) should be able to switch between the different video and audio channels. If a user is watching the HD stream with language0, he doesn't want to receive the other streams (to reduce the required bandwidth).
The question is, how should I implement the RTSP server to handle these requirements?
Which of the following proposals is the best? Or if there is a better way to do it, let me know.
Use one RTSP server per camera module, where this server has multiple URIs, like:
server0: rtsp://IP:port/HD
server0: rtsp://IP:port/SD
server0: rtsp://IP:port/lang0
server0: rtsp://IP:port/lang1
Use multiple RTSP servers per camera module and each server has one URI, like:
server0: rtsp://IP:port0/HD
server1: rtsp://IP:port1/SD
server2: rtsp://IP:port2/lang0
server3: rtsp://IP:port3/lang1
Use one RTSP server per camera module and one URI with several substreams, like:
server0: rtsp://IP:port/stream (contains substreams HD, SD, lang0, lang1)
So, is one of these suggestions the right way to implement the RTSP server, so that only the streams being watched have to be on the network?
Notes:
I use gst-rtsp-server for the RTSP server.
I'm using rtspsrc to receive the streams.
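For reference, the client side is essentially something along these lines (the URI is just the placeholder from above, and decodebin is assumed to plug the right depayloader and decoder):
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# rtspsrc negotiates the RTSP session and outputs RTP; decodebin plugs the
# matching depayloader and decoder, autovideosink displays the result.
pipeline = Gst.parse_launch(
    'rtspsrc location=rtsp://IP:port/HD latency=200 ! '
    'decodebin ! videoconvert ! autovideosink')
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)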
Update:
I use a combination of cases 1 and 3. I have two video streams, s1 and s2. For s1 I use case 3 to provide two substreams, hd and sd. I was not able to split those two because they come from the same videosrc, so both are sent if either one is requested.
To get case 1, you have to use two media factories and give each one a different URI. I gave them both a different multicast address and port range.
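A stripped-down sketch of that setup could look like the following (two factories on one server; videotestsrc stands in for the real camera pipelines, and the multicast address pool configuration is left out):
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)
server = GstRtspServer.RTSPServer()
server.set_service('8554')
mounts = server.get_mount_points()

# One media factory per URI; the launch strings stand in for the real
# camera pipelines (cam0 HD and SD in this example).
hd = GstRtspServer.RTSPMediaFactory()
hd.set_launch('( videotestsrc ! video/x-raw,width=1280,height=720 ! '
              'x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )')
hd.set_shared(True)   # all clients of /HD share one pipeline

sd = GstRtspServer.RTSPMediaFactory()
sd.set_launch('( videotestsrc ! video/x-raw,width=640,height=360 ! '
              'x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )')
sd.set_shared(True)

mounts.add_factory('/HD', hd)
mounts.add_factory('/SD', sd)

server.attach(None)
print('Serving rtsp://<IP>:8554/HD and rtsp://<IP>:8554/SD')
GLib.MainLoop().run()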

Related

How can I display or stream the output of OpenCV to an HTML page or some other client application

I want to implement a routine to send or write the OpenCV output frames to an HTML page or any other application using a network protocol such as RTMP or RTSP.
I have tried and searched a lot for this, but I did not find any solution.
The routine is like this:
Reading the frames from an IP camera using the RTSP protocol. (done)
Processing the frames (face detection, etc.). (done)
Sending/showing the frames in the browser or any other client application. (problem)
What have I done so far:
Sending the frames to a server (because the program is running on a different machine), which then sends the frames to the webpage using WebSocket. (This is a very costly process; CPU and RAM usage become very high and the processing stops.)
Second, I tried to use the OpenCV VideoWriter class to open a stream and write the frames into it, but it was not opening the stream.
Now the question is:
Can we use OpenCV's built-in functions to broadcast or write the frames to a stream? If yes, how can that be achieved? If not, how can we implement this routine in a stable way? Is there a better way or framework to use?
Use ffmpeg to generate the RTMP stream, piping the images into the ffmpeg command via stdin:
import subprocess as sp

# ffmpeg reads raw BGR frames from stdin and publishes an FLV/RTMP stream;
# the frame size must match your frames, and [streaming_url] stays a placeholder.
command = ['ffmpeg', '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-s', '640x480',
           '-i', '-', '-f', 'flv', '[streaming_url]']
proc = sp.Popen(command, stdin=sp.PIPE, shell=False)
proc.stdin.write(frame.tobytes())
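A slightly fuller sketch of the same idea, assuming the frames come from cv2.VideoCapture and with [streaming_url] still a placeholder:
import subprocess as sp
import cv2

cap = cv2.VideoCapture('rtsp://camera-address/stream')   # placeholder source
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# ffmpeg consumes raw BGR frames on stdin, encodes them and publishes FLV/RTMP.
command = ['ffmpeg', '-f', 'rawvideo', '-pix_fmt', 'bgr24',
           '-s', '{}x{}'.format(width, height), '-framerate', '25', '-i', '-',
           '-c:v', 'libx264', '-f', 'flv', '[streaming_url]']
proc = sp.Popen(command, stdin=sp.PIPE)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... face detection / other processing on `frame` goes here ...
    proc.stdin.write(frame.tobytes())

proc.stdin.close()
proc.wait()
cap.release()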

How many concurrent rtsp streams can live555 stream over the WAN reliably

I have written an on-demand RTSP server in C++ using live555 and I am able to host an RTSP stream. I then used VLC to connect to the server over the WAN, and the video streams and looks great. Then I went to another computer and connected to the same RTSP stream, and now both videos become choppy.
The data is H.264 compressed and the resolution of the image is 800x600. The symptoms look like there isn't enough bandwidth.
Basically my question is: how many concurrent RTSP connections can be served over the WAN with live555? Has anyone else been able to stream reliably over the WAN using live555?
Thanks in advance.
This is mostly dependent on your WAN up-link bandwidth and your video bit-rate.
Let's try to estimate the bit-rate of your video. A very good explanation can be found here. Assuming a moderate level of motion and 30 fps video, this results in a bit-rate of about 3 Mbps (800 x 600 x 30 x 3 x 0.07) in your case. So if your up-link bandwidth is less than 6 Mbps, you cannot stream 2 videos simultaneously.
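As a quick sanity check, that rule-of-thumb estimate can be reproduced like this:
# width x height x fps x motion factor x 0.07 gives bits per second
width, height, fps, motion = 800, 600, 30, 3
bitrate = width * height * fps * motion * 0.07
print(round(bitrate / 1e6, 1), 'Mbps per stream')          # ~3.0
print(round(2 * bitrate / 1e6, 1), 'Mbps for two viewers')  # ~6.0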
Other than that, live555 doesn't have any hard-coded limitations in this regard.

How to obtain mp3 audio packets for streaming in C/C++

I want to be able to break a song into packets and have access to these individual packets.
The reason for that is that I want to send each individual packet over the network using an experimental network protocol called Named Data Network.
As the packets arrive at the destination I want to play them. So I want to implement a streaming functionality. The only difference is the network layer that I will use. This network layer is not based on IP.
Does anyone know of a C/C++ implementation for breaking a song file into pieces and then playing these packets individually? I looked at GStreamer, but it seems complicated to get individual packets out of its pipeline structure.
I found this reference, which was the closest to what I wanted, but it was not very clear to me: how can I parse audio raw data recorded with gstreamer?
Summarizing the points I need:
Break a song into packets
Play the audio content of a single packet (or a small set of packets).
Thank you very much for the help!
An MP3 file is just a succession of MP3 frames. Each frame is made of a header and a data block.
Splitting the MP3 file into MP3 frames will involve parsing the MP3 file. You can refer to this documentation for a good description of the format.
Note that in the case of the MPEG Layer 3 codec, frames are not independent. In the worst case, 9 input frames may be needed before being able to decode one single frame.
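A rough sketch of walking an MP3 file frame by frame (MPEG-1 Layer III only: no ID3 handling, no CRC check, and it ignores the bit reservoir) could look like this:
# Bit-rate (kbps) and sample-rate tables for MPEG-1 Layer III only.
BITRATES = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 0]
SAMPLE_RATES = [44100, 48000, 32000, 0]

def mp3_frames(data):
    """Yield (offset, frame_bytes) for each MPEG-1 Layer III frame found."""
    i = 0
    while i + 4 <= len(data):
        # 0xFF 0xFA/0xFB = frame sync + MPEG-1 + Layer III
        if data[i] != 0xFF or (data[i + 1] & 0xFE) != 0xFA:
            i += 1                      # not a frame header (e.g. ID3 data)
            continue
        bitrate = BITRATES[data[i + 2] >> 4] * 1000
        sample_rate = SAMPLE_RATES[(data[i + 2] >> 2) & 0x03]
        padding = (data[i + 2] >> 1) & 0x01
        if bitrate == 0 or sample_rate == 0:
            i += 1
            continue
        length = 144 * bitrate // sample_rate + padding
        yield i, data[i:i + length]
        i += length

with open('test.mp3', 'rb') as f:
    frames = list(mp3_frames(f.read()))
print(len(frames), 'frames')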
What I would do instead of this
I guess you could probably ignore most of these details and focus on the streaming problem itself. Here is what I would try to build first:
on the sender side, split a file into packets, and send them one by one using your system. Command example: send_stream test.mp3
on the receiver side, receive the packets and rebuild the original file. Command example: receive_stream test.mp3
Once you have this working fine, modify the receiver program so that it writes the packets, in order, to standard output. This will allow you to redirect stdout to a file:
# sender side did not change
send_stream test.mp3
# receiver side
receive_stream > test.mp3
Then, you can use madplay to play the mp3 while it is received simply by redirecting receive_stream output to madplay:
# madplay - tells madplay to read its input from standard input.
receive_stream | madplay -
For a good mp3 decoder, take a look at MAD.
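If it helps, a minimal sender-side packetizer along those lines might look like the sketch below, where the send_packet callback is a stand-in for whatever your Named Data Network API provides:
import sys

PACKET_SIZE = 1400   # stays below a typical MTU; tune for your transport

def send_stream(path, send_packet):
    """Slice the file at `path` into fixed-size packets, passing each one
    (with a sequence number) to `send_packet` in order."""
    with open(path, 'rb') as f:
        seq = 0
        while True:
            payload = f.read(PACKET_SIZE)
            if not payload:
                break
            send_packet(seq, payload)
            seq += 1

if __name__ == '__main__':
    # Stand-in transport: write the packets to stdout in order, which is
    # exactly what the receiver side is supposed to reproduce.
    send_stream(sys.argv[1],
                lambda seq, payload: sys.stdout.buffer.write(payload))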

Multiple applications using GStreamer

I want to write (but first I want to understand how to do it) applications (more than one) based on the GStreamer framework that would share the same hardware resource at the same time.
For example: there is hardware with HW acceleration for video decoding. I want to simultaneously start two applications that are able to decode different video streams using HW acceleration. Of course I assume that the HW is able to handle such requests and that there is an appropriate driver (but no GStreamer element) for doing this, but how do I write a GStreamer element that would support such resource sharing between separate processes?
I would appreciate any links, suggestions where to start...
You have h/w that can be accessed concurrently. Hence two GStreamer elements accessing it concurrently should work! There is nothing GStreamer-specific here.
Say you wanted to write a decoding element: it is like any other decoding element, provided you access your hardware correctly. Your drivers should take care of the concurrent access.
The starting place is the GStreamer Plugin Writer's Guide.
So you need a single process that controls the HW decoder and decodes streams from multiple sources.
I would recommend building a daemon, possibly itself also based on GStreamer. The gdppay and gdpdepay elements provide quite simple ways to pass data through sockets to the daemon and back. The daemon would wait for connections on a specified port (or Unix socket) and open a virtual decoder for each connection. The video decoder elements in the separate applications would internally connect to the daemon and get the decoded video back.
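As a very rough illustration of that split (ports, the input file, and avdec_h264 as a stand-in for the HW decoder are all placeholders; the two pipelines would really live in separate processes and are shown in one script only for brevity), the GDP plumbing could look like this:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Daemon side: accept GDP-framed H.264 from a client, decode it
# (avdec_h264 stands in for the hardware decoder element) and return the
# decoded frames, again GDP-framed, on a second port.
daemon = Gst.parse_launch(
    'tcpserversrc host=127.0.0.1 port=5000 ! gdpdepay ! h264parse ! '
    'avdec_h264 ! gdppay ! tcpserversink host=127.0.0.1 port=5001')

# Application side: push the encoded stream to the daemon instead of
# decoding locally, and pull the decoded frames back for display.
app = Gst.parse_launch(
    'filesrc location=input.h264 ! h264parse ! gdppay ! '
    'tcpclientsink host=127.0.0.1 port=5000 '
    'tcpclientsrc host=127.0.0.1 port=5001 ! gdpdepay ! '
    'videoconvert ! autovideosink')

daemon.set_state(Gst.State.PLAYING)
app.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()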

Search for i-frame in RTP Packet

I am implementing RTSP in C# using an Axis IP camera. Everything is working fine, but when I try to display the video, the first few frames have lots of green patches. I suspect the issue is that I am not sending the I-frame to the client first.
Hence, I want to know the algorithm required to detect an I-frame in an RTP packet.
When initiating an RTSP session, the server normally starts the RTP stream with config data followed by the first I-frame.
It is conceivable that your Axis camera is set to "always multicast"; in this case the RTSP communication leads to an SDP description which tells the client all the necessary network and streaming details for receiving the multicast stream.
Since the multicast stream is always present, you most probably receive some P- or B-frames first (depending on the GOP size).
You can detect these P/B-frames in your RTP client the same way you detect I-frames, as suggested by Ralf, by identifying them via the NAL unit type. Simply skip all frames in the RTP client until you receive the first I-frame.
From then on you can forward all following frames to the decoder.
Or you have to change your camera settings!
jens.
PS: don't forget that you have fragmentation in your RTP stream; that means that besides the RTP header there is some fragmentation information. Before identifying a frame, you have to reassemble it.
It depends on the video media type. If you take H.264, for instance, you would look at the NAL unit header to check the NAL unit type.
The green patches can indeed be caused by not having received an I-frame first.
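For H.264, a sketch of that check on the RTP payload (covering only single NAL unit packets and FU-A fragments) could look like this:
def starts_idr_frame(rtp_payload):
    """Return True if this RTP payload begins an IDR (I-frame) NAL unit.

    Only single NAL unit packets and FU-A fragments (type 28) are handled;
    STAP-A and the other packetization modes are ignored in this sketch.
    """
    if not rtp_payload:
        return False
    nal_type = rtp_payload[0] & 0x1F        # lower 5 bits of the NAL header
    if nal_type == 28:                      # FU-A fragmentation unit
        if len(rtp_payload) < 2:
            return False
        fu_header = rtp_payload[1]
        start = fu_header & 0x80            # set on the first fragment only
        return bool(start) and (fu_header & 0x1F) == 5
    return nal_type == 5                    # 5 = coded slice of an IDR picture

# In the receive loop: skip packets until the first I-frame shows up, then
# forward everything to the decoder, e.g.:
# if not forwarding and starts_idr_frame(payload):
#     forwarding = True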