I'm recording an audio conference using RestComm and the Telestax MediaServer. I would like to forward the RTP data of that conference so I can do some real-time processing on it with GStreamer. I took a look at the documentation but didn't find an easy way to do this with RestComm.
Some guidance or documentation here would be welcome so I can start working on it (configuration, or extending RestComm if needed).
Did you try joining a muted participant to the conference? This muted participant would be your SIP + GStreamer client, which could do the real-time processing on the media coming from the audio conference.
Related
I am using Twilio Programmable Video and trying to pipe a remote participant's audio in real time to the Google Cloud Media Translation client.
There is sample code showing how to use the Google Cloud Media Translation client with a microphone here.
What I am trying to accomplish is that, instead of using a microphone and node-record-lpcm16, I want to pipe what I am getting from Twilio's AudioTrack to the Google Cloud Media Translation client. According to this doc,
Tracks represent the individual audio, data, and video media streams that are shared within a Room.
Also, according to this doc, AudioTrack contains an audio MediaStreamTrack. I am guessing this can be used to extract the audio and pipe it somewhere else.
What's the best way of tackling this problem?
Twilio developer evangelist here.
With the MediaStreamTrack you can compose it back into a MediaStream object and then pass it to a MediaRecorder. When you start the MediaRecorder it will receive dataavailable events, each of which contains a chunk of audio in the webm format. You can then pipe those chunks elsewhere to do the translation. I wrote a blog post on recording using the MediaRecorder, which should give you a better idea of how the MediaRecorder works, but you will have to complete the work to stream the audio chunks to the server to be translated.
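As a rough sketch of that wiring (assuming audioTrack is the Twilio RemoteAudioTrack you receive, e.g. from a trackSubscribed event, and the WebSocket URL is just a placeholder for whatever transport you use to reach your server):

```ts
// Wrap the track's MediaStreamTrack in a MediaStream so MediaRecorder can consume it.
const stream = new MediaStream([audioTrack.mediaStreamTrack]);
const recorder = new MediaRecorder(stream, { mimeType: "audio/webm" });

// Placeholder transport: any channel to your server works (WebSocket, fetch, etc.).
const socket = new WebSocket("wss://your-server.example.com/translate");

recorder.ondataavailable = (event: BlobEvent) => {
  // Each event carries a webm-encoded audio chunk.
  if (event.data.size > 0) {
    socket.send(event.data);
  }
};

// Emit a dataavailable event roughly every 500 ms.
recorder.start(500);
```

On the server you would still need to transcode those webm chunks into a format the Media Translation API accepts (such as linear PCM) before streaming them to Google.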
I don't know if I can say "I'm sorry for asking", but I have spent more than a week looking for a solution without success. I have a Jetson Nano, and with OpenCV I capture and process images at 4 fps. I need to send this video to a web server so that clients connected to the server can get the video. Everything needs to be written in C++.
Because I need low latency, I tested with GStreamer and WebRTC without success. I don't have any web server ready, so I can use any implementation.
Does anyone know where I can find an example implementation of this scheme?
You can use mediasoup to send the media to the server, which can then forward the stream over RTP to another endpoint like GStreamer or FFmpeg.
Here is a recording project where data is sent from the browser -> server -> GStreamer -> file.
mediasoup is written in C++ and has a JavaScript wrapper.
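As a rough server-side sketch of that idea (based on mediasoup v3's documented API; the codec details, IPs, and ports are illustrative assumptions, and it presumes you already have a Router and a Producer created when the browser published its track over a WebRtcTransport):

```ts
import { types } from "mediasoup";

// Forward a browser-produced track to GStreamer over plain RTP.
// `router` and `producer` are assumed to exist already; the IPs/ports below
// are placeholders for wherever GStreamer is listening.
async function forwardToGStreamer(router: types.Router, producer: types.Producer) {
  // A PlainTransport speaks plain RTP/RTCP (no ICE/DTLS), which GStreamer's
  // udpsrc (or ffmpeg) can read directly.
  const transport = await router.createPlainTransport({
    listenIp: "127.0.0.1",
    rtcpMux: false,
    comedia: false,
  });

  // Point mediasoup at the ports GStreamer is listening on.
  await transport.connect({ ip: "127.0.0.1", port: 5004, rtcpPort: 5005 });

  // Consuming the producer onto the plain transport makes mediasoup push its
  // RTP packets to 127.0.0.1:5004.
  const consumer = await transport.consume({
    producerId: producer.id,
    rtpCapabilities: router.rtpCapabilities,
  });

  return { transport, consumer };
}
```

On the GStreamer side you would then build a udpsrc pipeline whose caps match the consumer's rtpParameters (payload type and codec), which is essentially what the recording project linked above does.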
I had a similar problem and used this example from the official GStreamer WebRTC repo. It's written in Python for Janus Gateway video rooms, but I think it can easily be rewritten in C++ as you need.
In the OpenCV code, I used v4l2loopback as a virtual output device, which then serves as the input for the GStreamer WebRTC example.
I hope this approach helps you.
I don't think you need to send it to a web server. In the GStreamer examples [https://github.com/GStreamer/gst-examples], the sendonly example sends video to a web client using WebRTC. You can modify it to send an OpenCV Mat.
I managed to run the WebRTC peerconnection example, but it is not running in the browser.
I'm trying to find a way to stream both video and audio from the browser to my native program.
Is there any way?
It can be done. WebRTC is designed to work in a peer-to-peer manner between two WebRTC agents (typically web browsers). Your native program needs to become the second peer.
If you need to rely on open source components, a good starting point is:
OpenSSL for the DTLS key exchange.
libsrtp to encrypt the RTP packets.
ffmpeg to decode the PCM audio from the browser (libvpx if you need to do video).
You'll also need to handle the ICE negotiation, which requires processing STUN messages, and extract the media payloads from the RTP packets. All of these steps come after you've settled on a signalling method to exchange the SDP offer and answer between your app and the browser.
As you've probably realised, starting from scratch is a major task. There are probably some commercial libraries that will do the job and save you a lot of pain.
If that doesn't scare you and you still want to make an attempt using open source components, this example "may" help. The sample does the reverse of what you've asked: it sends a video stream to Chrome rather than receiving an audio stream. The useful aspect is the connection negotiation; the sample program is able to get RTP packets flowing, which is often the main problem.
The example also uses Windows Media Foundation, which is Windows specific, and it takes a lot of shortcuts, particularly with the RTP and STUN packet processing.
Summary: How do I stream high quality video using WebRTC native?
I have an h264 stream that's 1920x1080 at about 30fps. I can currently stream this from a server on localhost to a native client on localhost just fine.
I wrote a WebRTC server using Google's WebRTC native library. I've written a VideoEncoder and VideoEncoderFactory that take frames consisting of already encoded data and broadcast them over a video track. Using this, I can send my h264 stream to the WebRTC server over a pipe and see the video stream in a browser.
However, any time something moves, the video gets corrupted. It continues to play but is full of artifacts. Eventually I discovered that WebRTC is dropping some of my frames. When I attach a sequentially increasing ID to each frame before passing it to rtc::AdaptedVideoTrackSource::OnFrame, and log the same ID in webrtc::VideoEncoder::Encode, I can see that some of my frames simply disappear.
This kind of makes sense: I'm trying to stream high quality video over something meant for video chat, and lowering my framerate fixes the corruption. However, I'm not asking the WebRTC library to do a lot; it's just forwarding already encoded data to a client on localhost. I have a native app that does this fine, and I've seen one browser WebRTC client that can do this. Is there a field in the SDP or some configuration change that will allow me to stream my video?
This was the solution: How to control bandwidth in WebRTC video call?
I had heard about changing the offer SDP but dismissed it, because I was told that the browser will accept unlimited bandwidth by default and that you'd only need to do this if you want to limit bandwidth. However, adding "b=AS:" with a high number has fixed all of my problems.
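For reference, a minimal sketch of that SDP munging (shown here as browser-flavoured TypeScript; in a native client you would apply the same string edit to the SDP from CreateOffer before SetLocalDescription, and the 20000 kbps figure is only an example):

```ts
// Insert a "b=AS:<kbps>" line into each m=video section of an SDP blob.
// Per RFC 4566 the b= line belongs right after the media section's c= line.
function setVideoBandwidth(sdp: string, kbps: number): string {
  const out: string[] = [];
  let inVideo = false;
  for (const line of sdp.split("\r\n")) {
    out.push(line);
    if (line.startsWith("m=")) inVideo = line.startsWith("m=video");
    if (inVideo && line.startsWith("c=")) out.push(`b=AS:${kbps}`);
  }
  return out.join("\r\n");
}

async function createHighBandwidthOffer(pc: RTCPeerConnection) {
  const offer = await pc.createOffer();
  offer.sdp = setVideoBandwidth(offer.sdp!, 20000); // allow up to ~20 Mbps
  await pc.setLocalDescription(offer);
  return offer;
}
```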
I'm developing an application that sends MPEG video over an IP network using the RTP protocol.
In order to test it, I'm looking for a software tool that can measure network jitter, recognize RTP packet reordering events, and show the results as a graph.
Any help is highly appreciated.
You can do a trace and then analyze it in Wireshark. Probably it won't recognize the RTP streams at first (it doesn't know the port number), but you can select one packet and choose 'Decode As.. RTP'. After that, go to 'Telephony / RTP / Stream Analysis' and you will get comprehensive RTP statistics.
The solution is to use Wireshark: open Main menu -> Telephony -> RTP -> Stream Analysis...
More information is here: http://wiki.wireshark.org/RTP_statistics
@Dima, if you are looking for a software tool capable of measuring network jitter, I suggest you check out http://www.netrounds.com. This is a cloud-controlled software solution for IPTV MPEG transport monitoring in line with ETR 101 290. You can measure both PCR and RTP jitter.
RTP jitter will show any network jitter problems in your transport path, whereas PCR jitter will also include any jitter introduced by encoding/transcoding.
There is a free version of Netrounds available that supports monitoring of two IPTV MPEG transport streams (channels) concurrently. Channels are received by the probe/agent via IGMP join messages sent to the corresponding multicast groups.
DISCLAIMER: I am affiliated with Netrounds.