How to change the microphone source dynamically in GStreamer

I have a client in GStreamer which is sending its audio to another client. I want the sending client to be able to switch between different audio sources (microphones on the device) dynamically.
I am using wasapisrc to get the audio stream from the microphone.
I tried changing the device property of wasapisrc (while the connection is established), but it didn't work.

You can use the input-selector element to switch between input sources:
https://gstreamer.freedesktop.org/documentation/coreelements/input-selector.html
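
For illustration, a rough sketch of that approach in C++ against the GStreamer API (the device strings, the autoaudiosink placeholder, and the element names are assumptions, not part of the original question): both wasapisrc elements stay in the pipeline, and switching is done by pointing the selector's active-pad property at the other sink pad.

#include <gst/gst.h>

// Sketch: two wasapisrc elements feed an input-selector; switching microphones
// is done by pointing the selector's "active-pad" property at another sink pad.
int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  GstElement *pipeline = gst_pipeline_new("mic-switch");
  GstElement *mic1 = gst_element_factory_make("wasapisrc", "mic1");
  GstElement *mic2 = gst_element_factory_make("wasapisrc", "mic2");
  GstElement *selector = gst_element_factory_make("input-selector", "sel");
  GstElement *sink = gst_element_factory_make("autoaudiosink", "out");  // placeholder for the real send path

  // g_object_set(mic1, "device", "<device-id-1>", NULL);  // hypothetical device strings
  // g_object_set(mic2, "device", "<device-id-2>", NULL);

  gst_bin_add_many(GST_BIN(pipeline), mic1, mic2, selector, sink, NULL);

  // Request one selector sink pad per source and link everything.
  // (On GStreamer < 1.20 use gst_element_get_request_pad instead.)
  GstPad *sel_sink1 = gst_element_request_pad_simple(selector, "sink_%u");
  GstPad *sel_sink2 = gst_element_request_pad_simple(selector, "sink_%u");
  GstPad *src1 = gst_element_get_static_pad(mic1, "src");
  GstPad *src2 = gst_element_get_static_pad(mic2, "src");
  gst_pad_link(src1, sel_sink1);
  gst_pad_link(src2, sel_sink2);
  gst_element_link(selector, sink);

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  // Later, while the pipeline is playing, switch to the second microphone:
  g_object_set(selector, "active-pad", sel_sink2, NULL);

  // ... run a GMainLoop here; pad/element cleanup omitted for brevity ...
  return 0;
}

The switch happens at runtime without rebuilding the pipeline, so there is no need to change the device property of a source that is already live.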

Related

Native WebRTC dropping frames

Summary: How do I stream high-quality video using native WebRTC?
I have an h264 stream that's 1920x1080 at about 30fps. I can currently stream this from a server on localhost to a native client on localhost just fine.
I wrote a WebRTC server using Google's WebRTC native library. I've written a VideoEncoder and VideoEncoderFactory that take frames consisting of already encoded data and broadcast them over a video track. Using this I can send my h264 stream to the WebRTC server over a pipe, and I can see the video stream in a browser.
However, any time something moves, the video gets corrupted. It continues to play but is full of artifacts. Eventually I discovered that WebRTC is dropping some of my frames. When I attach a sequentially increasing ID to each frame before I pass it to rtc::AdaptedVideoTrackSource::OnFrame and log the same ID in webrtc::VideoEncoder::Encode, I can see that some of my frames simply disappear.
This kind of makes sense: I'm trying to stream high-quality video over something meant for video chat, and lowering my framerate fixes the corruption. However, I'm not asking the WebRTC library to do a lot; it's just forwarding already encoded data to a client on localhost. I have a native app that does this fine, and I've seen one browser WebRTC client that can do this. Is there a field in the SDP or some configuration change that will allow me to stream my video?
This was the solution: How to control bandwidth in WebRTC video call?
I had heard about changing the offer SDP but dismissed it because I was told that the browser will accept unlimited bandwidth by default and that you'd only need to do this if you want to limit bandwidth. However, adding a "b=AS:<high number>" line has fixed all of my problems.
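
For reference, a minimal sketch of that kind of SDP munging in C++ (the helper name, the bitrate value, and the assumption that the b=AS line belongs in the video section are mine, not from the answer):

#include <string>

// Sketch: insert a "b=AS:<kbps>" line into the video section of an SDP string
// before it is handed to CreateSessionDescription / SetLocalDescription.
// Simplification: assumes the first "c=" line after "m=video" belongs to the
// video section, which is the usual layout of a WebRTC offer.
std::string AddVideoBandwidth(const std::string &sdp, int kbps) {
  size_t m_pos = sdp.find("m=video");
  if (m_pos == std::string::npos)
    return sdp;                                  // no video section, leave untouched
  size_t c_pos = sdp.find("\r\nc=", m_pos);      // prefer to insert after the c= line
  size_t line_start = (c_pos != std::string::npos) ? c_pos + 2 : m_pos;
  size_t eol = sdp.find("\r\n", line_start);     // end of the line we insert after
  if (eol == std::string::npos)
    return sdp;
  std::string out = sdp;
  out.insert(eol, "\r\nb=AS:" + std::to_string(kbps));
  return out;
}

// Hypothetical usage: munge the offer before applying it, e.g.
//   std::string munged = AddVideoBandwidth(offer_sdp, 10000);  // ~10 Mbit/s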

Native WebRTC without audio device

I have an audio processing server, and I'd like to be able to connect to it via WebRTC.
The native library from Google seems suitable for that (from looking at the peerconnection example): https://webrtc.org/native-code/native-apis/
But the library relies too much on the audio devices: it opens them behind the scenes. I've managed to grab the incoming audio by appending my own AudioTrackSinkInterface to the stream, but haven't yet found how to inject the audio into the outbound stream. And these hacks don't avoid opening the devices anyway.
How to do it cleanly?
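
For reference, the sink hack mentioned above looks roughly like this (a sketch against the native API's webrtc::AudioTrackSinkInterface; it only covers grabbing the incoming audio, not injecting outbound audio or preventing the devices from being opened):

#include <cstddef>
#include "api/media_stream_interface.h"  // header path varies across WebRTC revisions

// Sketch: receive decoded remote audio by attaching a sink to the remote track.
class AudioGrabber : public webrtc::AudioTrackSinkInterface {
 public:
  void OnData(const void *audio_data,
              int bits_per_sample,
              int sample_rate,
              std::size_t number_of_channels,
              std::size_t number_of_frames) override {
    // Hand the PCM data to the audio processing server here.
  }
};

// Hypothetical usage: when the remote audio track shows up (e.g. in OnAddTrack),
//   remote_audio_track->AddSink(&grabber);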

Receive RTSP stream using GStreamer

I want to receive an RTSP stream using GStreamer. I know rtspsrc can be used for this purpose, but the problem is that it only receives as a client. In my case I have an ffmpeg application which streams the video as a client and waits for a server to connect to it before streaming, so I want GStreamer to act as the server and receive the stream from ffmpeg.
I haven't used it myself, but I believe there is a separate package for RTSP server functionality. On Debian-based systems it should be under something like:
libgstrtspserver-0.10-0
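
That package provides the gst-rtsp-server library (libgstrtspserver-1.0 on current GStreamer 1.x). A minimal sketch of using it from C++, in the style of the library's test-launch example (the launch line and mount point are placeholders; the library also has a RECORD transport mode for receiving a stream pushed by a client, which is closer to what the question asks):

#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

// Sketch (GStreamer 1.x): a minimal RTSP server serving one mount point.
int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);
  GMainLoop *loop = g_main_loop_new(NULL, FALSE);

  GstRTSPServer *server = gst_rtsp_server_new();
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);

  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();
  gst_rtsp_media_factory_set_launch(
      factory, "( videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96 )");

  gst_rtsp_mount_points_add_factory(mounts, "/test", factory);
  g_object_unref(mounts);

  gst_rtsp_server_attach(server, NULL);  // serves rtsp://127.0.0.1:8554/test by default

  g_main_loop_run(loop);
  return 0;
}

It builds against the gstreamer-rtsp-server-1.0 pkg-config module.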

WebRTC without actual audio / video device

I am planning, using the native API, to establish a WebRTC session between two clients.
My requirement is to establish the WebRTC session without using the PC's audio/video devices (as I plan to have multiple simultaneous WebRTC sessions on the same PC).
I am currently following this tutorial and want to know where in these files the following things happen:
open the audio / video device
read from audio / video device (capture)
write to audio / video device (play)
close the audio device
Kindly guide me if somebody knows the file name / function name where I need to look for the above 4 points.
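
One common workaround (a hedged sketch of my own, not taken from the tutorial the question references) is to hand the peer connection factory a dummy AudioDeviceModule so the real devices are never opened; header paths and the exact Create() signature vary between WebRTC revisions:

#include "api/task_queue/default_task_queue_factory.h"
#include "modules/audio_device/include/audio_device.h"

// Hedged sketch: create a dummy AudioDeviceModule so no real capture/playout
// device is opened. This matches fairly recent WebRTC revisions; older ones
// use a different Create() signature.
rtc::scoped_refptr<webrtc::AudioDeviceModule> CreateDummyAdm(
    webrtc::TaskQueueFactory *task_queue_factory) {
  return webrtc::AudioDeviceModule::Create(
      webrtc::AudioDeviceModule::kDummyAudio, task_queue_factory);
}

// Idea: pass the returned module as the default_adm argument of
// webrtc::CreatePeerConnectionFactory() instead of nullptr, so the factory does
// not create and open the platform audio device module itself.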

Do RTP packets using the RTSP protocol contain both audio and video

I am developing a client program which will display the media captured from an IP camera. So I want to know whether the RTP packets using the RTSP protocol contain both audio and video, and if they contain both, how should I extract them?
An RTSP stream does not carry video/audio itself; it provides a method to control independent RTP video and audio streams (which are in turn independent of one another).
One of the options, though, is to tunnel the RTP streams through the RTSP connection, in which case all communication might take place over a single TCP connection.
You can read the SDP returned by the RTSP server in the response to the DESCRIBE request.
There should be a MediaInformation entry (a media description) for each available stream.
That will tell you whether there is audio or video, etc.
http://en.wikipedia.org/wiki/Session_Description_Protocol
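
As an illustration, the SDP returned for a camera that exposes both streams contains one media description per stream, roughly like this (addresses, payload types, and control URLs are placeholders):

v=0
o=- 0 0 IN IP4 192.0.2.10
s=IP Camera
c=IN IP4 192.0.2.10
t=0 0
m=video 0 RTP/AVP 96
a=rtpmap:96 H264/90000
a=control:track1
m=audio 0 RTP/AVP 97
a=rtpmap:97 MPEG4-GENERIC/48000/2
a=control:track2

Each m= line starts a media description; the client issues a separate SETUP for each one it wants and receives the audio and video as separate RTP streams.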