How to get mic audio to C++ from QML running as WebGL?

I'm making call software using Qt and QML, and I need to get the microphone feed from QML running as WebGL to the C++ side, if not straight to GStreamer via the server.
I already have a Qt client program that uses GStreamer to push an audio stream to the server. GStreamer, of course, isn't available on the WebGL client side. I've found that you can get permission to use the mic/camera from QML, but I haven't found any example that actually grabs the stream from there. I've also checked out WebRTC. It seems like it could work with QML, and I have found some examples using it with GStreamer, but I haven't been able to get the WebRTC and GStreamer combination working even with the examples.
So the full question:
How can I get the audio from QML running as WebGL? Is there a way within Qt, or do I have to go through WebRTC? If so, is there a simpler or more beginner-friendly example than Nirbheek's gstwebrtc-demos for connecting WebRTC to GStreamer?

Not the answer I wanted, but this ended up working in my case:
Since the C++ side is also running another Qt GUI (QML, to be specific), I can use a WebEngineView with HTML and JavaScript and never bother the C++ implementation with GStreamer-for-WebRTC at all. So currently I'm running PeerJS on both sides of the connection, with PeerJS' signaling server in between.
I'd have preferred to use C++ with GStreamer to connect to WebRTC, but I can't find another easy way to connect the browser's audio to Qt.
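For completeness, granting the page mic access from the Qt side looks roughly like this in the C++ widget flavour of Qt WebEngine (a sketch; the QML WebEngineView has an equivalent onFeaturePermissionRequested handler, and the URL is a placeholder):
#include <QApplication>
#include <QWebEnginePage>
#include <QWebEngineView>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QWebEngineView view;
    // Auto-grant microphone access when the hosted PeerJS page asks for it.
    QObject::connect(view.page(), &QWebEnginePage::featurePermissionRequested,
                     [&](const QUrl &origin, QWebEnginePage::Feature feature) {
        if (feature == QWebEnginePage::MediaAudioCapture)
            view.page()->setFeaturePermission(
                origin, feature, QWebEnginePage::PermissionGrantedByUser);
    });
    view.load(QUrl("https://example.com/call.html"));  // placeholder URL
    view.show();
    return app.exec();
}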
edit: I apologise, this answer doesn't work after all. I had been testing the programs on a single computer, so I didn't realize that WebGL-hosted QML runs WebEngineView's JavaScript on the backend, not on the frontend.

Related

How to stream sound on Mac OS using Qt?

I want to develop an application to play sounds from web radio streams.
I first did it using the following code:
mRequest.setUrl(QUrl(QString("http://dummy.com/file.mp3")));  // mRequest is a QNetworkRequest
mPlayer.setMedia(mRequest);   // implicitly converted to a QMediaContent
mPlayer.play();               // mPlayer is a QMediaPlayer
It works fine, but I want to do it by passing a custom QIODevice, because I have to process the stream in a specific way before playing it.
I found some examples describing how to do that, but they don't seem to work.
I then found an interesting link here describing an issue with streaming on Mac OS using Qt (QMediaPlayer streaming from a custom QIODevice with encryption on Mac OS (10.9)) and what seems to explain the problem (Supported media player features).
So, assuming I want to get a stream from a web radio and process the received bytes before playing them, which Qt class would you advise me to use?
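To illustrate, here is roughly the kind of custom QIODevice I have in mind (a sketch only; ProcessedStream and processBytes are hypothetical names, and processBytes stands in for my real processing):
#include <QIODevice>
#include <QNetworkReply>
#include <cstring>

class ProcessedStream : public QIODevice {
    Q_OBJECT
public:
    explicit ProcessedStream(QNetworkReply *reply, QObject *parent = nullptr)
        : QIODevice(parent), m_reply(reply) {
        open(QIODevice::ReadOnly);
        // Forward the network reply's readyRead so the player keeps pulling.
        connect(reply, &QNetworkReply::readyRead, this, &ProcessedStream::readyRead);
    }
    bool isSequential() const override { return true; }
    qint64 bytesAvailable() const override {
        return m_reply->bytesAvailable() + QIODevice::bytesAvailable();
    }
protected:
    qint64 readData(char *data, qint64 maxSize) override {
        QByteArray chunk = m_reply->read(maxSize);
        processBytes(chunk);                       // my custom processing step
        std::memcpy(data, chunk.constData(), chunk.size());
        return chunk.size();
    }
    qint64 writeData(const char *, qint64) override { return -1; }
private:
    void processBytes(QByteArray &chunk);          // e.g. decryption
    QNetworkReply *m_reply;
};

// usage, with mPlayer a QMediaPlayer and reply a QNetworkReply:
// mPlayer.setMedia(QMediaContent(), new ProcessedStream(reply));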
Thanks in advance for your help.

Receiving WebRTC call from a C++ native Windows application

I would like, from a native Windows application written in C++, to receive video/audio data sent from a browser at a remote location. It seems like WebRTC is the way to go for this.
Most information I find is about how to interact with the browser to write WebRTC apps, but in my case the data would be received by my C++ app. Is it correct that I would need to use the WebRTC Native Code package for this, which is described as being 'for browser developers'? The document is located here: http://www.webrtc.org/webrtc-native-code-package
And what if I want to send video/audio data that I generate (i.e. not coming directly from a webcam and microphone)? Would I be able to send it to the browser at the remote location?
Any sample code out there which does something like I'm trying to accomplish?
The wording in that link is a bit misleading. They intend people who are developing browsers to use the native code, and advise those developing "applications" that run in a browser to use the WebRTC API.
I have worked with their native code for over a year, developing an Android application that is capable of performing audio and/or video calls between other Android devices and browsers. So I am pretty sure that it is completely possible to take their native code and create a Windows application (especially since they have example code that does exactly that for Linux and Mac; look at the peerconnection client and peerconnection server for this). You might have to write and rewrite code to get it to work on Windows.
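For orientation only, the setup in that example looked roughly like this in the native API of that period (identifiers shifted between revisions, so treat every name here as approximate rather than a stable interface):
#include "talk/app/webrtc/peerconnectioninterface.h"

// Your PeerConnectionObserver subclass receives ICE candidates and the
// remote streams; its callbacks are where a Windows app would hook in
// its own rendering and playback.
class MyObserver : public webrtc::PeerConnectionObserver { /* ... */ };

void setup(MyObserver *observer) {
    rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> factory =
        webrtc::CreatePeerConnectionFactory();

    webrtc::PeerConnectionInterface::IceServers servers;   // STUN/TURN list
    rtc::scoped_refptr<webrtc::PeerConnectionInterface> pc =
        factory->CreatePeerConnection(servers, nullptr /* constraints */,
                                      nullptr /* allocator factory */,
                                      nullptr /* DTLS identity service */,
                                      observer);
    // ...then add local streams and exchange offers/answers and ICE
    // candidates over your own signaling channel.
}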
As for data that you generate: in the Android project that I worked on, we didn't rely on the Android device/system to provide us with video; we captured and sent it ourselves using the "LibJingle"/WebRTC libraries. So I know that is possible, as long as you provide the libraries with video data in the correct format. I would imagine that one would be able to do the same with audio, but we never fiddled with that, so I cannot say for sure.
And as for example code, I can only suggest Luke Weber's GitHub repositories. Although they target Android, it might be of some help to look at how he interfaces with the two libraries. Probably the better code to look at is the peerconnection client example that comes in the "LibJingle" section of the native code. [edit]: That is located in /talk/examples/peerconnection/client/ .
If you get lost by my use of "LibJingle", that will show you when I started working with all of this code. Sometime around July of 2013 they migrated "LibJingle" into the WebRTC "talk" folder. From everything that I have seen, they are the same thing, just with the location and name changed.

Using live data from BlazeDS in a SWF file

I currently have a C++ client which can play SWF, AVI, BIK, etc.
It uses DirectX 9 to render the graphics.
I now have a requirement for dynamic SWF files, which would retrieve data from a BlazeDS server and put certain text in certain places depending on the retrieved data.
From what I have read, BlazeDS talks to Adobe Flex and Adobe AIR applications.
Would that mean I would have to convert my current C++ client into a Flex application?
Sorry if this seems like a stupid question, I'm just having trouble figuring out how BlazeDS, Flex and AIR all fit together.
It's a little unclear what you're asking, so let me have a crack at it:
Assuming that you have a C++ runtime that you want to communicate with BlazeDS, you could write a C++ implementation of the AMF protocol.
The protocol itself is open source, and there may even be C++ implementations of it out there already.
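To give a feel for the encoding, the two simplest AMF0 types look roughly like this (a sketch based on the published AMF0 spec; note that BlazeDS itself mostly speaks AMF3, which is more involved, and the surrounding message envelope is not shown):
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// AMF0 number: marker byte 0x00 followed by a big-endian IEEE-754 double.
void amf0_write_number(std::vector<uint8_t> &out, double v) {
    out.push_back(0x00);
    uint64_t bits;
    std::memcpy(&bits, &v, sizeof bits);
    for (int i = 7; i >= 0; --i)
        out.push_back(static_cast<uint8_t>(bits >> (i * 8)));
}

// AMF0 short string: marker 0x02, big-endian u16 byte length, UTF-8 data.
void amf0_write_string(std::vector<uint8_t> &out, const std::string &s) {
    out.push_back(0x02);
    out.push_back(static_cast<uint8_t>(s.size() >> 8));
    out.push_back(static_cast<uint8_t>(s.size() & 0xff));
    out.insert(out.end(), s.begin(), s.end());
}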
I assume from your question that the client rendering the SWF is not the Flash Player. If that is the case, switching your C++ app to Flex won't win you anything, as Flex itself doesn't know how to handle AMF; the serialization is handled by the Flash Player, rather than by the Flex framework.

Audio output problem in Qt using the QtMultimedia low-level API

I'm trying to get the mpg123 audio decoder to work with Qt on Windows. How do I play the decoded audio data at the right speed with the QtMultimedia module in push mode? Currently I'm using a simple timer to get it to play audio, but that's not a very efficient way to do it; if I do anything else at the same time, the audio gets all distorted. Is there a better way to send the decoded data to the audio output? It would be nice if anyone could point me to some nice examples using the QtMultimedia module and the QAudioOutput class. I've tried to figure out the Qt example project "audiooutput", but it seems that it's also using a timer to send audio to the output in push mode. Hope that I'm not too confusing.
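For reference, this is roughly the push-mode pattern from the audiooutput example that I'm reproducing (simplified; decodeNextChunk is a stand-in for my mpg123 calls and returns a QByteArray):
#include <QAudioOutput>
#include <QTimer>

QAudioFormat format;                       // must match the decoded PCM
format.setSampleRate(44100);
format.setChannelCount(2);
format.setSampleSize(16);
format.setCodec("audio/pcm");
format.setByteOrder(QAudioFormat::LittleEndian);
format.setSampleType(QAudioFormat::SignedInt);

QAudioOutput *audio = new QAudioOutput(format);
QIODevice *dev = audio->start();           // push mode: I write, Qt drains

QTimer *timer = new QTimer;
QObject::connect(timer, &QTimer::timeout, [=]() {
    // Write only as much as the output can take right now; pushing more
    // (or less) than bytesFree() allows is what wrecks the timing.
    if (audio->bytesFree() >= audio->periodSize())
        dev->write(decodeNextChunk(audio->bytesFree()));
});
timer->start(20);                          // poll every 20 ms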
I also had to figure that out and I would also suggest using the Phonon framework to do this.
It uses Windows Media Player as host on Windows, QuickTime on Mac and some KDE stuff on Linux.
So it's pretty platform independent.
If you need more low-level functionality, you should take a look at an open-source project called PortAudio. It's very easy to use, and you can manipulate or even fill buffers from code.
I used it to build an oscillator.
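The gist of my oscillator, as a rough sketch (PortAudio's callback-based API; error handling omitted):
#include <portaudio.h>
#include <cmath>

// PortAudio invokes this whenever the device needs more samples.
static int sineCallback(const void *, void *output, unsigned long frames,
                        const PaStreamCallbackTimeInfo *,
                        PaStreamCallbackFlags, void *userData) {
    float *out = static_cast<float *>(output);
    double *phase = static_cast<double *>(userData);
    for (unsigned long i = 0; i < frames; ++i) {
        out[i] = static_cast<float>(std::sin(*phase));        // mono sine
        *phase += 2.0 * 3.141592653589793 * 440.0 / 44100.0;  // 440 Hz
    }
    return paContinue;
}

int main() {
    double phase = 0.0;
    Pa_Initialize();
    PaStream *stream;
    Pa_OpenDefaultStream(&stream, 0 /*in*/, 1 /*out*/, paFloat32,
                         44100, 256, sineCallback, &phase);
    Pa_StartStream(stream);
    Pa_Sleep(2000);                 // let the tone play for two seconds
    Pa_StopStream(stream);
    Pa_Terminate();
}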
Hope that helps!
Best,
guitarflow

Play RTP video stream using Qt?

I want to create a Qt widget that can play incoming RTP streams where the video is encoded as H.264 and contains no audio.
My basic plan for implementation is this:
Create a Phonon MediaSource object (Stream type).
Connect it with a QIODevice subclass that provides the data
Obtain the video data using either:
The JRTPLIB client library
The GStreamer gstrtpbin plugin. This plugin takes care of depayloading the packets and decoding the video. Maybe this improves the chances that Phonon will recognize the data.
My environment:
Ubuntu 9.10
Qt 4.6
My questions:
Is my approach a good one? Perhaps I'm overlooking a more obvious or simpler solution?
I'm currently experiencing this issue: when trying to play the video stream the state of the MediaObject turns to ErrorState with errorType FatalError. Can anyone tell me what I'm doing wrong?
Edit
One solution I found is using libVLC in combination with Qt, which I learned about in this thread. Here's a code sample for the interested.
I'm still looking for a Phonon-based solution.
Ideally I would only need to provide an SDP file and the job would be done.
I was able to get it to work using the libVLC solution. I can't guarantee that it's the best solution, though, as I simply stopped looking after that.
Here's a link to the libVLC sample.
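In essence it boils down to something like this (libVLC C API; the SDP path is a placeholder, and the widget parameter stands for whatever Qt widget you render into):
#include <QWidget>
#include <vlc/vlc.h>

void playRtpStream(QWidget *widget) {
    libvlc_instance_t *vlc = libvlc_new(0, nullptr);
    // An .sdp file describing the incoming H.264 RTP stream (placeholder path).
    libvlc_media_t *media = libvlc_media_new_path(vlc, "stream.sdp");
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);
    // Hand libVLC the widget's native window to draw into; on Windows use
    // libvlc_media_player_set_hwnd(player, (void *)widget->winId()) instead.
    libvlc_media_player_set_xwindow(player, widget->winId());  // X11 setup
    libvlc_media_player_play(player);
}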
The way I understand it, Phonon works, at least on Windows, by Qt providing a Phonon backend plugin for DirectShow (\plugins\phonon_backend\phonon_ds94.dll), or GStreamer in your case. You would then either obtain or write your own DirectShow filter that can accept RTP streams as a source. DirectShow takes care of the decoding, and Phonon takes care of the rendering.
So if the backend works, the application code is as simple as:
Phonon::MediaObject *media = new Phonon::MediaObject();
Phonon::VideoWidget *video = new Phonon::VideoWidget();
Phonon::createPath(media, video);   // route the media output into the widget
media->setCurrentSource(source);    // 'source' is your Phonon::MediaSource
media->play();
It seems that the problem lies with the GStreamer backend accepting RTP as a source. Can you play back that source in standalone GStreamer without any problems?
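For a quick standalone check, something along these lines (GStreamer 1.x element names; the caps string, payload type and port are assumptions to adjust to your stream, and on the 0.10 series of that era the decoder was ffdec_h264 and the converter ffmpegcolorspace):
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    // Receive H.264 RTP on UDP port 5000, depayload, decode, display.
    GstElement *pipeline = gst_parse_launch(
        "udpsrc port=5000 caps=\"application/x-rtp,media=video,"
        "clock-rate=90000,encoding-name=H264,payload=96\" "
        "! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink",
        nullptr);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(nullptr, FALSE));
    return 0;
}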