I would like, from a native Windows application using C++, to receive video/audio data sent from a browser located in a remote location. It seems like WebRTC is the way to go for this.
Most information I find is about how to interact with the browser to write WebRTC apps, but in my case the data would be received by my C++ app. Is it correct that I would need to use the WebRTC Native Code package for this, which is described as being 'for browser developers'? The document is located here: http://www.webrtc.org/webrtc-native-code-package
And what if I want to send video/audio data that I generate (i.e. not coming directly from a webcam and microphone)? Would I be able to send it to the browser at the remote location?
Any sample code out there which does something like I'm trying to accomplish?
The wording in that link is a bit misleading. They intend people that are developing browsers to use the native code, and advise those that are developing "applications" in a browser to use the WebRTC API.
I have worked with their native code for over a year to develop an Android application that is capable of performing audio and/or video calls between other Android devices and to browsers. So, I am pretty sure that it is completely possible to take their native code and create a Windows application (especially since they have example code that does that for Linux and Mac -- look at peerconnection client and peerconnection server for this). You might have to write and re-write code to get it to work on Windows.
As for data that you generate: in the Android project that I worked with, we didn't rely on the Android device/system to provide us with video; we captured and sent that out ourselves using the "LibJingle"/WebRTC libraries. So I know that it is possible, as long as you provide the libraries with video data in the correct format. I would imagine that one would be able to do the same with audio, but we never fiddled with that, so I cannot say for sure.
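To give a concrete idea of what "providing the libraries with video data in the correct format" can look like: in recent WebRTC checkouts a custom source can be built on rtc::AdaptedVideoTrackSource, roughly as below. Treat this as a sketch only; headers and class names have moved around between releases, and the 2013-era "LibJingle" code used a cricket::VideoCapturer subclass instead.

    // Sketch of a self-generated video source for the WebRTC native API.
    #include "absl/types/optional.h"
    #include "api/video/i420_buffer.h"
    #include "api/video/video_frame.h"
    #include "media/base/adapted_video_track_source.h"
    #include "rtc_base/time_utils.h"

    class GeneratedVideoSource : public rtc::AdaptedVideoTrackSource {
     public:
      // Call this from your own render loop with self-generated pixels.
      void PushBlackFrame(int width, int height) {
        rtc::scoped_refptr<webrtc::I420Buffer> buffer =
            webrtc::I420Buffer::Create(width, height);
        webrtc::I420Buffer::SetBlack(buffer.get());  // fill with real data here

        webrtc::VideoFrame frame = webrtc::VideoFrame::Builder()
                                       .set_video_frame_buffer(buffer)
                                       .set_timestamp_us(rtc::TimeMicros())
                                       .build();
        OnFrame(frame);  // hands the frame to the WebRTC pipeline
      }

      // Boilerplate required by the interface.
      SourceState state() const override { return kLive; }
      bool remote() const override { return false; }
      bool is_screencast() const override { return false; }
      absl::optional<bool> needs_denoising() const override {
        return absl::nullopt;
      }
    };

A track created from such a source (via the peer connection factory's CreateVideoTrack) is then added to the peer connection like any webcam track.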
And as for example code, I can only suggest Luke Weber's GitHub repositories. Although it is for Android, it might be of some help to look at how he interfaces with the two libraries. Probably the better code to look at is the peerconnection client stuff that comes in the "LibJingle" section of the native code. [edit]: That is located in /talk/examples/peerconnection/client/ .
If you get lost by my use of "LibJingle", that shows when I started working with all of this code. Sometime around July of 2013 they migrated "LibJingle" into the WebRTC "talk" folder. From everything that I have seen, they are the same thing, just with the location and name changed.
I'm making a call application using Qt and QML, and I need to get the microphone feed from QML running as WebGL to the C++ side, if not straight to GStreamer on the server.
I already have a Qt program as the client using GStreamer to push an audio stream to the server. GStreamer, of course, doesn't reach the WebGL client side though. I've found that you can get permission to use the mic/camera from QML, but I haven't found any example actually grabbing the stream from there. I've also checked out using WebRTC. It seems like it could work with QML, and I have found some examples using it with GStreamer, but I haven't been able to get the combination of WebRTC and GStreamer working, even with the examples.
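(For reference, the kind of GStreamer push pipeline described above can be set up from C++ roughly like this; the element choices are my assumptions about such a setup, not the actual code:)

    #include <gst/gst.h>

    int main(int argc, char **argv) {
        gst_init(&argc, &argv);

        // Hypothetical pipeline: capture mic audio, encode as Opus and
        // push it as RTP to a server (host/port are placeholders).
        GError *err = nullptr;
        GstElement *pipeline = gst_parse_launch(
            "autoaudiosrc ! audioconvert ! audioresample ! opusenc "
            "! rtpopuspay ! udpsink host=127.0.0.1 port=5000", &err);
        if (!pipeline) {
            g_printerr("Pipeline creation failed: %s\n", err->message);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        // Block until an error or end-of-stream.
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
        if (msg)
            gst_message_unref(msg);

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(bus);
        gst_object_unref(pipeline);
        return 0;
    }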
So the full question:
How can I get the audio from QML running as WebGL? Is there a way within Qt, or do I have to go through WebRTC? If so, is there a simpler or more beginner-friendly example than Nirbheek's gstwebrtc-demos for connecting WebRTC to GStreamer?
Not the answer I wanted, but this ended up working in my case:
As the C++ side is also running another Qt GUI (QML, to be specific), I can use WebEngineView with HTML and JavaScript, so the C++ implementation never has to deal with GStreamer for WebRTC. So currently I'm running PeerJS on both sides of the connection, with PeerJS' signaling server in between.

I'd have preferred to use C++ with GStreamer to connect to WebRTC, but I can't find another easy way to connect the browser's audio to Qt.
edit: I apologise, this answer doesn't work in the end. I had been testing the programs on a single computer, so I didn't realize that WebGL-hosted QML doesn't run WebEngineView's JavaScript on the frontend, but on the backend instead.
In my C++ Windows application, my users can plug in their mobile device via USB and my application can transfer specific files to/from the device. For Android devices, I was able to use MTP. But iOS devices have me tripped up (I'm not an iOS user).
Immediately, I saw that MTP wasn't an option, as I couldn't view the device's filesystem via Windows Explorer (I wasn't expecting that). So now I'm stuck and confused. I've Googled like crazy, and all I discovered was that other third-party programs can do it, but I can't find any documentation or resources as to HOW.
Can someone point me in the right direction? What would I need to do in order to view the filesystem on a connected iOS device? Are there any libraries I may be unaware of that I can't find? I can see that iTunes has the functionality I'm looking for.
Thanks for your time!
Well, it looks like my comment was wrong. There is at least libimobiledevice (http://www.libimobiledevice.org/) together with https://github.com/libimobiledevice/ifuse, which claims to still support access to the iOS device file system and even claims to be cross-platform. I haven't tried it, though, to verify whether those claims are true.
See also https://www.theiphonewiki.com/wiki/MobileDevice_Library for some possible alternatives
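If those claims hold, the AFC ("Apple File Conduit") service exposed by libimobiledevice would be the part to look at; it covers the same media area of the file system that iTunes sees. A minimal sketch of listing the device's root media directory might look like the following. I haven't compiled this against the library; the function names are taken from its public headers, and error handling is mostly omitted:

    #include <libimobiledevice/libimobiledevice.h>
    #include <libimobiledevice/lockdown.h>
    #include <libimobiledevice/afc.h>
    #include <cstdio>

    int main() {
        idevice_t device = nullptr;
        if (idevice_new(&device, nullptr) != IDEVICE_E_SUCCESS) {
            std::printf("No iOS device found\n");
            return 1;
        }

        lockdownd_client_t lockdown = nullptr;
        lockdownd_client_new_with_handshake(device, &lockdown, "myapp");

        // Start the AFC service and attach a file-transfer client to it.
        lockdownd_service_descriptor_t service = nullptr;
        lockdownd_start_service(lockdown, "com.apple.afc", &service);
        afc_client_t afc = nullptr;
        afc_client_new(device, service, &afc);

        // List the root of the media file system.
        char **entries = nullptr;
        if (afc_read_directory(afc, "/", &entries) == AFC_E_SUCCESS) {
            for (char **p = entries; *p != nullptr; ++p)
                std::printf("%s\n", *p);
            afc_dictionary_free(entries);
        }

        afc_client_free(afc);
        lockdownd_service_descriptor_free(service);
        lockdownd_client_free(lockdown);
        idevice_free(device);
        return 0;
    }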
I am writing an application using Qt and want to try to deploy it as a web application. I want users to be able to use my application by accessing it through a web browser. I'm guessing that's what a web application is? What kind of options do I have? I've never looked into doing anything like this, but I'd like to learn something new.
EDIT: What if I deployed my application on a Linux server and had users access/run it through a terminal? I think writing web application is going to be more complicated than I had originally thought.
If all you have is a Qt application, then the best you can do is use Qt 5 and run it using a remote visualization package:
Use WebGL streaming, introduced in Qt 5.10. Qt exposes a browser-connectible interface directly, without the need for third-party code.
For Qt 5.0-5.9, you can use the VNC platform plugin, then connect using a web-browser-based VNC client.
For many uses it might be sufficient, and certainly it's much less effort than coding up a web app.
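For illustration: nothing special is needed in the application code itself, because the streaming backend is selected at launch time via the platform plugin. A minimal sketch (the QML resource path is an assumption):

    #include <QGuiApplication>
    #include <QQmlApplicationEngine>

    int main(int argc, char *argv[]) {
        QGuiApplication app(argc, argv);
        QQmlApplicationEngine engine;
        engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
        return app.exec();
    }

    // Qt >= 5.10:  ./myapp -platform webgl:port=8080
    //              then open http://<host>:8080 in a browser
    // Qt 5.0-5.9:  ./myapp -platform vnc
    //              then connect with a (web-based) VNC client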
You're looking for Wt, which provides a different set of drawing routines for many Qt GUI elements, turning them from lines on screen into HTML controls.
http://www.webtoolkit.eu/wt
It also handles WebSocket calls to provide interactivity. It seems like a great idea; let us know how it works in practice.
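For a taste of what that looks like, here is a minimal Wt "hello world" using the Wt 4 API (a sketch; consult the Wt documentation for the authoritative version):

    #include <Wt/WApplication.h>
    #include <Wt/WContainerWidget.h>
    #include <Wt/WPushButton.h>
    #include <Wt/WText.h>
    #include <memory>

    // Each browser session gets its own WApplication instance; the widget
    // tree is rendered as HTML controls and updated over WebSockets/Ajax.
    class HelloApp : public Wt::WApplication {
    public:
        explicit HelloApp(const Wt::WEnvironment &env) : Wt::WApplication(env) {
            setTitle("Hello");
            auto *text = root()->addWidget(
                std::make_unique<Wt::WText>("Hello from C++"));
            auto *button = root()->addWidget(
                std::make_unique<Wt::WPushButton>("Click me"));
            button->clicked().connect([text] { text->setText("Clicked!"); });
        }
    };

    int main(int argc, char **argv) {
        return Wt::WRun(argc, argv, [](const Wt::WEnvironment &env) {
            return std::make_unique<HelloApp>(env);
        });
    }

    // Run with e.g.: ./hello --docroot . --http-address 0.0.0.0 --http-port 8080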
For the case of QML there is QmlWeb, a JavaScript library that is able to parse QML code and create a website out of it using normal HTML/DOM elements and absolute positions within CSS, translating the QML properties into CSS properties.
QmlWeb is a small project by Lauri Paimen that he's already been developing for a few years now. QmlWeb of course doesn't yet support everything Qt's implementation of QML does, but it already supports a quite usable subset of it. It supports nearly all of the basic QML syntax. Moreover, it has support for HTML input elements (Button, TextInput and TextArea are currently supported, with more to come).
That said, QmlWeb is not finished. I hope Digia helps with this project to bring it to maturity.
Interestingly, it is possible to compile Qt applications to JavaScript using emscripten-qt. These run fairly fast with Firefox's asm.js support:
http://vps2.etotheipiplusone.com:30176/redmine/projects/emscripten-qt/wiki
Try "Qt for Webassembly".
Webassembly allows the C/C++ code to be compiled and run natively inside majority of the browsers:
WebAssembly (Wasm, WA) is a web standard that defines a binary format and a corresponding assembly-like text format for executable code in Web pages. ... It is executed in a sandbox in the web browser after a verification step. Programs can be compiled from high-level languages into Wasm modules and loaded as libraries from within JavaScript applets ... Its initial aim is to support compilation from C and C++, though support for other source languages such as Rust and .NET languages is also emerging.
To run a Qt application unchanged over the web so users can operate it in a browser, you can compile it for Android using the x86 Android ABI, run it inside an Android emulator on a server and supply the Android Cast video stream to users' browsers. You'll also need to have JavaScript in place that records the keyboard and mouse events on the web clients and relays them back to the server.
I had previously tried Qt WebGL streaming and found it to be good over the local network but too slow over the Internet. A 10 s application startup time is acceptable, but 3 s to show a new screen is rather not. I had the exact same experience with the Qt VNC platform plugin. Compared with that, the Android Cast streaming based appetize.io solution (see below) was much faster, providing a well usable user experience even over my 8 Mbit/s connection.
Existing solutions
Here is an overview of commercial products and open source software components that I found that can help you with this approach:
appetize.io. This is a commercial product to run Android applications over the web for demo and testing purposes. I have just done this with a Qt QML-based application and liked the outcome. When choosing an Android 9 / 10 device, you can see that the "Screencast" setting is on, which is why I believe that this solution uses the Android Cast technology.
runthatapp.com. This is another commercial offer. Not as sophisticated (yet) as appetize.io, but providing a nice pay-as-you-go scheme.
ScreenStream. An open source Android app that provides a web server to view the screen of one Android device in a web browser, also relying on the Android Cast technology. That Android device could be an emulator running on a web server. And to make this multi-user capable you can employ a small load balancer similar to a technique that I developed for Qt WebGL streaming. The ScreenStream README shows that the application might consume up to 20 Mbit/s per client in short bursts.
Ideas for future improvements
Serving your Qt app as an interactive live video stream seems a promising idea to me, given that I found it already less sluggish than VNC and similar solutions. There are ways to make this even faster, such as using a hardware H.265 video encoder to create a video stream with very little delay. By operating multiple such encoders on a single server, the server could serve multiple clients and still keep its CPU load low. Maybe there are even better video formats for such a purpose, given that user interfaces of programs lend themselves well to lossless compression.
Some hints for appetize.io
Finally: since I used the appetize.io product for a Qt application over the last few days, here are some tips from that experience:
It is necessary to compile your Qt application for the x86 Android ABI. The default armeabi-v7a ABI will not work because most appetize.io devices are actually server-based Android emulators and the only ARM based device ("Nexus 5 Physical") failed to start any Qt application I tried to use with it.
The x86_64 ABI may also work, but you might then have to also compile Qt yourself for it, as not all versions of Qt come pre-compiled for that architecture.
All appetize.io links (both for standalone pages and embeddable iframes) support GET parameters to configure the app presentation format. Especially relevant here is screenOnly=true to show the app without a picture of a phone or tablet around it.
Features that rely on phone hardware (camera, position etc.) will not work or only show dummy data. But if you really wanted, you could create a hybrid application combined with client-side JavaScript. It would run device-dependent code in the user's browser, for example to take a photo with the webcam, and then provide the results to the Qt application via the appetize.io cross-document messaging protocol. The following message types seem suitable to build a simple communication protocol: pasteText(value), keypress(key, shiftKey) and openUrl(value).
In the default appetize.io standalone app demo pages, only the key events of ordinary letter keys are sent to the app, not keyboard shortcuts or function keys like F2 and Esc. This might be possible to fix with JavaScript on a page of your own embedding the appetize.io iframe, as their cross-document messaging protocol provides the keypress(key, shiftKey) message type.
Qt does not support writing browser-based web applications. Unfortunately.
You need to use common web programming technologies for this. There are a lot of ways, but Qt is not one of them.
I'm currently working on a cross-platform application (Win/OSX/iOS) which has a C++ (with Boost) back end. On iOS and OSX I'm using the Cocoa Net Service Browser Delegate functions to discover an embedded device via mDNS, then pass the information to the back end to create the objects it needs to communicate with it.
I wanted to take a similar approach with my Windows MFC front end, and I found this article which seemed to do exactly what I want. However, it seems that using the Bonjour SDK has some really nasty side effects: it forces you to statically link MFC, and in my case the only way I can get it to link properly is to not use debug DLLs at all, which is not ideal.
So, the Bonjour SDK isn't really any good for me because it imposes too many restrictions on my project. With Cocoa I'm actually using very little of the functionality - just didFindService and netServiceDidResolveAddress really. All I want to do is find the devices of a given type and get their IP addresses.
Can anyone suggest another way around this that will work with an MFC front end on Windows?
From what I have been able to gather from researching this topic, just go to http://www.opensource.apple.com/source/mDNSResponder/mDNSResponder-333.10/ and grab the source. There is a VC project file which will let you build the DLL how you want.
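Once you have the DLL, the dns_sd.h API that comes with that source is fairly small for this use case: DNSServiceBrowse finds instances of a service type, and DNSServiceResolve / DNSServiceGetAddrInfo turn a found instance into a host and IP address. A minimal browse sketch (the service type is a placeholder; link against the dnssd import library):

    #include <dns_sd.h>
    #include <cstdio>

    static void DNSSD_API browseReply(DNSServiceRef, DNSServiceFlags flags,
                                      uint32_t interfaceIndex,
                                      DNSServiceErrorType err,
                                      const char *serviceName,
                                      const char *regtype,
                                      const char *replyDomain, void *) {
        if (err == kDNSServiceErr_NoError && (flags & kDNSServiceFlagsAdd))
            std::printf("Found: %s.%s%s\n", serviceName, regtype, replyDomain);
        // Next step: DNSServiceResolve() and DNSServiceGetAddrInfo() to get
        // the host, port and IP address of the instance.
    }

    int main() {
        DNSServiceRef ref = nullptr;
        if (DNSServiceBrowse(&ref, 0, 0, "_http._tcp", nullptr,
                             browseReply, nullptr) == kDNSServiceErr_NoError) {
            // Each DNSServiceProcessResult() call blocks until one reply
            // arrives and dispatches it to the callback. In an MFC app you
            // would instead watch DNSServiceRefSockFD(ref) in your own loop.
            for (int i = 0; i < 10; ++i)
                DNSServiceProcessResult(ref);
            DNSServiceRefDeallocate(ref);
        }
        return 0;
    }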
I'm quite new to Android and I have some questions for all of you who are experts!
Ok, my problem...
I implemented a client-server application based on socket programming. The server encodes some packets, sends them to the client through a socket, and the client decodes them.
I tested the code with two Linux machines and it works fine, but my experiment requires including another node (this will be the Android one). So the server (a Linux machine) will encode the packets and send them through sockets to client1 (a Linux machine) and client2 (Android).
For this reason I want to port the native binary of my code (which is in C++) to Android.
In which way could I do this?
Please give me some help!
Really, I'm totally stuck!
Thanks,
Zenia
When you want to port native C/C++ code to Android, you want to look up the Android NDK and JNI:
http://developer.android.com/sdk/ndk/index.html
http://download.oracle.com/javase/1.5.0/docs/guide/jni/spec/functions.html
There are some examples in the NDK on how to do this.
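For a flavour of what the JNI boundary looks like, here is a minimal native function callable from Java; the package and class names are placeholders to match against your own Java wrapper:

    #include <jni.h>
    #include <string>

    // Java side would declare: public native String stringFromJNI();
    // inside class com.example.myapp.NativeBridge (hypothetical names).
    extern "C" JNIEXPORT jstring JNICALL
    Java_com_example_myapp_NativeBridge_stringFromJNI(JNIEnv *env, jobject) {
        std::string msg = "Hello from native code";
        return env->NewStringUTF(msg.c_str());
    }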
Be warned that C is fully supported, but C++ API support is very limited on Android (the list is in the NDK docs), so you might have problems porting your code.
I would recommend using Java directly if you can, since working with JNI is tedious, lol.
How else can you port this? Start learning Android. I did a quick check and noticed its SDK uses Java; you can start by looking at:
http://developer.android.com/reference/java/net/Socket.html
Thanks for the reply,
I first tried to write my own code entirely in Java using sockets; however, I had to port some optimized libraries to Android and I couldn't figure out how to do that (I could port a simple small library, but not the one that I wanted). I gave up, and right now I'm trying to play with JNI and the NDK. However, I don't know if I can indeed port my binary, as it is non-static (unlike hello world). That's why I'm asking. If anyone else has some experience with that, please let me know. Thanks a lot,
Zenia
What you should probably do is install the SDK and NDK and build the hello-jni ndk example.
Then look up how to access the Android logcat output from C, and write yourself a nice little printf-like wrapper for that (probably using the varargs version of the underlying function) so you can easily generate debug output from your native code.
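Such a wrapper might look roughly like this (a sketch; link against the NDK's log library with -llog):

    #include <android/log.h>
    #include <cstdarg>

    #define LOG_TAG "native-app"  // hypothetical tag, shows up in logcat

    // printf-style wrapper that forwards to logcat via the varargs
    // version of the logging function, as suggested above.
    static void logf(const char *fmt, ...) {
        va_list args;
        va_start(args, fmt);
        __android_log_vprint(ANDROID_LOG_DEBUG, LOG_TAG, fmt, args);
        va_end(args);
    }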
Then graft your native executable onto the hello-jni example code, so you'll have a Java wrapper that does very little other than start things with a call to the native code. Just remember not to do much processing in the UI thread or in native code called from that thread, or you will risk an Application Not Responding timeout.
It is also possible to (ab)use the NDK's gcc to produce standalone native executables with no Java wrapper, but this is discouraged. It's hard to find a reliable place to install them on a non-rooted phone, and Android's process management isn't happy about unknown native processes. In other words, that's a path that's fine for personal experiments on your own device, but a difficult and non-future-proof one for an application deployed to others.