I am writing an application using Qt and want to try to deploy it as a web application. I want users to be able to use my application by accessing it through a web browser. I'm guessing that's what a web application is? What kind of options do I have? I've never looked into doing anything like this, but I'd like to learn something new.
EDIT: What if I deployed my application on a Linux server and had users access/run it through a terminal? I think writing a web application is going to be more complicated than I had originally thought.
If all you have is a Qt application, then the best you can do is use Qt 5 and run it using a remote visualization package:
Use WebGL streaming, introduced in Qt 5.10. Qt exposes a browser-connectible interface directly, without the need for third-party code.
For Qt 5.0–5.9, you can use the VNC platform plugin, then connect using a browser-based VNC client.
For many uses this might be sufficient, and it's certainly much less effort than coding up a web app.
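For example, with a Qt 5.10+ build, selecting the streaming backend is just a command-line switch (the binary name `myapp` is hypothetical):

```
# Qt 5.10+: stream the UI over WebGL; users browse to http://<host>:8080
./myapp -platform webgl:port=8080

# Qt 5.0-5.9: expose the UI through the VNC platform plugin instead,
# then connect with a browser-based VNC client
./myapp -platform vnc
```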
You're looking for Wt, which provides a Qt-like widget API whose GUI elements are rendered as HTML controls in the browser instead of lines on screen.
http://www.webtoolkit.eu/wt
It also handles WebSocket communication to provide interactivity. It seems like a great idea; let us know how it works in practice.
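To give a flavour of it, here is a minimal Wt "hello world" sketch (assuming Wt 4's API; the widget and function names are taken from its documentation):

```cpp
#include <memory>
#include <Wt/WApplication.h>
#include <Wt/WContainerWidget.h>
#include <Wt/WPushButton.h>
#include <Wt/WText.h>

int main(int argc, char **argv)
{
    // WRun starts Wt's built-in HTTP server and creates one
    // WApplication instance per browser session.
    return Wt::WRun(argc, argv, [](const Wt::WEnvironment &env) {
        auto app = std::make_unique<Wt::WApplication>(env);

        app->root()->addNew<Wt::WText>("Hello from Wt! ");
        auto *button = app->root()->addNew<Wt::WPushButton>("Click me");
        auto *status = app->root()->addNew<Wt::WText>("");

        // The click event is delivered from the browser to this C++
        // handler over WebSockets/AJAX.
        button->clicked().connect([status] {
            status->setText(" (handled server-side)");
        });

        return app;
    });
}
```

Run it with something like `./hello --docroot . --http-address 0.0.0.0 --http-port 8080` and point a browser at port 8080.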
For QML there is QmlWeb, a JavaScript library that parses QML code and creates a website out of it using normal HTML/DOM elements and absolute positioning in CSS, translating the QML properties into CSS properties.
QmlWeb is a small project by Lauri Paimen that he has been developing for a few years now. It doesn't yet support everything Qt's implementation of QML does, but it already supports a quite usable subset: nearly all of the basic QML syntax, plus HTML input elements (Button, TextInput and TextArea are currently supported, with more to come).
QmlWeb is not finished yet. I hope Digia helps with this project to bring it to feature maturity.
Interestingly, it is possible to compile Qt applications to JavaScript using emscripten-qt. These run fairly fast with Firefox's asm.js optimizations:
http://vps2.etotheipiplusone.com:30176/redmine/projects/emscripten-qt/wiki
Try "Qt for Webassembly".
WebAssembly allows C/C++ code to be compiled and run at near-native speed inside the majority of browsers:
WebAssembly (Wasm, WA) is a web standard that defines a binary format and a corresponding assembly-like text format for executable code in Web pages. ... It is executed in a sandbox in the web browser after a verification step. Programs can be compiled from high-level languages into Wasm modules and loaded as libraries from within JavaScript applets ... Its initial aim is to support compilation from C and C++, though support for other source languages such as Rust and .NET languages is also emerging.
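As a rough sketch of the workflow (paths and versions are examples, assuming a Qt release that ships a prebuilt WebAssembly kit and an installed emscripten SDK):

```
# activate the emscripten toolchain
source ~/emsdk/emsdk_env.sh

# build the Qt project with the WebAssembly kit's qmake
~/Qt/5.13.0/wasm_32/bin/qmake myapp.pro
make

# serve the generated .html/.js/.wasm files and open them in a browser
python3 -m http.server 8080
```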
To run a Qt application unchanged over the web so users can operate it in a browser, you can compile it for Android using the x86 Android ABI, run it inside an Android emulator on a server and supply the Android Cast video stream to users' browsers. You'll also need JavaScript in place that records the keyboard and mouse events on the web clients and relays them back to the server.
I had previously tried Qt WebGL streaming and found it to be good over the local network but too slow over the Internet. A 10 s application startup time is acceptable, but 3 s to show a new screen is rather not. I had the exact same experience with the Qt VNC platform plugin. Compared with that, the Android Cast streaming based appetize.io solution (see below) was much faster, providing a well usable user experience even over my 8 Mbit/s connection.
Existing solutions
Here is an overview of commercial products and open source software components that I found that can help you with this approach:
appetize.io. This is a commercial product to run Android applications over the web for demo and testing purposes. I have just done this with a Qt QML based application and liked the outcome. When choosing an Android 9 / 10 device you can see that the "Screencast" setting is on, which is why I believe that this solution uses the Android Cast technology.
runthatapp.com. This is another commercial offer. Not as sophisticated (yet) as appetize.io, but providing a nice pay-as-you-go scheme.
ScreenStream. An open source Android app that provides a web server to view the screen of one Android device in a web browser, also relying on the Android Cast technology. That Android device could be an emulator running on a web server. And to make this multi-user capable you can employ a small load balancer similar to a technique that I developed for Qt WebGL streaming. The ScreenStream README shows that the application might consume up to 20 Mbit/s per client in short bursts.
Ideas for future improvements
Serving your Qt app as an interactive live video stream seems a promising idea to me, given that I found it already less sluggish than VNC and similar solutions. There are ways to make this even faster, such as using a hardware H.265 video encoder to create a video stream with very little delay. By operating multiple such encoders on a single server, the server could serve multiple clients and still keep its CPU load low. Maybe there are even better video formats for such a purpose, given that user interfaces of programs lend themselves well to lossless compression.
Some hints for appetize.io
Finally: since I used the appetize.io product for a Qt application over the last few days, here are some tips from that experience:
It is necessary to compile your Qt application for the x86 Android ABI. The default armeabi-v7a ABI will not work because most appetize.io devices are actually server-based Android emulators and the only ARM based device ("Nexus 5 Physical") failed to start any Qt application I tried to use with it.
The x86_64 ABI may also work, but you might then have to also compile Qt yourself for it, as not all versions of Qt come pre-compiled for that architecture.
All appetize.io links (both for standalone pages and embeddable iframes) support GET parameters to configure the app presentation format. Especially relevant here is screenOnly=true to show the app without a picture of a phone or tablet around it.
Features that rely on phone hardware (camera, position etc.) will not work or only show dummy data. But if you really wanted, you could create a hybrid application combined with client-side JavaScript. It would run device-dependent code in the user's browser, for example to take a photo with the webcam, and then provide the results to the Qt application via the appetize.io cross-document messaging protocol. The following message types seem suitable to build a simple communication protocol: pasteText(value), keypress(key, shiftKey) and openUrl(value).
In the default appetize.io standalone app demo pages, only the key events of ordinary letter keys are sent to the app, not keyboard shortcuts or function keys like F2 and Esc. This might be possible to fix with JavaScript on your own page embedding the appetize.io iframe, as their cross-document messaging protocol provides the keypress(key, shiftKey) message type.
Unfortunately, Qt does not support writing browser-based web applications.
You need to use common web programming technologies for this. There are a lot of ways to build one, but Qt is not one of them.
Related
After I spent a couple of days figuring out how to use the QWebKit library in my Qt application for my browser, I found that it was removed a year ago.
There is no info on what I can use instead, and also no information on WHY such a good feature was removed; it made it easy to create a web page by writing just a couple of lines of code.
P.S. I have already downloaded all the kits, but it didn't help:
The QtWebKit API was designed around synchronous function calls that run on the main thread, so loading heavy web pages can block your GUI.
The alternative is QWebEngineView.
Qt WebEngine uses a multi-process architecture, is designed for asynchronous function calls, and is based on Chromium.
To port to Qt WebEngine, read this
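For reference, a minimal QWebEngineView window looks like this (it needs `QT += webenginewidgets` in the .pro file):

```cpp
#include <QApplication>
#include <QUrl>
#include <QWebEngineView>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // QWebEngineView loads pages asynchronously in Chromium's own
    // processes, so heavy pages do not block the GUI thread.
    QWebEngineView view;
    view.setUrl(QUrl("https://www.qt.io"));
    view.resize(1024, 768);
    view.show();

    return app.exec();
}
```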
QtWebEngine is a Chromium-based alternative that is supported on desktop platforms; see webengine-in-qt. Qt also provides the Qt WebView module, which uses the platform's native web engine and is available for Android/iOS as well.
Is it possible to use a Go API in a Qt C++ project?
I would like to use the following Google API written in Go: https://cloud.google.com/speech-to-text/docs/reference/libraries#client-libraries-install-go
It could be possible, but it would not be easy, and running Go and Qt code in the same process would be very brittle, since Go and Qt have very different threading (goroutine) and memory models.
However, Go has (in its standard library) many powerful packages to ease the development of server programs, in particular of HTTP or JSONRPC servers.
Perhaps you might consider running two different processes using inter-process communication facilities. Details are operating system specific. I assume you run Linux. Your Qt application could then start the Go program using QProcess and later communicate with it (behaving as a client to your Go specialized "server"-like program).
Then you could use HTTP or JSONRPC to remotely call your Go functions from your Qt application. You need some HTTP client library in Qt (it is there already under Qt Network, and you might also use libcurl) or some JSONRPC client library. Your Go program would be some specialized HTTP or JSONRPC server (and some Google Speech to Text client) and your Qt program would be its only client (and would start it). So your Go program would be some specialized proxy. You could even use pipe(7)-s, unix(7) sockets, or fifo(7)-s to increase the "privacy" of the communication channel.
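A minimal sketch of the Qt side, assuming a hypothetical Go helper binary `speech-proxy` that listens on a local port and exposes a `/transcribe` HTTP endpoint (both names are made up for illustration):

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <QJsonDocument>
#include <QJsonObject>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QProcess>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Start the Go helper; it acts as a local HTTP server wrapping
    // the Google Speech-to-Text Go client library.
    QProcess goServer;
    goServer.start("./speech-proxy", {"--listen", "127.0.0.1:8090"});
    if (!goServer.waitForStarted())
        return 1;

    // Call the helper over HTTP with a JSON body.
    QNetworkAccessManager manager;
    QNetworkRequest request(QUrl("http://127.0.0.1:8090/transcribe"));
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");

    QJsonObject body{{"audioFile", "hello.wav"}};
    QNetworkReply *reply = manager.post(request, QJsonDocument(body).toJson());
    QObject::connect(reply, &QNetworkReply::finished, [&]() {
        qDebug() << QJsonDocument::fromJson(reply->readAll());
        reply->deleteLater();
        goServer.terminate();
        app.quit();
    });

    return app.exec();
}
```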
If the Google Speech to Text API is huge (but it probably is not), you might use Go's reflective or introspective abilities to generate some C++ glue code for Qt: go/ast, go/build, go/parser, go/importer, etc.
BTW, it seems that the Google Speech to Text protocol uses JSON over HTTP (it appears to be a Web API) and has a documented REST API, so you might write the relevant client code directly in C++ (of course, you need to understand all the details of the protocol: the relevant HTTP requests and JSON formats), without any Go code (or process). If you go that route, I recommend making your Qt (or C++) code for Google Speech to Text a separate free software library (to be able to get feedback and help from outside).
I am developing an application on Windows 10 that interacts with custom device drivers, the NTFS filesystem and DirectX 12. The app is a Windows Universal App written in C++, WRL, XAML and DirectX. For DirectX I have chosen a SwapChainPanel control, and the DirectX portion of the app works great. The app is sideloaded, so I have a bit more freedom than an app that needs to go through the store.
Unfortunately, Windows Universal Apps have a number of restrictions with regard to API calls; WinRT APIs are favored.
Here is a list of WinRT APIs to call in place of Win32 APIs:
https://msdn.microsoft.com/en-us/library/windows/apps/hh464945.aspx
In addition, Windows Universal Apps can call Win32 APIs that are partitioned to the application (though not the ones partitioned to the desktop), as indicated in the documentation of each function and in the header files. Here is a link:
https://msdn.microsoft.com/en-us/library/windows/apps/br205762.aspx
In addition, the Winsock APIs are now allowed from Windows Universal Apps.
However, I am still left without my favorite (and necessary) APIs:
CreateFile()
ReadFile()
WriteFile()
DeviceIoControl()
CloseHandle()
In particular, I need to read and write files in all locations without user interaction (not just the locations permitted by the Windows Universal App sandbox). In addition, I need to send IOCTLs to my multiple device drivers.
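For illustration, this is the kind of classic Win32 code I need to keep working (the device name and IOCTL code are placeholders, not a real driver interface):

```cpp
#include <windows.h>
#include <winioctl.h>

int wmain()
{
    // Open a handle to a custom device driver.
    HANDLE device = CreateFileW(L"\\\\.\\MyDevice",
                                GENERIC_READ | GENERIC_WRITE,
                                0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (device == INVALID_HANDLE_VALUE)
        return 1;

    // Send it a custom IOCTL and read back the result.
    const DWORD IOCTL_MYDEV_QUERY =
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS);
    BYTE outBuffer[256] = {};
    DWORD bytesReturned = 0;
    BOOL ok = DeviceIoControl(device, IOCTL_MYDEV_QUERY,
                              nullptr, 0, outBuffer, sizeof(outBuffer),
                              &bytesReturned, nullptr);

    CloseHandle(device);
    return ok ? 0 : 1;
}
```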
I could abandon Windows Universal Apps and go with WPF. However, I have a touch-intensive application and I need touch to work really well. In addition, I have to wonder about the lack of fixes and commitment to WPF on the part of Microsoft. I have considered other UI frameworks, but none have been as promising as a Windows Universal App.
Microsoft has allowed two paths in Windows 10 for Universal Apps that allow calling all Win32 functions (for sideloaded apps):
Brokered Windows Runtime Component
and IPC through TCP/IP
I have written a brokered windows runtime component and it works well. However the solution requires a C# app to be in the mix and I do not need/want that as I need fast load times of the app and do not want to pull the CLR in.
The next option is IPC through TCPIP. I would use Fast TCP Loopback as explained in the blog post: Fast TCP Loopback Performance and Low Latency with Windows Server 2012 TCP Loopback Fast Path. I would link to it but I am at my (very generous) two link limit for a first post.
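For reference, enabling the fast path is a single WSAIoctl call on each socket, made before connect()/listen() (a sketch; it requires Windows 8 / Server 2012 or later, and both endpoints must opt in):

```cpp
#include <winsock2.h>
#include <mstcpip.h>   // defines SIO_LOOPBACK_FAST_PATH

#pragma comment(lib, "ws2_32.lib")

// Returns true if TCP Loopback Fast Path was enabled on the socket.
bool EnableLoopbackFastPath(SOCKET s)
{
    int enabled = 1;
    DWORD bytesReturned = 0;
    return WSAIoctl(s, SIO_LOOPBACK_FAST_PATH,
                    &enabled, sizeof(enabled),
                    nullptr, 0, &bytesReturned,
                    nullptr, nullptr) == 0;
}
```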
I have a couple of questions:
1) If I go this route, should I place the IPC between the XAML controls/buttons and the rest of the app? This would allow the rest of the app to be strictly Win32. Or should I just place the IPC between the app and the calls to the specific functions I need that fall outside those allowed for Universal Apps?
2) I have looked for a library or paper that has code and/or ideas for implementing IPC with TCP/IP. However, so far the papers that talk about IPC with TCP/IP seem to simply describe Winsock programming, which is something I already know how to do. I would enjoy coding up IPC but would prefer a solution that has been tested. This needs to work flawlessly, and I would rather have code with some time on it. Has anyone used or heard of code and/or a design for IPC over TCP/IP that is available to share?
I would like, from a native Windows application using C++, to receive video/audio data sent from a browser located in a remote location. It seems like WebRTC is the way to go for this.
Most information I find is about how to interact with the browser to write WebRTC apps, but in my case the data would be received by my C++ app. Is it correct that I would need to use the WebRTC Native Code package for this, which is described as being 'for browser developers'? The document is located here: http://www.webrtc.org/webrtc-native-code-package
And what if I want to send video/audio data that I generate (i.e. not coming directly from a webcam and microphone)? Would I be able to send it to the browser at the remote location?
Any sample code out there which does something like I'm trying to accomplish?
The wording in that link is a bit misleading. They intend the native code for people who are developing browsers, and advise those who are developing "applications" in a browser to use the WebRTC API.
I have worked with their native code for over a year to develop an Android application that is capable of performing audio and/or video calls between other Android devices and to browsers. So I am pretty sure that it is completely possible to take their native code and create a Windows application (especially since they have example code that does that for Linux and Mac -- look at peerconnection client and peerconnection server for this). You might have to write and re-write code to get it to work on Windows.
As for data that you generate: in the Android project that I worked on, we didn't rely on the Android device/system to provide us with video; we captured and sent it out ourselves using the "LibJingle"/WebRTC libraries. So I know that that is possible, as long as you provide the libraries with video data in the correct format. I would imagine that one would be able to do the same with audio, but we never fiddled with that, so I cannot say for sure.
And as for example code, I can only suggest Luke Weber's GitHub repositories. Although it is for Android, it might be of some help to look at how he interfaces with the two libraries. Probably the better code to look at is the peerconnection client code that comes in the "LibJingle" section of the native code. [edit]: That is located in /talk/examples/peerconnection/client/ .
If you get lost by my use of "LibJingle", that will show you when I started working with all of this code. Sometime around July of 2013, they migrated "LibJingle" into the WebRTC "talk" folder. From everything that I have seen, they are the same thing, just with the location and name changed.
I understand that it's possible to write a plugin for a browser which lets you render to the browser window, so you can effectively run a normal app within the browser. NOT using JS or client technology, but a plugin which basically wraps your application - in our case C++ which does 3D rendering using DirectX or OpenGL.
I know that we'd have to have versions for both IE and other browsers, but how does this work? In Windows-speak, do we get an HWND through the plugin architecture, or is it more complex?
Do you have to write a version of the plugin compiled for each platform (Win/Mac/Linux)? Since a plugin is a binary, I assume this is the case, so you have one version for IE and then multiple versions for FF, Chrome and Safari (which share the same plugin setup, IIRC).
With FF, is this an example of a plugin or an extension, specifically?
An example of what I mean is QuakeLive - proper 3D rendering within the browser. We're actually using Ogre (cross-platform C++) but this uses Direct3D/OpenGL so it's the same thing.
Things like QuakeLive can be done quite simply with Google's NativeClient SDK. It abstracts away the whole plugin architecture so that you can focus on writing your software, and provides support for nearly all plugin-capable browsers on Windows, Mac OS X and Linux, portably. The user installs the NaCl plugin (which is included in some versions of Chrome and Chromium), and your software runs inside NaCl, seamlessly on all supported platforms, from a single binary.
Note that you can use OpenGL portably from within NaCl, but not DirectX. Future versions will also support ARM and x86_64 with technology from the LLVM project.
FireBreath is a great cross-platform, cross-browser library for developing C++ browser plugins.
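Regarding the HWND question above: with the classic NPAPI plugin model (which FireBreath wraps for you), the browser hands your plugin a native window in NPP_SetWindow; on Windows, NPWindow::window is the HWND you can bind a Direct3D/OpenGL context to. A sketch:

```cpp
#include <windows.h>
#include <npapi.h>   // NPAPI SDK headers

// Called by the browser whenever the plugin's window is created,
// moved or resized.
NPError NPP_SetWindow(NPP instance, NPWindow *window)
{
    HWND hwnd = static_cast<HWND>(window->window);

    // ... create or resize the Direct3D/OpenGL swap chain bound to hwnd ...

    return NPERR_NO_ERROR;
}
```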
Flash Player 11 provides true 3D support via the Stage3D API over DirectX, OpenGL or whatever is available on the device:
http://techzoom.org/adobe-flash-player-11-air-3-beta-stage3d-and-64bit-support-on-linux-mac-and-windows/
It's in beta now, so users need to install it manually, but when Adobe releases it, the majority of browsers will provide true 3D support instantly. The latest Away3D beta already supports the Stage3D API.
I have a need to get some of this done soon, so if anyone here is an expert on this, please look me up.
Steve Bell
Archiform 3D animation studio