Writing a 3D rendering browser plugin - C++

I understand that it's possible to write a browser plugin which lets you render to the browser window, so you can effectively run a normal app within the browser - NOT using JS or client-side technology, but a plugin which basically wraps your application; in our case C++ which does 3D rendering using DirectX or OpenGL.
I know that we'd have to have versions for both IE and other browsers, but how does this work - in Windows-speak, do we get an HWND through the plugin architecture, or is it more complex?
Do you have to compile a version of the plugin for each platform - Win/Mac/Linux? Since a plugin is a binary, I assume this is the case, so you'd have one version for IE and then multiple versions for FF, Chrome, and Safari (which share the same plugin setup, IIRC).
With Firefox, would this be a plugin or an extension, specifically?
An example of what I mean is QuakeLive - proper 3D rendering within the browser. We're actually using Ogre (cross-platform C++), but it renders through Direct3D/OpenGL, so it's the same thing.

Things like QuakeLive can be done quite simply with Google's Native Client (NaCl) SDK. It abstracts away the whole plugin architecture so that you can focus on writing your software, and provides support for nearly all plugin-capable browsers on Windows, Mac OS X, and Linux, portably. The user installs the NaCl plugin (which is included in some versions of Chrome and Chromium), and your software runs inside NaCl, seamlessly on all supported platforms, from a single binary.
Note that you can use OpenGL portably from within NaCl, but not DirectX. Future versions will also support ARM and x86_64 with technology from the LLVM project.
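To give an idea of how much of the plugin architecture is hidden, here is a minimal sketch of a NaCl module, following the pattern of the SDK's hello-world examples (the class names are placeholders):

```cpp
// Minimal NaCl module boilerplate (pattern from the NaCl SDK examples).
// The browser loads your module once and asks it to create one
// pp::Instance per <embed> element on the page.
#include "ppapi/cpp/instance.h"
#include "ppapi/cpp/module.h"

class RendererInstance : public pp::Instance {
 public:
  explicit RendererInstance(PP_Instance instance) : pp::Instance(instance) {}
  // Set up your OpenGL ES context and rendering loop here.
};

class RendererModule : public pp::Module {
 public:
  virtual pp::Instance* CreateInstance(PP_Instance instance) {
    return new RendererInstance(instance);
  }
};

namespace pp {
// Called once by the browser to create your module.
Module* CreateModule() { return new RendererModule(); }
}  // namespace pp
```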

FireBreath is a great cross-platform, cross-browser library for developing C++ browser plugins.
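On the HWND question: FireBreath wraps the browsers' native plugin APIs for you. At the NPAPI level (the model FireBreath abstracts for the non-IE browsers), the browser does hand your plugin a native window handle, which on Windows is an HWND. A rough sketch, with registration and teardown omitted:

```cpp
// NPAPI level (what FireBreath abstracts away): the browser passes the
// plugin a native window via NPP_SetWindow; on Windows, NPWindow::window
// holds an HWND. Sketch only; real plugins also handle resize/teardown.
#include <windows.h>
#include "npapi.h"   // from the NPAPI/Gecko SDK

NPError NPP_SetWindow(NPP instance, NPWindow* window) {
    HWND hwnd = static_cast<HWND>(window->window);
    // Create (or resize) your Direct3D/OpenGL context against this HWND,
    // then drive rendering from a timer or a dedicated thread.
    return NPERR_NO_ERROR;
}
```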

Flash Player 11 provides true 3D support via the Stage3D API, on top of DirectX, OpenGL, or whatever is available on the device:
http://techzoom.org/adobe-flash-player-11-air-3-beta-stage3d-and-64bit-support-on-linux-mac-and-windows/
It's in beta now, so users need to install it manually, but when Adobe releases it, the majority of browsers will provide true 3D support instantly. The latest Away3D beta already supports the Stage3D API.

I need to get some of this done soon, so if anyone here is an expert on this, please look me up.
Steve Bell
Archiform 3D animation studio

Related

User Interface SDK for HiSilicon processor

We are developing software on a Hi3536 processor based board. The SDK provided by HiSilicon comes with samples for developing user interfaces using the framebuffer API, which is too low level; i.e., to build a combo box or text box, we have to write code from scratch.
We are now trying to use Qt. We are not sure what other vendors use for developing software on the Hi3535 or Hi3536.
Can somebody suggest which SDK is most suitable for developing user interfaces on HiSilicon processor based boards?
We referred to the sample code given in the following link and were able to bring up the GUI successfully on the Hi3536 using Qt 5.6 - http://bbs.ebaina.com/thread-8217-1-1.html.
Please note that you need to use Google Translate, as the text is in Chinese.
In a past job, a few years ago, I worked on a GUI for a board based on an older HiSilicon chip.
I greatly enjoyed using Qt for Embedded Linux, version 4.8, on the Linux framebuffer.
As far as I remember, be sure to study the HiSilicon documentation on how the framebuffers can/must be initialized and used. The HiSilicon SDK used to also contain some sample programs with source code; there should be one that deals with framebuffers too.
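To illustrate why the raw framebuffer route feels so low level, here is the generic Linux fbdev initialization that such sample programs typically build on (a sketch; the device path, stride handling, and error checking are simplified):

```cpp
// Generic Linux framebuffer (fbdev) setup - the kind of API the HiSilicon
// samples wrap. Error handling and the exact device path are simplified.
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

int main() {
    int fd = open("/dev/fb0", O_RDWR);
    fb_var_screeninfo vinfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   // resolution, bits per pixel

    size_t size = vinfo.xres * vinfo.yres * vinfo.bits_per_pixel / 8;
    uint8_t* fb = static_cast<uint8_t*>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    // Every widget (combo box, text box, ...) has to be drawn pixel by
    // pixel into this buffer - which is exactly why a toolkit like Qt helps.
    munmap(fb, size);
    close(fd);
    return 0;
}
```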
My knowledge of Qt for Embedded is stuck at 4.8. I know version 5.x has radically redesigned that part, but I can't help you on details related to Qt 5.

Running a Qt app over the web

I am writing an application using Qt and want to try to deploy it as a web application. I want users to be able to use my application by accessing it through a web browser. I'm guessing that's what a web application is? What kind of options do I have? I've never looked into doing anything like this, but I'd like to learn something new.
EDIT: What if I deployed my application on a Linux server and had users access/run it through a terminal? I think writing a web application is going to be more complicated than I had originally thought.
If all you have is a Qt application, then the best you can do is use Qt 5 and run it using a remote visualization package:
Use WebGL streaming, introduced in Qt 5.10. Qt exposes a browser-connectible interface directly, without the need for third-party code.
For Qt 5.0-5.9, you can use the VNC platform plugin, then connect using a browser-based VNC client.
For many uses this might be sufficient, and it's certainly much less effort than coding up a web app.
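Both options work with an unmodified Qt application; the backend is picked at launch time via the -platform argument. A minimal sketch (the port number is just an example):

```cpp
// An ordinary Qt application; nothing WebGL- or VNC-specific in the code.
// The remote-visualization backend is selected on the command line:
//   ./myapp -platform webgl:port=8080   # Qt >= 5.10, WebGL streaming
//   ./myapp -platform vnc               # Qt 5.0-5.9, VNC platform plugin
// then point a browser (or a browser-based VNC client) at the machine.
#include <QApplication>
#include <QPushButton>

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);  // picks up the -platform argument
    QPushButton button("Hello from the browser");
    button.show();
    return app.exec();
}
```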
You're looking for Wt, which provides a different set of drawing routines for many Qt-style GUI elements, turning them from lines on screen into HTML controls:
http://www.webtoolkit.eu/wt
It also handles WebSocket calls to provide interactivity. It seems like a great idea; let us know how it works in practice.
For the case of QML there is QmlWeb, a JavaScript library that can parse QML code and create a website out of it using normal HTML/DOM elements and absolute positioning within CSS, translating the QML properties into CSS properties.
QmlWeb is a small project by Lauri Paimen that he has been developing for a few years now. QmlWeb of course doesn't yet support everything Qt's implementation of QML does, but it already supports a quite usable subset of it: nearly all of the basic QML syntax, plus HTML input elements (Button, TextInput, and TextArea are currently supported, with more to come).
QmlWeb is not finished, though. I hope Digia helps this project reach feature maturity.
Interestingly, it is possible to compile Qt applications to JavaScript using emscripten-qt. These run fairly fast with Firefox's asm.js support:
http://vps2.etotheipiplusone.com:30176/redmine/projects/emscripten-qt/wiki
Try "Qt for Webassembly".
WebAssembly allows C/C++ code to be compiled and run inside the majority of browsers:
WebAssembly (Wasm, WA) is a web standard that defines a binary format and a corresponding assembly-like text format for executable code in Web pages. ... It is executed in a sandbox in the web browser after a verification step. Programs can be compiled from high-level languages into Wasm modules and loaded as libraries from within JavaScript applets ... Its initial aim is to support compilation from C and C++, though support for other source languages such as Rust and .NET languages is also emerging.
To run a Qt application unchanged over the web so users can operate it in a browser, you can compile it for Android using the x86 Android ABI, run it inside an Android emulator on a server and supply the Android Cast videostream to users' browsers. You'll also need to have JavaScript in place that records the keyboard and mouse events on the web clients and relays them back to the server.
I had previously tried Qt WebGL streaming and found it to be good over the local network but too slow over the Internet. A 10 s application startup time is acceptable, but 3 s to show a new screen is not. I had the exact same experience with the Qt VNC platform plugin. Compared with that, the Android Cast streaming based appetize.io solution (see below) was much faster, providing a quite usable user experience even over my 8 Mbit/s connection.
Existing solutions
Here is an overview of commercial products and open source software components that I found that can help you with this approach:
appetize.io. This is a commercial product to run Android applications over the web for demo and testing purposes. I have just done this with a Qt QML based application and liked the outcome. When choosing an Android 9 / 10 device you can see that the "Screencast" setting is on; which is why I believe that this solution uses the Android Cast technology.
runthatapp.com. This is another commercial offer. Not as sophisticated (yet) as appetize.io, but providing a nice pay-as-you-go scheme.
ScreenStream. An open source Android app that provides a web server to view the screen of one Android device in a web browser, also relying on the Android Cast technology. That Android device could be an emulator running on a web server. And to make this multi-user capable you can employ a small load balancer similar to a technique that I developed for Qt WebGL streaming. The ScreenStream README shows that the application might consume up to 20 Mbit/s per client in short bursts.
Ideas for future improvements
Serving your Qt app as an interactive live video stream seems a promising idea to me, given that I found it already less sluggish than VNC and similar solutions. There are ways to make this even faster, such as using a hardware H.265 video encoder to create a video stream with very little delay. By operating multiple such encoders on a single server, the server could serve multiple clients and still keep its CPU load low. Maybe there are even better video formats for such a purpose, given that user interfaces of programs lend themselves well to lossless compression.
Some hints for appetize.io
Finally: since I used the appetize.io product for a Qt application over the last few days, here are some tips from that experience:
It is necessary to compile your Qt application for the x86 Android ABI. The default armeabi-v7a ABI will not work because most appetize.io devices are actually server-based Android emulators and the only ARM based device ("Nexus 5 Physical") failed to start any Qt application I tried to use with it.
The x86_64 ABI may also work, but you might then have to also compile Qt yourself for it, as not all versions of Qt come pre-compiled for that architecture.
All appetize.io links (both for standalone pages and embeddable iframes) support GET parameters to configure the app presentation format. Especially relevant here is screenOnly=true to show the app without a picture of a phone or tablet around it.
Features that rely on phone hardware (camera, position etc.) will not work or only show dummy data. But if you really wanted, you could create a hybrid application combined with client-side JavaScript. It would run device-dependent code in the user's browser, for example to take a photo with the webcam, and then provide the results to the Qt application via the appetize.io cross-document messaging protocol. The following message types seem suitable to build a simple communication protocol: pasteText(value), keypress(key, shiftKey) and openUrl(value).
In the default appetize.io standalone app demo pages, only the key events of ordinary letter keys are sent to the app, not keyboard shortcuts or function keys like F2 and Esc. This might be possible to fix with JavaScript on an own page embedding the appetize.io iframe, as their cross-document messaging protocol provides the keypress(key, shiftKey) message type.
Qt does not support writing browser-based web applications, unfortunately.
You need to use common web programming technologies for this. There are a lot of ways, but Qt is not one of them.

How can I 'break away' from Cocoa and develop Mac OpenGL applications in C/C++?

I am looking to get started with some 3D programming in C or C++. The problem I have is that it seems like the only tutorials I can find for Mac OS use Objective-C and the Cocoa frameworks. I want to obtain the same environment as Windows users, more or less.
If I try to use a text editor and the g++ compiler, I am missing headers, but if I try to use Xcode, I am forced to grapple with Cocoa, which is frustrating to me. I don't really see any reason why the OpenGL/GLUT that comes pre-installed on the Mac should force me to use Xcode, but it seems I can't get the header files without it.
How can I get past all of the Apple 'developer friendly' interfaces and write some old-fashioned code with full cross-platform portability?
Some portion of Objective-C is inevitable if you want to use the latest benefits of OS X/Cocoa. The easiest way to port an existing application to Mac OS would be the following:
Write a "bare bones" nibless application in Objective-C. It only needs a single AppDelegate class and a little setup in the main() function.
Add a custom NSOpenGLView descendant to the window you create in the AppDelegate's applicationDidFinishLaunching handler.
Set up the CVDisplayLink and rendering callback in the NSOpenGLView initialization (sketched below).
Use your existing OpenGL rendering code in the CVDisplayLink's callback.
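For the last two steps, the CVDisplayLink part is a plain C API from CoreVideo and can be called from C++ or Objective-C++. A minimal sketch, with the context locking and view plumbing omitted:

```cpp
// CVDisplayLink setup: a plain C API from CoreVideo, callable from C++
// or Objective-C++. Context locking and view plumbing are omitted.
#include <CoreVideo/CVDisplayLink.h>

static CVReturn RenderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp* now,
                               const CVTimeStamp* outputTime,
                               CVOptionFlags flagsIn, CVOptionFlags* flagsOut,
                               void* userData) {
    // Make your NSOpenGLContext current, then run your existing
    // OpenGL rendering code here.
    return kCVReturnSuccess;
}

void StartRenderLoop(void* viewOrContext) {
    CVDisplayLinkRef displayLink;
    CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
    CVDisplayLinkSetOutputCallback(displayLink, &RenderCallback, viewOrContext);
    CVDisplayLinkStart(displayLink);  // callback now fires once per vsync
}
```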
Now for the interesting part: where to get all of this?
Surprisingly, a good nibless application sample is the UI for the OS X port of QEMU (yes, the emulator). Also, Apple's official GLEssentialPractices demo shows all the information you need to set up an OpenGL rendering pipeline. All the rest is up to you.
A detailed and modern introduction to system-level OS X programming can be found in the book "Advanced Mac OS X Programming" by Mark Dalrymple. It explains many things, and after reading all of it I understood most of the design decisions in the OS (it really makes you accept all the "non-standard" things if you think from the performance viewpoint).
To get through the "nibless" programming, I recommend reading blog posts like this one: http://blog.kleymeyer.com/2008/05/creating-cocoa-applications-programatically-ie-nib-less/ - a Google search helps a lot.
The same tricks apply to CocoaTouch/iOS, and there are a lot of questions answered on SO, like this one: Cocoa touch/Xcode - generating NIB-less graphics context
If you want to create cross-platform applications, you could create a project with the Command Line Tool template.
Next, import the OpenGL and GLUT frameworks. This will get you a "blank" C++ project with the required OpenGL and GLUT headers.
Lighthouse3D gives you some tips about portability and how to set up your first project:
http://www.lighthouse3d.com/tutorials/glut-tutorial/initialization/
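Such a project can stay fully portable; only the include path and link flags differ on the Mac. A minimal sketch along the lines of the Lighthouse3D initialization tutorial:

```cpp
// Minimal portable GLUT program. On the Mac the headers live in the GLUT
// framework; on Linux/Windows use <GL/glut.h> (or freeglut) instead.
// Build on macOS: g++ main.cpp -framework OpenGL -framework GLUT
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif

static void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);            // classic fixed-function triangle
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Portable GLUT window");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```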
I have created a software layer (named cocoglut) that allows the translation of basic or essential GLUT calls to Cocoa. This library allows creating/destroying windows and registering callbacks from a C/C++ application just by using GLUT calls, without the need for nib files or Xcode project files (it can be compiled from the command line). This option uses the full Retina display resolution. The source is on GitHub.

App using 3D & 3rd-party plugins - forward compatible OpenGL or Direct3D?

I'm writing an app that's going to use 3rd-party plugins to render all kinds of 3D trickery.
My main application will create the context/render object and a render target/framebuffer object. The 3rd-party plugins are going to render their fancy stuff to that, so they need access to that context/render object to perform their 3D-render-related calls.
I can choose to implement this using either OpenGL or Direct3D.
My decision will most probably be based on my understanding of the following problem:
Obviously, new versions of OpenGL/Direct3D will be coming out, and it would be nice if newly created plugins could benefit from newer versions of DX/OGL than the main program was compiled with (if the computer running the application supports that newer version).
Using OpenGL (via OpenTK), I understand it's possible to create a forward-compatible context, as in "give me the most up-to-date version that is backward compatible with version X".
So when asked for a 3.2 context, if 4.0 is available it would return a 4.0 context.
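For reference, underneath OpenTK this maps to the *_create_context_attribs extensions; a WGL sketch (obtaining the extension function pointer and error handling are omitted):

```cpp
// What OpenTK does under the hood on Windows: wglCreateContextAttribsARB
// (from the WGL_ARB_create_context extension; loading the pointer is
// omitted) takes a *minimum* version, and the driver may return any newer
// backward-compatible context.
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   // WGL_CONTEXT_* tokens, from the Khronos registry

HGLRC CreateModernContext(HDC hdc, HGLRC shareContext,
                          PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB) {
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,   // "at least 3.2"
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0                                   // attribute list terminator
    };
    // On a driver that supports 4.x, the returned context may expose 4.x
    // features to plugins even though the host asked for 3.2.
    return wglCreateContextAttribsARB(hdc, shareContext, attribs);
}
```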
For DirectX, I don't see anything like that, which would mean that if I built my main program with DirectX 11, for example, 3rd-party plugins would never be able to use newer versions when available?
Am I getting this correct ?
Will OpenGL enable 3rd-party plugin writers to create plugins for newer versions of OpenGL, while DirectX will not allow me to do something like that ?
I'd be amazed if DirectX ever supported the sort of compatibility you're talking about within an application. Each version of the Direct3D APIs has basically been an independent (COM) object hierarchy with absolutely no acknowledgment that other generations of the system might exist, past or future. (Backwards compatibility at the platform level has generally been superb, of course, but you're after something quite different).
So either go with OpenGL (the support you mention at least sounds like it offers some hope), or maybe even consider a higher level API for plugins which somehow "compiles"/"adapts" to the actual target platform at runtime. That'd let you support OpenGL and Direct3D (although obviously shaders in particular would present severe difficulties; hence projects like AnySL).

C++ UI framework from scratch?

I want to create a C++ UI framework (something like Qt, or like the Ubuntu Unity desktop).
How is such a thing programmed? Is it done using OpenGL? Take, for example, the Plasma UI built on Qt - how is that programmed?
Direct answers, reference links - anything will be helpful.
Some interesting OpenGL-based UIs I found on the web:
LiquidEngine
http://www.youtube.com/watch?v=k0saaAIjIEY
Libnui
en.wikipedia.org/wiki/Libnui
Some UI frameworks render everything themselves, and work based on some kind of clipping window within the host system's screen. Non-display aspects (such as input event handling) have to be translated to/from the host system's underlying APIs.
Some UI frameworks translate as much as possible to some underlying framework.
wxWidgets can do both. You can choose a native version (e.g. wxMSW if you're on Windows) and most wxWidgets controls will be implemented using native Windows controls. Equally, you can choose the wxUniversal version, where all controls are implemented by the wxWidgets library itself.
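The key point is that the same application code compiles against either port; only the library you link against changes. A minimal sketch (wxWidgets 3.x):

```cpp
// The same wxWidgets code builds against wxMSW (native Win32 controls)
// or wxUniversal (controls drawn by wxWidgets itself); only the library
// you link against changes.
#include <wx/wx.h>

class MyApp : public wxApp {
public:
    bool OnInit() override {
        wxFrame* frame = new wxFrame(nullptr, wxID_ANY, "Hello");
        new wxButton(frame, wxID_ANY, "Native or drawn?", wxPoint(20, 20));
        frame->Show(true);
        return true;
    }
};

wxIMPLEMENT_APP(MyApp);  // generates main()/WinMain() for the platform
```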
The trouble is that typical GUI frameworks are huge. If you want a more manageable example to imitate, you might look at FLTK. I haven't got around to studying it myself, but it has a reputation for being concise.
There are also some GUI toolkits that are specifically aimed at games programming, such as Crazy Eddie's GUI. My guess is that these are as independent of the underlying API as possible, so that particular applications can implement the mapping to whichever underlying API they happen to target (OpenGL, DirectX, SDL, whatever) and can be the boss of the GUI rather than vice versa.
http://www.wxwidgets.org/
http://www.fltk.org/
http://www.cegui.org.uk/wiki/index.php/Main_Page
"no really, don't write your own wm or toolkit"
The #Xorg-devel guys on irc.freenode.org
doing one anyway means that you have to test against a wide range of more or less buggy WMs and X implementations, and that you have to frequently update to be compatible with the latest Xorg server and X protocol features (like Xinput 2.1)
understandably, the Xorg people are tired to support old, unmaintained toolkits and applications. They already have enough bugs.
GUI frameworks are very dependent on the windowing system, which dictates what is allowed and how windows are created and rendered; for example, you pass a specific option to create a borderless or full-screen window.
Since you mentioned OpenGL and Ubuntu, I guess you want to start on a Linux platform. You should study Xlib, for which you can find the reference here.
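To give a feel for the level you would be working at, here is the canonical Xlib skeleton that every X11 toolkit ultimately builds on (error handling omitted):

```cpp
// Canonical Xlib skeleton: the level a from-scratch toolkit on X11
// would build on. Error handling is omitted.
#include <X11/Xlib.h>

int main() {
    Display* dpy = XOpenDisplay(nullptr);           // connect to the X server
    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(
        dpy, RootWindow(dpy, screen), 0, 0, 640, 480, 1,
        BlackPixel(dpy, screen), WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    XEvent ev;
    for (;;) {                                      // the event loop every
        XNextEvent(dpy, &ev);                       // toolkit is built around
        if (ev.type == Expose) { /* draw the UI here */ }
        if (ev.type == KeyPress) break;
    }
    XCloseDisplay(dpy);
    return 0;
}
```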
Since the Qt library is open source, you can download it and peek into its sources.
A UI library isn't developed from scratch. It relies on the OS's windowing system, which relies on the driver for your graphics adapter, which relies on the OS kernel, which relies on... and so on.
To develop any software "from scratch", you can start by writing your own BIOS. Once you're done with that, move on to writing an OS, and then you should be just about ready to write the software you wanted. Good luck.
And this is assuming you're willing to cheat, of course, and use a compiler you didn't write from scratch.
Before you do that, it's worth spending a week thinking about:
1. Do you really know how to do it? I doubt that.
2. Do you really need to do it? I doubt that too.