Rendering over DirectX window with Awesomium (semi-transparent & rounded elements) - C++

I wonder if it's possible to use Awesomium to render the GUI over a DirectX 11 game (I do NOT use .NET; it's a C++/DirectX 11 game)?
It would involve:
Rendering the scene on the window with DirectX 11 (just as I am doing it now).
Rendering the GUI with Awesomium from HTML/CSS over the previously rendered scene.
Note that some GUI elements should be semi-transparent or rounded - so it's not only rendering on some rect, but also blending.
Is it possible? Or maybe I could make it another way (e.g. telling Awesomium to use DirectX for rendering somehow)?
Or maybe I could draw a semi-transparent DirectX texture in Awesomium, and then render it over the scene with DirectX? I know that rendering to a texture resource is possible with Awesomium, but does it support transparency & semi-transparency?
If not, are there good alternatives for what I wanted to achieve with Awesomium?

Yes. It can be done.
If you look at the documentation of the Awesomium WebView class, it has a surface() method which will return the view's backing bitmap.
Here is the C++ documentation for the class:
http://awesomium.com/docs/1_7_0/cpp_api/class_awesomium_1_1_web_view.html
You can copy this bitmap to a texture in DirectX and render it as a layer on top of your game, creating your UI.
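A minimal sketch of that per-frame copy might look like the following, assuming Awesomium 1.7's BitmapSurface API and a dynamic BGRA texture created earlier at the view's size (the function name is illustrative; if I recall the 1.7 API correctly, you also need WebView::SetTransparent(true) so the HTML background's alpha survives):

```cpp
#include <Awesomium/WebCore.h>
#include <Awesomium/WebView.h>
#include <Awesomium/BitmapSurface.h>
#include <d3d11.h>
#include <cstring>

// Sketch: copy the Awesomium view's backing bitmap into a dynamic D3D11
// texture (D3D11_USAGE_DYNAMIC, DXGI_FORMAT_B8G8R8A8_UNORM) once per frame.
void UpdateUiTexture(ID3D11DeviceContext* ctx,
                     ID3D11Texture2D* uiTexture,
                     Awesomium::WebView* view)  // SetTransparent(true) was called on it
{
    Awesomium::BitmapSurface* surface =
        static_cast<Awesomium::BitmapSurface*>(view->surface());
    if (!surface || !surface->is_dirty())
        return;  // nothing changed since the last copy

    D3D11_MAPPED_SUBRESOURCE mapped;
    if (FAILED(ctx->Map(uiTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        return;

    // BitmapSurface holds 32-bit BGRA pixels; copy row by row because the
    // texture's row pitch can differ from the surface's row span.
    const unsigned char* src = surface->buffer();
    unsigned char* dst = static_cast<unsigned char*>(mapped.pData);
    for (int y = 0; y < surface->height(); ++y)
        std::memcpy(dst + y * mapped.RowPitch,
                    src + y * surface->row_span(),
                    surface->width() * 4);

    ctx->Unmap(uiTexture, 0);
    surface->set_is_dirty(false);
}
```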
You also have to route and translate input into Awesomium. You can style your UI however you like using HTML, CSS and JavaScript; this way you can make it rounded and introduce transparency.
I won't repeat a perfectly good tutorial on doing this. You can find one here.
http://www.gamedev.net/blog/32/entry-2260646-sweet-snippets-rendering-web-pages-to-texture-using-awesomium-and-direct3d/
How you render your texture after it is written has nothing to do with Awesomium. Choose your blend modes and/or use shaders on the output texture for the desired effect.
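For instance, a conventional straight-alpha blend state for the UI pass could be set up like this (a sketch only; `device` and `context` are assumed to be your existing D3D11 objects):

```cpp
// Sketch: standard "source over" alpha blending for compositing the UI
// quad on top of the rendered scene.
D3D11_BLEND_DESC desc = {};
desc.RenderTarget[0].BlendEnable           = TRUE;
desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_INV_SRC_ALPHA;
desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState* uiBlend = nullptr;
device->CreateBlendState(&desc, &uiBlend);

// Bind it right before drawing the full-screen UI quad:
const float blendFactor[4] = { 0.f, 0.f, 0.f, 0.f };
context->OMSetBlendState(uiBlend, blendFactor, 0xffffffff);
```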

Related

Render a Qt overlay window over an OpenGL child window

I am looking for some information about rendering child windows, specifically about how OpenGL interoperates with GDI. The problem is basically that I have two windows: the main window is created in Qt, and inside it a child window is hosted that leverages an OpenGL renderer.
Now what I want to do is host an overlay on top of my OpenGL window. The problem I am having is that when I render with OpenGL, the OpenGL-generated graphics obscure the graphics area and effectively undo the graphics composited by Qt.
In the image below, the blue area is the Qt overlay; in that picture I'm using GDI (BeginPaint/EndPaint) and the windows seem to interact fine. That is, the window order seems correct and the client region is correct. The moment I start to render with OpenGL, the blue area gets replaced with whatever OpenGL renders.
To create the overlay, I created a second frameless, topmost QMainWindow, and once the platform HWND was initialized I reparented it: I changed the new window's parent to be the same parent as that of my OpenGL window.
I believed this would make every window get drawn separately, with the desktop composition manager doing the final composition, basically avoiding the infamous airspace problem documented by Microsoft for their WPF framework.
What could cause these issues? At this point I don't understand why, once I render with OpenGL, the pixels drawn by the Qt overlay are obscured, even though the window hierarchy says they should be composited. What could I do to accomplish what I want?
Mixing OpenGL and GDI drawing on a shared drawable (which also includes sibling/child windows without the CS_OWNDC window-class style flag) was never supported. That's not a Qt issue, but simply how OpenGL and GDI interact.
But the more important question is: why the hell aren't you using the OpenGL support built right into Qt in the first place? Ever since Qt 5, Qt uses OpenGL (if available) to draw everything (all the UI elements). Qt 5 makes it trivial to mix Qt content and OpenGL drawing.
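As an illustration of that mixing, here is a minimal sketch (class name and overlay text are made up): a QOpenGLWidget issues raw OpenGL calls for the scene, then paints the overlay with QPainter in the same paintGL(), and Qt composites the two into one surface:

```cpp
#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <QPainter>

class SceneWidget : public QOpenGLWidget, protected QOpenGLFunctions {
protected:
    void initializeGL() override { initializeOpenGLFunctions(); }

    void paintGL() override {
        // 1. Raw OpenGL rendering of the scene.
        glClearColor(0.1f, 0.1f, 0.2f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw the 3D scene here ...

        // 2. Qt-drawn overlay, composited into the same surface.
        QPainter painter(this);
        painter.setPen(Qt::white);
        painter.drawText(20, 30, QStringLiteral("Overlay drawn by Qt"));
    }
};
```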

QOpenGLWindow z-order issues

I am working on updating an application for a client.
They use Qt and currently use a QGLWidget to display a full-screen view of one of four possible cameras, selected by clicking the appropriate radio button. They then use OpenGL to draw on the image being displayed. This works great, but they want to update the UI to include a quad-split view of all four cameras.
My first thought on how to accomplish this was to keep the one QGLWidget for the full-screen display, and have 4 small QGLWidgets for the quad-split. From the documentation I found that you can't overlap QGLWidgets or QOpenGLWidgets because they don't handle z-order appropriately, but that this can be accomplished by using QOpenGLWindows and QWidget::createWindowContainer.
So, I coded up an application that uses a QOpenGLWidget (trying to bring them up to date) for the full-screen view, and 4 smaller QOpenGLWindows using QWidget::createWindowContainer, but this isn't working either.
The widgets built from QOpenGLWindows are always on top, even if I use lower() to try to get them behind the full-screen QOpenGLWidget. I've also tried using hide() on the widgets built from QOpenGLWindows; however, this had no effect.
Do this at a lower level. Keep the one QGLWidget -- in fact don't touch your Qt objects. Instead, change the lower-level rendering so that it makes 4 calls to glViewport.
After each call to glViewport, update the modelview and projection matrices according to the camera of interest, then draw the 3D scene.
This is simple and performant, because the driver only needs to deal with a single OpenGL context. You might have some extra work to adjust mouse input, but I think it'll be worthwhile.
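A sketch of what that paintGL() could look like (the camera and scene helpers are placeholders for your existing rendering code):

```cpp
// Sketch: quad-split rendering inside the one existing QGLWidget by
// issuing four glViewport calls per frame.
void Viewer::paintGL() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    const int w = width() / 2;
    const int h = height() / 2;
    // Bottom-left origins of the four quadrants (GL convention).
    const int origin[4][2] = { {0, 0}, {w, 0}, {0, h}, {w, h} };

    for (int i = 0; i < 4; ++i) {
        glViewport(origin[i][0], origin[i][1], w, h);
        applyCameraMatrices(i);  // hypothetical: set projection/modelview for camera i
        drawCameraView(i);       // hypothetical: draw camera i's image and overlays
    }
}
```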

How to get X to render to an OpenGL texture?

I am trying to write a compositor, like Compiz, but with different graphical effects. I am stuck at the first step, though, which is that I can't find how to get X to render windows to a texture instead of to the framebuffer. Any advice on where to start?
X11 composition works as follows:
1. You redirect windows to an offscreen area; the Composite extension has the functions for this.
2. You use the Damage extension to find out which windows have changed their contents.
3. In the compositor, you use the GLX_EXT_texture_from_pixmap extension to bind each window's contents to a corresponding OpenGL texture (see the sketch after this list).
4. You draw the textures into a composition layer window; the Composite extension provides a special screen layer, between the regular window layer and the screensaver layer, in which the window where composition takes place is created.
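A sketch of step 3 under GLX 1.3+, with error handling and FBConfig selection trimmed (the function name is illustrative; a GL context is assumed current):

```cpp
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>
#include <GL/glx.h>
#include <GL/glxext.h>

// Bind a redirected window's backing pixmap to a GL texture; the driver
// shares the pixels, so no explicit copy is made.
GLuint BindWindowTexture(Display* dpy, Window win, GLXFBConfig fbconfig) {
    // 1. Redirect the window offscreen and name its backing pixmap.
    XCompositeRedirectWindow(dpy, win, CompositeRedirectAutomatic);
    Pixmap pixmap = XCompositeNameWindowPixmap(dpy, win);

    // 2. Wrap the pixmap in a GLXPixmap marked as a texture source.
    const int attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };
    GLXPixmap glxPixmap = glXCreatePixmap(dpy, fbconfig, pixmap, attribs);

    // 3. Bind it to a texture object in the current GL context.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT =
        (PFNGLXBINDTEXIMAGEEXTPROC)glXGetProcAddress(
            (const GLubyte*)"glXBindTexImageEXT");
    glXBindTexImageEXT(dpy, glxPixmap, GLX_FRONT_LEFT_EXT, nullptr);
    return tex;
}
```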

Color Picker / Chooser for an OpenGL Application

I am building an OpenGL application. I read through the GLUI tutorial on Code Project about creating Windows form controls for an OpenGL application. But my requirement is to develop a color chooser/picker, like an RGB chart or RGB cube, to select a color. The tutorial on Code Project shows the list of colors as a drop-down box; however, that won't really help me, as I need the standard Windows color picker. I know that the color picker dialog box is part of Windows. Can anyone suggest a way to use it with my OpenGL application?
You can try fox-toolkit. It is a C++-based toolkit for developing graphical user interfaces, and it provides a color picker as well as OpenGL widgets for 3D graphical manipulation.
On Windows, you can directly call ChooseColor() from the Windows API; it will open the native color chooser. If you need a cross-platform solution, tiny file dialogs on SourceForge also has a color picker and no main loop.
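A minimal sketch of calling the native dialog (link against comdlg32.lib; the wrapper name is made up, and the owner handle is your application window's):

```cpp
#include <windows.h>
#include <commdlg.h>

// Opens the standard Windows color dialog; returns false on cancel.
bool PickColor(HWND owner, COLORREF* inOutColor) {
    static COLORREF customColors[16] = {};  // the dialog's custom-color slots

    CHOOSECOLOR cc = {};
    cc.lStructSize  = sizeof(cc);
    cc.hwndOwner    = owner;
    cc.lpCustColors = customColors;
    cc.rgbResult    = *inOutColor;          // initial selection
    cc.Flags        = CC_FULLOPEN | CC_RGBINIT;

    if (!ChooseColor(&cc))
        return false;
    *inOutColor = cc.rgbResult;
    return true;
}
```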
You could render a quad/triangle/circle with a different color on each vertex and activate smooth shading to interpolate between these points. Then just read back the color value from OpenGL at the mouse position.
Edit: or do it like this, where you calculate the color at the mouse position yourself (reading values back slows down OpenGL a lot!): http://sharathpatali.wordpress.com/2009/07/07/a-color-picker-for-pymt/
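For reference, a compatibility-profile sketch of both halves of this answer, the smooth-shaded gradient and the (discouraged) read-back:

```cpp
#include <GL/gl.h>

// Draw a full-viewport quad with a different color at each corner;
// GL_SMOOTH interpolates the colors across the surface.
void DrawPaletteQuad() {
    glShadeModel(GL_SMOOTH);
    glBegin(GL_QUADS);
    glColor3f(1, 0, 0); glVertex2f(-1, -1);
    glColor3f(0, 1, 0); glVertex2f( 1, -1);
    glColor3f(0, 0, 1); glVertex2f( 1,  1);
    glColor3f(1, 1, 1); glVertex2f(-1,  1);
    glEnd();
}

// Read the pixel under the mouse; glReadPixels stalls the pipeline,
// which is exactly the slowdown the answer warns about.
void ReadColorAt(int mouseX, int mouseYFromTop, int windowHeight,
                 unsigned char rgba[4]) {
    glReadPixels(mouseX, windowHeight - 1 - mouseYFromTop, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);  // GL origin is bottom-left
}
```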
I did this with a simple gradient image which I kept in memory. I just tracked the mouse position on the image, read the data from the image (kept in RAM), and got the 32-bit RGBA color value from it. This is easier than reading the pixels from the screen (and also faster and more reliable).
This also gives you a lot more flexibility in presenting the palette; your imagination is the only limit on how it looks. Note: you must use 32-bit colors for the image, because if you want smooth edges you simply fade the alpha while keeping the colors the same, so the colors won't get distorted at the edges. Don't forget to enable blending when rendering the image.
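A sketch of that lookup, assuming a row-major, top-left-origin RGBA image kept in RAM (the struct and names are made up):

```cpp
#include <cstdint>

struct PaletteImage {
    int width, height;
    const std::uint8_t* pixels;  // width * height * 4 bytes, RGBA
};

// Returns false when the cursor is outside the image or on a fully
// transparent pixel (the faded, smooth edge of the palette).
bool SamplePalette(const PaletteImage& img, int mouseX, int mouseY,
                   std::uint8_t rgba[4]) {
    if (mouseX < 0 || mouseY < 0 ||
        mouseX >= img.width || mouseY >= img.height)
        return false;
    const std::uint8_t* p = img.pixels + 4 * (mouseY * img.width + mouseX);
    if (p[3] == 0)
        return false;
    for (int i = 0; i < 4; ++i) rgba[i] = p[i];
    return true;
}
```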

Questions about OpenGL Settings and drawing over a mask in a window

I would like to know the OpenGL rendering settings for having a program render OpenGL on top of any window on screen that has a specific color code (a screen-level buffer?).
E.g. VLC Media Player and Media Player Classic both have rendering modes which allow you to go full-screen and then minimize the player while continuing to watch the media, by letting a specific color act as a transparent mask. For example, you could set the background color of a terminal application to 0x000010 for VLC or 0x000001 for MPC, and you could then type over the media using text (in its original color). When you take a "print screen", all you get is the mask color; however, this is an acceptable side effect.
Is it possible to do this with any OpenGL application, given the right settings and hardware? If so, what are the settings, or at least the terminology for this effect, so I can research it further?
What you are trying to implement is called an "overlay". You can try this angelcode tutorial. If I remember correctly, there was also a tutorial in the DirectX SDK.
If you need to use OpenGL, you will need to perform offscreen rendering (using an FBO or a P-buffer), read the results using glReadPixels(), and display them using the overlay.
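A sketch of the OpenGL half (FBO render plus read-back); the overlay display itself is platform-specific and omitted, and the loader header is just one option:

```cpp
#include <GL/glew.h>   // or any loader exposing the FBO entry points
#include <vector>

// Render the scene into an offscreen FBO and pull the pixels back to
// system memory for hand-off to a platform overlay surface.
std::vector<unsigned char> RenderOffscreen(int w, int h) {
    GLuint fbo = 0, colorTex = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    std::vector<unsigned char> pixels(static_cast<size_t>(w) * h * 4);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        glViewport(0, 0, w, h);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the scene here ...
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteTextures(1, &colorTex);
    glDeleteFramebuffers(1, &fbo);
    return pixels;
}
```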