I want to use the Chromium Embedded Framework as the GUI of my OpenGL application.
I am using off-screen rendering.
How can I detect when an HTML button or link is clicked?
I tried googling this, but with such a generic search term there is only noise.
The General Usage wiki page doesn't cover this either.
I think you should catch the event in JavaScript code and send a message to the C++ code. This can be done using the message router described here:
https://bitbucket.org/chromiumembedded/cef/wiki/GeneralUsage#markdown-header-asynchronous-javascript-bindings
It is also possible to implement a custom message router using the JavaScript integration mechanism:
https://bitbucket.org/chromiumembedded/cef/wiki/JavaScriptIntegration.md
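For example, here is a minimal sketch using the built-in message router (the handler class name and the "button-clicked" message string are my own inventions; the cefQuery mechanism itself is the one described in the wiki page above):

    // Browser-process side: a handler whose OnQuery() receives the
    // messages that JavaScript sends through window.cefQuery().
    #include "include/wrapper/cef_message_router.h"

    class ClickHandler : public CefMessageRouterBrowserSide::Handler {
    public:
        bool OnQuery(CefRefPtr<CefBrowser> browser,
                     CefRefPtr<CefFrame> frame,
                     int64 query_id,
                     const CefString& request,
                     bool persistent,
                     CefRefPtr<Callback> callback) override {
            if (request == "button-clicked") {  // our own message string
                // React in native code here (open a dialog, start the game, ...).
                callback->Success("ok");
                return true;
            }
            return false;  // not handled by us
        }
    };

    // In your CefClient: create the router once, add the handler, and
    // forward the events listed in the wiki (OnProcessMessageReceived,
    // OnBeforeBrowse, OnBeforeClose, ...) to it.
    CefMessageRouterConfig config;  // default JS function: window.cefQuery
    CefRefPtr<CefMessageRouterBrowserSide> router =
        CefMessageRouterBrowserSide::Create(config);
    router->AddHandler(new ClickHandler(), false);

    // On the page, the button's click handler sends the message:
    //
    //   document.getElementById("play").onclick = function() {
    //       window.cefQuery({ request: "button-clicked",
    //                         onSuccess: function(response) {},
    //                         onFailure: function(code, msg) {} });
    //   };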
Do you pass any UI events to CEF from your OpenGL application?
In general you should attach the OpenGL and CEF instances to the same window. You should also handle CEF's request to invalidate an area of the window, notifying your OpenGL code that it needs to re-render the window, including the contents of that off-screen buffer.
Sequence of actions:
1. The window receives a mouse-move event.
2. The window passes it to your CEF instance.
3. If CEF determines that it needs to apply the :hover state to your button, it will call window.invalidateArea(areaOfTheButton).
4. You handle that window.invalidateArea() call by updating the OpenGL scene, including the new version of your off-screen bitmap.
From CEF you should also receive the various secondary DOM events generated in response to mouse move/up/down/etc. on the window; the CEF side of this loop is sketched below.
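In CEF terms this means forwarding input through CefBrowserHost and receiving the repainted pixels in CefRenderHandler::OnPaint. A minimal sketch (the texture upload and redraw are left as comments for your own code, and exact signatures vary slightly between CEF versions):

    #include "include/cef_render_handler.h"

    class GLRenderHandler : public CefRenderHandler {
    public:
        GLRenderHandler(int w, int h) : width_(w), height_(h) {}

        // Tell CEF how big the off-screen view is.
        void GetViewRect(CefRefPtr<CefBrowser> browser, CefRect& rect) override {
            rect = CefRect(0, 0, width_, height_);
        }

        // CEF calls this with the updated BGRA pixel buffer whenever it
        // invalidates an area.
        void OnPaint(CefRefPtr<CefBrowser> browser, PaintElementType type,
                     const RectList& dirtyRects, const void* buffer,
                     int width, int height) override {
            // Upload 'buffer' to your GL texture (e.g. glTexSubImage2D)
            // and mark the window for redraw.
        }

    private:
        int width_, height_;
        IMPLEMENT_REFCOUNTING(GLRenderHandler);
    };

    // In the window's mouse handlers, forward input to the browser:
    CefMouseEvent e;
    e.x = mouse_x;
    e.y = mouse_y;
    browser->GetHost()->SendMouseMoveEvent(e, false /*mouseLeave*/);
    browser->GetHost()->SendMouseClickEvent(e, MBT_LEFT,
                                            false /*mouseUp*/, 1);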
And check this: http://sciter.com/sciter-and-directx/ - it is a DirectX window with an integrated HTML/CSS UI built with my Sciter Engine. At the moment I am designing the same thing, but for OpenGL.
Related
I have the following setup for the game:
launcher.exe - starts under Steam on Windows and provides some settings UI for the user.
launcher.exe then starts the actual game.exe.
The problem is that launcher.exe uses a hardware-accelerated UI - Direct2D/DirectX.
This page https://partner.steamgames.com/doc/features/overlay states:
Your game does not need to do anything special for the overlay to
work, it automatically hooks into any game launched from Steam!
But in my case that creates problems: the overlay is created on the wrong window. launcher.exe (which uses DirectX) gets the overlay, but the window created by game.exe (the real game, which uses DirectX and/or OpenGL) does not.
And the question is: how can I modify the code of my launcher.exe window to prevent the Steam overlay from being created on it "automatically"?
Update - a response from Valve's technical support:
Sorry, there's no code in place to selectively enable or disable the
overlay between launchers and games!
The only "option" is to disable DirectX drawing in the launcher.exe. In this case their injected DLL will not create that thing. But that effectively means no GPU accelerated UI drawing under the Steam... Kind of "640kb is enough for everybody" type of design.
Ideally Steam should send some custom message to the window to ask how and where the window wants that overlay to be rendered. But apparently there is no such thing, or is it?
Just for context, the launcher looks like this:
I'm developing a 3D desktop application like this, where I duplicate the desktop by creating planes in 3D space, using each window's bitmap as a texture, and then passing mouse and keyboard input to the (background) windows via the Windows API, roughly as sketched below.
This approach causes several issues. The main one is that some clicked windows spawn new popup windows, such as menus, which pop up on top of the 3D app and steal focus.
Is it possible to properly duplicate desktop behavior inside another app like this, without losing focus and while keeping the 3D app on top?
The only workaround I can think of is to run the 3D app on a secondary monitor, let the user work with the regular desktop on the primary monitor as usual, and have the 3D app just duplicate that, using Windows hooks for any 3D-app-specific input.
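For reference, the input forwarding mentioned above looks roughly like this (a minimal sketch; the target HWND and client coordinates come from the 3D picking code, which is assumed):

    #include <windows.h>

    // Forward a left click to a background window. PostMessage delivers
    // the message without activating the target, but any popup the target
    // creates in response (e.g. a menu) can still take focus.
    void ForwardLeftClick(HWND target, int clientX, int clientY) {
        LPARAM pos = MAKELPARAM(clientX, clientY);
        PostMessage(target, WM_LBUTTONDOWN, MK_LBUTTON, pos);
        PostMessage(target, WM_LBUTTONUP, 0, pos);
    }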
Apparently IInspectable is right: there is no reliable way to do this without losing focus.
I was looking at how, sometimes when you right-click, the menu extends outside the window.
Is this implemented with a separate window? If so, how can I get this functionality? I am trying to use GLFW, but I understand if it isn't possible.
Currently I am on Windows, but I like to keep my options open, which is why GLFW would be preferable.
I noticed that GLUT has such a feature. If you are confused about what I am looking for, take a look at that.
Thanks for any help!!
Overlapping menus (in MS Windows) have to be implemented as a new top-level window; you would have a new OpenGL rendering context and draw the menu in that space. Yes, it's a fair bit of work, all for the edge case of a menu overspilling the parent window.
However, this isn't often a problem in OpenGL programming: if you're working on a full-screen game, the menu will always be displayed within the main window, and even if it isn't a full-screen game, your users really won't notice, as games tend to use different UI concepts, like radial menus, which wouldn't overspill the parent window.
Or, if you're working on a non-game title, chances are it isn't full-screen and is going to be an OpenGL rendering area within a larger application that is rendered using a native UI toolkit (e.g. 3ds Max, AutoCAD, etc.), in which case there's no problem: just use native menus.
You can, of course, use native menus in an OpenGL application anyway, provided you do the necessary plumbing for native window messages.
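For the native-menu route on Windows, the plumbing is just the standard Win32 popup-menu API; a minimal sketch (the command ID and helper name are made up):

    #include <windows.h>
    #include <windowsx.h>

    #define IDM_EXAMPLE_ITEM 1001  // made-up command ID

    // Call from the WM_RBUTTONUP handler of the window hosting the
    // OpenGL context; the chosen item arrives later as WM_COMMAND.
    void ShowContextMenu(HWND hWnd, LPARAM lParam) {
        POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
        ClientToScreen(hWnd, &pt);  // TrackPopupMenu wants screen coords

        HMENU menu = CreatePopupMenu();
        AppendMenuW(menu, MF_STRING, IDM_EXAMPLE_ITEM, L"Example item");
        TrackPopupMenu(menu, TPM_RIGHTBUTTON, pt.x, pt.y, 0, hWnd, NULL);
        DestroyMenu(menu);
    }

The menu Windows draws here is its own top-level window, so it can extend past your window's edge for free.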
I've been looking around for a while for how to produce buttons using Direct2D and DirectWrite, with no luck. I'm comfortable with shapes, text, and all that jazz. However, it suddenly occurred to me that I might be looking at it in the wrong way.
Take the sentence:
you draw your controls and content for your app using the Direct2D and
DirectWrite APIs, handling all the input events directly.
I'm now thinking this means that, instead of being able to quickly produce a fully functional button as I would using XAML, I would draw the button, manually check the location of the mouse on click, check whether it's within the button's boundaries, and then handle the event? And a similar method for hovering, without the click.
Is this the kind of method required when using Direct2D and DirectWrite?
I don't have any experience with DirectX, but in OpenGL I build my buttons from scratch. Assuming you have animated sprites implemented, your buttons are essentially sprites that play certain animations in response to being clicked, hovered over, etc., and which you can register callbacks with. In my 2D engine, I have a class called UiButton, which inherits Sprite and listens for various UI events. It gets more complicated when you want to handle keyboard navigation (arrow keys + Enter to select), as you have to think about how the buttons are connected and which of them has focus at any given moment.
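In outline, the idea is something like this (a hedged sketch, not the actual Dodge code; all names are made up):

    #include <functional>

    // A button is a sprite-like rectangle that tracks hover state and
    // fires a registered callback when clicked.
    struct UiButton {
        float x, y, w, h;               // screen-space rectangle
        bool hovered = false;
        std::function<void()> onClick;  // registered callback

        bool contains(float px, float py) const {
            return px >= x && px <= x + w && py >= y && py <= y + h;
        }

        // Feed these from your windowing layer's mouse events.
        void onMouseMove(float px, float py) { hovered = contains(px, py); }
        void onMouseDown(float px, float py) {
            if (contains(px, py) && onClick) onClick();
        }

        void draw() const {
            // Render the sprite frame for the current state
            // (normal / hovered / pressed) here.
        }
    };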
Here is my implementation for reference:
Headers: https://github.com/RobJinman/dodge/tree/master/Dodge/include/dodge/ui
Source: https://github.com/RobJinman/dodge/tree/master/Dodge/src/ui
If you're not prepared to roll your own, Googling "direct2d gui framework" seems to bring up some promising results.
Sorry I can't be of more help.
Yes, to draw a UI button with Direct2D you need to handle everything yourself. Why? Direct2D is a 2D graphics API, not a controls library. You need to draw the layout of your button and handle its messages (such as click, mouse hover...). You lose a lot of convenience and it's time-consuming, but the most important thing is: you control it yourself!
Direct2D is a graphics library. UI controls like text selection, text boxes, and buttons are not part of it. However, the benefit of using Direct2D and DirectWrite is that we can implement our own UI controls and have full control over them.
Please also see ID2D1Geometry::FillContainsPoint() for the hit-testing task.
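A minimal sketch of that hit test (assuming buttonGeometry is an ID2D1Geometry you created for the button's outline; mouse coordinates may need converting to DIPs first):

    #include <d2d1.h>

    // Returns true if the mouse point lies inside the button's geometry.
    bool IsButtonHit(ID2D1Geometry* buttonGeometry, float mouseX, float mouseY) {
        BOOL contains = FALSE;
        buttonGeometry->FillContainsPoint(
            D2D1::Point2F(mouseX, mouseY),
            NULL,                                 // no world transform
            D2D1_DEFAULT_FLATTENING_TOLERANCE,
            &contains);
        return contains == TRUE;
    }

    // Typical use: call from the WM_LBUTTONDOWN handler, then redraw the
    // button in its "pressed" state and invoke its action.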
I have a general question on how to develop an image viewer plugin with FireBreath. For that, I would like to incorporate a GUI framework, like wxWidgets or Qt. The GUI would be used to fire up some dialogs, add a toolbar on top, or open context menus by right-clicking an image.
As far as I understand, I have an HWND handle, so I can draw onto a window. I also understand that there are various events I can react to, like mouse button clicks or keystrokes. But it escapes me how I would add graphical menus, buttons, etc. I know I could use HTML around the window, but that's not the route I'd like to take.
For instance, does it make sense to render a user interface offline (in memory) onto an image and then somehow keep track of the state internally?
Has anyone done such a thing? Or can anyone give me some insight into how to go about adding a user interface?
Assuming you only care about Windows, and assuming you don't mind using a windowed plugin, which is the easiest (but no HTML elements can float over the plugin), it should be no different from creating a GUI in any other Windows application.
You are given a window that shows up with the AttachedEvent; when the DetachedEvent is fired, you need to stop using the window. Many people create a child window inside that parent window and use it for all their actual code, which makes it a little easier to use one of those other abstractions, but that's basically all there is to it. I don't know specifically how you'd do it with Qt or wxWidgets, but you'd create a child window of the HWND you are given and have the abstraction do its thing for you.
As to whether or not it would be rendering things off-screen, etc., I have no idea; that would totally depend on the windowing system. There is no reason I know of that you would need to do that, and most things just draw directly to the HWND, but there are a zillion different ways you could do it. It looks to me like what you really need is to understand how drawing in Windows actually works.
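For the Windows case, the skeleton looks roughly like this (a hedged sketch based on FireBreath's event-map macros; the helper names are hypothetical and details may differ between FireBreath versions):

    // In the plugin class declaration (FireBreath's generated template
    // already contains an event map like this):
    BEGIN_PLUGIN_EVENT_MAP()
        EVENTTYPE_CASE(FB::AttachedEvent, onWindowAttached, FB::PluginWindow)
        EVENTTYPE_CASE(FB::DetachedEvent, onWindowDetached, FB::PluginWindow)
    END_PLUGIN_EVENT_MAP()

    bool MyPlugin::onWindowAttached(FB::AttachedEvent* evt, FB::PluginWindow* win) {
        // On Windows the window is a PluginWindowWin; its HWND is the
        // parent you hand to Qt/wxWidgets (e.g. by creating a child
        // window from the native handle).
        if (FB::PluginWindowWin* w = dynamic_cast<FB::PluginWindowWin*>(win)) {
            HWND parent = w->getHWND();
            // createToolkitUi(parent);  // hypothetical: build your GUI here
        }
        return false;
    }

    bool MyPlugin::onWindowDetached(FB::DetachedEvent* evt, FB::PluginWindow* win) {
        // Tear down anything built on the HWND; it must not be used
        // after this point.
        // destroyToolkitUi();           // hypothetical
        return false;
    }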
I hope that helps