My application uses gtkmm and gtkglextmm. It loads pictures from the HDD and shows them using OpenGL. However, when I (for example) resize the main window, some parts of the GUI go black, and I don't know why. On Ubuntu this problem doesn't exist.
Here is a video illustrating what I am talking about: http://youtu.be/XGNJmddh_m4
Without seeing your code, and assuming it does nothing arcane, I'd attribute this to a bug in the GTK+ port for Windows itself. I suspect the double buffering built into GTK+ is getting tangled up with the inherent double buffering of the composition process (Aero), together with a background erasure brush being set in the WNDCLASSEX of the windows GTK+ creates.
I'd file this as a bug with GTK+.
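In the meantime, a hedged workaround sketch, not a confirmed fix: if GTK+'s own double buffering really is fighting the Aero composition as described above, one thing worth trying is disabling it on the GL-capable drawing area so only the OpenGL swap remains. configure_gl_area() is a hypothetical helper; set_double_buffered() and set_app_paintable() are real gtkmm calls.

```cpp
// Hedged workaround sketch: stop GTK+ from back-buffering the OpenGL widget.
#include <gtkmm.h>

void configure_gl_area(Gtk::Widget& gl_area)
{
    gl_area.set_double_buffered(false); // keep GTK+'s buffer out of the GL widget's way
    gl_area.set_app_paintable(true);    // we repaint the whole area ourselves each frame
}
```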
I have a desktop application where all windows (HWNDs) render themselves with Direct2D 1.1. My question is: how do I do this more correctly?
Should each window have its own Direct2D device context derived from one Direct2D device? In that case, I cannot render transparent content on a child window without additional tricks (I have to change the target on the parent window's context, render the parent window to a Direct2D bitmap, and then draw this bitmap on the child's target).
Or maybe it is better to have one Direct2D device context into which all windows render themselves? I believe DirectComposition works in a similar way. Unfortunately, I cannot use it because I target Windows 7.
You're asking a question whose answer will be very application specific. I recommend avoiding the whole problem of trying to get HWNDs to render with transparency amongst each other, especially if you're throwing Direct2D into the mix. There is just too much pain in that direction. Every version of Windows that you support will have different bugs that you'll be constantly bumping into and grasping at workarounds for.
Case in point: For the v4.0 release of Paint.NET, I converted all text rendering to DirectWrite, and almost all UI controls to use Direct2D. The image thumbnail control at the top of the window (the MDI selector) is using Direct2D for rendering but it also has to compose on top of what's behind it. And it has to play nice with glass on Win7 (it looks great though!). The code for this is awful, tricky, barely maintainable, and it seems to bump into a different rendering bug on every release of Windows: 7, 7 SP1, 8, 8.1, and 10 all behave slightly differently! It's really annoying to test, too; it's the only reason I have to set up and maintain VMs for every version of Windows that I support (other than the installer and updater). Windows 7 worked fine, then 7 SP1 added a bug which required some tuning to how I filled the alpha channel. Windows 8 has flickering when you resize the window unless I do a certain hack, but 8.1 works fine. 10 then has its own flickering bug if software rendering is used. Remote Desktop breaks things in its own way. Then you also have to worry about High Contrast, and whether DWM is enabled/disabled if you're supporting Windows 7. They all behave differently and it's really really painful.
Anyway. What you seem to really need is a UI system like WPF or XAML which doesn't use anything other than a top-level HWND container. At that point you're custom rendering everything and doing your own hit-testing and input routing (and accessibility and all sorts of other things), so it's not a small task.
Regarding the "how to do it more correctly" question and cardinality of device and device context: Have you thought about just using ID2D1Factory::CreateHWNDRenderTarget or ID2D1Factory::CreateDCRenderTarget ? They return ID2D1RenderTarget but you can call QueryInterface to cast them to ID2D1DeviceContext (this fact is missing from the docs but is also clearly intentional). This should simplify working with Direct2D and HWNDs quite a bit. This is what I do in Paint.NET: I still use an HWND for each control, but each control is using its own HWND or DC render target. If you're willing to poke around with Reflector or ILSpy, checkout Direct2DControl and Direct2DControlHandler in the Paint.NET DLLs.
Also, be careful about using more than 1 hardware accelerated HWND render target. You don't want to get into a weird area where every Direct2D-based UI control is waiting on VSync. Using D2D1_PRESENT_OPTIONS_IMMEDIATELY when creating the HWND render target should help. DWM already handles VSync, so you should be fine to tell Direct2D to ignore it unless you're doing some rather specific stuff with animations and timers.
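For what it's worth, here's a minimal sketch of that approach: an HWND render target created with D2D1_PRESENT_OPTIONS_IMMEDIATELY and then queried for ID2D1DeviceContext. This is illustrative rather than Paint.NET's actual code, and CreateTargetForWindow is just a name made up for the example.

```cpp
// Illustrative sketch: one HWND render target per control, queried for the
// Direct2D 1.1 device context interface.
#include <windows.h>
#include <d2d1_1.h>
#include <d2d1helper.h>
#include <wrl/client.h>
#pragma comment(lib, "d2d1")

using Microsoft::WRL::ComPtr;

HRESULT CreateTargetForWindow(HWND hwnd,
                              ComPtr<ID2D1HwndRenderTarget>& target,
                              ComPtr<ID2D1DeviceContext>& dc)
{
    ComPtr<ID2D1Factory> factory;
    HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED,
                                   factory.GetAddressOf());
    if (FAILED(hr)) return hr;

    RECT rc;
    GetClientRect(hwnd, &rc);

    hr = factory->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        D2D1::HwndRenderTargetProperties(
            hwnd,
            D2D1::SizeU(rc.right - rc.left, rc.bottom - rc.top),
            D2D1_PRESENT_OPTIONS_IMMEDIATELY),   // don't block on VSync; DWM already paces presentation
        target.GetAddressOf());
    if (FAILED(hr)) return hr;

    // Where Direct2D 1.1 is present (Windows 7 + Platform Update and later),
    // the legacy render target can be queried for ID2D1DeviceContext.
    return target.As(&dc);
}
```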
For a while I've been using SDL to write my 3D engine, and I have recently been implementing an editor that can export an optimized format for the type of engine I'm building. Right now the editor is fairly simple: objects can just be moved around, and their textures and models can be changed. As of right now, I'm using SDL with OpenGL to render everything, but I want to use Qt for the GUI part of the editor so that it looks native on every platform. I've got it working great so far: I'm running a QApplication inside of the SDL application, so it basically just opens two windows, one that uses SDL and OpenGL, and the other using Qt. Doing a bit of research, I've found that you can manually update a QApplication, which totally removes any threading problems, and everything works. Just in case you're having a hard time visualizing this, here's a picture:
My goal is to merge these windows into one, because on smaller screens (like my laptop's) it makes it really hard to keep track of all the different windows I would eventually have. I know there's a way to render to Qt with OpenGL, but can this be integrated with SDL? Am I going to need to move away from using an SDL window and use a Qt one when the editor is enabled? Just to clarify, when the engine isn't in editor mode, it won't use any Qt, just SDL, so optimally I wouldn't need to do this.
Drop the SDL part. You have Qt, which does everything SDL does as well. Just subclass a QGLWidget and use that.
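For illustration, a minimal sketch of that, assuming the QtOpenGL module's QGLWidget (Qt 4 / early Qt 5); the GLWidget name and the engine hooks are placeholders.

```cpp
// Minimal QGLWidget subclass: Qt owns the window, OpenGL draws into it.
#include <QApplication>
#include <QGLWidget>

class GLWidget : public QGLWidget
{
public:
    explicit GLWidget(QWidget *parent = nullptr) : QGLWidget(parent) {}

protected:
    void initializeGL() override
    {
        // One-time GL setup: clear color, loading engine resources, etc.
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    }

    void resizeGL(int w, int h) override
    {
        glViewport(0, 0, w, h);
    }

    void paintGL() override
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... call into the engine's render function here ...
    }
};

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    GLWidget viewport;          // embed this in the editor's layout instead of showing it alone
    viewport.show();
    return app.exec();
}
```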
You can keep your game and editor separate processes and still make them part of the same app.
Just spawn the window where you want the game to run as part of Qt. At least on Windows, you can then pass that window handle to your game; when the game is setting up, instead of creating the window yourself in SDL and binding the OpenGL context to it, you can bind to the existing handle (a rough sketch of this is shown below).
There are some gotchas with this technique to watch out for, such as input focus, I believe (I tested with DirectX, but it might be similar with SDL). The problem is that foreground mode does some dumb checks on the "root" window, which for me was not the window that owned the OpenGL context, so it failed to initialize. However, background mode worked. I think that was for a joystick, now that I think about it, but anyway, that's how you can merge everything together.
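A hedged sketch of the handle-passing idea, kept in a single process for brevity (the answer suggests separate processes; you would then send the handle over IPC instead). SDL_CreateWindowFrom() is a real SDL2 call, but binding an OpenGL context to a foreign window has platform-specific caveats, so treat this as a starting point rather than a drop-in solution.

```cpp
// Sketch: Qt creates and owns the native window, SDL wraps the existing handle.
#define SDL_MAIN_HANDLED      // keep Qt's main(), don't let SDL redefine it
#include <SDL.h>
#include <QApplication>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget host;                        // the Qt widget the game should render into
    host.resize(800, 600);
    host.show();

    SDL_SetMainReady();
    SDL_Init(SDL_INIT_VIDEO);

    // Wrap the existing native handle instead of letting SDL create its own window.
    SDL_Window *gameWindow =
        SDL_CreateWindowFrom(reinterpret_cast<void *>(host.winId()));

    // In practice you would run the engine's own loop here, creating the GL
    // context against gameWindow and calling QApplication::processEvents()
    // each frame (as described in the question) instead of blocking in exec().
    int rc = app.exec();

    SDL_DestroyWindow(gameWindow);
    SDL_Quit();
    return rc;
}
```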
When I browsed the source code of Qt, I didn't find how it actually draws a GUI component, but I know it uses OpenGL.
I want to know how a GUI library like Qt draws its GUI components (e.g. QPushButton, QWidget).
Can anyone help me with a basic idea?
On the Qt Project site:
Qt paints QtWidgets using QPainter, which (usually) uses the raster engine to draw the content. It does not use native OS calls, apart from a few exceptions (the file dialog, for example, which can be drawn either natively or using QtWidgets).
QtQuick is painted using the scene graph, i.e. OpenGL. Again, no native OS calls here.
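To make that concrete, here is a tiny sketch (a hypothetical FlatButton, not Qt source code) of what "Qt paints the widgets itself with QPainter" looks like in practice: every widget ultimately reimplements paintEvent() and draws its own pixels.

```cpp
// A custom-painted "button": QPainter draws everything, no native control is used.
#include <QApplication>
#include <QPainter>
#include <QWidget>

class FlatButton : public QWidget
{
public:
    explicit FlatButton(const QString &text, QWidget *parent = nullptr)
        : QWidget(parent), m_text(text) {}

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter p(this);                      // raster engine by default
        p.setRenderHint(QPainter::Antialiasing);
        p.setPen(Qt::NoPen);
        p.setBrush(QColor(70, 130, 180));      // body of the "button"
        p.drawRoundedRect(rect().adjusted(1, 1, -1, -1), 6, 6);
        p.setPen(Qt::white);
        p.drawText(rect(), Qt::AlignCenter, m_text);  // label on top
    }

private:
    QString m_text;
};

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    FlatButton button(QStringLiteral("Click me"));
    button.resize(120, 40);
    button.show();
    return app.exec();
}
```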
I think you either misunderstood (there are several meanings of the word “native” in computing) the stackoverflow post, or your information source is wrong.
OK, then to be clear: by "native" I meant using native OS controls, like the wxWidgets library does: asking the OS to draw a native scroll bar, combo box, etc. Qt does not do this. It paints all the widgets itself and only tries to mimic the look of the OS it is running on.
But obviously, some kind of native OS call is happening deep inside in order to actually put pixels on the screen and open the native window container. That is usually not important at all to high-level UI developers, though.
You have a clear choice of whether a widget should be drawn by the CPU or the GPU: widgets can use different painting methods (native, raster, OpenGL; for more see here! [qt-project.org]), and the user has a choice of which one is used. Most people do not use that, though, because the default settings work well.
Thanks.
I'm trying to develop a cross-platform (or at least desktop + embedded hardware) application. I would like to use Qt Quick to create a touch-friendly GUI. I have already implemented a classical application with a QGLWidget displaying data. It is important that only a part of the window is in OpenGL. Because of this there are problems with EGLFS and LinuxFB: only X11 (or maybe Wayland) can display the application properly (the others generate a couple of errors about a missing setParent function, and the whole screen is black).

Now I'm trying to achieve the same thing in QML. I want to use this OpenGL renderer as part of my QML application with some Qt Quick widgets around it. I found a couple of people asking about the same thing, and the answer is always to subclass QDeclarativeItem and call the painter's beginNativePainting() (others say to export it through QDeclarativeItem, but I cannot figure out how to do this). The problem is that on desktop with Qt 5.11 the native painter is not OpenGL, and in Qt 5 there is no way to force the OpenGL graphics system. So when I try to get the OpenGL context (QGLContext::currentContext()) I always get NULL.

Another problem: if I export my widget with qmlRegisterType("Test", 1, 0, "Test"); it only becomes visible when I use QDeclarativeView, but then it doesn't see Qt Quick. If I use QQuickView it says module "Test" is not installed. How can I implement this properly?
QDeclarativeItem is from Qt Quick 1 and Qt 4. With Qt 5 and Qt Quick 2 you should use QQuickItem.
There is at least one example of this provided with the Qt docs, which you can find in Qt Creator on the Welcome tab, in the Examples section.
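A hedged sketch of one common Qt 5 route: QQuickFramebufferObject (itself a QQuickItem) lets you issue raw OpenGL into an FBO that the scene graph then composes with the rest of the QML scene. The "Test" name mirrors the registration from the question; main.qml and the engine hooks are assumptions for the example.

```cpp
// QQuickFramebufferObject sketch: custom OpenGL rendered inside a QML scene.
#include <QGuiApplication>
#include <QQuickView>
#include <QQuickFramebufferObject>
#include <QOpenGLFramebufferObject>
#include <QOpenGLFunctions>
#include <QtQml>

class TestRenderer : public QQuickFramebufferObject::Renderer,
                     protected QOpenGLFunctions
{
public:
    QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) override
    {
        QOpenGLFramebufferObjectFormat fmt;
        fmt.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
        return new QOpenGLFramebufferObject(size, fmt);
    }

    void render() override
    {
        if (!m_inited) { initializeOpenGLFunctions(); m_inited = true; }
        // Plain OpenGL goes here; the context is current and the FBO is bound.
        glClearColor(0.0f, 0.3f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... call into the existing engine's draw code ...
    }

private:
    bool m_inited = false;
};

class Test : public QQuickFramebufferObject
{
    Q_OBJECT
public:
    Renderer *createRenderer() const override { return new TestRenderer; }
};

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    qmlRegisterType<Test>("Test", 1, 0, "Test");   // mirrors the question's registration

    QQuickView view;                               // main.qml does `import Test 1.0`
    view.setSource(QUrl(QStringLiteral("qrc:/main.qml")));  // and `Test { anchors.fill: parent }`
    view.show();
    return app.exec();
}

#include "main.moc"   // Q_OBJECT class lives in this .cpp; build with qmake/AUTOMOC
```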
I guess this question has been asked before, but I have not found a sufficient answer to even start poking around. Most answers refer to catching the WM_PAINT message directly and doing custom rendering, or to using an owner-drawn control. However, I did not see a centralized place that has the information needed to start researching. Hence, the question.
My goal is to create a very simple GUI program with a custom look to it. I like the way Winamp does its custom look, which is customizable through "skins". However, I am not interested in using a cross-platform library like GTK+, Qt, or wxWidgets.
I have some experience in systems programming, but not much with GUIs. I have spent most of my time developing console applications, and I have just started doing some Qt development. If you can point me in the right direction, I'd really appreciate it.
PS: I am interested in both Windows and Linux environments.
Everybody,
Sorry for the late reply. I had a chance to have a quick talk with the original developer of Winamp, and here are the quick answers I have:
Using skins: artists create the skins, and the developer renders them.
To the OS (Windows), Winamp is just one pretty box, nothing else. There is a container window, and that's about it.
All controls (button, label, list, etc.) are implemented by the Winamp team themselves. All messages and events are passed as positions relative to the container window. Winamp and its GUI engine have to decide whether a button was clicked or whether the label next to it was the target, etc.
The artists' skins are created in XML and rendered by the engine.
I do not have details on whether they use any libraries for all that, but I suspect they hook the window procedure directly and do the custom rendering themselves.
The GUI skin is usually loaded through a plug-in mechanism.
I guess this is exactly what you are looking for:
https://www.linux.com/learn/tutorials/428800:weekend-project-creating-qt-interfaces-with-gimp?utm_medium=twitter&utm_source=twitterfeed
I am also interested in creating a custom look for windows and widgets.
Speaking about widgets, it's not hard: you just need to subclass some widget (if you are using C++) and implement methods like draw, handle, etc. But this solution is only good if you use a high-level library like GTK, Qt, etc. If you want to implement all controls on your own, you can take any graphics library that can create a window and draw arbitrary graphics inside it. For example, SDL2 + Cairo: SDL2 for creating the window, Cairo for vector rendering of the controls/widgets. Both of these libraries work on Windows and Linux. Another option is OpenGL/Vulkan plus some library for creating the window; that could be SDL2, SFML, or GLFW.
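A small sketch of the SDL2 + Cairo combination mentioned above: SDL2 creates the window and hands us its pixel buffer, and Cairo draws a vector "widget" into that buffer. It assumes the window surface is a 32-bit XRGB format, which is the common desktop case but not guaranteed everywhere.

```cpp
// SDL2 creates the window; Cairo rasterizes a custom "button" into its surface.
#define SDL_MAIN_HANDLED
#include <SDL.h>
#include <cairo.h>

int main(int, char **)
{
    SDL_SetMainReady();
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("Custom widget", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 400, 300, 0);
    SDL_Surface *surf = SDL_GetWindowSurface(win);

    // Wrap the SDL pixel buffer in a Cairo image surface (no copy is made).
    cairo_surface_t *cs = cairo_image_surface_create_for_data(
        static_cast<unsigned char *>(surf->pixels), CAIRO_FORMAT_RGB24,
        surf->w, surf->h, surf->pitch);
    cairo_t *cr = cairo_create(cs);

    cairo_set_source_rgb(cr, 0.15, 0.15, 0.18);   // window background
    cairo_paint(cr);
    cairo_set_source_rgb(cr, 0.30, 0.55, 0.85);   // a plain "button" rectangle
    cairo_rectangle(cr, 120, 120, 160, 60);
    cairo_fill(cr);

    SDL_UpdateWindowSurface(win);                  // present the drawn pixels

    // Minimal event loop: keep the window up until it is closed.
    for (SDL_Event e; SDL_WaitEvent(&e) && e.type != SDL_QUIT; ) {}

    cairo_destroy(cr);
    cairo_surface_destroy(cs);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```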
If you are really interested in how it works at a low level, then look into the Windows API on Windows and Xlib or XCB on Linux/X.Org.
Speaking about the window itself, I am still investigating that. However, I have one thought: you could create an empty window and then draw whatever you want on it. Then you need to add handlers for resizing the window at its borders. But I am not sure whether that is a good solution, or whether it might freeze.