For an accessibility desktop application I need to overlay the desktop screen with numbers, text, and a grid of rectangles (stroked with, e.g., a red brush).
Ideally this should work on any window manager/system (Windows, Linux KDE/GNOME, possibly even macOS).
What is the standard approach to something like this? I was thinking of taking a screenshot of the screen and then drawing on top of it, but I'm unsure what to draw with.
There is a library that could help you make cross-platform applications: GLFW. It can create an application window on Windows, macOS, Linux, and more.
For the graphics you could use the OpenGL or Vulkan graphics APIs, which are cross-platform (Vulkan is personally not advised for new users). As for taking a screenshot of the screen and then drawing on top of it: you could render the captured image into a texture/framebuffer and draw over it; the framebuffers chapter of LearnOpenGL covers this.
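As a rough sketch of that idea with GLFW (this assumes GLFW 3.3+ and a compositor/driver that supports transparent framebuffers; the window placement and the legacy-OpenGL rectangle are just placeholders for your grid, numbers, and text):

```cpp
// Minimal overlay sketch: borderless, always-on-top GLFW window with a transparent framebuffer.
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;

    glfwWindowHint(GLFW_DECORATED, GLFW_FALSE);               // no border / title bar
    glfwWindowHint(GLFW_FLOATING, GLFW_TRUE);                  // always on top
    glfwWindowHint(GLFW_TRANSPARENT_FRAMEBUFFER, GLFW_TRUE);   // see-through background

    const GLFWvidmode* mode = glfwGetVideoMode(glfwGetPrimaryMonitor());
    GLFWwindow* win = glfwCreateWindow(mode->width, mode->height, "overlay", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwSetWindowPos(win, 0, 0);            // cover the primary monitor from its origin
    glfwMakeContextCurrent(win);

    while (!glfwWindowShouldClose(win)) {
        glClearColor(0.f, 0.f, 0.f, 0.f);   // fully transparent clear
        glClear(GL_COLOR_BUFFER_BIT);

        // Stroke one red rectangle in normalized device coordinates;
        // your grid/number/text rendering would go here instead.
        glColor3f(1.f, 0.f, 0.f);
        glBegin(GL_LINE_LOOP);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.5f,  0.5f);
        glVertex2f(-0.5f,  0.5f);
        glEnd();

        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```

For a real accessibility overlay you would also want the window to ignore mouse input (GLFW 3.4 adds a GLFW_MOUSE_PASSTHROUGH hint; older versions need a platform-specific workaround), and a text-rendering path such as FreeType, since raw OpenGL has no text API.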
Related
I am looking for some information about rendering child windows, specifically about how OpenGL interoperates with GDI. The problem is basically that I have two windows: the main window is created in Qt, and inside of it a child window is hosted that uses an OpenGL renderer.
Now what I want to do is host an overlay on top of my OpenGL window. The problem I am having is that when I render with OpenGL, the OpenGL-generated graphics seem to obscure that area and effectively undo the graphics composited by Qt.
In the image below the blue area is the Qt overlay; in that picture I'm using GDI (BeginPaint/EndPaint) and the windows seem to interact fine. That is, the window order seems correct and the client region is correct. The moment I start to render with OpenGL, the blue area gets replaced with whatever OpenGL renders.
What I did: to create the overlay I created a second frameless, topmost QMainWindow, and once the platform HWND was initialized I reparented it, i.e. I changed the new window's parent to be the same parent as my OpenGL window.
What I believed this would do is that every window gets drawn separately and the desktop composition manager makes the final composition, thereby avoiding the infamous airspace problem documented by Microsoft for their WPF framework.
What I would like to know is what could cause these issues. At this point I don't understand why, once I render with OpenGL, the pixels drawn by the Qt overlay are obscured, even though the window hierarchy should make them composited. What could I do to accomplish what I want?
Mixing OpenGL and GDI drawing on a shared drawable (which also includes sibling/child windows without the CS_OWNDC window class style flag) was never supported. That's not a Qt issue, but simply how OpenGL and GDI interact.
But the more important issue is: why the hell aren't you using the OpenGL support built right into Qt in the first place? Ever since Qt 5, Qt uses OpenGL – if available – to draw everything (all the UI elements). Qt 5 makes it trivial to mix Qt drawing and OpenGL drawing.
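For illustration, a minimal sketch of that (a QOpenGLWidget subclass; the scene and overlay content here are placeholders) could look like:

```cpp
// Sketch: mixing raw OpenGL with a QPainter overlay inside Qt 5's QOpenGLWidget.
#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <QPainter>

class GLCanvas : public QOpenGLWidget, protected QOpenGLFunctions {
protected:
    void initializeGL() override {
        initializeOpenGLFunctions();
    }

    void paintGL() override {
        QPainter painter(this);

        // Raw OpenGL drawing goes between beginNativePainting()/endNativePainting().
        painter.beginNativePainting();
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... your scene rendering here ...
        painter.endNativePainting();

        // Qt-composited overlay, drawn on top of the GL content.
        painter.setPen(Qt::white);
        painter.drawText(20, 30, QStringLiteral("Overlay text"));
        painter.fillRect(QRect(20, 40, 120, 60), QColor(0, 0, 255, 128));
    }
};
```

QPainter and the raw GL calls target the same framebuffer here, so Qt composites the overlay after your GL drawing instead of two APIs fighting over one shared GDI/OpenGL drawable.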
I'm developing a video player in Qt/C++ using QtAV, which uses FFmpeg internally. I need to show semi-transparent overlays: both my watermark logo and subtitles. I'm writing the application for Windows. I use the OpenAL library. OpenGL and Direct2D are the choices for renderers.
If I use the OpenGL renderer, it works fine on some systems and the overlay works fine. But on other systems the whole application is just a black window; I can see nothing else.
If I use Direct2D, the overlay won't work, and the renderer is a bit slow. But it works on all systems, just without the overlays.
I have no code to show here because it's not a coding issue; even the examples in QtAV are not working. I need to find a way to show the overlays using the Direct2D renderer, OR a solid way to use OpenGL rendering on all systems without fail.
Direct2D is not well supported in QtAV, so you may need to implement your own functions to add filters to your video renderer. That includes text drawing functions, setting transparency, etc.
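If patching the renderer is too involved, one possible workaround (not QtAV-specific; the widget and names below are made up for illustration) is to stack a transparent child widget over the video widget and paint the watermark and subtitles with QPainter:

```cpp
// Sketch: a transparent child widget stacked over the video widget.
// `videoWidget` is assumed to be the QWidget your renderer draws into.
#include <QWidget>
#include <QPainter>

class OverlayWidget : public QWidget {
public:
    explicit OverlayWidget(QWidget* videoWidget) : QWidget(videoWidget) {
        setAttribute(Qt::WA_TransparentForMouseEvents);  // let clicks reach the video
        setAttribute(Qt::WA_NoSystemBackground);         // don't erase to a solid color
        resize(videoWidget->size());                     // keep in sync on resize in real code
    }

protected:
    void paintEvent(QPaintEvent*) override {
        QPainter p(this);
        p.setOpacity(0.6);                               // semi-transparent watermark
        p.setPen(Qt::white);
        p.drawText(10, 20, QStringLiteral("MyLogo"));
        p.drawText(rect(), Qt::AlignBottom | Qt::AlignHCenter, QStringLiteral("Subtitle line"));
    }
};
```

Usage would be along the lines of `auto* overlay = new OverlayWidget(videoWidget); overlay->show();`. Note this only helps while the renderer behaves like a normal Qt widget; if it paints through a native surface (as the Direct2D path may), the child widget can still be covered, which is the same airspace-style problem discussed above.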
I already have a Direct3D device at my beck and call...
I am working on a Windows 8 Modern UI application (Metro, if you will).
What's the general technique of getting text drawn to the screen?
Extra points: can I do 3D stuff with it too? This is what originally got me here: I started doing some Direct2D work and then thought, but how can I do 3D with Direct2D? Second of all, the D2D functions I found for drawing text require a window handle (HWND), and there is no such thing (or it has been abstracted away) in Windows 8 Metro apps.
Anyone got any good examples or demos I can take a look at?
You should look into DirectWrite.
Regarding your second question: you can render your text to a texture and then, when you render that texture on screen, do 3D stuff with it.
Rendering text with DirectWrite and Direct2D is relatively simple. However, if you want something higher level, you can look into the Drawing Library for Windows Store Apps, which wraps raw DirectX calls into something more GDI-like.
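As a rough sketch of the DirectWrite + Direct2D path (error handling omitted; it assumes you already have an ID2D1RenderTarget, which in a Windows Store app would typically be an ID2D1DeviceContext targeting a bitmap created from your swap chain's DXGI surface rather than an HWND render target):

```cpp
// Sketch: drawing a line of text with DirectWrite + Direct2D onto an existing render target.
#include <d2d1.h>
#include <dwrite.h>
#pragma comment(lib, "d2d1.lib")
#pragma comment(lib, "dwrite.lib")

void DrawHello(ID2D1RenderTarget* rt)
{
    IDWriteFactory* dwrite = nullptr;
    DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                        reinterpret_cast<IUnknown**>(&dwrite));

    IDWriteTextFormat* format = nullptr;
    dwrite->CreateTextFormat(L"Segoe UI", nullptr,
                             DWRITE_FONT_WEIGHT_NORMAL, DWRITE_FONT_STYLE_NORMAL,
                             DWRITE_FONT_STRETCH_NORMAL, 32.0f, L"en-us", &format);

    IDWriteTextLayout* layout = nullptr;
    dwrite->CreateTextLayout(L"Hello, DirectWrite", 18, format, 512.0f, 128.0f, &layout);

    ID2D1SolidColorBrush* brush = nullptr;
    rt->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), &brush);

    rt->BeginDraw();
    rt->DrawTextLayout(D2D1::Point2F(10.0f, 10.0f), layout, brush, D2D1_DRAW_TEXT_OPTIONS_NONE);
    rt->EndDraw();

    brush->Release();
    layout->Release();
    format->Release();
    dwrite->Release();
}
```

If that render target sits on a texture you own, you can then sample the texture in your Direct3D scene, which is the "render text to a texture, then do 3D stuff with it" approach mentioned above.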
I want to draw a small dot at the center of the screen that remains there no matter what application is running. The dot should stay even after I launch an application in full-screen mode, like a dead pixel.
I have already installed Visual C++ on my computer with Windows 7. I have some experience with C++, but I have never worked with graphics under Windows.
How can I draw a dot on a screen?
Many graphics cards have overlay features, and it is likely possible to set one up to be foremost on the screen regardless of what other applications are rendering in other layers.
But the method to do that would be specific to the video card model and driver.
Or, you can try to get your code inside the application doing the full-screen rendering, find its rendering context, and draw to it at the ideal time, which still requires a bunch of variants for all the different graphics APIs.
Here is someone who describes Steam's attempt to solve the portability issue (with a zillion implementations) and how to take advantage of that.
I would create a properly positioned 1x1-pixel (or whatever size you need) window with no borders or title bar, all client area, and paint it appropriately. It's important that the window is created with the WS_EX_TOPMOST extended style. As long as your program is running, the window will be visible, provided no other windows with that style overlap it.
I've done this as a prank. It worked really well over a full-screen OpenGL game (Quake III). I installed it on a friend's machine so that it would flash the word LOSER! in big letters in the center of the screen at random times during the game.
This worked perfectly well on an XP system. I imagine it should work on Windows 7.
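A hedged sketch of that approach in plain Win32 (class name, size, and position are illustrative; error handling omitted):

```cpp
// Sketch: a tiny borderless, always-on-top window that paints a red "dot".
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_PAINT) {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);
        HBRUSH red = CreateSolidBrush(RGB(255, 0, 0));
        FillRect(dc, &ps.rcPaint, red);   // paint the whole (tiny) client area red
        DeleteObject(red);
        EndPaint(hwnd, &ps);
        return 0;
    }
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = TEXT("DotOverlay");
    RegisterClass(&wc);

    int cx = GetSystemMetrics(SM_CXSCREEN) / 2;
    int cy = GetSystemMetrics(SM_CYSCREEN) / 2;

    // WS_EX_TOPMOST keeps it above normal windows; WS_EX_TOOLWINDOW hides it from the taskbar.
    HWND hwnd = CreateWindowEx(WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
                               TEXT("DotOverlay"), TEXT(""), WS_POPUP,
                               cx, cy, 2, 2, nullptr, nullptr, hInst, nullptr);
    ShowWindow(hwnd, SW_SHOWNOACTIVATE);

    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```

Note that exclusive full-screen applications may still cover it, so results can vary by game and OS version.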
I'm writing a cross-platform, open-source Oculus Rift desktop viewer. I decided to start with Linux because I prefer developing on it. I've already got the texture warping working, but now I need to capture the desktop into an OpenGL texture. There are other issues I'm not entirely sure how to resolve, like rendering the warped desktop to my window while capturing every window except mine. Any clue how I would go about this?
I think your best course of action would actually be to write a fully fledged compositor.
There is the GLX_texture_from_pixmap extension, which allows you to source any pixmap-compatible X11 drawable into an OpenGL texture. For a start it might be enough to simply pull the root window (pixmap) as-is into an OpenGL texture. Later you might want to use the Composite extension to redirect windows to off-screen rendering and composite them in 3D space as a stereoscopic picture for the Oculus Rift.
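A hedged sketch of the core GLX_EXT_texture_from_pixmap calls (it assumes you already have a Display*, a current GLX context, a GLXFBConfig chosen with GLX_BIND_TO_TEXTURE_RGBA_EXT set to True, and a Pixmap to source, e.g. one obtained via XCompositeNameWindowPixmap after redirecting a window):

```cpp
// Sketch: binding an X11 Pixmap into an OpenGL texture via GLX_EXT_texture_from_pixmap.
// Assumed to already exist: Display* dpy, GLXFBConfig fbconfig, Pixmap pix,
// and a GLX context that is current on this thread.
#include <GL/glx.h>
#include <GL/glxext.h>

GLuint bind_pixmap_to_texture(Display* dpy, GLXFBConfig fbconfig, Pixmap pix,
                              GLXPixmap* out_glxpix)
{
    const int attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };
    GLXPixmap glxpix = glXCreatePixmap(dpy, fbconfig, pix, attribs);

    // The bind/release entry points come from the extension, so resolve them at runtime.
    PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT =
        (PFNGLXBINDTEXIMAGEEXTPROC)glXGetProcAddress((const GLubyte*)"glXBindTexImageEXT");

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // The pixmap's contents now back this texture.
    glXBindTexImageEXT(dpy, glxpix, GLX_FRONT_LEFT_EXT, NULL);

    *out_glxpix = glxpix;   // caller releases with glXReleaseTexImageEXT + glXDestroyPixmap
    return tex;
}
```

The texture then tracks the pixmap's contents; per the extension spec you should release and re-bind (glXReleaseTexImageEXT / glXBindTexImageEXT) around updates rather than copying pixels through the CPU.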