Hide GLUT window - opengl

Is it possible to hide an OpenGL window while the rendering keeps running? I use glutHideWindow, which means the display function is never triggered.
If that is not possible, is it possible for the program to change the focus of the current window? I want to run an OpenGL program, but I don't need its window. In fact, I want to use the framebuffer that OpenGL updates each frame in another program, but it's annoying to keep toggling between the two programs. (They both have windows.)

Is it possible to hide an OpenGL window while the rendering keeps running?
Yes and No to both parts of the question.
If you hide a window, all the pixels of the window's viewport will fail the pixel ownership test when rendering. So you can't use a hidden window as a drawable for OpenGL to operate on.
What you need is an off-screen drawable to draw to.
The modern variant is Framebuffer Objects (FBOs), which you can create on a regular OpenGL context; that context may even belong to a hidden window. FBOs take drawable attachments (renderbuffers, textures) and allow OpenGL to draw to these instead of to the window.
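For illustration, a minimal FBO setup might look like this (the size is a placeholder and error handling is elided; assumes a context supporting GL 3.0 or ARB_framebuffer_object is current):

    GLuint fbo = 0, colorTex = 0, depthRb = 0;
    const int W = 800, H = 600;   // placeholder off-screen size

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, W, H);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        ; // handle the error
    // all draw calls now land in colorTex instead of the window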
An older method is PBuffers, which are also widely supported but not as easy to use as FBOs.
Note that if you want to perform off-screen rendering on Linux/X11, the X server must be active, i.e. it must own the VT, so that the GPU actually processes the commands. You can't just start an X server "in the background" while another X server owns the display device.

After creating the window, you can use glutHideWindow() to go off-screen. Then you still render as normal and use glReadPixels to read the framebuffer back so you can use it later.
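A rough sketch of that approach, driving rendering from the idle callback since the display callback may stop firing for a hidden window (as noted above, whether a hidden window's pixels survive the ownership test is driver-dependent, so rendering into an FBO before the readback is the safer route):

    #include <GL/glut.h>
    #include <vector>

    static std::vector<unsigned char> pixels(800 * 600 * 4);

    static void frame()
    {
        // ... render the scene as normal here ...
        glReadPixels(0, 0, 800, 600, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        // hand `pixels` to the other program (shared memory, pipe, ...)
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
        glutInitWindowSize(800, 600);
        glutCreateWindow("offscreen");
        glutDisplayFunc(frame);   // GLUT requires a display callback
        glutIdleFunc(frame);      // but the idle callback does the real work
        glutHideWindow();
        glutMainLoop();
    }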

Related

ways for Direct2D and Direct3D Interoperability

I want to make a Direct2D GUI that will run in a DLL and render with the Direct3D of the application I inject it into.
I know that I can simply use ID2D1Factory::CreateDxgiSurfaceRenderTarget to make a DXGI surface and use it as a D2D render target, but this requires the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag on the Direct3D device.
The problem is that the application creates its device without enabling this flag and, for this reason, ID2D1Factory::CreateDxgiSurfaceRenderTarget fails.
I am trying to find another way to draw on the application window (externally or inside the window's render target) that also works when that window is in full screen.
I tried these alternatives so far:
Create a D2D render target with ID2D1Factory::CreateDCRenderTarget. This worked, but the part I rendered was blinking/flashing (showing and hiding very fast in a loop). I also tried calling ID2D1DCRenderTarget::BindDC before ID2D1RenderTarget::BeginDraw; it blinked a bit less, but I still had the same issue.
Create a new window that is always on top of every other window and render there with D2D, but if the application goes into full screen, this window does not show on screen.
Create a second D3D device with the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag enabled and share an ID3D11Texture2D resource between the window's device and my own, but I wasn't able to make it work... There are not a lot of examples of how to do it. The idea was to create a second device, draw with D2D on that device, and then sync the two D3D devices – I followed this example (with Direct3D 11).
Create a D2D device and share the D2D device's data with the D3D device, but when I call ID2D1Factory1::CreateDevice to create the device, it fails because the D3D device was created without the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag. I started with this example.
I've heard of hardware overlays, but they work only on some graphics cards and I think I would run into problems there: https://learn.microsoft.com/el-gr/windows/win32/medfound/hardware-overlay-support.
I am currently at a dead end and don't know what to do. Does anyone have an idea that may help me?
Maybe there is a way to draw on the screen that works even when a window is in full screen?
Option #3 is the correct one. Here are a few tips.
Don’t use keyed mutexes. Don’t use NT handles. The only flag you need is D3D11_RESOURCE_MISC_SHARED.
To properly synchronize access to the shared texture across devices, use queries; specifically, you need a query of type D3D11_QUERY_EVENT. The workflow should look like the following (a combined code sketch follows the steps).
Create a shared texture on one device and open it on the other; it doesn't matter where it's created and where it's imported. Don't forget the D3D11_BIND_RENDER_TARGET flag. Also create a query.
Create a D2D render target over the shared texture with CreateDxgiSurfaceRenderTarget, then render your overlay into the shared texture with D2D and/or DirectWrite.
On the immediate D3D device context of the BGRA-flagged device you use for D2D rendering, call ID3D11DeviceContext::End once, passing the query. Then wait until ID3D11DeviceContext::GetData returns S_OK. If you care about power/thermals, use Sleep(1); if you prioritize latency, busy-wait with _mm_pause() instructions.
Once ID3D11DeviceContext::GetData has returned S_OK for that query, the GPU has finished rendering your 2D scene, and you can use the texture on the other device to compose it into the 3D scene.
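Pulling those steps together, a sketch might look like this (deviceA/contextA are the BGRA-enabled device and its immediate context used for D2D, deviceB is the application's device; all names are placeholders and error checking is omitted):

    #include <windows.h>
    #include <d3d11.h>

    void SetupSharedTexture(ID3D11Device* deviceA, ID3D11Device* deviceB,
                            ID3D11Texture2D** texA, ID3D11Texture2D** texB,
                            ID3D11Query** query)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = 256; desc.Height = 256;            // placeholder overlay size
        desc.MipLevels = 1; desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;       // what D2D expects
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
        desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;    // legacy handle, no keyed mutex
        deviceA->CreateTexture2D(&desc, nullptr, texA);

        // Export the handle from device A, import it on device B.
        IDXGIResource* res = nullptr;
        (*texA)->QueryInterface(__uuidof(IDXGIResource), (void**)&res);
        HANDLE shared = nullptr;
        res->GetSharedHandle(&shared);
        res->Release();
        deviceB->OpenSharedResource(shared, __uuidof(ID3D11Texture2D), (void**)texB);

        // Event query used to learn when device A finished rendering the overlay.
        D3D11_QUERY_DESC qd = { D3D11_QUERY_EVENT, 0 };
        deviceA->CreateQuery(&qd, query);
    }

    // After rendering the D2D overlay into texA on device A:
    void WaitForOverlay(ID3D11DeviceContext* contextA, ID3D11Query* query)
    {
        contextA->End(query);
        while (contextA->GetData(query, nullptr, 0, 0) != S_OK)
            Sleep(1);   // or busy-wait with _mm_pause() for lower latency
        // the imported texture is now safe to use on device B
    }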
The way to compose your 2D content into the render target depends on how you want to draw it.
If that’s a small opaque quad, you can probably CopySubresourceRegion into the render target texture.
Or, if your 2D content has a transparent background, you need vertex and pixel shaders to render a quad (4 vertices) textured with your shared texture. BTW, you don't necessarily need a vertex/index buffer for that; there's a well-known trick to do without one (sketched below). Don't forget the blend state (you probably want alpha blending), the depth/stencil state (you probably want to disable the depth test when rendering that quad), and the D3D11_BIND_SHADER_RESOURCE flag for the shared texture.
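The buffer-less trick is usually done by deriving positions from SV_VertexID in the vertex shader. A sketch of the idea (overlaySRV, alphaBlendState and noDepthState are placeholder objects you would create up front; contextB is the app device's context):

    // Vertex shader source (HLSL): expands SV_VertexID 0..3 into a quad,
    // so no vertex/index buffer or input layout is needed.
    static const char* kQuadVS = R"(
    struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };
    VSOut main(uint id : SV_VertexID)
    {
        VSOut o;
        o.uv  = float2(id & 1, id >> 1);   // (0,0) (1,0) (0,1) (1,1)
        o.pos = float4(o.uv * float2(2, -2) + float2(-1, 1), 0, 1);
        return o;
    }
    )";

    // Draw call: no input layout, no buffers, 4 vertices as a triangle strip.
    contextB->IASetInputLayout(nullptr);
    contextB->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    contextB->PSSetShaderResources(0, 1, &overlaySRV);   // SRV of the shared texture
    contextB->OMSetBlendState(alphaBlendState, nullptr, 0xFFFFFFFF);
    contextB->OMSetDepthStencilState(noDepthState, 0);
    contextB->Draw(4, 0);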
P.S. There's another way. Make sure your code runs in that process before the process creates its Direct3D device. Then use something like minhook to intercept the call to D3D11.dll::D3D11CreateDeviceAndSwapChain; in the intercepted function, set the BGRA bit you need, then call the original function. This is slightly less reliable because there are multiple ways to create a D3D device, but it's easier to implement, will work faster, and uses less memory.
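A minimal sketch of that interception with MinHook, assuming the application goes through D3D11CreateDeviceAndSwapChain (hooking D3D11CreateDevice the same way would cover the other common path):

    #include <d3d11.h>
    #include <MinHook.h>

    typedef HRESULT (WINAPI *PFN_CreateDev)(IDXGIAdapter*, D3D_DRIVER_TYPE, HMODULE, UINT,
        const D3D_FEATURE_LEVEL*, UINT, UINT, const DXGI_SWAP_CHAIN_DESC*,
        IDXGISwapChain**, ID3D11Device**, D3D_FEATURE_LEVEL*, ID3D11DeviceContext**);

    static PFN_CreateDev g_original = nullptr;

    static HRESULT WINAPI Hooked(IDXGIAdapter* a, D3D_DRIVER_TYPE dt, HMODULE sw, UINT flags,
        const D3D_FEATURE_LEVEL* fl, UINT nfl, UINT sdk, const DXGI_SWAP_CHAIN_DESC* sd,
        IDXGISwapChain** sc, ID3D11Device** dev, D3D_FEATURE_LEVEL* outFl,
        ID3D11DeviceContext** ctx)
    {
        flags |= D3D11_CREATE_DEVICE_BGRA_SUPPORT;  // the only change we make
        return g_original(a, dt, sw, flags, fl, nfl, sdk, sd, sc, dev, outFl, ctx);
    }

    void InstallHook()
    {
        MH_Initialize();
        MH_CreateHookApi(L"d3d11.dll", "D3D11CreateDeviceAndSwapChain",
                         (LPVOID)&Hooked, (LPVOID*)&g_original);
        MH_EnableHook(MH_ALL_HOOKS);
    }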

How to get X to render to an OpenGL texture?

I am trying to write a compositor, like Compiz, but with different graphical effects. I am stuck at the first step, though: I can't find out how to get X to render windows to a texture instead of to the framebuffer. Any advice on where to start?
X11 composition works like the following:
You redirect windows into an off-screen area; the Composite extension has the functions for this.
You use the Damage extension to find out which windows changed their contents.
In the compositor, you use the GLX_EXT_texture_from_pixmap extension to pull each window's contents into a corresponding OpenGL texture.
You draw the textures into a composition window; the Composite extension provides a special screen layer, between the regular window layer and the screen saver layer, in which you create the window where the composition takes place.
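A condensed sketch of those steps (FBConfig selection, the Damage event loop and error handling are omitted; dpy, win, fbconfig and tex are placeholders):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xcomposite.h>
    #include <X11/extensions/Xdamage.h>
    #include <GL/glx.h>
    #include <GL/glxext.h>

    void bindWindowToTexture(Display* dpy, Window win, GLXFBConfig fbconfig, GLuint tex)
    {
        // 1. Redirect all top-level windows into off-screen storage (once, at startup).
        XCompositeRedirectSubwindows(dpy, DefaultRootWindow(dpy), CompositeRedirectManual);

        // 2. Ask for Damage events so we know when the window's contents change.
        XDamageCreate(dpy, win, XDamageReportNonEmpty);

        // 3. Wrap the window's backing pixmap in a GLX pixmap...
        Pixmap pixmap = XCompositeNameWindowPixmap(dpy, win);
        const int attribs[] = {
            GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
            GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
            None
        };
        GLXPixmap glxPixmap = glXCreatePixmap(dpy, fbconfig, pixmap, attribs);

        // 4. ...and bind it to a texture. The texture_from_pixmap entry point
        //    must be resolved at runtime.
        PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT = (PFNGLXBINDTEXIMAGEEXTPROC)
            glXGetProcAddress((const GLubyte*)"glXBindTexImageEXT");
        glBindTexture(GL_TEXTURE_2D, tex);
        glXBindTexImageEXT(dpy, glxPixmap, GLX_FRONT_LEFT_EXT, NULL);

        // The compositor then draws `tex` into a window created on the overlay
        // returned by XCompositeGetOverlayWindow(dpy, DefaultRootWindow(dpy)).
    }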

Lazy rendering of Qt on OpenGL

I came across this problem and knew it could be done better.
The problem:
When overlaying a QGLWidget (Qt OpenGL context view) with Qt widgets, Qt redraws those widgets after every frame.
Qt isn't built to constantly redraw entire windows at more than 60 fps, so that's enormously slow.
My idea:
Make Qt draw onto something else: a transparent texture. Make OpenGL use this texture whenever it redraws and draw it on top of everything else. Make Qt redirect all interaction with the OpenGL context view to the widgets drawn onto the texture.
The advantage would be that Qt only has to redraw whenever it has to (e.g. a widget is hovered or clicked, or the text cursor in a text field blinks), and can do partial redraws which are faster.
My Question:
How should I approach this? How can I tell Qt to draw to a texture? How can I redirect interaction with one widget to another (e.g. if I move the mouse over the region of the context view where a checkbox sits in the drawn-to-texture widget, Qt should deliver this event to the checkbox and repaint it to reflect its hovered state)?
I separate my 2D and 3D rendering out in my CAD-like app for the very same reasons you have, although in my case the 2D stuff is not widgets, but it shouldn't make a difference. This is how I would approach the problem:
When your widget changes, render it onto a QGLFramebufferObject; do this by using the FBO as the QPaintDevice for a QPainter in your QGLWidget::paintEvent(..) and calling myWidget->render(myQPainter, ...). Repeat this for however many widgets you have, but render onto the same FBO; don't create an FBO for each one. Remember to clear it first, like a 'normal' framebuffer. (A sketch follows these steps.)
When your current OpenGL background changes, render it onto another QGLFramebufferObject using standard OpenGL calls, in the same way.
Create a pass-through vertex shader (the 'camera' will just be a unit cube), and a very simple fragment shader that layers the two textures on top of each other.
At the end of QGLWidget::paintEvent(..), activate your shader program, bind your framebuffers as textures for it (myFBO->texture() gets the handle), and render a unit quad. Because your camera is a unit cube and the viewport size defines the FBO size, it will fill the viewport pixel-perfect.
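A sketch of how steps 1 and 4 could meet inside paintEvent (MyGLView, m_widgetFbo and myWidget are placeholder names; Qt 4-era QGL API, as in the answer):

    void MyGLView::paintEvent(QPaintEvent*)
    {
        makeCurrent();

        if (m_widgetsDirty) {
            QPainter painter(m_widgetFbo);              // QGLFramebufferObject is a QPaintDevice
            painter.setCompositionMode(QPainter::CompositionMode_Source);
            painter.fillRect(rect(), Qt::transparent);  // clear it first
            painter.setCompositionMode(QPainter::CompositionMode_SourceOver);
            myWidget->render(&painter, myWidget->pos()); // repeat per widget, same FBO
            painter.end();
            m_widgetsDirty = false;
        }

        // ... render the 3D scene into the background FBO with plain GL calls,
        // then bind both textures (m_widgetFbo->texture() etc.) and draw the unit quad.
    }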
However, that's the easy part... The hard part is the widget interaction. Because you are essentially rendering a 'proxy', you're going to have to relay the interaction between the 'real' and 'proxy' widgets whilst keeping the 'real' widget invisible. Here's how I would start:
Some operating systems are a bit weird about rendering widgets without ever showing them, so you may have to show and then hide the widget after instantiation - because of the clever painting queue in Qt, it's unlikely to actually make it to the screen.
Catch all mouse events in the viewport, work out which 'proxy' widget the cursor is over (if any), and then offset the position to get the relative position for the 'real' hidden widget; this value will depend on what parent object the 'real' widget has, if any. Then pass the event on to the 'real' widget before redrawing the widget framebuffer.
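That relay might start out like this (proxyWidgetAt is a hypothetical hit-test helper; real code also needs to handle release, move, wheel and focus events):

    void MyGLView::mousePressEvent(QMouseEvent* e)
    {
        QWidget* target = proxyWidgetAt(e->pos());   // hypothetical hit test
        if (!target) return;

        QPoint local = e->pos() - target->pos();     // offset into the hidden 'real' widget
        QMouseEvent forwarded(QEvent::MouseButtonPress, local,
                              e->button(), e->buttons(), e->modifiers());
        QApplication::sendEvent(target, &forwarded);

        m_widgetsDirty = true;                       // redraw the widget FBO next frame
    }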
I should state that I also had to create a 'flagging' system to handle redraws nicely. You don't want every widget event to trigger a widget FBO redraw, because there could be many simultaneous events (don't just think about the mouse), but you only want one redraw. So I created a system where, if anything in the application could visually change anything in the viewport, it would flag the viewport as 'dirty'. Then set up a QTimer for however many fps you are aiming for (in my situation the scene could get very heavy, so I also timed how long a frame took and then used that value +10% as the timer delay for the next check; this way the system isn't bombarded when rendering gets laggy). Then check the dirty status: if it's dirty, redraw; otherwise don't. I found life got easier with two dirty flags, one for the 3D stuff and one for the 2D, but if you need to maintain a constant draw rate for the OpenGL drawing there's probably no need for two.
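A sketch of that flag-and-timer loop (member names are placeholders; checkDirty() is declared as a slot):

    void MyGLView::checkDirty()
    {
        if (m_sceneDirty || m_widgetsDirty) {
            QElapsedTimer t;
            t.start();
            updateGL();                               // triggers paintEvent
            m_sceneDirty = m_widgetsDirty = false;
            qint64 frameMs = t.elapsed();
            m_delayMs = int(qMax<qint64>(16, frameMs + frameMs / 10)); // frame time +10%
        }
        QTimer::singleShot(m_delayMs, this, SLOT(checkDirty()));
    }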
I imagine what I did wasn't the easiest way to do it, but it provides plenty of scope for tuning and profiling - which makes life easier in the long run. All the answers are definitely not in this post, but hopefully it will get you on the way to a strategy.

Creating a program that creates a full screen overlay

I want to write a program that would create a transparent overlay filling the entire screen in Windows 7, preferably with C++ and OpenGL. Though, if there is an API written in another language that makes this super easy, I would be more than willing to use that too. In general, I assume I would have to be able to read the pixels that are already on the screen somehow.
Using the same method screen capture software uses to get the pixels from the screen and then redrawing them would work initially, but the problem would then be if the screen updates. My program would then have to minimize/close and reappear in order for me to be able to read the underlying pixels.
Windows Vista introduced a new flag for the PIXELFORMATDESCRIPTOR: PFD_SUPPORT_COMPOSITION. If the OpenGL context is created with an alpha channel, i.e. the PFD's cAlphaBits field is nonzero, the alpha channel of the OpenGL framebuffer is respected by the Windows compositor.
Then, by creating a full-screen, borderless, undecorated window, you get exactly the kind of overlay you desire. However, this window will still receive all input events, so you'll have to do some grunt work and pass all input events on to the underlying windows manually.
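A sketch of the context setup (error checks omitted; hwnd is your full-screen, undecorated window):

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER
                   | PFD_SUPPORT_COMPOSITION;   // Vista and later only
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cAlphaBits = 8;                         // the nonzero alpha channel is the key part
    pfd.cDepthBits = 24;

    HDC hdc = GetDC(hwnd);
    int fmt = ChoosePixelFormat(hdc, &pfd);
    SetPixelFormat(hdc, fmt, &pfd);
    HGLRC rc = wglCreateContext(hdc);
    wglMakeCurrent(hdc, rc);
    // Clearing with glClearColor(0, 0, 0, 0) makes untouched pixels fully
    // transparent; input events still arrive here and must be forwarded manually.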

Acquire direct write access to window's backbuffer, yet still allow read access to what is on the screen already

I was wondering: is it possible to acquire write access to the graphics card's primary buffer through the Windows API, yet still allow read access to what should be there? To clarify, here is what I want:
1. Create a DirectX device on a window and hide it. Use the stencil buffer to apply an alpha channel to pixels not written to by my code.
2. Acquire the entirety of the current display adapter's buffer, i.e. have a pointer to a buffer, in the current bit depth and resolution, that contains the current screen without whatever I drew to it. I was thinking of, instead of hiding my window, simply using a LAYERED window and somehow acquiring the buffer before my window's pixels get blitted to it.
3. Copy the buffer acquired in step 2 into a new memory location.
4. Blit my DirectX device's primary buffer to the buffer built in step 3.
5. Blit the buffer from step 4 to the screen.
6. GOTO 2.
So the end result is drawing hardware-accelerated 3D directly onto the desktop while other applications continue to render.
There are better ways to create a window without borders. You might try experimenting with the dwStyle parameter of CreateWindow; for example, it looks as if passing in WS_OVERLAPPED | WS_POPUP results in a borderless window, which is what you appear to want (see this forum post).
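For example (assuming a window class named MyClass has already been registered):

    HWND hwnd = CreateWindowEx(
        0, L"MyClass", L"",
        WS_OVERLAPPED | WS_POPUP | WS_VISIBLE,   // no caption, no border
        0, 0,
        GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN),
        NULL, NULL, GetModuleHandle(NULL), NULL);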
I also think the term "borderless window" is not correct, because I'm hardly getting any results in Google for searches including those words.
Is there any reason why you wouldn't just do this normally with GDI and use windowed mode for DirectX? Why bother with full-screen mode when you need to render in a window?