OpenGL, get framebuffer of another process

I want to get another process's window handle and render its contents in a GL program. What I want to do is this:
Set up a GL vertex array and an index buffer object.
Create one texture.
Get the other process's framebuffer and copy that buffer's data into the texture created above.
Have the GL fragment shader sample this texture and draw it on a cube's faces.
Is it possible to get another process's window handle or framebuffer with Apple's system libraries?
The target platform is macOS 12.0.1 Monterey, OpenGL 4.0, and the language is C++.
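For the GL side of those steps, a minimal sketch is below. It assumes the other window's contents have somehow been captured into a CPU-side BGRA pixel buffer; how to obtain that buffer is exactly the open question and is not shown. The function names are made up for the example.

```cpp
// Hypothetical sketch: upload already-captured window pixels into a GL texture
// that the cube's fragment shader can sample. The capture mechanism that fills
// `capturedPixels` is NOT shown here.
#include <OpenGL/gl3.h>   // macOS core-profile OpenGL header
#include <cstdint>
#include <vector>

GLuint createWindowTexture(int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Allocate storage once; the contents are replaced every frame below.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, nullptr);
    return tex;
}

void updateWindowTexture(GLuint tex, int width, int height,
                         const std::vector<std::uint8_t>& capturedPixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    // Re-upload the captured pixels; the fragment shader then samples this
    // texture when drawing the cube's faces.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                    capturedPixels.data());
}
```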

Related

Binding OpenGL texture to OpenCL buffer

I am working on a project that requires some rendering with OpenGL, then passing the output texture to OpenCL for post-processing. The problem is that our kernels work with buffers, not images, and the final output also has to be a buffer, so rewriting the kernels to work with image2d instead of buffers is not an option.
Of course, mapping an OpenGL buffer/texture to the same type in OpenCL is an easy task, but there seems to be no direct way to map OpenGL output (either texture or renderbuffer objects) to an OpenCL buffer without additional steps or memory allocation, such as copying GL texture data to a PBO or a CL image to a buffer. The ability to bind GL buffer objects as framebuffer output would be nice, but I haven't found anything like that so far. I thought about GL_TEXTURE_BUFFER as a render target, but OpenGL prohibits using it with a framebuffer.
So, the question is: is there any way to render with OpenGL directly into a vertex buffer object, and if not, what is the most efficient (time/memory) way to convert an OpenGL texture into an OpenCL buffer?
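One concrete form of the "additional steps" mentioned above is to copy the texture into a pixel buffer object and share that PBO with OpenCL via clCreateFromGLBuffer. A rough sketch, assuming a cl_context created with GL sharing and a GLEW-style loader (both assumptions), might look like this:

```cpp
#include <GL/glew.h>    // any GL function loader will do; GLEW is an example
#include <CL/cl_gl.h>   // clCreateFromGLBuffer

// Copy a rendered GL texture into a PBO on the GPU and wrap that PBO as a
// cl_mem buffer. Error handling is omitted for brevity.
GLuint shareTextureAsCLBuffer(GLuint glTexture, int width, int height,
                              cl_context clCtx, cl_mem* clBufferOut)
{
    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr,
                 GL_DYNAMIC_COPY);                      // RGBA8 assumed

    // GPU-side copy of the texture into the PBO (no round trip to host memory).
    glBindTexture(GL_TEXTURE_2D, glTexture);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // A PBO is an ordinary buffer object, so it can be shared as a CL buffer.
    cl_int err = CL_SUCCESS;
    *clBufferOut = clCreateFromGLBuffer(clCtx, CL_MEM_READ_WRITE, pbo, &err);
    return pbo;
}
```

Before a kernel uses the shared buffer it still has to be acquired with clEnqueueAcquireGLObjects (and released afterwards), and the GL commands must have completed, for example via glFinish.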

Get data back from OpenGL shader?

My computer doesn't support OpenCL on the GPU or OpenGL compute shaders, so I was wondering whether it would be a straightforward process to get data back from a vertex or fragment shader.
My goal is to pass two textures to the shader and have the shader compute the locations where one texture exists in the other, i.e. where there is a pixel match. I need to retrieve the locations of possible matches from the shader.
Is this plausible? If so, how would I go about it? I have basic OpenGL knowledge; I have set up a program that draws colored polygons. I really just need a way to get position values back from the shader.
You can render to memory instead of to the screen, and then fetch the data from it.
1. Create and bind a Framebuffer Object.
2. Create a Renderbuffer Object and attach it to the Framebuffer Object.
3. Render your scene. The result will end up in the bound Framebuffer Object instead of on the screen.
4. Use glReadPixels to pull data from the Framebuffer Object.
Be aware that glReadPixels, like most methods of fetching data from GPU memory back to main memory, is slow and likely unsuitable for real-time applications. But it's the best you can do if you don't have features intended for that, like Compute Shaders, and aren't willing to do it asynchronously with Pixel Buffer Objects.
You can read more about Framebuffers here.
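A minimal sketch of those four steps, with an example RGBA8 renderbuffer and the scene-drawing code left out:

```cpp
#include <GL/glew.h>   // any GL function loader; GLEW is just an example
#include <cstdint>
#include <vector>

// Render off-screen and read the result back to main memory.
std::vector<std::uint8_t> renderOffscreen(int width, int height)
{
    // 1. Create and bind a framebuffer object.
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    // 2. Create a renderbuffer object and attach it to the framebuffer.
    GLuint rbo = 0;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, rbo);

    // 3. Render the scene here; it ends up in the renderbuffer, not on screen.
    //    ... draw calls ...

    // 4. Read the pixels back.
    std::vector<std::uint8_t> pixels(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteRenderbuffers(1, &rbo);
    glDeleteFramebuffers(1, &fbo);
    return pixels;
}
```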

How to render/draw buffer object to framebuffer without glDrawPixels

According to the OpenGL 4.0 spec, glDrawPixels is deprecated.
For CUDA interoperability it seems best to use "OpenGL buffer objects". (An alternative could be textures or surfaces, but these have caching/concurrency issues and are therefore unusable for my CUDA kernel.)
I simply want to create a CUDA kernel which uses this mapped OpenGL buffer object as a "pixel array", i.e. a piece of memory holding pixels; afterwards the buffer is unmapped.
I then want the OpenGL program to draw the buffer object to the framebuffer. I would like to use an OpenGL API which is not deprecated.
What other ways/APIs are there to draw a buffer object to the framebuffer? (Renderbuffers also cannot be used, since they probably have the same caching issues as CUDA arrays; does that rule out the framebuffer object/extension?)
Is there a gap/missing functionality in OpenGL 4.0 now that glDrawPixels is deprecated, or is there an alternative?
glDrawPixels has been removed from GL 3.2 and above (it is not deprecated; deprecated means "available, but to be removed in the future"). It was removed because it's generally not a fast way to draw pixel data to the screen.
Your best bet is to use glTexSubImage2D to upload it to a texture, then draw that to the screen. Or blit it from the texture with glBlitFramebuffer.
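A hedged sketch of that approach: the pixel data (for example, produced by the CUDA kernel in a mapped buffer and then unmapped) is uploaded with glTexSubImage2D and blitted to the default framebuffer through a temporary read FBO. The function name and formats are only examples, and `texture` is assumed to already have storage of size width x height.

```cpp
#include <GL/glew.h>   // any GL function loader

void blitPixelsToScreen(GLuint texture, const void* pixels,
                        int width, int height)
{
    // Update the texture; `pixels` could also be an offset into a bound PBO.
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Attach the texture to a read framebuffer and blit it to the window.
    GLuint readFbo = 0;
    glGenFramebuffers(1, &readFbo);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // 0 = default framebuffer

    glBlitFramebuffer(0, 0, width, height,       // source rectangle
                      0, 0, width, height,       // destination rectangle
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &readFbo);
}
```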

How to create textures within GPU

Can anyone please tell me how to use hardware memory to create textures in OpenGL? Currently I'm running my game in windowed mode; do I need to switch to fullscreen to make use of the hardware?
If I can create textures in hardware, is there a limit on the number of textures (other than the hardware memory)? And how can I cache my textures in hardware? Thanks.
This should be covered by almost all texture tutorials for OpenGL. For example here, here and here.
For every texture you first need a texture name. A texture name is like a unique index for a single texture. Every name points to a texture object that can have its own parameters, data, etc. glGenTextures is used to get new names. I don't know if there is any limit besides the uint range (2^32). If there is, then you will probably get 0 for all new texture names (and a GL error).
The next step is to bind your texture (see glBindTexture). After that all operations that use or affect textures will use the texture specified by the texture name you used as parameter for glBindTexture. You can now set parameters for the texture (glTexParameter) and upload the texture data with glTexImage2D (for 2D textures). After calling glTexImage you can also free the system memory with your texture data.
For static textures all this has to be done only once. If you want to use the texture you just need to bind it again and enable texturing (glEnable(GL_TEXTURE_2D)).
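A short sketch of those steps; the RGBA format and the `pixels` pointer are just examples:

```cpp
#include <GL/gl.h>

GLuint createStaticTexture(int width, int height, const unsigned char* pixels)
{
    GLuint name = 0;
    glGenTextures(1, &name);               // get a new texture name
    glBindTexture(GL_TEXTURE_2D, name);    // texture calls now affect this texture

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Upload the data; the system-memory copy can be freed afterwards.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return name;
}

// To use the texture later (fixed-function pipeline):
//   glEnable(GL_TEXTURE_2D);
//   glBindTexture(GL_TEXTURE_2D, name);
```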
The size (width/height) of a single texture is limited by GL_MAX_TEXTURE_SIZE. This is normally 4096, 8192 or 16384. It is also limited by the available graphics memory, because it has to fit into it together with some other resources like the framebuffer or vertex buffers. All textures together can be bigger than the available memory, but then they will be swapped.
In most cases the graphics driver should decide which textures are stored in system memory and which in graphics memory. You can however give certain textures a higher priority with either glPrioritizeTextures or with glTexParameter.
Edit:
I wouldn't worry too much about where textures are stored, because the driver normally does a very good job with that. Textures that are used often are also more likely to be stored in graphics memory. If you set a priority, that's just a "hint" for the driver on how important it is for the texture to stay on the graphics card. It's also possible that the priority is completely ignored. You can also check where textures currently are with glAreTexturesResident.
Usually when you talk about generating a texture on the GPU, you're not actually creating texture images and applying them like normal textures. The simpler and more common approach is to use fragment shaders to procedurally calculate the color of each pixel in real time, from scratch, for every single frame.
The canonical example for this is to generate a Mandelbrot pattern on the surface of an object, say a teapot. The teapot is rendered with its polygons and texture coordinates by the application. At some stage of the rendering pipeline every pixel of the teapot passes through the fragment shader which is a small program sent to the GPU by the application. The fragment shader reads the 2D texture coordinates and calculates the Mandelbrot set color of the 2D coordinates and applies it to the pixel.
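For illustration only, such a fragment shader might look like the following (written here as a GLSL string inside the C++ application; the region of the complex plane and the coloring are made up for the example):

```cpp
// A tiny GLSL fragment shader that colours each fragment with the Mandelbrot
// escape time, using the interpolated texture coordinates as the point c of
// the complex plane.
static const char* kMandelbrotFragmentShader = R"GLSL(
#version 120
void main()
{
    // Map the [0,1] texture coordinates onto a region of the complex plane.
    vec2 c = gl_TexCoord[0].st * 3.0 - vec2(2.0, 1.5);
    vec2 z = vec2(0.0);
    int iterations = 0;
    for (int i = 0; i < 100; ++i) {
        // z = z*z + c in complex arithmetic
        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
        if (dot(z, z) > 4.0)
            break;
        iterations = i + 1;
    }
    float t = float(iterations) / 100.0;   // 1.0 means "probably inside the set"
    gl_FragColor = vec4(t, t * t, sqrt(t), 1.0);
}
)GLSL";
```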
Fullscreen mode has nothing to do with it. You can use shaders and generate textures even if you're in windowed mode. As I mentioned, the textures you create never actually occupy space in texture memory; they are created on the fly. One could probably think of a way to capture and cache the generated texture, but this can be somewhat complex and would require multiple rendering passes.
You can learn more about it if you look up "GLSL" in google - the OpenGL shading language.
This somewhat dated tutorial shows how to create a simple fragment shader which draws the Mandelbrot set (page 4).
If you can get your hands on the book "OpenGL Shading Language, 2nd Edition", you'll find it contains a number of simple examples on generating sky, fire and wood textures with the help of an external 3D Perlin noise texture from the application.
To create a texture on the GPU, look into "render to texture" tutorials. There are two common methods: binding a PBuffer context as a texture, or using Frame Buffer Objects. PBuffer render-to-texture is the older method and has wider support; Frame Buffer Objects are easier to use.
Also, you don't have to switch to "fullscreen" mode for OpenGL to be hardware accelerated. In fact, OpenGL doesn't know about windows at all. A fullscreen OpenGL window is just that: a toplevel window on top of all other windows, with no decorations and the input focus grabbed. Some drivers bypass window masking and clipping code and employ a simpler, faster buffer-swap method if the window with the active OpenGL context covers the whole screen, thus gaining a little performance, but with current hardware and software the effect is very small compared to other influences.

Fragment shader rendering to off-screen frame buffer

In a Qt-based application I want to execute a fragment shader on two textures (both 1000x1000 pixels).
I draw a rectangle and the fragment shader works fine.
But now I want to render the output into the GL_AUX0 frame buffer so that the result can be read back and saved to a file.
Unfortunately, if the window size is less than 1000x1000 pixels, the output is not correct; only an area the size of the window is rendered into the frame buffer.
How can I make the frame buffer cover the whole texture?
The recommended way to do off-screen processing is to use Framebuffer Objects (FBO). These buffers act similarly to the render buffers you already know, but are not constrained by the window resolution or color depth. You can use the GPGPU Framebuffer Object Class to hide the low-level OpenGL commands and use the FBO right away. If you prefer doing this on your own, have a look at the extension specification.
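A rough sketch of that approach (assuming the 1000x1000 output texture already exists); the detail that matters for this question is that glViewport is set to the texture size, so the result no longer depends on the window size:

```cpp
#include <GL/glew.h>   // any GL function loader
#include <vector>

void runShaderOffscreen(GLuint outputTexture, int size /* e.g. 1000 */,
                        std::vector<unsigned char>& result)
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, outputTexture, 0);

    // The viewport must cover the whole texture, not just the window.
    glViewport(0, 0, size, size);

    // ... bind the two input textures and the shader program, then draw the
    //     full-texture rectangle here ...

    // Read the result back so it can be saved to a file.
    result.resize(size * size * 4);
    glReadPixels(0, 0, size, size, GL_RGBA, GL_UNSIGNED_BYTE, result.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
}
```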