Is it possible to have OpenGL draw on a memory surface? - opengl

I am starting to learn OpenGL and I was wondering if it is possible to have it draw on a video memory buffer that I've obtained through other libraries?

For drawing into video memory you can use framebuffer objects to draw into OpenGL textures or renderbuffers (VRAM areas for offscreen rendering), like Stefan suggested.
When it comes to a VRAM buffer created by another library, it depends on which library you are talking about. If that library also uses OpenGL under the hood, you need some insight into it to get at that "buffer" (be it a texture, into which you can render directly using FBOs, or a GL buffer object, into which you can read rendered pixel data using PBOs).
If the library uses some other API to interface with the GPU, there are not as many possibilities. If it uses OpenCL or CUDA, those APIs have interop functions for sharing memory buffers and images with OpenGL buffers and textures, which you can then render into using the techniques mentioned above.
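For the CUDA case, this is roughly what the sharing looks like with the CUDA runtime interop API - a minimal sketch, assuming an existing GL buffer object pbo and a kernel that fills it (error checking omitted):

    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    /* Register an existing OpenGL buffer object with CUDA. */
    cudaGraphicsResource_t resource = NULL;
    cudaGraphicsGLRegisterBuffer(&resource, pbo, cudaGraphicsRegisterFlagsNone);

    /* Map it and obtain a device pointer a kernel can write to. */
    cudaGraphicsMapResources(1, &resource, 0);
    void  *devPtr   = NULL;
    size_t numBytes = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &numBytes, resource);

    /* ... launch a kernel that writes pixel data into devPtr ... */

    /* Unmap before OpenGL uses the buffer again. */
    cudaGraphicsUnmapResources(1, &resource, 0);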
If the library uses Direct3D under the hood, it gets a bit more difficult. At least nVidia has an extension (WGL_NV_DX_interop) to directly use Direct3D 9 surfaces and textures as OpenGL buffers and textures, but I don't have any experience with it, nor do I know how widely it is supported.

You cannot have OpenGL draw directly to arbitrary memory; one reason is that in most implementations OpenGL drawing happens in video RAM, not in system memory. You can, however, draw to an OpenGL off-screen target and then read the result back to any place in system memory. A web search for framebuffer objects (FBOs) should point you to documentation and tutorials.
If the memory you have is already in VRAM, for example because it was decoded by hardware acceleration, then you might be able to draw to it directly if it is available as an OpenGL texture; in that case you can use render-to-texture techniques and save yourself the transfers to and from VRAM.
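To make that concrete, here is a minimal sketch of rendering off-screen into a texture-backed FBO and reading the result back into system memory; width, height and the destination buffer pixels are assumed to exist:

    GLuint tex, fbo;

    /* Create a texture that will receive the rendering (lives in VRAM). */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Attach it to a framebuffer object and render into it. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    /* ... normal draw calls go here; they end up in 'tex' ... */

    /* Read the result back into system memory. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);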

Related

How to Convert Existing OpenGL Texture to Metal Texture

I am working on developing some FxPlug plugins for Motion and FCP X. Ultimately, I'd like to have them render in Metal as Apple is deprecating OpenGL.
I'm currently using CoreImage, and while I've been able to use the CoreImage functionality to do Metal processing outside of the FxPlug SDK, FxPlug only provides me the frame as an OpenGL texture. I've tried just passing this into the CoreImage filter, but I end up getting this error:
Cannot render image (with an input GL texture) using a metal-DG context.
After a bit of research, I found that I can supposedly use CVPixelBuffers to share textures between the two, but after trying to write code utilizing this method for a while, I've come to believe that this was intended as a way to WRITE (as in, create from scratch) to a shared buffer, but not to convert between the two. While this may be incorrect, I cannot find a way to get the existing GL texture to exist in a CVPixelBuffer.
TL;DR: I've found ways to get a resulting Metal or OpenGL texture FROM a CVPixelBuffer, but I cannot find a way to create a CVPixelBuffer from an existing OpenGL texture. My heart is not set on this method, as my ultimate goal is to simply convert from OpenGL to Metal, then back to OpenGL (ideally in an efficient way).
Has anyone else found a way to work with FxPlug with Metal? Is there a good way to convert from an OpenGL texture to Metal/CVPixelBuffer?
I have written an FxPlug that uses both OpenGL textures and Metal textures. The thing you're looking for is an IOSurface. IOSurfaces are image buffers that can back textures in either Metal or OpenGL, though they have some limitations. As such, if you already have a Metal or OpenGL texture, you must copy it into an IOSurface to use it with the other system.
To create an IOSurface you can either use CVPixelBuffers (by including the kCVPixelBufferIOSurfacePropertiesKey) or you can directly create one using the IOSurface class defined in <IOSurface/IOSurfaceObjC.h>.
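For illustration, creating such an IOSurface-backed pixel buffer with the C-level CoreVideo API might look like this (a sketch; width and height are placeholders and error handling is omitted):

    #include <CoreVideo/CoreVideo.h>

    /* An empty IOSurface-properties dictionary is enough to request
       an IOSurface-backed pixel buffer. */
    CFDictionaryRef surfaceProps = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                                                      &kCFTypeDictionaryKeyCallBacks,
                                                      &kCFTypeDictionaryValueCallBacks);
    const void *keys[]   = { kCVPixelBufferIOSurfacePropertiesKey };
    const void *values[] = { surfaceProps };
    CFDictionaryRef attrs = CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);

    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA, attrs, &pixelBuffer);

    /* The underlying IOSurface that both GL and Metal can wrap. */
    IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer);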
Once you have an IOSurface, you can copy your OpenGL texture into it by getting an OpenGL texture from the IOSurface via CGLTexImageIOSurface2D() (defined in <OpenGL/CGLIOSurface.h>). You then take that texture and use it as the backing texture for an FBO. You can, for example, draw a textured quad into it using the input FxTexture as the texture. Be sure to call glFlush() when done!
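A sketch of that step, assuming the surface, width and height from the previous snippet and a current CGL context (exact constant and header names may vary with the GL headers you target):

    #include <OpenGL/OpenGL.h>
    #include <OpenGL/CGLIOSurface.h>

    /* Wrap the IOSurface in a rectangle texture on the current CGL context. */
    GLuint glTex, fbo;
    glGenTextures(1, &glTex);
    glBindTexture(GL_TEXTURE_RECTANGLE, glTex);
    CGLTexImageIOSurface2D(CGLGetCurrentContext(), GL_TEXTURE_RECTANGLE, GL_RGBA,
                           (GLsizei)width, (GLsizei)height,
                           GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, surface, 0);

    /* Use that texture as the color attachment of an FBO. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_RECTANGLE, glTex, 0);

    /* ... draw a quad textured with the input FxTexture here ... */

    glFlush();   /* make the result visible outside this GL context */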
Next take the IOSurface and create an MTLTexture from it via -[MTLDevice newTextureWithDescriptor:iosurface:plane:]. You'll want to create an output IOSurface to draw into and also create an MTLTexture from it. Do your Metal rendering into the output MTLTexture. Next, take the output IOSurface and create an OpenGL texture out of it via CGLTexImageIOSurface2D(). Now copy that OpenGL texture into the output FxTexture, either by using it as the backing of a texture-backed FBO or by whatever other method you prefer.
As you can see, the downside of this is that each render requires 2 copies - 1 of the input into an IOSurface and 1 of the output IOSurface into the output texture the app gives you. The other downside is that this is probably all moot: since Apple has publicly announced that they're ending support for OpenGL, they're probably working on a Metal-based solution already, and it may be extra work to do it all yourself. (Though the upside is that you can use that same code in other host applications that only support OpenGL.)

Cuda and/or OpenGL for geometric image transformation

My question concerns the most efficient way of performing geometric image transformations on the GPU. The goal is essentially to remove lens distortion from acquired images in real time. I can think of several ways to do it, e.g. as a CUDA kernel (which would be preferable) doing an inverse transform lookup + interpolation, or the same in an OpenGL shader, or rendering a forward transformed mesh with the image texture mapped to it. It seems to me the last option could be the fastest because the mesh can be subsampled, i.e. not every pixel offset needs to be stored but can be interpolated in the vertex shader. Also the graphics pipeline really should be optimized for this. However, the rest of the image processing is probably going to be done with CUDA.
If I want to use the OpenGL pipeline, do I need to start an OpenGL context and bring up a window to do the rendering, or can this be achieved anyway through the CUDA/OpenGL interop somehow? The aim is not to display the image, the processing will take place on a server, potentially with no display attached. I've heard this could crash OpenGL if bringing up a window.
I'm quite new to GPU programming, any insights would be much appreciated.
Using the forward transformed mesh method is the more flexible and easier one to implement. Performance-wise, however, there's no big difference, because the effective limit you're running into is memory bandwidth, and the amount of memory bandwidth consumed depends only on the size of your input image. Whether the transfer is caused by a fragment shader fed by vertices or by a CUDA texture access doesn't matter.
If I want to use the OpenGL pipeline, do I need to start an OpenGL context and bring up a window to do the rendering,
On Windows: Yes, but the window can be an invisible one.
On GLX/X11 you need an X server running, but you can use a PBuffer instead of a window to get an OpenGL context.
In either case use a Framebuffer Object as the actual drawing destination. PBuffers may corrupt their primary framebuffer contents at any time. A Framebuffer Object is safe.
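For reference, a minimal sketch of getting a PBuffer-backed context on GLX (attribute lists trimmed down, error handling omitted); the real rendering would then go into an FBO as said above:

    #include <GL/glx.h>

    Display *dpy = XOpenDisplay(NULL);          /* still needs an X server */

    static const int fbAttribs[] = {
        GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        None
    };
    int nConfigs = 0;
    GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy),
                                             fbAttribs, &nConfigs);

    static const int pbAttribs[] = {
        GLX_PBUFFER_WIDTH,  1,                  /* tiny dummy drawable; the    */
        GLX_PBUFFER_HEIGHT, 1,                  /* actual target will be a FBO */
        None
    };
    GLXPbuffer pbuf = glXCreatePbuffer(dpy, configs[0], pbAttribs);
    GLXContext ctx  = glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, NULL, True);
    glXMakeContextCurrent(dpy, pbuf, pbuf, ctx);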
or can this be achieved anyway through the CUDA/OpenGL interop somehow?
No. CUDA/OpenGL interop is for making OpenGL and CUDA interoperate, not for driving OpenGL from CUDA. CUDA/OpenGL interop helps you with the part you mentioned here:
However, the rest of the image processing is probably going to be done with CUDA.
BTW, maybe OpenGL compute shaders (available since OpenGL 4.3) would work for you as well.
I've heard this could crash OpenGL if bringing up a window.
OpenGL actually has no say in those things. It's just an API for drawing stuff on a canvas (canvas = window, PBuffer or Framebuffer Object), but it doesn't deal with actually getting a canvas on the scaffolding, so to speak.
Technically OpenGL doesn't care if there's a window or not; what matters is the graphics system on which the OpenGL context is created. And unfortunately none of the currently existing GPU graphics systems supports true headless operation. NVidia's latest Linux drivers may allow for some crude hacks to set up a truly headless system, but I have never tried that so far.

How can I create a buffer in (video) memory to draw to using OpenGL?

OpenGL uses two buffers: one is displayed on the screen and the other is used for rendering, and they are swapped to avoid flickering (double buffering).
Is it possible to create another 'buffer' (in video memory, I assume) so that drawing can be done elsewhere? The reason I ask is that I have several SFML windows, and I want to be able to instruct OpenGL to draw to an independent buffer for each of them. Currently I have no control over the rendering buffer; there is one for ALL windows (EDIT: not one per window). Once you call window.Display(), the contents of this buffer are copied to another buffer which appears inside a window. (I think that's how it works.)
The term you're looking for is "off-screen rendering". There are two methods to do this with OpenGL.
The one is by using a dedicated off-screen drawable provided by the underlying graphics layer of the operating system. This is called a PBuffer. A PBuffer can be used very much like a window that's not mapped to the screen. PBuffers were the first robust method to implement off-screen rendering using OpenGL; they were introduced in 1998. Since PBuffers are fully featured drawables, an OpenGL context can be attached to them.
The other method is using an off-screen render target provided by OpenGL itself and not by the operating system. This is called a Framebuffer Object. FBOs require a fully functional OpenGL context to work, but an FBO cannot itself provide the drawable that an OpenGL context must be attached to in order to be functional. So the main use for FBOs is to render intermediate pictures into them that are later used when rendering the pictures visible on screen. Luckily, for an FBO to work, the drawable the OpenGL context is bound to may be hidden, so you can simply use a regular window that's hidden from the user.
If your desire is pure off-screen rendering, a PBuffer still is a very viable option, especially on GLX/X11 (Linux) where they're immediately available without having to tinker with extensions.
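For pure off-screen rendering the FBO can just as well use a renderbuffer instead of a texture as its color target; a minimal sketch, with width and height assumed:

    GLuint fbo, rbo;

    /* Off-screen color storage that is not a texture. */
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, rbo);

    /* ... render here, then fetch the result with glReadPixels or
       glBlitFramebuffer ... */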
Look into Frame Buffer Objects (FBOs).
If you have a third buffer you lose the value of double buffering. Double buffering works because you are changing the pointer to the pixel array sent to the display device. If you include a third buffer you'll have to copy into each buffer.
I haven't worked with OpenGL in a while, but wouldn't it serve better to render into a texture (bitmap)? This lets each implementation of OpenGL choose how it wants to get that bitmap from memory into the video buffer for the appropriate region of the screen.

How to render/draw buffer object to framebuffer without glDrawPixels

According to the OpenGL 4.0 spec, glDrawPixels is deprecated.
For cuda interoperability it seems best to use "opengl buffer objects". (An alternative could be textures or surfaces but these have caching/concurrency issues and are therefore unusable for my cuda kernel).
I simply want to create a cuda kernel which uses this mapped opengl buffer object and uses it as a "pixel array" or a piece of memory holding pixels, later the buffer is unmapped.
I then want the opengl program to draw the buffer object to the framebuffer. I would like to use an opengl api which is not deprecated.
What other ways/APIs are there to draw a buffer object to the framebuffer? (Also, renderbuffers cannot be used since they probably have the same caching issue as CUDA arrays, so this rules out the framebuffer object/extension ?!?)
Is there a gap/missing functionality in opengl 4.0 now that glDrawPixels is deprecated ? Or is there an alternative ?
glDrawPixels has been removed from the GL 3.2 core profile and above (it is not merely deprecated; deprecated means "available, but to be removed in the future"). It was removed because it's generally not a fast way to draw pixel data to the screen.
Your best bet is to use glTexSubImage2D to upload it to a texture, then draw that to the screen. Or blit it from the texture with glBlitFramebuffer.
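A sketch of that path, reading the pixel data straight out of the (already unmapped) buffer object by binding it as a pixel unpack buffer; pbo, tex, readFbo, width and height are assumed to exist:

    /* Copy from the buffer object into a texture (no round trip to the CPU). */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);   /* offset into the PBO */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    /* Blit the texture to the default framebuffer via a read-FBO. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);                /* on-screen framebuffer */
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);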

How to create textures within GPU

Can anyone please tell me how to use hardware memory to create textures in OpenGL? Currently I'm running my game in windowed mode; do I need to switch to fullscreen to get the use of the hardware?
If I can create textures in hardware, is there a limit on the number of textures (other than the hardware memory)? And how can I cache my textures in hardware? Thanks.
This should be covered by almost all texture tutorials for OpenGL. For example here, here and here.
For every texture you first need a texture name. A texture name is like a unique index for a single texture. Every name points to a texture object that can have its own parameters, data, etc. glGenTextures is used to get new names. I don't know if there is any limit besides the uint range (2^32). If there is then you will probably get 0 for all new texture names (and a gl error).
The next step is to bind your texture (see glBindTexture). After that, all operations that use or affect textures will use the texture specified by the name you passed to glBindTexture. You can now set parameters for the texture (glTexParameter) and upload the texture data with glTexImage2D (for 2D textures). After calling glTexImage you can free the system-memory copy of your texture data.
For static textures all this has to be done only once. If you want to use the texture you just need to bind it again and enable texturing (glEnable(GL_TEXTURE_2D)).
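Put together, the steps above look roughly like this (a sketch; pixels, width and height are assumed to hold the image data):

    GLuint tex;
    glGenTextures(1, &tex);                    /* get a new texture name       */
    glBindTexture(GL_TEXTURE_2D, tex);         /* following calls affect 'tex' */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* The system-memory copy in 'pixels' can be freed after this point. */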
The size (width/height) of a single texture is limited by GL_MAX_TEXTURE_SIZE. This is normally 4096, 8192 or 16384. It is also limited by the available graphics memory, because the texture has to fit into it together with other resources like the framebuffer or vertex buffers. All textures together can be bigger than the available memory, but then they will be swapped.
In most cases the graphics driver should decide which textures are stored in system memory and which in graphics memory. You can however give certain textures a higher priority with either glPrioritizeTextures or with glTexParameter.
Edit:
I wouldn't worry too much about where textures are stored, because the driver normally does a very good job with that. Textures that are used often are also more likely to be stored in graphics memory. If you set a priority, that's just a "hint" for the driver on how important it is for the texture to stay on the graphics card. It's also possible that the priority is completely ignored. You can check where textures currently are with glAreTexturesResident.
Usually when you talk about generating a texture on the GPU, you're not actually creating texture images and applying them like normal textures. The simpler and more common approach is to use fragment shaders to procedurally calculate the color of each pixel in real time, from scratch, for every single frame.
The canonical example for this is to generate a Mandelbrot pattern on the surface of an object, say a teapot. The teapot is rendered with its polygons and texture coordinates by the application. At some stage of the rendering pipeline every pixel of the teapot passes through the fragment shader, which is a small program sent to the GPU by the application. The fragment shader reads the 2D texture coordinates, calculates the Mandelbrot set color for those coordinates and applies it to the pixel.
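For illustration, a minimal fragment shader along those lines (GLSL 1.20, embedded as a C string and compiled with the usual shader calls); it assumes the vertex stage supplies gl_TexCoord[0] and is only a sketch, not the tutorial's code:

    /* Colors each fragment by iterating z = z*z + c, with c taken from the
       interpolated 2D texture coordinate. */
    static const char *mandelbrotFS =
        "#version 120\n"
        "void main() {\n"
        "    vec2 c = gl_TexCoord[0].st * 3.0 - vec2(2.0, 1.5);\n"
        "    vec2 z = vec2(0.0);\n"
        "    int i;\n"
        "    for (i = 0; i < 100; ++i) {\n"
        "        z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c;\n"
        "        if (dot(z, z) > 4.0) break;\n"
        "    }\n"
        "    float t = float(i) / 100.0;\n"
        "    gl_FragColor = vec4(t, t*t, 1.0 - t, 1.0);\n"
        "}\n";

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &mandelbrotFS, NULL);
    glCompileShader(fs);
    /* attach to a program object, link it and use it while drawing the teapot */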
Fullscreen mode has nothing to do with it. You can use shaders and generate textures even if you're in window mode. As I mentioned, the textures you create never actually occupy space in the texture memory, they are created on the fly. One could probably think of a way to capture and cache the generated texture but this can be somewhat complex and require multiple rendering passes.
You can learn more about it if you look up "GLSL" in google - the OpenGL shading language.
This somewhat dated tutorial shows how to create a simple fragment shader which draws the Mandelbrot set (page 4).
If you can get your hands on the book "OpenGL Shading Language, 2nd Edition", you'll find it contains a number of simple examples on generating sky, fire and wood textures with the help of an external 3D Perlin noise texture from the application.
To create a texture on the GPU, look into "render to texture" tutorials. There are two common methods: binding a PBuffer context as a texture, or using Frame Buffer Objects. PBuffer render-to-texture is the older method and has the wider support; Frame Buffer Objects are easier to use.
Also, you don't have to switch to "fullscreen" mode for OpenGL to be HW accelerated. In fact OpenGL doesn't know about windows at all. A fullscreen OpenGL window is just that: a toplevel window on top of all other windows, with no decorations and the input focus grabbed. Some drivers bypass window masking and clipping code and employ a simpler, faster buffer swap method if the window with the active OpenGL context covers the whole screen, thus gaining a little performance, but with current hard- and software the effect is very small compared to other influences.