How to Convert Existing OpenGL Texture to Metal Texture

I am working on developing some FxPlug plugins for Motion and FCP X. Ultimately, I'd like to have them render in Metal as Apple is deprecating OpenGL.
I'm currently using CoreImage, and while I've been able to use the CoreImage functionality to do Metal processing outside of the FxPlug SDK, FxPlug only provides me the frame as an OpenGL texture. I've tried just passing this into the CoreImage filter, but I end up getting this error:
Cannot render image (with an input GL texture) using a metal-DG context.
After a bit of research, I found that I can supposedly use CVPixelBuffers to share textures between the two, but after spending a while trying to write code using this method, I've come to believe it was intended as a way to WRITE (as in, create from scratch) to a shared buffer, not to convert between the two. While this may be incorrect, I cannot find a way to get an existing GL texture into a CVPixelBuffer.
TL;DR: I've found ways to get a resulting Metal or OpenGL texture FROM a CVPixelBuffer, but I cannot find a way to create a CVPixelBuffer from an existing OpenGL texture. My heart is not set on this method, as my ultimate goal is to simply convert from OpenGL to Metal, then back to OpenGL (ideally in an efficient way).
Has anyone else found a way to work with FxPlug with Metal? Is there a good way to convert from an OpenGL texture to Metal/CVPixelBuffer?

I have written an FxPlug that uses both OpenGL textures and Metal textures. The thing you're looking for is an IOSurface. An IOSurface is a shared block of image memory that can back textures in either Metal or OpenGL, though with some limitations. As such, if you already have a Metal or OpenGL texture, you must copy it into an IOSurface to use it with the other system.
To create an IOSurface you can either use CVPixelBuffers (by including kCVPixelBufferIOSurfacePropertiesKey in the pixel buffer's attributes) or you can create one directly using the IOSurface class defined in <IOSurface/IOSurfaceObjC.h>.
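For example, a minimal Objective-C sketch of the CVPixelBuffer route. The helper name is made up for illustration, and the 32BGRA pixel format is an assumption about your frames:

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

// Hedged sketch: create a CVPixelBuffer whose backing store is an IOSurface.
// The empty dictionary for kCVPixelBufferIOSurfacePropertiesKey requests
// default IOSurface backing.
static CVPixelBufferRef CreateSharedPixelBuffer(size_t width, size_t height)
{
    NSDictionary *attrs = @{
        (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{},
        (NSString *)kCVPixelBufferOpenGLCompatibilityKey : @YES,
        (NSString *)kCVPixelBufferMetalCompatibilityKey : @YES,
    };
    CVPixelBufferRef buffer = NULL;
    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                       kCVPixelFormatType_32BGRA,
                                       (__bridge CFDictionaryRef)attrs, &buffer);
    if (err != kCVReturnSuccess) return NULL;
    // The underlying surface is then available via CVPixelBufferGetIOSurface(buffer).
    return buffer;
}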
Once you have an IOSurface, you can copy your OpenGL texture into it by getting an OpenGL texture from the IOSurface via CGLTexImageIOSurface2D() (defined in <OpenGL/CGLIOSurface.h>). You then take that texture and use it as the backing texture for an FBO. You can, for example, draw a textured quad into it using the input FxTexture as the texture. Be sure to call glFlush() when done!
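A sketch of that wrapping step; the CGL context, surface, and sizes are assumed to come from your plugin state, and note that IOSurface-backed GL textures must use the rectangle texture target:

#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>
#include <OpenGL/CGLIOSurface.h>
#include <IOSurface/IOSurface.h>

// Hedged sketch: expose an IOSurface as an OpenGL texture (BGRA assumed).
static GLuint WrapIOSurface(CGLContextObj cglContext, IOSurfaceRef surface,
                            GLsizei width, GLsizei height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
    // Plane 0 of the IOSurface becomes the texture's storage.
    CGLTexImageIOSurface2D(cglContext, GL_TEXTURE_RECTANGLE_ARB, GL_RGBA,
                           width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                           surface, 0);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
    return tex;
}

You would then attach that texture to a framebuffer object (e.g. with glFramebufferTexture2DEXT() and the GL_TEXTURE_RECTANGLE_ARB target), draw the input FxTexture into it as a textured quad, and finish with glFlush().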
Next, take the IOSurface and create an MTLTexture from it via -[MTLDevice newTextureWithDescriptor:iosurface:plane:] (see the MTLDevice documentation). You'll want to create an output IOSurface to draw into and also create an MTLTexture from it. Do your Metal rendering into the output MTLTexture. Then take the output IOSurface and create an OpenGL texture out of it via CGLTexImageIOSurface2D(). Finally, copy that OpenGL texture into the output FxTexture, either by using it as the backing of a texture-backed FBO or with whatever other method you prefer.
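In Objective-C, that step might look roughly like this (a sketch; the BGRA pixel format and helper name are assumptions):

#import <Metal/Metal.h>
#import <IOSurface/IOSurface.h>

// Hedged sketch: wrap an existing IOSurface in an MTLTexture.
static id<MTLTexture> TextureFromSurface(id<MTLDevice> device, IOSurfaceRef surface)
{
    MTLTextureDescriptor *desc = [MTLTextureDescriptor
        texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                     width:IOSurfaceGetWidth(surface)
                                    height:IOSurfaceGetHeight(surface)
                                 mipmapped:NO];
    // Shader reads for the input surface, render target for the output surface.
    desc.usage = MTLTextureUsageShaderRead | MTLTextureUsageRenderTarget;
    return [device newTextureWithDescriptor:desc iosurface:surface plane:0];
}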
As you can see, the downside is that each render requires two copies: one of the input into an IOSurface, and one of the output IOSurface into the output texture the app gives you. The other downside is that this may all be moot: with Apple having publicly announced the end of OpenGL support, they are probably already working on a Metal-based solution, so doing it all yourself may turn out to be extra work. (The upside is that the same code can be reused in other host applications that only support OpenGL.)

Related

How to draw texts on a 3D objects (such as sphere)

I am learning OpenGL on Linux. Recently, I tried to use text created by glutBitmapCharacter() as the texture for some quadric objects provided by GLU or GLUT. However, glutBitmapCharacter() does not return a pointer, so I can't feed it to glTexImage2D(). I have googled it for quite a while, but all I found were topics related to the Android SDK, with which I have no experience.
All I can think of is to render the text, read it from the buffer using glReadPixels(), and save it to a file. Next, read the pixels back from the file into a pointer. Finally, draw the 3D objects with the text as the texture (i.e., feed the pointer to glTexImage2D()).
However, that's kind of silly. What I want to ask is: is there an alternative way to do this?
Applying text on top of a 3D surface is not trivial with pure OpenGL, and GLUT does not provide any tools for that. One option is to implement your own text rendering: load glyphs with FreeType, create a texture from the glyphs, and apply that texture to the polygons. Freetype-GL is a tiny helper library that would make this much easier.
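As a rough illustration of the plain-FreeType route (a sketch for a legacy GL context; the font path, glyph, and pixel size are placeholders):

#include <ft2build.h>
#include FT_FREETYPE_H
#include <GL/gl.h>

// Hedged sketch: rasterize one glyph with FreeType, upload it as an alpha texture.
GLuint MakeGlyphTexture(const char *fontPath, unsigned long ch, int pixelSize)
{
    FT_Library ft;
    FT_Face face;
    if (FT_Init_FreeType(&ft)) return 0;
    if (FT_New_Face(ft, fontPath, 0, &face)) return 0;
    FT_Set_Pixel_Sizes(face, 0, pixelSize);
    if (FT_Load_Char(face, ch, FT_LOAD_RENDER)) return 0;  // render to an 8-bit bitmap

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // glyph rows are tightly packed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA,
                 face->glyph->bitmap.width, face->glyph->bitmap.rows, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, face->glyph->bitmap.buffer);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    FT_Done_Face(face);
    FT_Done_FreeType(ft);
    return tex;
}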
Another option is again to load the text glyphs into a texture and then apply them as decals over the geometry. That way you can still simulate 2D text drawn on a flat surface (the decal) and then apply that on top of a 3D object.

Cuda and/or OpenGL for geometric image transformation

My question concerns the most efficient way of performing geometric image transformations on the GPU. The goal is essentially to remove lens distortion from acquired images in real time. I can think of several ways to do it, e.g. as a CUDA kernel (which would be preferable) doing an inverse transform lookup + interpolation, or the same in an OpenGL shader, or rendering a forward transformed mesh with the image texture mapped to it. It seems to me the last option could be the fastest, because the mesh can be subsampled, i.e. not every pixel offset needs to be stored but can be interpolated in the vertex shader. Also, the graphics pipeline really should be optimized for this. However, the rest of the image processing is probably going to be done with CUDA. If I want to use the OpenGL pipeline, do I need to start an OpenGL context and bring up a window to do the rendering, or can this be achieved anyway through the CUDA/OpenGL interop somehow? The aim is not to display the image; the processing will take place on a server, potentially with no display attached. I've heard OpenGL can crash if you bring up a window in this situation.
I'm quite new to GPU programming, any insights would be much appreciated.
Using the forward transformed mesh method is the more flexible and easier one to implement. Performance-wise, however, there is no big difference: the effective limit you're running into is memory bandwidth, and the amount of bandwidth consumed depends only on the size of your input image. Whether the transfer is triggered by a fragment shader fed by vertices or by a CUDA texture access doesn't matter.
If I want to use the OpenGL pipeline, do I need to start an OpenGL context and bring up a window to do the rendering,
On Windows: Yes, but the window can be an invisible one.
On GLX/X11 you need an X server running, but you can use a PBuffer instead of a window to get an OpenGL context.
In either case use a Framebuffer Object as the actual drawing destination. PBuffers may corrupt their primary framebuffer contents at any time. A Framebuffer Object is safe.
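For example, a minimal sketch of such an FBO render target, assuming a context is already current and the FBO entry points are loaded (e.g. via GLEW):

#include <GL/glew.h>

// Hedged sketch: a texture-backed FBO as a safe offscreen drawing destination.
GLuint MakeOffscreenTarget(int width, int height, GLuint *texOut)
{
    GLuint tex = 0, fbo = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);  // allocate storage only
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;  // incomplete FBO; check formats and attachments
    *texOut = tex;
    return fbo;  // all rendering while this FBO is bound goes into the texture
}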
or can this be achieved anyway through the CUDA/OpenGL interop somehow?
No, because CUDA/OpenGL interop is for making OpenGL and CUDA interoperate, not for making OpenGL work from within CUDA. CUDA/OpenGL interop helps you with the part you mentioned here:
However, the rest of the image processing is probably going to be done with CUDA.
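As a sketch of that hand-off, registering a finished GL texture so a CUDA kernel can read it (error checking omitted; the helper name is made up):

#include <GL/gl.h>
#include <cuda_gl_interop.h>

// Hedged sketch: let CUDA read a texture that OpenGL has rendered into.
// Register once per texture; map/unmap around each CUDA use.
cudaArray_t MapGLTexture(GLuint glTex, cudaGraphicsResource_t *resOut)
{
    cudaGraphicsGLRegisterImage(resOut, glTex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsReadOnly);
    cudaGraphicsMapResources(1, resOut, 0);
    cudaArray_t arr = NULL;
    cudaGraphicsSubResourceGetMappedArray(&arr, *resOut, 0, 0);
    // Bind arr to a CUDA texture object for the kernel; call
    // cudaGraphicsUnmapResources() before OpenGL touches the texture again.
    return arr;
}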
BTW, maybe OpenGL compute shaders (available since OpenGL 4.3) would work for you as well.
I've heard OpenGL can crash if you bring up a window in this situation.
OpenGL actually has no say in those things. It's just an API for drawing stuff onto a canvas (canvas = window, PBuffer, or Framebuffer Object), but it doesn't deal with actually getting a canvas onto the scaffolding, so to speak.
Technically, OpenGL doesn't care whether there's a window or not; that is up to the graphics system on which the OpenGL context is created. And unfortunately, none of the currently existing GPU graphics systems supports true headless operation. NVidia's latest Linux drivers may allow some crude hacks to set up a truly headless system, but I have never tried that.

Is it possible to have OpenGL draw on a memory surface?

I am starting to learn OpenGL and I was wondering if it is possible to have it draw on a video memory buffer that I've obtained through other libraries?
For drawing into video memory you can use framebuffer objects to render into OpenGL textures or renderbuffers (VRAM areas for offscreen rendering), as Stefan suggested.
When it comes to a VRAM buffer created by another library, it depends which library you are talking about. If that library also uses OpenGL under the hood, you need some insight into the library to get at that "buffer" (be it a texture, into which you can render directly using FBOs, or a GL buffer object, into which you can read rendered pixel data using PBOs).
If this library uses some other API to interface the GPU, there are not so many possibilities. If it uses OpenCL or CUDA, these APIs have functions to directly use their memory buffers or images as OpenGL buffers or textures, which you can then render into with the mentioned techniques.
If this library uses Direct3D under the hood, it gets a bit more difficult. But at least nVidia has an extension to directly use Direct3D 9 surfaces and textures as OpenGL buffers and textures, but I don't have any experience with this and neither do I know if this is widely supported.
You cannot have OpenGL draw directly to arbitrary memory; one reason is that in most implementations OpenGL drawing happens in video RAM, not system memory. You can, however, draw to an OpenGL offscreen context and then read the result back to any place in system memory. A web search for framebuffer objects (FBOs) should point you to documentation and tutorials.
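The read-back step itself is only a couple of calls (a sketch, assuming an RGBA target):

#include <GL/gl.h>
#include <vector>

// Hedged sketch: copy the current read framebuffer into system memory.
std::vector<unsigned char> ReadBackPixels(int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  // make no row-padding assumptions
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;  // rows are bottom-up, as OpenGL stores them
}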
If the memory you have is already in VRAM, for example decoded by hardware acceleration, then you might be able to draw to it directly, provided it is available as an OpenGL texture; then you can use render-to-texture techniques that save you transferring data to and from VRAM.

Render cairo surface directly to OpenGL texture

I'm using cairo (http://cairographics.org) in combination with an OpenGL based 3D graphics library.
I'm currently using the 3D library on Windows, but I'm hoping to receive an answer that is platform independent.
This is all done in c++.
I've got the straightforward approach working, which is to use cairo_image_surface_create in combination with glTexImage2D to get an OpenGL texture.
However, from what I've been able to gather from the documentation, cairo_image_surface_create uses a CPU-based renderer and writes the output to main memory.
I've come to understand that cairo has a newer OpenGL-based renderer which renders its output directly on the GPU, but I'm unable to find concrete details on how to use it.
(I've found some details on the glitz-renderer, but it seems to be deprecated and removed).
I've checked the surface list at: http://www.cairographics.org/manual/cairo-surfaces.html, but I feel I'm missing the obvious.
My question is: How do I create a cairo surface that renders directly to an OpenGL texture?
Do note: I'll need to be able to use the texture directly (without copying) to display the cairo output on screen.
As of 2015, Cairo GL SDL2 is probably the best way to use Cairo GL
https://github.com/cubicool/cairo-gl-sdl2
If you are on an OS like Ubuntu, where cairo is not compiled with GL, you will need to compile your own copy and let cairo-gl-sdl2 know where it is.
The glitz renderer has been replaced by the experimental cairo-gl backend.
You'll find a mention of it in:
http://cairographics.org/OpenGL/
Can't say if it's stable enough to be used though.
Once you have a GL backend working, you can render into a Framebuffer Object to draw directly into a given texture.
I did it using GL_BGRA:

int tex_w = cairo_image_surface_get_width(surface);
int tex_h = cairo_image_surface_get_height(surface);
unsigned char* data = cairo_image_surface_get_data(surface);

then, when you create the texture, do:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_w, tex_h, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);

Use glTexSubImage2D(...) to update the texture when the image content changes. For speed, set the filters for the texture to GL_NEAREST.
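A sketch of that update path, reusing the names from the snippet above:

// Hedged sketch: push cairo's latest output into the existing texture.
cairo_surface_flush(surface);       // make sure cairo's writes are finished
glBindTexture(GL_TEXTURE_2D, tex);  // tex is the texture created above
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex_w, tex_h,
                GL_BGRA, GL_UNSIGNED_BYTE, cairo_image_surface_get_data(surface));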

OpenGL into PNG

I'm trying to convert an OpenGL [edit: "card that I drew"(?) :) thx unwind] containing a lot of textures (nothing moving) into one PNG file that I can use in another part of the framework I'm working with. Is there a C++ library that does that?
thanks!
If you simply mean "take a scene rendered by OpenGL and save it as an image," then it is fairly straightforward. You need to read the scene with glReadPixels(), and then convert that data to an image format such as PNG (http://www.opengl.org/resources/faq/technical/miscellaneous.htm).
There are also more efficient ways of achieving this, such as using FBOs. Instead of rendering the scene directly into the framebuffer, you can render it to a texture via an FBO, then render that texture as a full-screen quad. You can then take this texture and save it to a file (using glGetTexImage, for example).
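For instance, a hedged sketch of that save step, pairing glGetTexImage with the single-header stb_image_write library (any PNG writer would do; the row flip accounts for OpenGL's bottom-up origin):

#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"   // https://github.com/nothings/stb
#include <GL/gl.h>
#include <algorithm>
#include <vector>

// Hedged sketch: fetch a texture's pixels and encode them as a PNG file.
bool SaveTexturePng(const char *path, GLuint tex, int width, int height)
{
    std::vector<unsigned char> pixels(width * 3 * height), flipped(pixels.size());
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
    for (int y = 0; y < height; ++y)  // flip rows: GL is bottom-up, PNG is top-down
        std::copy(pixels.begin() + y * width * 3,
                  pixels.begin() + (y + 1) * width * 3,
                  flipped.begin() + (height - 1 - y) * width * 3);
    return stbi_write_png(path, width, height, 3, flipped.data(), width * 3) != 0;
}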
What is an "OpenGL file"? OpenGL is a graphics API, it doesn't specify any file formats. Do you mean a DDS file, or something?
There are better ways to make a composite texture than drawing the pieces with the graphics card. This is really something you would want to do beforehand on the CPU, store, and then use as and when you need it with OpenGL.