What are the benefits of using OpenGL in SDL 2?

I assume that SDL 2 uses OpenGL rendering in the background (or perhaps DirectX if on Windows) and that this decision is made by SDL itself.
I have seen tutorials which show the use of OpenGL directly in SDL and wondered what benefit, if any, you would get from using OpenGL directly. Are there things which SDL would not be able to achieve natively?

If you completely rely on SDL functionality for graphics, you only have access to simple image and buffer functions.
Accelerated 2D render API:
Supports easy rotation, scaling and alpha blending,
all accelerated using modern 3D APIs
But SDL also provides an OpenGL context. This means you also have full access to OpenGL functionality, including 3D objects, shaders, and so on.
You can use SDL simply to create your context and provide you with sound, input, and file I/O, and then use OpenGL to bring color to the screen, or you can use the SDL video API to draw sprites and images.
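To make the quoted feature list concrete, here is a rough sketch of that accelerated 2D render API in use. Error handling is omitted and "sprite.bmp" is a placeholder asset, so treat it as an illustration rather than a drop-in program:

    // Sketch of SDL's accelerated 2D render API: rotation and scaling via SDL_RenderCopyEx.
    #include <SDL.h>

    int main(int, char**)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window*   window   = SDL_CreateWindow("Sprite demo", SDL_WINDOWPOS_CENTERED,
                                                  SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

        SDL_Surface* surface = SDL_LoadBMP("sprite.bmp");    // placeholder asset
        SDL_Texture* sprite  = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);

        SDL_RenderClear(renderer);
        SDL_Rect dst = { 100, 100, 128, 128 };               // scaled destination rectangle
        SDL_RenderCopyEx(renderer, sprite, nullptr, &dst,
                         45.0, nullptr, SDL_FLIP_NONE);      // rotated by 45 degrees
        SDL_RenderPresent(renderer);
        SDL_Delay(2000);

        SDL_DestroyTexture(sprite);
        SDL_DestroyRenderer(renderer);
        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }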

http://wiki.libsdl.org/MigrationGuide
Simple 2D rendering API that can use Direct3D, OpenGL, OpenGL ES, or
software rendering behind the scenes
SDL2 just gives you an easy start with 2D graphics (and other matters), but you can't do "real 3D" with SDL alone (or am I missing something?).
I don't know what SDL does "behind the scenes", but if you use OpenGL (or another API like Direct3D) directly, you have full control over the code and the rendering process, and you aren't limited to the SDL graphics API.
I use SDL only for creating the window and graphics context and for using input devices like the mouse.
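A minimal sketch of that division of labour (SDL for the window, context, and input; plain OpenGL for the drawing) might look like this; the context version and clear colour are illustrative choices:

    // SDL creates the window and OpenGL context and handles input; OpenGL does the drawing.
    #include <SDL.h>
    #include <SDL_opengl.h>

    int main(int, char**)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);   // illustrative version choice
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);

        SDL_Window* window = SDL_CreateWindow("OpenGL via SDL", SDL_WINDOWPOS_CENTERED,
                                              SDL_WINDOWPOS_CENTERED, 640, 480,
                                              SDL_WINDOW_OPENGL);
        SDL_GLContext context = SDL_GL_CreateContext(window);

        bool running = true;
        while (running)
        {
            SDL_Event event;                        // SDL handles the input...
            while (SDL_PollEvent(&event))
                if (event.type == SDL_QUIT)
                    running = false;

            glClearColor(0.2f, 0.3f, 0.4f, 1.0f);   // ...raw OpenGL handles the rendering.
            glClear(GL_COLOR_BUFFER_BIT);
            SDL_GL_SwapWindow(window);
        }

        SDL_GL_DeleteContext(context);
        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }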

How to Convert Existing OpenGL Texture to Metal Texture

I am working on developing some FxPlug plugins for Motion and FCP X. Ultimately, I'd like to have them render in Metal as Apple is deprecating OpenGL.
I'm currently using CoreImage, and while I've been able to use the CoreImage functionality to do Metal processing outside of the FxPlug SDK, FxPlug only provides me the frame as an OpenGL texture. I've tried just passing this into the CoreImage filter, but I end up getting this error:
Cannot render image (with an input GL texture) using a metal-DG context.
After a bit of research, I found that I can supposedly use CVPixelBuffers to share textures between the two, but after trying to write code using this method for a while, I've come to believe it was intended as a way to WRITE (as in, create from scratch) to a shared buffer, not to convert between the two. While this may be incorrect, I cannot find a way to get the existing GL texture into a CVPixelBuffer.
TL;DR: I've found ways to get a resulting Metal or OpenGL texture FROM a CVPixelBuffer, but I cannot find a way to create a CVPixelBuffer from an existing OpenGL texture. My heart is not set on this method, as my ultimate goal is to simply convert from OpenGL to Metal, then back to OpenGL (ideally in an efficient way).
Has anyone else found a way to work with FxPlug with Metal? Is there a good way to convert from an OpenGL texture to Metal/CVPixelBuffer?
I have written an FxPlug that uses both OpenGL textures and Metal textures. The thing you're looking for is an IOSurface. IOSurfaces are buffers that can back textures in either Metal or OpenGL, though they have some limitations. As such, if you already have a Metal or OpenGL texture, you must copy it into an IOSurface to use it with the other system.
To create an IOSurface you can either use CVPixelBuffers (by including the kCVPixelBufferIOSurfacePropertiesKey) or you can directly create one using the IOSurface class defined in <IOSurface/IOSurfaceObjC.h>.
Once you have an IOSurface, you can copy your OpenGL texture into it by getting an OpenGL texture from the IOSurface via CGLTexImageIOSurface2D() (defined in <OpenGL/CGLIOSurface.h>). You then take that texture and use it as the backing texture for an FBO. You can, for example, draw a textured quad into it using the input FxTexture as the texture. Be sure to call glFlush() when done!
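As a hypothetical sketch of that step (the pixel format, sizes, texture target, and helper names are assumptions, and error handling is omitted), creating a BGRA IOSurface and wrapping it as an OpenGL rectangle texture might look roughly like this:

    // Hypothetical sketch: create a BGRA IOSurface and expose it as a GL rectangle texture (macOS).
    #include <cstdint>
    #include <IOSurface/IOSurface.h>
    #include <OpenGL/OpenGL.h>
    #include <OpenGL/CGLIOSurface.h>
    #include <OpenGL/gl.h>

    static IOSurfaceRef CreateBGRASurface(int width, int height)
    {
        int bytesPerElement = 4;
        uint32_t pixelFormat = 0x42475241;                // 'BGRA' (kCVPixelFormatType_32BGRA)

        CFNumberRef w   = CFNumberCreate(nullptr, kCFNumberIntType, &width);
        CFNumberRef h   = CFNumberCreate(nullptr, kCFNumberIntType, &height);
        CFNumberRef bpe = CFNumberCreate(nullptr, kCFNumberIntType, &bytesPerElement);
        CFNumberRef fmt = CFNumberCreate(nullptr, kCFNumberSInt32Type, &pixelFormat);

        const void* keys[]   = { kIOSurfaceWidth, kIOSurfaceHeight,
                                 kIOSurfaceBytesPerElement, kIOSurfacePixelFormat };
        const void* values[] = { w, h, bpe, fmt };
        CFDictionaryRef props = CFDictionaryCreate(nullptr, keys, values, 4,
                                                   &kCFTypeDictionaryKeyCallBacks,
                                                   &kCFTypeDictionaryValueCallBacks);
        IOSurfaceRef surface = IOSurfaceCreate(props);

        CFRelease(props); CFRelease(w); CFRelease(h); CFRelease(bpe); CFRelease(fmt);
        return surface;
    }

    // Hand the IOSurface to OpenGL as the storage of a rectangle texture.
    static GLuint TextureFromIOSurface(CGLContextObj cgl, IOSurfaceRef surface,
                                       GLsizei width, GLsizei height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
        CGLTexImageIOSurface2D(cgl, GL_TEXTURE_RECTANGLE_ARB, GL_RGBA,
                               width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                               surface, 0 /* plane */);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
        return tex;                                       // attach to an FBO and draw into it
    }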
Next take the IOSurface and create an MTLTexture from it via -[MTLDevice newTextureWithDescriptor:iosurface:plane:] (described here). You'll want to create an output IOSurface to draw into and also create an MTLTexture from it. Do your Metal rendering into the output MTLTexture. Next, take the output IOSurface and create an OpenGL texture out of it via CGLTexImageIOSurface2D(). Now copy that OpenGL texture into the output FxTexture either by using it as the backing of a texture-backed FBO or whatever other method you prefer.
As you can see, the downside of this is that each render requires 2 copies - 1 of the input into an IOSurface and 1 of the output IOSurface into the output texture the app gives you. The other downside is that this is probably all moot, as with Apple having announced publicly that they're ending support for OpenGL, they're probably working on a Metal-based solution already. It may be extra work to do it all yourself. (Though the upside is that you can use that same code in other host applications that only support OpenGL.)

How does Windows GDI work compared to DirectX/OpenGL?

I know that DirectX and OpenGL are both backed by the graphics card and you can access them when you need to, so all the rendering will be hardware accelerated.
I can't find anything about GDI, but I'm fairly sure it isn't accelerated by the graphics card.
How do those technologies differ, and what am I missing here?
The GDI library dates back to the early 90s and the first versions of Windows. It supports a mix of hardware-accelerated raster ops and software vector-graphics drawing. It is primarily for drawing 'presentation' graphics and text for GUIs. GDI+ was a C++ class-based wrapper for GDI, but is basically the same technology. Modern versions of Windows still support the legacy GDI APIs, but modern graphics cards don't implement the old fixed-function 'raster ops' hardware-accelerated paths--and it's not really accurate to even call the video cards of the early 90s "GPUs".
OpenGL and Direct3D are graphics APIs built around pixels, lines, and triangles with texturing. Modern versions of both use programmable shader stages instead of fixed-function hardware to implement transform & lighting, texture blending, etc. These enable 3D hardware acceleration, but by themselves do not support classic 'raster ops' or vector-based drawing like styled/wide lines, filled ellipses, etc. Direct2D is a vector graphics library that does these old "GDI-style" operations on top of Direct3D with a mix of software and hardware-accelerated paths, and is what is used by the modern Windows GUI in combination with DirectWrite which implements high-quality text rendering.
See Graphics APIs in Windows, Vector graphics, and Raster graphics.
In short, for modern Windows programming you should use Direct2D/DirectWrite for 'presentation graphics' perhaps via a wrapper like Win2D, and Direct3D for 2D raster "sprite" graphics or 3D graphics rendering (see DirectX Tool Kit).
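As a rough sketch of that Direct2D path (a hypothetical helper that draws an anti-aliased filled ellipse into an existing window; error handling and device-loss recovery are omitted):

    // Rough Direct2D sketch: 'GDI-style' vector drawing, hardware accelerated via Direct3D.
    #include <windows.h>
    #include <d2d1.h>
    #pragma comment(lib, "d2d1.lib")

    void DrawWithDirect2D(HWND hwnd, UINT width, UINT height)
    {
        ID2D1Factory* factory = nullptr;
        D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &factory);

        ID2D1HwndRenderTarget* target = nullptr;
        factory->CreateHwndRenderTarget(
            D2D1::RenderTargetProperties(),
            D2D1::HwndRenderTargetProperties(hwnd, D2D1::SizeU(width, height)),
            &target);

        ID2D1SolidColorBrush* brush = nullptr;
        target->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::CornflowerBlue), &brush);

        target->BeginDraw();
        target->Clear(D2D1::ColorF(D2D1::ColorF::White));
        // Filled, anti-aliased ellipse: the kind of vector primitive GDI used to provide.
        target->FillEllipse(D2D1::Ellipse(D2D1::Point2F(120.0f, 90.0f), 60.0f, 40.0f), brush);
        target->EndDraw();

        brush->Release();
        target->Release();
        factory->Release();
    }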
GDI is pure 2D image access and rendering (with very limited acceleration at the raster level, if not disabled). So GDI has no:
textures
shaders
buffers
depth/stencil buffers
transform matrices
gfx pipeline
The main differences are:
transparency is done by a specific color value (a color key) instead of an alpha channel (see the sketch after this list)
blending is possible only in per-channel add, xor, and copy modes
you can get direct pixel access
you get Windows font access and text-function support
window-related GDI calls should be made only from the window's main thread (WndProc)
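As a small sketch of the colour-key point above (a hypothetical helper; the magenta key colour and the already-loaded sprite bitmap are assumptions):

    // GDI colour-key transparency: no alpha channel, one colour value is simply skipped.
    // Link with msimg32.lib for TransparentBlt.
    #include <windows.h>

    void DrawSpriteColorKeyed(HDC target, HBITMAP sprite, int x, int y, int w, int h)
    {
        HDC memDC = CreateCompatibleDC(target);
        HBITMAP previous = (HBITMAP)SelectObject(memDC, sprite);

        TransparentBlt(target, x, y, w, h,    // destination rectangle
                       memDC, 0, 0, w, h,     // source rectangle
                       RGB(255, 0, 255));     // colour key: magenta source pixels are not copied

        SelectObject(memDC, previous);
        DeleteDC(memDC);
    }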

Offscreen rendering with OpenGL version 1.2

With newer versions of OpenGL I can do offscreen rendering using an FBO, read the pixels back from the FBO, and then do what I want with the data.
How could I do offscreen rendering with older versions of OpenGL (like OpenGL 1.2), which do not support FBOs?
The legacy (pre-FBO) mechanism for off-screen rendering in OpenGL is the pbuffer. On Windows, for example, pbuffers are defined by the WGL_ARB_pbuffer extension. They are also available on other platforms, like Mac, Android, etc.
Aside from the ancientness of pbuffers, there is a fundamental difference from FBOs: pbuffer support is an extension of the window-system interface of OpenGL, while FBO support is a feature of OpenGL itself.
One corollary of this is that rendering to a pbuffer happens in a separate context. Where for an FBO you simply create a new OpenGL framebuffer object and render to it using regular OpenGL calls within the same context, in the pbuffer case you set up a context that renders to an off-screen surface instead of a window surface. The off-screen surface is then the primary framebuffer of that context, and you do all rendering to the pbuffer in this separate context.
That being said, please think twice before using pbuffers. FBO support, at least in an extension form, has been around for a long time. I have a hard time imagining writing new software that needs to run on hardware old enough to not support FBOs.
FBOs are superior to pbuffers in almost every possible way. The main benefits are that they can be used without platform-specific APIs, and that they do not require additional contexts.
The only case supported by pbuffers that is not easily supported by FBOs is pure off-screen rendering where you do not want to create a window. For example, if you want to render images on a server that may not have a display, pbuffers are actually a useful mechanism.
Just use the FBO extension. FBOs are a driver feature, not a hardware feature, and even the oldest GPUs out there (and I'm talking no support for shaders, just the plain old fixed-function pipeline) can do FBOs. I've still got an old GeForce 2 (released around 2000) in use (in an old computer I use for data archaeology; it has ZIP drives, a parallel SCSI bus and so on), and even that one can do FBOs.
https://www.opengl.org/registry/specs/ARB/framebuffer_object.txt
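A minimal sketch of that extension in use, assuming the GL_EXT_framebuffer_object entry points have already been loaded (e.g. through GLEW) and a context is current:

    // Off-screen rendering through GL_EXT_framebuffer_object; extension loading and
    // error handling are assumed to be in place.
    #include <GL/glew.h>
    #include <vector>

    std::vector<unsigned char> RenderOffscreen(int width, int height)
    {
        GLuint color = 0, fbo = 0;

        // Colour attachment: an ordinary 2D texture.
        glGenTextures(1, &color);
        glBindTexture(GL_TEXTURE_2D, color);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        // Framebuffer object via the EXT entry points (available on very old drivers).
        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, color, 0);

        std::vector<unsigned char> pixels(width * height * 4);
        if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT)
        {
            glViewport(0, 0, width, height);
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            // ... draw the scene here ...
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        }

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        glDeleteFramebuffersEXT(1, &fbo);
        glDeleteTextures(1, &color);
        return pixels;
    }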

Cuda and/or OpenGL for geometric image transformation

My question concerns the most efficient way of performing geometric image transformations on the GPU. The goal is essentially to remove lens distortion from acquired images in real time. I can think of several ways to do it, e.g. as a CUDA kernel (which would be preferable) doing an inverse transform lookup + interpolation, or the same in an OpenGL shader, or rendering a forward-transformed mesh with the image texture mapped to it. It seems to me the last option could be the fastest because the mesh can be subsampled, i.e. not every pixel offset needs to be stored but can be interpolated in the vertex shader. Also, the graphics pipeline really should be optimized for this. However, the rest of the image processing is probably going to be done with CUDA. If I want to use the OpenGL pipeline, do I need to start an OpenGL context and bring up a window to do the rendering, or can this be achieved anyway through the CUDA/OpenGL interop somehow? The aim is not to display the image; the processing will take place on a server, potentially with no display attached. I've heard this could crash OpenGL when bringing up a window.
I'm quite new to GPU programming, any insights would be much appreciated.
Using the forward-transformed mesh method is the more flexible and easier one to implement. Performance-wise, however, there's no big difference, as the effective limit you're running into is memory bandwidth, and the amount of memory bandwidth consumed depends only on the size of your input image. Whether it's a fragment shader fed by vertices or a CUDA texture access that causes the transfer doesn't matter.
If I want to use the OpenGL pipeline, do I need to start an OpenGL context and bring up a window to do the rendering,
On Windows: Yes, but the window can be an invisible one.
On GLX/X11 you need an X server running, but you can use a PBuffer instead of a window to get an OpenGL context.
In either case use a Framebuffer Object as the actual drawing destination. PBuffers may corrupt their primary framebuffer contents at any time. A Framebuffer Object is safe.
or can this be achieved anyway through the CUDA/OpenGL interop somehow?
No, because CUDA/OpenGL interop is for making OpenGL and CUDA interoperate, not for making OpenGL work from CUDA. CUDA/OpenGL interop helps you with the part you mentioned here:
However, the rest of the image processing is probably going to be done with CUDA.
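For instance, a hypothetical sketch of that interop direction (a finished OpenGL texture handed over so CUDA can do the remaining processing; the function name and flags are illustrative):

    // CUDA/OpenGL interop: map an existing OpenGL texture so CUDA can read it.
    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>
    #include <GL/gl.h>

    void ProcessGLTextureWithCuda(GLuint glTexture)
    {
        // Register the OpenGL texture with CUDA (normally done once, not every frame).
        cudaGraphicsResource* resource = nullptr;
        cudaGraphicsGLRegisterImage(&resource, glTexture, GL_TEXTURE_2D,
                                    cudaGraphicsRegisterFlagsReadOnly);

        // Map it for this frame and fetch the underlying CUDA array.
        cudaGraphicsMapResources(1, &resource);
        cudaArray_t array = nullptr;
        cudaGraphicsSubResourceGetMappedArray(&array, resource, 0, 0);

        // ... bind 'array' to a texture/surface object and launch the processing kernels ...

        cudaGraphicsUnmapResources(1, &resource);
        cudaGraphicsUnregisterResource(resource);
    }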
BTW, maybe OpenGL compute shaders (available since OpenGL 4.3) would work for you as well.
I've heard this could crash OpenGL if bringing up a window.
OpenGL actually has no say in those things. It's just an API for drawing stuff on a canvas (canvas = window, PBuffer, or Framebuffer Object), but it doesn't deal with actually getting a canvas onto the scaffolding, so to speak.
Technically, OpenGL doesn't care if there's a window or not. What matters is the graphics system on which the OpenGL context is created. And unfortunately none of the currently existing GPU graphics systems supports true headless operation. NVidia's latest Linux drivers may allow some crude hacks to set up a truly headless system, but I've never tried that so far.

Does it make sense to use OpenGL instead of SDL blitting functions?

I'm writing a simple 2d game using SDL and I was wondering, what if I just apply a bitmap texture to rectangles with OpenGL and use them instead of sprites, eliminating any calls to SDL_BlitSurface? Will my application become faster as a result of OpenGL hardware acceleration?
P.S.: My application will be running in windowed mode (not fullscreen), if that's important.
Yes, in general, that should be faster and more scalable (you can draw more sprites before you start getting a performance hit).
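For reference, the "bitmap texture on a rectangle" approach boils down to something like this in legacy immediate-mode OpenGL; 'tex' is assumed to be a texture already uploaded from the sprite's pixels, with an orthographic projection set up elsewhere:

    // Draw one sprite as a textured quad; conceptually replaces an SDL_BlitSurface call.
    #include <GL/gl.h>

    void DrawSprite(GLuint tex, float x, float y, float w, float h)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex);
        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
            glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
            glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
        glEnd();
    }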