How does Windows GDI work compared to DirectX/OpenGL?

I know that DirectX and OpenGL are both built into a graphics card, and you can access them when you need to, so all the rendering will be hardware accelerated.
I can't find anything about GDI, but I'm sure it's not accelerated by the graphics card.
How do those technologies differ, and what am I missing here?

The GDI library dates back to the early 90s and the first versions of Windows. It supports a mix of hardware-accelerated raster ops and software vector graphics drawing. It is primarily for drawing 'presentation' graphics and text for GUIs. GDI+ was a C++ class-based wrapper for GDI, but is basically the same technology. Modern versions of Windows still support the legacy GDI APIs, but modern graphics cards don't implement the old fixed-function 'raster ops' hardware-accelerated paths--it's not really accurate to even call the video cards of the early 90s "GPUs".
OpenGL and Direct3D are graphics APIs built around pixels, lines, and triangles with texturing. Modern versions of both use programmable shader stages instead of fixed-function hardware to implement transform & lighting, texture blending, etc. These enable 3D hardware acceleration, but by themselves do not support classic 'raster ops' or vector-based drawing like styled/wide lines, filled ellipses, etc. Direct2D is a vector graphics library that does these old "GDI-style" operations on top of Direct3D with a mix of software and hardware-accelerated paths, and is what is used by the modern Windows GUI in combination with DirectWrite which implements high-quality text rendering.
See Graphics APIs in Windows, Vector graphics, and Raster graphics.
In short, for modern Windows programming you should use Direct2D/DirectWrite for 'presentation graphics', perhaps via a wrapper like Win2D, and Direct3D for 2D raster "sprite" graphics or 3D graphics rendering (see DirectX Tool Kit).

GDI is pure 2D image access and rendering (with very limited acceleration at the raster level, if not disabled). So in GDI there are no:
textures
shaders
buffers
depth/stencil buffers
transform matrices
graphics pipeline
The differences are:
transparency is done by a specific color value (color keying) instead of an alpha channel
blending is possible only in per-channel add, xor, copy modes
you get direct pixel access
you get access to Windows fonts and the text functions
window-related GDI calls should be made only from the window's main thread (the one running its WndProc)
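Two of those points — color-key transparency and raster-op blending — can be illustrated CPU-side. This is a conceptual sketch in plain C, not the actual GDI API (the real calls would be TransparentBlt, and BitBlt with ROP codes such as SRCCOPY, SRCPAINT, SRCINVERT); blit_colorkey and blit_rop are made-up names:

```c
#include <stdint.h>

/* Conceptual model of a GDI-style transparent blit: pixels that exactly
 * match the color key are skipped; there is no alpha-weighted blend. */
void blit_colorkey(uint32_t *dst, const uint32_t *src, int n, uint32_t key)
{
    for (int i = 0; i < n; ++i)
        if (src[i] != key)          /* exact color match = transparent */
            dst[i] = src[i];
}

/* GDI-style blending is limited to whole-pixel raster operations that
 * combine source and destination bitwise, e.g.: */
void blit_rop(uint32_t *dst, const uint32_t *src, int n, int rop)
{
    for (int i = 0; i < n; ++i) {
        switch (rop) {
        case 0: dst[i]  = src[i]; break;   /* like SRCCOPY   */
        case 1: dst[i] |= src[i]; break;   /* like SRCPAINT  */
        case 2: dst[i] ^= src[i]; break;   /* like SRCINVERT */
        }
    }
}
```

Note that these modes combine whole pixel values; there is no per-pixel alpha weighting as in modern APIs.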

Related

Offscreen rendering with OpenGL version 1.2

With new versions of OpenGL I can do offscreen rendering using an FBO, read the pixels back from the FBO, and then do what I want with the data.
How could I do offscreen rendering with older versions of OpenGL (like OpenGL 1.2) which do not support FBOs?
The legacy (pre-FBO) mechanism for off-screen rendering in OpenGL is the pbuffer. On Windows, for example, pbuffers are defined by the WGL_ARB_pbuffer extension. They are also available on other platforms, like Mac, Android, etc.
Aside from the ancientness of pbuffers, there is a fundamental difference to FBOs: pbuffer support is an extension of the window system interface of OpenGL, while FBO support is a feature of OpenGL itself.
One corollary of this is that rendering to a pbuffer happens in a separate context. With an FBO, you simply create a new OpenGL framebuffer object and render to it using regular OpenGL calls within the same context; the pbuffer case is different. There you set up a context that renders to an off-screen surface instead of a window surface. The off-screen surface is then the primary framebuffer of that context, and you do all rendering to the pbuffer in this separate context.
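For comparison, the FBO path needs no extra context. With a current OpenGL context, the render-to-texture setup looks roughly like this (sketch only, error handling omitted; the 512x512 size is arbitrary):

```c
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    /* ordinary GL draw calls now render into `tex` */
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* back to the window's framebuffer */
```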
That being said, please think twice before using pbuffers. FBO support, at least in an extension form, has been around for a long time. I have a hard time imagining writing new software that needs to run on hardware old enough to not support FBOs.
FBOs are superior to pbuffers in almost every possible way. The main benefits are that they can be used without platform-specific APIs and that they do not require additional contexts.
The only case supported by pbuffers that is not easily supported by FBOs is pure off-screen rendering where you do not want to create a window. For example, if you want to render images on a server that may not have a display, pbuffers are actually a useful mechanism.
Just use the FBO extension. FBOs are a driver feature, not a hardware feature, and even very old GPUs (and I'm talking no support for shaders, just the plain old fixed-function pipeline) can do FBOs. I've still got an old GeForce 2 (which came out in 2000) in use (in an old computer I use for data archeology; it has ZIP drives, a parallel SCSI bus and so on), and even that one can do FBOs.
https://www.opengl.org/registry/specs/ARB/framebuffer_object.txt

What are the benefits of using OpenGL in SDL 2?

I assume that SDL 2 uses OpenGL rendering in the background (or perhaps DirectX if on Windows) and that this decision is made by SDL itself.
I have seen tutorials which show the use of OpenGL directly in SDL and wondered what benefit, if any, you would get from using OpenGL directly. Are there things which SDL would not be able to achieve natively?
If you completely rely on SDL functionality for graphics, you only have access to simple image and buffer functions.
Accelerated 2D render API:
Supports easy rotation, scaling and alpha blending,
all accelerated using modern 3D APIs
But SDL also provides an OpenGL context. This means you also have full access to OpenGL functionality, including 3D objects, shaders, etc.
You can use SDL simply to create your context and provide you with sound, input, and file I/O, and use OpenGL to bring color to the screen, or use the SDL video API to draw sprites and images.
http://wiki.libsdl.org/MigrationGuide
Simple 2D rendering API that can use Direct3D, OpenGL, OpenGL ES, or software rendering behind the scenes
SDL2 just gives you an easy start with 2D graphics (and other matters), but you can't do "real 3D" with SDL alone (or am I missing something?).
I don't know what SDL does "behind the scenes", but if you use OpenGL (or another API like Direct3D) directly,
you have full control over the code and the rendering process, and you aren't limited to the SDL graphics API.
I use SDL only for creating the window and the graphics context, and for using input devices like the mouse.
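That division of labor — SDL for the window, context, and input; raw OpenGL for the drawing — looks roughly like this (sketch only, error checking omitted; assumes SDL2 and OpenGL headers are available):

```c
SDL_Init(SDL_INIT_VIDEO);
SDL_Window *win = SDL_CreateWindow("demo",
    SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
    800, 600, SDL_WINDOW_OPENGL);
SDL_GLContext ctx = SDL_GL_CreateContext(win);

for (;;) {                              /* main loop */
    SDL_Event ev;
    while (SDL_PollEvent(&ev))
        if (ev.type == SDL_QUIT) goto done;

    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);   /* raw OpenGL from here on */
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapWindow(win);             /* SDL handles the platform swap */
}
done:
SDL_GL_DeleteContext(ctx);
SDL_DestroyWindow(win);
SDL_Quit();
```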

2D Graphics Acceleration

I have noticed that some modern mobile hardware supports 2D hardware-accelerated bitBLT operations. Are there any fairly standard libraries that allow an application developer to take advantage of 2D hardware acceleration? (Something that is for 2D what OpenGL is for 3D would be nice.)
Edit: I know that OpenGL can be used for 2D operations, but when one uses OpenGL, one uses the 3D hardware block and not the 2D hardware block. I am looking for an API that makes direct use of the 2D hardware block.
OpenGL can be used not only for 3D but for 2D too. So the answer is: OpenGL.
Almost every game on the market (including 2D games) uses OpenGL for rendering. You might want to take a look at Cocos2d-x. It is a cross-platform C++ game engine that uses OpenGL for 2D graphics.
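"2D in OpenGL" mostly means an orthographic projection plus textured quads. As a sketch, a glOrtho(0, w, h, 0, -1, 1)-equivalent matrix that maps pixel coordinates onto OpenGL clip space can be built like this (ortho2d is a hypothetical helper, column-major as OpenGL expects):

```c
/* Build a glOrtho-equivalent matrix mapping pixel coordinates
 * (0,0) = top-left .. (w,h) = bottom-right onto the [-1,1] clip cube.
 * Column-major layout, as OpenGL expects. */
void ortho2d(float m[16], float w, float h)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / w;   /* x: [0,w] -> [-1,1]                    */
    m[5]  = -2.0f / h;   /* y: [0,h] -> [1,-1] (y-down pixels)    */
    m[10] = -1.0f;       /* z: identity-ish, near=-1, far=1       */
    m[12] = -1.0f;       /* x translation                         */
    m[13] =  1.0f;       /* y translation                         */
    m[15] =  1.0f;
}
```

Loading this with glLoadMatrixf (or passing it to a shader) lets you draw sprites directly in pixel coordinates.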

Is it possible to have OpenGL draw on a memory surface?

I am starting to learn OpenGL and I was wondering if it is possible to have it draw on a video memory buffer that I've obtained through other libraries?
For drawing into video memory you can use framebuffer objects to draw into OpenGL textures or renderbuffers (VRAM areas for offscreen rendering), as Stefan suggested.
When it comes to a VRAM buffer created by another library, it depends which library you are talking about. If this library also uses OpenGL under the hood, you need some insight into the library to get at that "buffer" (be it a texture, into which you can render directly using FBOs, or a GL buffer object, into which you can read rendered pixel data using PBOs).
If this library uses some other API to interface the GPU, there are not so many possibilities. If it uses OpenCL or CUDA, these APIs have functions to directly use their memory buffers or images as OpenGL buffers or textures, which you can then render into with the mentioned techniques.
If this library uses Direct3D under the hood, it gets a bit more difficult. But at least NVIDIA has an extension to directly use Direct3D 9 surfaces and textures as OpenGL buffers and textures, though I don't have any experience with it, nor do I know whether it is widely supported.
You cannot let OpenGL draw directly to arbitrary memory, one reason is that in most implementations OpenGL drawing happens in video RAM, not system memory. You can however draw to an OpenGL offscreen context and then read back the result to any place in system memory. A web search for framebuffer objects (FBOs) should point you to documentation and tutorials.
If the memory you have is already in VRAM, for example decoded by hardware acceleration, then you might be able to draw to it directly if it is available as an OpenGL texture - then you can use some render to texture techniques that will save you transferring data from and to VRAM.
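The readback path mentioned above is short. Assuming a current context and a finished offscreen render (the 512x512 RGBA size is just an example), it amounts to:

```c
/* after rendering to the offscreen framebuffer: */
unsigned char *pixels = malloc(512 * 512 * 4);
glPixelStorei(GL_PACK_ALIGNMENT, 1);      /* tightly packed rows */
glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* `pixels` is now ordinary system memory; copy it wherever you need */
```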

Can OpenGL rendering be used for 3D monitors?

We have been considering buying a 3D-ready LCD monitor along with a machine with a 3D-stereoscopic vision capable graphics card (ATI Radeon 5870 or better). This is for displaying some scientific data that we are rendering in 3D using OpenGL.
Now can we expect the GPU, Monitor and shutter-glasses to take care of the stereoscopic display or do we need to modify the rendering program?
If there are specific techniques for graphics programming for 3D stereoscopic displays, some tutorial links will be much appreciated.
Technically, OpenGL provides for stereoscopic display. The keyword is "quad-buffered stereo": you switch between eyes with glDrawBuffer(GL_BACK_LEFT); /* render left-eye view */ and glDrawBuffer(GL_BACK_RIGHT); /* render right-eye view */.
Unfortunately, quad-buffered stereo is considered a professional feature by GPU vendors, so you need either an NVIDIA Quadro or an AMD/ATI FireGL graphics card to get support for it.
Stereoscopic rendering is not done by tilting the eyes (toe-in), but by applying a "lens shift" to the projection matrix. The details are a bit tricky, though. I gave a talk about it last year, but apparently the link to my presentation slides is down.
The key diagram is this one (it's from the notes for my patch to Blender to directly support stereoscopic rendering).