Xlib's `Window` vs. GLX's `GLXWindow` confusion

I'm looking through various example codes for OpenGL context creation with GLX, and I'm confused about the two types of window objects: the Window used by Xlib and the GLXWindow used by GLX. Some examples use Xlib's windows directly for rendering, while others additionally create GLX's windows.
My questions are:
What's the difference between those two?
Do I need to create both, or will just a Window do?
The GLX 1.4 documentation (is there anything more recent?) tells me to create the GLXWindow using glXCreateWindow, which takes a Window as its third parameter. I'm not sure whether this Window is supposed to be a parent window for the GLXWindow, or whether the Window gets "wrapped" by the GLXWindow; the documentation seems unclear on this. Does anyone know how this is supposed to work and the rationale behind it? And why do some examples use GLXWindows while others don't, yet both still seem to work just fine?
Another thing that confuses me is that some examples use glXCreateContext to create their OpenGL contexts, while others (including the GLX 1.4 specification) use glXCreateNewContext. Both seem to be available in my version of the library, but I don't quite get the difference between them.
There's also glXMakeCurrent and glXMakeContextCurrent – another source of confusion for me.
Could anyone please explain the differences between these slightly differently spelled functions / types, or point me to some online resources where I could learn about this myself?
Oh, one more thing: is this stuff still relevant for modern OpenGL (the 3.0+, programmable-pipeline kind)? Or is it just some legacy junk I shouldn't be bashing my poor pony head against? :q

As I had never heard of GLXWindow, I became curious what it is. Most discussions do not clarify why it is necessary and offer only guesses, including this answer.
The first thing to clarify is that GLXWindow was introduced in GLX 1.3. Older versions, including GLX 1.2, define the set of GLXDrawable types as {Window, GLXPixmap}, with a GLXPixmap created from an off-screen X Pixmap. The specs do not clarify why GLXPixmap is needed on top of X Pixmap while an X Window can be used directly, but my guess is that something is missing from the X Pixmap definition which GLX needs to store within the GLXPixmap...
GLX 1.3 extended the definition of GLXDrawable to {GLXWindow, GLXPixmap, GLXPbuffer, Window}. The new item is GLXPbuffer, which was an attempt to improve off-screen rendering before Frame Buffer Objects (FBOs) were introduced into OpenGL itself. Unlike pixmaps and windows, a GLXPbuffer is created from scratch and has no counterpart in X. There is also the new GLXWindow, with the note:
For backwards compatibility with GLX versions 1.2 and earlier,
a rendering context can also be used to render into a Window.
So GLXWindow looks more like an attempt to unify the API (and internal logic) across all GLXDrawable types than a tool that fixes something, as at first glance it doesn't extend the capabilities of an X Window.
Does GLXWindow introduce anything new? Actually, it does! GLX 1.3 also introduced a new function, glXSelectEvent(), which allows GLX-specific events to be processed within the X11 event stream. This function works with GLXWindow and GLXPbuffer, but not with a plain X Window (it was created specifically to distinguish GLX events from normal ones). Confusingly, the only GLX-specific event defined by the specs is of type GLXPbufferClobberEvent, which seems mostly relevant to the obsolete PBuffers and window ancillary buffers (replaced by FBOs in today's core OpenGL specs).
Therefore, personally I see no practical reason for creating a GLXWindow instead of using the X Window itself, other than following the GLX spec's recommendation (which states that using an X Window directly is only for compatibility with applications written against older GLX versions) and sticking to the "cleaned up" API.
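For reference, here is a minimal sketch of the GLX 1.3 "wrapping" path the question asks about. The helper name wrap_x_window is made up, error checking is omitted, and it assumes the X window was created with a visual compatible with the chosen GLXFBConfig (GLXFBConfig itself is explained just below).

```c
#include <X11/Xlib.h>
#include <GL/glx.h>

/* Hypothetical helper: wrap an existing X Window into a GLXWindow. */
GLXWindow wrap_x_window(Display *dpy, Window xwin)
{
    static const int attribs[] = {
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_DOUBLEBUFFER,  True,
        None
    };

    int count = 0;
    GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy),
                                             attribs, &count);
    GLXFBConfig config = configs[0];   /* just take the first match */
    XFree(configs);

    /* The X Window is not reparented; it is "wrapped": the returned
     * GLXWindow is the GLXDrawable you pass to glXMakeContextCurrent()
     * and glXSwapBuffers(). */
    return glXCreateWindow(dpy, config, xwin, NULL);
}
```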
Concerning glXCreateNewContext() vs. glXCreateContext(): this is related to the introduction of GLXFBConfig, as the X Visual definition was found insufficient to represent all the necessary details. From the specs:
Calling glXCreateContext(dpy, visual, share list, direct) is
equivalent to calling glXCreateNewContext(dpy, config, render type,
share list, direct) where config is the GLXFBConfig identified by the
GLX_FBCONFIG_ID attribute of visual.
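In code, the two paths look roughly like this (a sketch only; the helper names create_legacy and create_new are made up, and error checking is omitted):

```c
#include <GL/glx.h>

/* GLX 1.2 style: the context is described by an X visual. */
GLXContext create_legacy(Display *dpy, XVisualInfo *visual)
{
    return glXCreateContext(dpy, visual, /*share_list*/ NULL, /*direct*/ True);
}

/* GLX 1.3 style: the context is described by a GLXFBConfig.
 * Per the spec, this is equivalent to the call above when "config" is the
 * GLXFBConfig identified by the GLX_FBCONFIG_ID attribute of "visual". */
GLXContext create_new(Display *dpy, GLXFBConfig config)
{
    return glXCreateNewContext(dpy, config, GLX_RGBA_TYPE,
                               /*share_list*/ NULL, /*direct*/ True);
}
```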
Concerning glXMakeCurrent() vs. glXMakeContextCurrent(): the latter, introduced by newer GLX versions, allows using different draw and read drawables, much like modern FBOs do, which is also clear from the specs:
Calling glXMakeCurrent(dpy, draw, ctx) is equivalent to calling
glXMakeContextCurrent(dpy, draw, draw, ctx). Note that draw will be
used for both the draw and read drawable.
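As a quick illustration (again just a sketch with a made-up helper name):

```c
#include <GL/glx.h>

/* Bind "ctx" so that rendering goes to draw_win while pixel reads
 * (e.g. glReadPixels) come from read_win. Passing the same drawable
 * twice is exactly what plain glXMakeCurrent() does for you. */
void bind_separate_drawables(Display *dpy, GLXContext ctx,
                             GLXWindow draw_win, GLXWindow read_win)
{
    glXMakeContextCurrent(dpy, draw_win, read_win, ctx);
}
```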
Concerning modern OpenGL 3+ usage, GLX is no problem here; just use GLX extensions like GLX_ARB_create_context_profile to create the context (the current Mesa implementation provides higher OpenGL versions only when a Core Profile is requested, but there is usually no such difference with proprietary drivers).
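A sketch of that extension-based path follows; create_core_context is a hypothetical helper, the requested version 3.3 is arbitrary, and a real application should first check glXQueryExtensionsString() for GLX_ARB_create_context_profile.

```c
#include <GL/glx.h>
/* The GLX_CONTEXT_*_ARB tokens come from GL/glxext.h, which recent
 * versions of GL/glx.h include automatically. */

typedef GLXContext (*CreateContextAttribsFn)(Display *, GLXFBConfig,
                                             GLXContext, Bool, const int *);

GLXContext create_core_context(Display *dpy, GLXFBConfig config)
{
    CreateContextAttribsFn create_with_attribs =
        (CreateContextAttribsFn)glXGetProcAddressARB(
            (const GLubyte *)"glXCreateContextAttribsARB");

    const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,   /* request GL 3.3 core */
        GLX_CONTEXT_MINOR_VERSION_ARB, 3,
        GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
        None
    };
    return create_with_attribs(dpy, config, NULL, True, attribs);
}
```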
Of course, you may consider using EGL instead (though it somewhat limits OpenGL context capabilities, as some options available in GLX are missing), or the Wayland/Mir stack if you are enthusiastic about the new display servers and want to drop X from your dependencies (Wayland implements an Xlib compatibility layer, so that is not a show stopper for now).

Related

Can I write an OpenGL application without binding it to a certain windowing library?

As I mentioned in a question before, I am trying to make a simple game engine in C++ with OpenGL.
I am currently using GLFW for creating the OpenGL context, and I chose it because I heard it's one of the faster ones out there. However, it doesn't support widgets, and I really don't want to write them myself. So I decided to get into Qt a bit, because it would allow me to have a pane for the render context and various handy bars, as well as all the fancy elements for editing a world map, setting OpenGL rules, etc.
I want to use GLFW on the exported version of that game, though. Is that possible without an abstraction layer of some kind?
Thanks in advance! :)
Yes, it is definitely possible; in fact, I'm writing a 3D engine that is not coupled to any windowing library and can be used with Qt, SDL or whatever.
Of course, you just have to wrap your regular GL calls into a higher-level layer; this requires that you don't call "SwapBuffers" inside your GL code.
If by "abstraction layer" you mean "inversion of control", so that you don't want to override a "Render/Update" method, that's exactly what I did. If by "abstraction layer" you mean you want to use GL directly, then that is still possible.
Basically, every windowing system has "some place" where you can make your GL calls (between MakeCurrent and SwapBuffers). Just read the documentation of your windowing system.
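As a rough illustration of that separation (not the answerer's actual engine; engine_render_frame is a made-up name): the engine exposes a function that contains only GL calls, and each windowing library's thin host layer decides when to make the context current and when to swap.

```c
#include <GL/gl.h>

/* Engine-side entry point: pure GL, no windowing calls, no buffer swap. */
void engine_render_frame(int width, int height)
{
    glViewport(0, 0, width, height);
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw the scene here ... */
}

/* Host-side outline (GLFW, Qt, SDL, ...), shown as pseudocode:
 *     make the window's GL context current;
 *     engine_render_frame(window_width, window_height);
 *     swap the window's buffers;
 * Only this thin host layer differs between windowing libraries. */
```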

Does OpenGL code work regardless of what input/window handler I use?

I'm following an OpenGL tutorial that uses a certain input/window handler (e.g. GLUT, GLFW), but due to platform issues I cannot use that handler. Can I use the exact code from the tutorial despite using a different input handler? Does the OpenGL code have to be modified to work with a different handler?
It will work.
OpenGL is completely agnostic of input. It is a graphics library, and as such, cares only about graphics. Everything else, including input, audio, and all else, is completely and utterly irrelevant.
The only difference for you is that toolkits like GLUT, GLFW, SFML, and others do the setup of an OpenGL context for you.
If you want to use another toolkit, that's fine, and it will probably also set up your context for you. You can also use OpenGL directly, in which case, you will need to create the context yourself, which will require calling into the WGL (Windows), AGL (Mac), GLX (X-Windows on *nix), or EGL (everything else) APIs.
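For instance, here is a rough sketch of doing it yourself through EGL (the helper name create_egl_context is hypothetical, error checking is omitted, and it assumes the EGL implementation exposes desktop OpenGL via EGL_OPENGL_BIT); the WGL and GLX routes look analogous.

```c
#include <EGL/egl.h>

/* Bring up an OpenGL context with EGL and a small pbuffer surface,
 * without any windowing toolkit involved. */
EGLContext create_egl_context(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,   /* desktop GL, if supported */
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n = 0;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    const EGLint pbuf_attribs[] = { EGL_WIDTH, 64, EGL_HEIGHT, 64, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, surf, surf, ctx);
    return ctx;
}
```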

DirectComposition render to texture?

I would like to have DirectComposition render to a texture. Is this possible?
The reason for this is that I would like to be able to render a GPU-accelerated, windowless, transparent Flash Player ActiveX control to a texture. That is usually not possible, but I hope to achieve it with DirectComposition.
It's unlikely that this is possible. To quote MSDN (emphasis mine):
DirectComposition does not offer any rasterization services. An application must use some other software-based or hardware-accelerated rasterization library such as Direct2D or Direct3D to populate the bitmaps that are to be composed. After composing, DirectComposition passes composed bitmap content to Desktop Window Manager (DWM) for rendering to the screen.
As far as I know there are only official APIs to share your offscreen surfaces with DWM, but no API allowing you to get read-access to a DWM surface.
What DWM does allow is redirecting HWND surfaces, so you can display the surfaces of other HWNDs in your own window. This can be done either through DirectComposition (via CreateSurfaceFromHwnd) or the DWM API (via DwmRegisterThumbnail). For an example of the latter, look here.
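A bare-bones sketch of the DwmRegisterThumbnail route might look like this (show_window_thumbnail is a made-up helper; note that it only displays the other window's surface, it does not give you access to its pixels):

```c
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

/* Show the surface of "source" inside "dest", scaled into the given rectangle. */
HRESULT show_window_thumbnail(HWND dest, HWND source, int width, int height)
{
    HTHUMBNAIL thumb = NULL;
    HRESULT hr = DwmRegisterThumbnail(dest, source, &thumb);
    if (FAILED(hr))
        return hr;

    RECT rc = { 0, 0, width, height };
    DWM_THUMBNAIL_PROPERTIES props = { 0 };
    props.dwFlags       = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE;
    props.rcDestination = rc;
    props.fVisible      = TRUE;
    return DwmUpdateThumbnailProperties(thumb, &props);
}
```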
If you want to go the "hacking route" as indicated in your comment, there are undocumented APIs which look like they can give you access to the DWM surfaces, in particular DwmpDxGetWindowSharedSurface sounds promising. Someone else already did some reverse engineering and figured out the signature, but couldn't get it to work (texture works but renders black). This guy seems to have had more luck and was able to render window textures in 3d. I don't understand his language but you seem to have to use DwmpDxUpdateWindowSharedSurface (also undocumented).
You should be aware, however, that using undocumented functions is not a good idea: Microsoft can change them at any time (even in service pack releases) or remove them completely, and since they are only used by Microsoft themselves, they have no reason to maintain compatibility. There is also a good chance that you will use them incorrectly (e.g. you might be missing necessary synchronization and cause random crashes, or worse).
However, since the functionality is actually available, there is hope that Microsoft may open it up for public use in some future version of Windows.

Cross Platform GUI - Rendering Process

I have been using a few cross-platform GUI libraries (such as FLTK, wxWidgets, GTK+), but I feel that none of them fulfils my needs, as I would like to create something that looks the same regardless of the platform (I understand there will be people against building GUIs that don't have a native look on each platform, but that's not the issue here).
To build my controls, I usually rely on the basic shapes provided by the library and work my way up, binding and coding everything together...
So I decided to give it a try and do some OpenGL for 2D GUI programming (as it would still be cross-platform). With that in mind, I couldn't help noticing that the applications I have written using wxWidgets and FLTK usually have an average RAM consumption of 1/2 MB, whereas a very basic OpenGL window with a simple background ranges from 6 to 9 MB.
This brings me to the actual question for this thread,
I thought that all rendering to the screen was done (under the covers) using either OpenGL or DirectX.
Could someone please explain, or link me to some article that could give me some insight into how these things actually work?
Thanks for reading!
These multi-platform toolkits usually support quite a lot of backends that do the actual drawing. Even though some of the toolkits support OpenGL as a backend, the default is usually the "native" one.
Take a look at Qt, for example. On Windows its native backend uses GDI for drawing; on Linux it uses XRender, I think; likewise on Symbian and Mac. Qt also has its own software rasterizer. And of course there is an OpenGL backend.
So why can an application using one of these GUI toolkits take less memory than a simple OpenGL application? If the toolkit uses the "native" backend, everything is already loaded into memory, because it is very likely that all visible GUIs use the same drawing API. The native APIs can also use a single buffer representing the whole screen, into which all applications draw.
When using OpenGL, however, you have your own buffer representing the application window. Not to mention that an OpenGL application usually has several additional buffers, such as a z-buffer, stencil buffer and back buffer, which are not essential for 2D drawing but still take up space (though probably space in graphics card memory). Finally, when using OpenGL, it is possible that the necessary libraries are not yet loaded.
Your question is exceedingly vague, but it seems like you're asking why your GL app takes up more memory than a basic GUI window.
It's because it's an OpenGL application. This means it has to store all of the machinery needed to make OpenGL work. It means it needs a hefty-sized framebuffer: back buffer, z-buffer, etc. It needs a lot of boilerplate to function.
Really, I wouldn't worry about it. It's something every application does.

Multi-monitor 3D Application

I've been challenged with a C++ 3D application project that will use 3 displays, each one rendering from a different camera.
Recently I learned about Ogre3D but it's not clear if it supports output of different cameras to different displays/GPUs.
Does anyone have any experience with a similar setup, using Ogre or another engine?
At least on most systems (e.g., Windows, MacOS) the windowing system creates a virtual desktop, with different monitors mapped to different parts of the desktop. If you want to, you can (for example) create one big window that will cover all three displays. If you set that window up to use OpenGL, almost anything that uses OpenGL (almost certainly including Ogre3D) will work just fine, though in some cases producing that much output resolution can tax the graphics card to the point that it's a bit slower than usual.
If you want to deal with a separate window on each display, things might be a bit more complex. OpenGL itself doesn't (even attempt to) define how to handle display in multiple windows; that's up to a platform-specific set of functions. On Windows, for example, you have a rendering context for each window and have to use wglMakeCurrent to pick which rendering context you draw to at any given time.
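A sketch of that per-window pattern (draw_all_windows is a made-up helper; it assumes the windows, device contexts and rendering contexts have already been created):

```c
#include <windows.h>
#include <GL/gl.h>

/* Render one frame into each window; wglMakeCurrent() selects which
 * window subsequent GL calls target. */
void draw_all_windows(HDC dcs[], HGLRC contexts[], int count)
{
    for (int i = 0; i < count; ++i) {
        wglMakeCurrent(dcs[i], contexts[i]);  /* route GL calls to window i */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... render this window's camera view ... */
        SwapBuffers(dcs[i]);
    }
    wglMakeCurrent(NULL, NULL);
}
```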
If memory serves, the Windows port of Ogre3D supports multiple rendering contexts, so this shouldn't be a problem either. I'd expect it can work with multiple windows on other systems as well, but I haven't used it on any other systems, so I can't say with any certainty.
My immediate guess, however, is that the triple monitor support will be almost inconsequential in your overall development effort. Of course, it does mean that you (can tell your boss) need a triple monitor setup for development and testing, which certainly isn't a bad thing! :-)
Edit: OpenGL itself doesn't specify anything about full-screen windows vs. normal windows. If memory serves, at least on Windows, to get a full-screen application you use ChangeDisplaySettings with CDS_FULLSCREEN. After that, it treats essentially the entire virtual desktop as a single window. I don't recall having done that with multiple monitors, though, so I can't say much with any great certainty.
There are several things to be said about multihead support in the case of OGRE3D. In my experience, a working solution is to use the source version of Ogre 1.6.1 and apply this patch.
Using this patch, users have managed to render an Ogre application on a 6 monitors configuration.
Personally, I've successfully applied this patch and used it with the StereoManager plugin to hook up Ogre applications to a 3D projector. I only used the Direct3D9 backend. The StereoManager plugin comes with a modified demo (Fresnel_Demo), which can help you set up your first multi-head application.
I should also add that the multihead patch is now part of the Ogre core as of version 1.7. Ogre 1.7 was recently released as an RC1, so this might be the quickest and easiest way to get it working.