How to detect that Indirect GLX is needed but disabled? - opengl

For a few years now, indirect GLX (IGLX) has been disabled by default in xorg and other X Servers. I'm writing an application that will use OpenGL if available, but can fall back to other graphics if it is not. Is there a standard way to detect that it is going to fail, other than trying it and responding to the errors?
My current test (written about 20 years ago) just checks if XOpenDisplay and glXQueryExtension work, but that's not sufficient: things fail later when calling glXCreateContext and other functions.
I'd prefer not to try to open a window and check for success, because at the time I want to do the test I don't know if the user is going to need one. My preference is to do an invisible test at startup so I can warn the user that they're going to be using the backup graphics methods.

Creating an OpenGL context with GLX doesn't require a window. Neither glXCreateContext nor glXCreateNewContext takes a drawable parameter. And even if they did, you can create a window without ever mapping it, i.e. without making it visible or triggering any action from the window manager.
In X11, creating windows is a rather cheap operation, especially if the initial size of the window is tiny (e.g. 1×1 pixels, since X11 rejects zero-sized windows) and the window is never mapped. You can still perform the whole range of X11 and GLX operations on it.
The upshot of all of this is that, to test whether the needed OpenGL capabilities are available, the usual approach is to actually attempt to create a window and an OpenGL context with the desired attributes and see if this succeeds.
Since the X11 resources used for probing never have to be mapped, this will not create any visible output; and unless something is constantly polling the X server for the window tree, not even the window manager will take notice (window managers only react to windows being mapped).
Of course, to keep things cheap and fast, such a test should be programmed directly against X11/Xlib, without any toolkit in between. (Since GLX is specified against Xlib, you'll have to use Xlib for at least that part anyway, even if the rest of your program uses XCB.)
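A minimal probe along those lines might look like the sketch below. It assumes only Xlib and GLX; the attribute list is just an example, and a production version would also install a temporary X error handler (XSetErrorHandler), since some GLX failures arrive as asynchronous X protocol errors rather than NULL return values. The glXIsDirect check treats an indirect-only context as "not good enough", which matches the situation in the question; drop it if indirect rendering would be acceptable.

    // Hypothetical probe: can a direct OpenGL context actually be created and
    // made current, without ever mapping a window?
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    bool probe_glx()
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return false;

        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };  // example attributes
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi) { XCloseDisplay(dpy); return false; }

        GLXContext ctx = glXCreateContext(dpy, vi, NULL, True /* ask for direct */);
        if (!ctx) { XFree(vi); XCloseDisplay(dpy); return false; }

        // A tiny InputOutput window that is never mapped, only used for MakeCurrent.
        XSetWindowAttributes swa;
        swa.border_pixel = 0;
        swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                       vi->visual, AllocNone);
        Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 1, 1, 0,
                                   vi->depth, InputOutput, vi->visual,
                                   CWBorderPixel | CWColormap, &swa);

        // If only indirect GLX would be available but it is disabled on the server,
        // one of these steps fails (possibly via an asynchronous X error).
        bool ok = glXMakeCurrent(dpy, win, ctx) && glXIsDirect(dpy, ctx);

        glXMakeCurrent(dpy, None, NULL);
        glXDestroyContext(dpy, ctx);
        XDestroyWindow(dpy, win);
        XFreeColormap(dpy, swa.colormap);
        XFree(vi);
        XCloseDisplay(dpy);
        return ok;
    }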

Related

Get image data of known process

For image processing I want to get all pixel information from a given process.
Concretely, it's for testing an image hashing algorithm for identifying Hearthstone cards, so I need to get a screenshot of the given process.
How can I solve this on Windows?
My idea so far:
Get the process name.
Get the process ID.
Get the window handle.
I have no idea how to go further from this point.
I hope it's understandable what I want to achieve.
Unfortunately, there is no general method for getting the pixels of a particular window that I would be aware of. Depending on how the target application draws itself, this task can be very simple or very complicated. If we were talking about an application that uses good old GDI, then you could just get yourself an HDC to the window via GetWindowDC() and BitBlt/StretchBlt the content over into a bitmap of your own.
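For reference, a rough sketch of that GDI path (a hypothetical helper with minimal error handling; it captures the whole window surface, frame included, into a 32-bit top-down DIB):

    // Hypothetical sketch: copy a GDI-drawn window's surface into a 32-bit DIB.
    #include <windows.h>
    #include <vector>

    std::vector<BYTE> CaptureGdiWindow(HWND hwnd, int& width, int& height)
    {
        RECT rc;
        GetWindowRect(hwnd, &rc);
        width  = rc.right - rc.left;
        height = rc.bottom - rc.top;

        HDC wndDC = GetWindowDC(hwnd);          // DC covering the whole window (frame included)
        HDC memDC = CreateCompatibleDC(wndDC);  // memory DC to copy into
        HBITMAP bmp = CreateCompatibleBitmap(wndDC, width, height);
        HGDIOBJ old = SelectObject(memDC, bmp);

        BitBlt(memDC, 0, 0, width, height, wndDC, 0, 0, SRCCOPY);
        SelectObject(memDC, old);               // deselect the bitmap before GetDIBits

        // Read the pixels back as 32-bit BGRA, top-down rows.
        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = -height;  // negative height = top-down
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
        GetDIBits(memDC, bmp, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);

        DeleteObject(bmp);
        DeleteDC(memDC);
        ReleaseDC(hwnd, wndDC);
        return pixels;
    }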
Unfortunately, the target application in your case appears to be a game. Games typically use 3D graphics APIs like Direct3D or OpenGL for drawing. Assuming that you cannot simply modify the target application to just send the desired data over to you out of its own free will, the only way to specifically record output from such applications that I'm aware of is to hook into the graphics API and capture the data from underneath the API. This can be done. However, implementing such a system is quite involved. There might be existing libraries to aid with writing such applications, but I don't know any that I could recommend here. If you don't have to capture the game content in real-time, you could just use a screen recording application to, e.g., record a video and then use that video as input for your algorithm. There are also graphics debugging tools like NSight Graphics or RenderDoc that you could use. Be aware that games, particularly online games, these days often have cheat protection systems that are likely to get very angry at you if you attempt to hook into the game…
Apart from all that, one alternative approach might be to use DXGI Output Duplication to just capture the entire desktop. While you won't be able to target one specific application (as far as I know), this would potentially have several advantages: First of all, it's only moderately complex to set up compared to a fully-fledged API-hook-based approach. Second, it should work regardless of what API the target application uses and even if the application is in fullscreen mode. Third, since you will have the data delivered straight from the operating system, you shouldn't have any issues with cheat protection. You can use MonitorFromWindow() to get the monitor your target window appears on and then enumerate all outputs of all DXGI adapters to find the one that corresponds to that HMONITOR…
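As a sketch of just the lookup step (a hypothetical helper; the actual capture would then be done with IDXGIOutput1::DuplicateOutput and a D3D11 device), matching the window's HMONITOR against the DXGI outputs might look roughly like this:

    // Hypothetical sketch: find the DXGI output corresponding to the monitor a
    // given window is displayed on (the starting point for output duplication).
    #include <windows.h>
    #include <dxgi1_2.h>
    #include <wrl/client.h>
    #pragma comment(lib, "dxgi.lib")

    using Microsoft::WRL::ComPtr;

    ComPtr<IDXGIOutput1> FindOutputForWindow(HWND hwnd)
    {
        HMONITOR target = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);

        ComPtr<IDXGIFactory1> factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
            return nullptr;

        ComPtr<IDXGIAdapter1> adapter;
        for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a) {
            ComPtr<IDXGIOutput> output;
            for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o) {
                DXGI_OUTPUT_DESC desc;
                if (SUCCEEDED(output->GetDesc(&desc)) && desc.Monitor == target) {
                    ComPtr<IDXGIOutput1> output1;
                    output->As(&output1);   // IDXGIOutput1 exposes DuplicateOutput()
                    return output1;
                }
            }
        }
        return nullptr;
    }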

Opengl Context Loss

So, interestingly enough, I have never had an OpenGL context lost (where all buffer resources are wiped) until now. I currently am using OpenGL 4.2, via SDL 1.2 and GLEW on Win7 64; also, my application is windowed without the ability to switch to fullscreen while running (only allowed on start up).
On my dev machine the context never seems to be lost on resize, but on other machines my application can lose the OpenGL context (it seems rare). Due to memory constraints (I have a lot of memory being used by other parts of the application) I do not back up my GL buffer contents (VBOs, FBOs, textures, etc.) in system memory; oddly this hasn't been a problem for me in the past because the context never got wiped.
It's hard to discern from googling under what circumstances an OpenGL context will be lost (where all GPU memory buffers are wiped), other than maybe toggling between fullscreen and windowed.
Back in my DX days, a lost context could happen for many reasons, and I would be notified when it happened and reload my buffers from system memory backups. I was under the assumption (and I was perhaps wrong in that assumption) that OpenGL (or a managing library like SDL) would handle this buffer reload for me. Is this in any way even partially true?
One of the issues I have is that losing the context on a resize is pretty darn inconvenient; I am using a lot of GPU memory, and having to reload everything could pause the app for a while (well, longer than I would like).
Is this a device dependent thing or driver dependent? Is it some combination of device, driver, and SDL version? How can a context loss like this be detected so that I can react to it?
Is it standard practice to keep system memory contents of all gl buffer contents, so that they may be reloaded on context loss? Or is a context loss rare enough that it isn't standard practice?
Context resets ("loss") in OpenGL are ordinarily handled behind the scenes, completely transparently. Practically nobody keeps GL resources around in application memory in order to handle a lost context, because unless you are using a fairly new extension (robust contexts) there is no way to ever know that a context reset occurred, let alone recover the lost state. The driver ordinarily does all of that for you, but with a robust context you can receive notifications of, and define behavior related to, context resets, as described under heading 2.6, "Graphics Reset Recovery".
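If you do create a robust context (e.g. by requesting the reset-notification strategy via the ARB_create_context_robustness attributes), the polling side is small. A sketch using the entry points GLEW exposes, with a hypothetical helper name:

    // Hypothetical sketch: polling for context resets with ARB_robustness (via GLEW).
    // Only meaningful if the context was created with a "lose context on reset"
    // notification strategy.
    #include <GL/glew.h>

    bool ContextWasReset()
    {
        if (!GLEW_ARB_robustness)
            return false;                 // without the extension there is nothing to query

        switch (glGetGraphicsResetStatusARB()) {
        case GL_NO_ERROR:
            return false;                 // everything is fine
        case GL_GUILTY_CONTEXT_RESET_ARB:     // this context caused the reset
        case GL_INNOCENT_CONTEXT_RESET_ARB:   // another context caused it
        case GL_UNKNOWN_CONTEXT_RESET_ARB:    // cause unknown
        default:
            return true;                  // context and its objects must be rebuilt
        }
    }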
But be aware that a lost context in OpenGL is very different from a lost context in D3D. In GL, a context reset occurs because some catastrophic error happened (e.g. a shader taking too long or a memory access violation), and reset notifications are most useful in something like WebGL, which has stricter security/reliability constraints than regular GL. In D3D you can lose your context simply by Alt-Tabbing or switching from windowed mode to fullscreen mode. In any event, I believe this is an SDL issue and not at all related to GL's notion of a context reset.
You're using SDL-1.2. With SDL-1.2 it's perfectly possible that the OpenGL context gets recreated (i.e. properly shut down and reinitialized) when the window gets resized. This is a known limitation of SDL and has been addressed in SDL-2.
So either use SDL-2 or use a different framework that's been tailored specifically for OpenGL, like GLFW.
Or is a context loss rare enough that it isn't standard practice?
OpenGL contexts are not "lost". They're deallocated, and that's what SDL-1.2 does under certain conditions.
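For completeness, a minimal SDL-2 sketch of the recommended setup, where the context is created once and only the viewport is adjusted on resize (window title and sizes here are arbitrary):

    // Minimal SDL2 sketch: the GL context is created once and is not torn down
    // when the window is resized (unlike SDL-1.2's SDL_SetVideoMode path).
    #include <SDL.h>
    #include <SDL_opengl.h>

    int main(int, char**)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window* win = SDL_CreateWindow("demo",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 800, 600,
            SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
        SDL_GLContext ctx = SDL_GL_CreateContext(win);

        bool running = true;
        while (running) {
            SDL_Event ev;
            while (SDL_PollEvent(&ev)) {
                if (ev.type == SDL_QUIT)
                    running = false;
                else if (ev.type == SDL_WINDOWEVENT &&
                         ev.window.event == SDL_WINDOWEVENT_SIZE_CHANGED)
                    glViewport(0, 0, ev.window.data1, ev.window.data2);  // just track the size
            }
            glClear(GL_COLOR_BUFFER_BIT);
            SDL_GL_SwapWindow(win);
        }

        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }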

Handle Alt Tab in fullscreen OpenGL application properly

When trying to implement a simple OpenGL application, I was surprised that while it is easy to find plenty of examples and documentation on advanced rendering stuff, the Win32 framework side is poorly documented, and even most samples and tutorials don't implement it properly even for basic cases, not to mention advanced stuff like multiple monitors. Despite several hours of searching I was unable to find a way that would handle Alt-Tab reliably.
How should OpenGL fullscreen application respond to Alt-Tab? Which messages should the app react to (WM_ACTIVATE, WM_ACTIVATEAPP)? What should the reaction be? (Change the display resolution, destroy / create the rendering context, or destroy / create some OpenGL resources?)
If the application uses some animation loop, suspend the loop, then just minimize the window. If it changed the display resolution and gamma, revert them to the settings that were in place before the change.
There's no need to destroy the OpenGL context or resources; OpenGL uses an abstract resource model: if another program requires RAM on the GPU or other resources, your program's resources will be swapped out transparently.
EDIT:
Changing the window visibility status may require resetting the OpenGL viewport, so it's a good idea to either call glViewport appropriately in the display/rendering function, or at least set it in the resize handler, followed by a complete redraw.
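Putting the above together as a rough Win32 sketch (the fullscreen mode, fullscreen flag and pause flag are hypothetical application globals; only the messages relevant here are shown):

    // Hypothetical sketch: the parts of a WndProc relevant to Alt-Tab handling
    // in a fullscreen OpenGL application.
    #include <windows.h>
    #include <GL/gl.h>

    extern DEVMODE g_fullscreenMode;   // mode originally passed to ChangeDisplaySettings
    extern bool    g_isFullscreen;
    extern bool    g_paused;

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_ACTIVATE:
            if (LOWORD(wParam) == WA_INACTIVE) {
                g_paused = true;                       // suspend the animation loop
                if (g_isFullscreen) {
                    ChangeDisplaySettings(NULL, 0);    // revert to the desktop mode
                    ShowWindow(hWnd, SW_MINIMIZE);
                }
            } else {
                g_paused = false;
                if (g_isFullscreen)
                    ChangeDisplaySettings(&g_fullscreenMode, CDS_FULLSCREEN);
            }
            return 0;

        case WM_SIZE:
            // The GL context and its resources stay valid; only the viewport
            // needs to track the new window size.
            glViewport(0, 0, LOWORD(lParam), HIWORD(lParam));
            return 0;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }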

Multi-monitor 3D Application

I've been challenged with a C++ 3D application project that will use 3 displays, each one rendering from a different camera.
Recently I learned about Ogre3D but it's not clear if it supports output of different cameras to different displays/GPUs.
Does anyone have any experience with a similar setup, using Ogre or another engine?
At least on most systems (e.g., Windows, MacOS) the windowing system creates a virtual desktop, with different monitors mapped to different parts of the desktop. If you want to, you can (for example) create one big window that will cover all three displays. If you set that window up to use OpenGL, almost anything that uses OpenGL (almost certainly including Ogre3D) will work just fine, though in some cases producing that much output resolution can tax the graphics card to the point that it's a bit slower than usual.
If you want to deal with a separate window on each display, things might be a bit more complex. OpenGL itself doesn't (even attempt to) define how to handle display in multiple windows -- that's up to a platform-specific set of functions. On Windows, for example, you have a rendering context for each window, and have to use wglMakeCurrent to pick which rendering context you draw to at any given time.
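As a rough sketch of that per-window bookkeeping (the ViewWindow struct is hypothetical; pixel-format setup and context creation are assumed to have happened already):

    // Hypothetical sketch: rendering to several windows, each with its own
    // device context and rendering context, selected via wglMakeCurrent.
    #include <windows.h>
    #include <GL/gl.h>
    #include <vector>

    struct ViewWindow {
        HWND  hwnd;
        HDC   hdc;   // obtained with GetDC(hwnd) after setting a pixel format
        HGLRC hglrc; // created with wglCreateContext(hdc)
    };

    void RenderAll(const std::vector<ViewWindow>& views)
    {
        for (const ViewWindow& v : views) {
            wglMakeCurrent(v.hdc, v.hglrc);      // select this window's context
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw this window's camera view ...
            SwapBuffers(v.hdc);
        }
        wglMakeCurrent(NULL, NULL);              // release the last context
    }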
If memory serves, the Windows port of Ogre3D supports multiple rendering contexts, so this shouldn't be a problem either. I'd expect it can work with multiple windows on other systems as well, but I haven't used it on any other systems, so I can't say with any certainty.
My immediate guess, however, is that the triple monitor support will be almost inconsequential in your overall development effort. Of course, it does mean that you (can tell your boss) need a triple monitor setup for development and testing, which certainly isn't a bad thing! :-)
Edit: OpenGL itself doesn't specify anything about full-screen windows vs. normal windows. If memory serves, at least on Windows, to get a full-screen application you use ChangeDisplaySettings with CDS_FULLSCREEN. After that, it treats essentially the entire virtual desktop as a single window. I don't recall having done that with multiple monitors though, so I can't say much with any great certainty.
There are several things to be said about multihead support in the case of OGRE3D. In my experience, a working solution is to use the source version of Ogre 1.6.1 and apply this patch.
Using this patch, users have managed to render an Ogre application on a six-monitor configuration.
Personally, I've successfully applied this patch and used it with the StereoManager plugin to hook up Ogre applications with a 3D projector. I only used the Direct3D9 backend. The StereoManager plugin comes with a modified demo (Fresnel_Demo), which can help you set up your first multihead application.
I should also add that the multihead patch is now part of the Ogre core as of version 1.7. Ogre 1.7 was recently released as an RC1, so this might be the quickest and easiest way to get it working.

How to enable VSYNC in OpenGL

The WGL_EXT_swap_control extension allows doing this on Windows, but I am unable to find anything even remotely cross-platform doing the same, i.e. syncing my buffer swaps with screen refresh. My application uses GLEW, so something offered by that would be preferable. Cross-platform support for Linux, Mac and Windows is necessary, but my application will not break if the sync cannot be set (e.g. the user has forced it off in his graphics drivers).
I will accept program code to do it on many platforms, with GLEW, as a valid answer.
There is a reason it's not easy to find a cross-platform solution: the platform ultimately owns the display (and the swapping behavior), so this is necessarily part of the platform API, if it is exposed at all. There can't really be a cross-platform solution; even GLEW has some platform-specific bits when it comes down to interaction with the platform.
Now you could argue that all the platforms should use the same API for that specific bit of their interface, but I doubt you'd get any traction from them.
Last, not all framebuffers are displayed directly. If you happen to be using a window management system that actually blends the framebuffer pixels into the desktop (like Aero does when active), then you don't get to control the swap behavior anyway.
For reference, the various APIs to do this on major platforms:
wglSwapIntervalEXT
glXSwapIntervalSGI
AGLSetInteger
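As a sketch of how those platform-specific calls are usually wrapped behind one small helper, using the entry points GLEW exposes on Windows and X11 and CGL on macOS (the function name is hypothetical, and drivers may still override whatever you request):

    // Hypothetical sketch: enable/disable vsync for the current context.
    #include <GL/glew.h>
    #if defined(_WIN32)
    #  include <windows.h>
    #  include <GL/wglew.h>
    #elif defined(__APPLE__)
    #  include <OpenGL/OpenGL.h>   // CGL
    #else
    #  include <GL/glxew.h>
    #endif

    // Returns false if the swap interval could not be set.
    bool SetVSync(bool enabled)
    {
        const int interval = enabled ? 1 : 0;
    #if defined(_WIN32)
        if (WGLEW_EXT_swap_control)
            return wglSwapIntervalEXT(interval) == TRUE;
    #elif defined(__APPLE__)
        GLint cglInterval = interval;
        return CGLSetParameter(CGLGetCurrentContext(), kCGLCPSwapInterval,
                               &cglInterval) == kCGLNoError;
    #else
        if (GLXEW_EXT_swap_control) {
            glXSwapIntervalEXT(glXGetCurrentDisplay(), glXGetCurrentDrawable(), interval);
            return true;
        }
        if (GLXEW_SGI_swap_control)
            return glXSwapIntervalSGI(interval) == 0;
    #endif
        return false;
    }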
From http://www.opengl.org/wiki/Swap_Interval
(and indirectly http://www.opengl.org/registry/specs/SGI/swap_control.txt):
In Linux, things are much simpler. If GLX_SGI_swap_control is present in the string returned by glGetString(GL_EXTENSIONS), then you can use glXSwapIntervalSGI(0) to disable vsync or glXSwapIntervalSGI(1) to enable vsync (aka vertical synchronization).
For OS X, check out http://developer.apple.com/library/mac/#documentation/Cocoa/Reference/ApplicationKit/Classes/NSOpenGLContext_Class/Reference/Reference.html
NSOpenGLCPSwapInterval
Sets or gets the swap interval. The swap interval is represented as one long. If the swap interval is set to 0 (the default), the flushBuffer method executes as soon as possible, without regard to the vertical refresh rate of the monitor. If the swap interval is set to 1, the buffers are swapped only during the vertical retrace of the monitor. Available in Mac OS X v10.0 and later.
Declared in NSOpenGL.h.