How to enable VSYNC in OpenGL - C++

The WGL_EXT_swap_control extension allows doing this on Windows, but I am unable to find anything even remotely cross-platform that does the same, i.e. syncs my buffer swaps with the screen refresh. My application uses GLEW, so something offered by that would be preferable. Cross-platform support for Linux, Mac and Windows is necessary, but my application will not break if the sync cannot be set (e.g. the user has forced it off in their graphics drivers).
I will accept program code to do it on many platforms, with GLEW, as a valid answer.

There is a reason it's not easy to find a cross-platform solution. The platform ultimately owns the display (and the swapping behavior). So it necessarily is part of the platform API (if exposed). There can't really be a cross-platform solution. Even glew has some platform specific bits when it comes down to interaction with the platform.
Now you could argue that all the platforms should use the same API for that specific bit of their interface, but I doubt you'd get any traction from them.
Lastly, not all framebuffers are displayed directly. If you happen to be using a window management system that actually blends the framebuffer pixels onto the desktop (like Aero does when active), then you don't get to control the swap behavior anyway.
For reference, the various APIs to do this on the major platforms (a hedged sketch driving the first two via GLEW follows the list):
wglSwapIntervalEXT (Windows, WGL_EXT_swap_control)
glXSwapIntervalSGI (Linux/X11, GLX_SGI_swap_control)
AGLSetInteger (Mac OS X, Carbon AGL)
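
A minimal sketch of the GLEW route for the first two entries (the helper name EnableVSync is my own; call it after the context is current and glewInit() has succeeded). GLEW does not wrap the Mac call, so that branch is left out here; see the CGL note further down.

```cpp
// Hedged sketch: enable vsync via GLEW's WGL/GLX wrappers.
// Returns false instead of failing hard, so the app keeps running
// if the user or driver has forced vsync off.
#if defined(_WIN32)
  #include <windows.h>
#endif
#include <GL/glew.h>
#if defined(_WIN32)
  #include <GL/wglew.h>
#elif !defined(__APPLE__)
  #include <GL/glxew.h>
#endif

bool EnableVSync()
{
#if defined(_WIN32)
    if (WGLEW_EXT_swap_control)
        return wglSwapIntervalEXT(1) != 0;     // 1 = wait for vertical retrace
#elif !defined(__APPLE__)
    if (GLXEW_SGI_swap_control)
        return glXSwapIntervalSGI(1) == 0;     // zero return means success
#endif
    return false;   // extension unavailable; leave the driver default alone
}
```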

From http://www.opengl.org/wiki/Swap_Interval
(and indirectly http://www.opengl.org/registry/specs/SGI/swap_control.txt):
In Linux, things are much simpler. If GLX_SGI_swap_control is present in the string returned by glGetString(GL_EXTENSIONS), then you can use glXSwapIntervalSGI(0) to disable vsync or glXSwapIntervalSGI(1) to enable vsync (aka vertical synchronization).
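
If GLEW is not in the picture, the same Linux check can be done by hand roughly as below. This is a sketch under the assumption that a GLX context is already current; I query the GLX extension string via glXQueryExtensionsString rather than glGetString, since GLX extensions are reliably listed there.

```cpp
// Hedged sketch: runtime check for GLX_SGI_swap_control, then enable vsync.
#include <cstring>
#include <GL/glx.h>

bool EnableVSyncSGI()
{
    Display* dpy = glXGetCurrentDisplay();
    if (!dpy)
        return false;

    const char* ext = glXQueryExtensionsString(dpy, DefaultScreen(dpy));
    if (!ext || !std::strstr(ext, "GLX_SGI_swap_control"))
        return false;

    // The entry point is not part of core GLX, so resolve it at runtime.
    typedef int (*SwapIntervalFn)(int);
    SwapIntervalFn swapInterval = reinterpret_cast<SwapIntervalFn>(
        glXGetProcAddress(reinterpret_cast<const GLubyte*>("glXSwapIntervalSGI")));
    return swapInterval && swapInterval(1) == 0;   // 1 = sync with vertical retrace
}
```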

For OS X, check out http://developer.apple.com/library/mac/#documentation/Cocoa/Reference/ApplicationKit/Classes/NSOpenGLContext_Class/Reference/Reference.html
NSOpenGLCPSwapInterval
Sets or gets the swap interval. The swap interval is represented as one long. If the swap interval is set to 0 (the default), the flushBuffer method executes as soon as possible, without regard to the vertical refresh rate of the monitor. If the swap interval is set to 1, the buffers are swapped only during the vertical retrace of the monitor. Available in Mac OS X v10.0 and later.
Declared in NSOpenGL.h.
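
From plain C/C++ the same setting is reachable through CGL, whose kCGLCPSwapInterval is the lower-level counterpart of the NSOpenGLCPSwapInterval parameter quoted above. A hedged sketch, assuming a CGL/NSOpenGL context is current on the calling thread:

```cpp
// Hedged sketch: enable vsync on OS X via CGL.
#include <OpenGL/OpenGL.h>

bool EnableVSyncMac()
{
    GLint interval = 1;   // 0 = swap immediately, 1 = swap on vertical retrace
    return CGLSetParameter(CGLGetCurrentContext(), kCGLCPSwapInterval,
                           &interval) == kCGLNoError;
}
```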

How to detect that Indirect GLX is needed but disabled?

For a few years now, indirect GLX (IGLX) has been disabled by default in xorg and other X Servers. I'm writing an application that will use OpenGL if available, but can fall back to other graphics if it is not. Is there a standard way to detect that it is going to fail, other than trying it and responding to the errors?
My current test (written about 20 years ago) just checks if XOpenDisplay and glXQueryExtension work, but that's not sufficient: things fail later when calling glXCreateContext and other functions.
I'd prefer not to try to open a window and check for success, because at the time I want to do the test I don't know if the user is going to need one. My preference is to do an invisible test at startup so I can warn the user that they're going to be using the backup graphics methods.
Creating an OpenGL context with GLX doesn't require a window. Neither glXCreateContext nor glXCreateNewContext takes a drawable parameter. And even if they did, you can create a window without ever mapping it, i.e. without making it visible or triggering any action from the window manager.
In X11, creating a window is a rather cheap operation, especially if the initial size of the window is tiny (say 1×1; X11 does not allow zero-sized windows) and the window is never mapped. You can still perform the whole range of X11 and GLX operations.
The upshot of all this is that to test whether the OpenGL capabilities are available, the usual approach is to actually attempt to create a window and an OpenGL context with the desired attributes and see if this succeeds.
Since the X11 resources used for probing don't have to be mapped, this will not create any visible output; and short of constantly polling the X server for the window tree, not even a window manager will take notice (notification depends on mapping the window).
Of course, to keep things cheap and fast, such tests should be programmed directly against X11/Xlib, without any toolkits in between (GLX is written against Xlib, so even if Xcb is used you'll have to go through Xlib for at least that part, but you'd have to do that anyway).
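
A sketch of such a probe (the name ProbeGLX is my own, and error handling is kept minimal; the window is created at 1×1 and never mapped, so nothing becomes visible):

```cpp
// Hedged sketch: probe whether a working (direct) GLX context can be created,
// without mapping any window.
#include <X11/Xlib.h>
#include <GL/glx.h>

bool ProbeGLX()
{
    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) return false;

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { XCloseDisplay(dpy); return false; }

    Window root = RootWindow(dpy, vi->screen);
    Colormap cmap = XCreateColormap(dpy, root, vi->visual, AllocNone);
    XSetWindowAttributes swa{};
    swa.colormap = cmap;
    swa.border_pixel = 0;
    // 1x1 window that is never mapped: the window manager never sees it.
    Window win = XCreateWindow(dpy, root, 0, 0, 1, 1, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap | CWBorderPixel, &swa);

    bool ok = false;
    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True /* ask for direct */);
    if (ctx && glXMakeCurrent(dpy, win, ctx)) {
        // glXIsDirect() == False means we ended up on the indirect path,
        // which is exactly the often-disabled case the question is about.
        ok = glXIsDirect(dpy, ctx) == True;
        glXMakeCurrent(dpy, None, nullptr);
    }

    if (ctx) glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XFreeColormap(dpy, cmap);
    XFree(vi);
    XCloseDisplay(dpy);
    return ok;
}
```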

Draw Graphics w/o Desktop Environment C++?

Okay, this is a really strange question and I'm not sure how to phrase it, but I can't seem to find anything on it anywhere, most likely because I'm not using the correct terminology. Also, this may be operating system specific; if it is, I'm using Debian.
Basically, when you boot an older computer or a modern server computer, or stuff along those lines, they boot to a terminal screen. Where all you do is type stuff. And if you want to do anything graphically, you usually download a desktop environment.
But I'm wondering, how could I go about drawing graphics without a desktop environment?
I remember back on MS-DOS you could use QBASIC to change the screen mode and you could then draw colored lines onto the screen like that. It's probably much more complicated in C++, but I'd still like to be pointed in the right direction.
Sorry if this question is a bit unspecific, but I'd really like to be pointed in the right direction.
This is done by using a framebuffer console. You then use a framework/library that can draw on it, for example DirectFB. There are also some small libraries floating around, like libFB. I think SDL can also render to the framebuffer, though I've never tried it myself.
Then there are framebuffer versions of GUI toolkits like Gtk+ and Qt, if GUI widgets are what you want.
There's also SVGAlib, which talks to graphics cards directly, but it's outdated by now and not recommended. In general, you're looking for "Linux framebuffer graphics". That should get you a few starting points; a minimal sketch of drawing straight to the framebuffer device follows this answer.
To get a framebuffer console, you need to configure your kernel accordingly. Usually you enable a KMS driver for your graphics card, and also enable the KMS framebuffer. If there isn't a KMS driver for your card, you can use a generic VESA framebuffer console that works on most hardware (although, being just generic VESA, it is slow and unaccelerated).
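
For illustration, a hedged sketch of the raw framebuffer route (no X, no toolkit): it maps /dev/fb0 and fills the screen with one colour. It assumes a 32-bits-per-pixel mode and that you run it from a text console with permission to open the framebuffer device.

```cpp
// Hedged sketch: draw directly to the Linux framebuffer console.
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main()
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    fb_var_screeninfo vinfo{};
    fb_fix_screeninfo finfo{};
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   // resolution and bits per pixel
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   // bytes per scanline

    // This sketch only handles 32 bpp; real code should also cope with 16/24 bpp.
    std::size_t size = static_cast<std::size_t>(finfo.line_length) * vinfo.yres;
    auto* fb = static_cast<uint8_t*>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    for (uint32_t y = 0; y < vinfo.yres; ++y)           // fill the visible area blue
        for (uint32_t x = 0; x < vinfo.xres; ++x)
            *reinterpret_cast<uint32_t*>(fb + y * finfo.line_length + x * 4) = 0x000000FF;

    munmap(fb, size);
    close(fd);
    return 0;
}
```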
Commonly, a "desktop environment" (on Linux) is made of two parts: an X Window-style graphics "library" plus a window manager (GNOME, KDE, Xfce, ...). So, if I understand your question, you only have to set up an X Window system without a window manager.
On MS-DOS, you could write software which wrote to the screen, either by writing into a range of RAM that was shared with the video controller, or by calling a BIOS API.
A newer O/S (e.g. Windows) will prevent you from doing either of those: instead you call an O/S API, which calls an O/S-specific video device driver, which outputs to the hardware.
As I read it you're asking how to deal directly with the graphics hardware.
That depends on the hardware.
If you have an old PC at hand and want to experiment with it, then you need correspondingly old development software that can run on that hardware under the particular OS, i.e. some C compiler from those days running on MS-DOS. You may be able to do this in a "DOS box" in Windows (not a console window, but an emulation of the old PC). 64-bit Windows 7 does not support DOS boxes, but there is a free alternative called DOSBox.
Then, if you go that route, you can search the net for "graphics adapter", graphics modes, and so on.
Basically, with the old PC architecture and a program running under DOS, you used a DOS/BIOS service to switch the graphics mode, and then you accessed the graphics memory at a known address for that mode; a small sketch of that approach follows.
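
A hedged historical sketch, assuming a 16-bit real-mode DOS compiler (e.g. an old Borland/Turbo C++ running under DOSBox): BIOS mode 13h gives 320x200 in 256 colours, with video memory starting at segment A000h.

```cpp
// Hedged sketch: classic DOS mode 13h drawing (16-bit real mode only).
#include <dos.h>
#include <conio.h>

int main()
{
    union REGS regs;
    regs.x.ax = 0x0013;                 // AH=00h set video mode, AL=13h
    int86(0x10, &regs, &regs);

    unsigned char far* vram = (unsigned char far*)MK_FP(0xA000, 0x0000);
    for (int y = 0; y < 200; ++y)       // write pixels straight into video RAM
        for (int x = 0; x < 320; ++x)
            vram[y * 320 + x] = (unsigned char)(x ^ y);

    getch();                            // wait for a key press

    regs.x.ax = 0x0003;                 // restore 80x25 text mode
    int86(0x10, &regs, &regs);
    return 0;
}
```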
The curses (or ncurses) library is the old way of doing it in Unix flavours, although these days there is probably something better...

Capture window content to texture

Let me first specify my development essentials. I am writing a Windows DLL. The programming language I focus on is C/C++. Asm blocks are possible as well when required for my task, maybe even a driver, but I do not have any experience with drivers at all.
The DLL is being injected into a host process. That is always a DirectX environment, either DX9, DX10 or DX11, and it may run in fullscreen or windowed mode.
The method should support Windows XP up to Windows 7 and is compiled for x86 only.
The goal is to come up with a function that takes a screenshot of a given process window. The screenshot is never taken from the host process itself; it is always another process! The window may contain DirectX or GDI32 content. Maybe other content is possible that I am not thinking of at the moment (Windows Forms comes to mind; I am not sure how that is rendered internally). The windows may be minimized.
That screenshot needs to be accessible/convertible to a DirectX texture such as Texture2D, depending on the DirectX environment I am working in. Saving the screenshot as a PNG/BMP is enough though, as I do know how to create such a texture from memory.
I've already tried the old-style BitBlt way, but that didn't work on minimized applications. The minimized applications do get drawn when I send WM_PAINT messages to the target window, but that isn't a solution for me, as I also need to handle DirectX applications, which don't react to such messages.
Maybe I need to hook every single DirectX window to accomplish my task and access the backbuffer directly, but I hope for some better methods.
Because I take a lot of screenshots from multiple windows, I would like to implement a fast method that isn't such a CPU hog. Copying from video RAM may be a bad way to go given such performance needs.
I hope for some ideas, maybe code samples, as I am not familiar with all the possibilities I could go for. I've looked at the Windows thumbnail API, but that doesn't support XP from what I could read.
Thanks in advance,
Frank

Crossplatform screen grabbing with OpenGl is it possible and how to do it?

So I found this interesting file (a reference to it I found in here). It says:
Also check out glGrab which uses OpenGL to grab the screen and is very fast.
So I wonder: can we grab desktop screen frames via OpenGL on Windows and Linux using some OpenGL wrapper like SDL?
OpenGL can (easily, fast, and in a straightforward way) grab the front/back buffers that it owns and that you have a valid context for.
In other words: no.
The desktop is not owned by OpenGL. Under Windows, it is managed by the driver (pre-Vista) or by the compositing window manager (Vista/7). You'll need the BitBlt function there, which is neither portable nor fast.
Under Linux, the desktop may at least sometimes indeed be owned by OpenGL (compositing window managers), but you don't have a context handle for that.
If you can lessen your requirements from "desktop" to "my window's content", then it all becomes super easy. In the simplest case, it's one function call (see the sketch below), and if you want to do it asynchronously with DMA, it's 3-4 more.
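
For reference, a hedged sketch of that simplest case (the helper name GrabWindow is my own; width/height are assumed to be the current drawable's size, and rows come back bottom-up):

```cpp
// Hedged sketch: read back the current GL window's contents with glReadPixels.
#include <GL/gl.h>
#include <vector>

std::vector<unsigned char> GrabWindow(int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<std::size_t>(width) * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);    // tightly packed rows, no padding
    glReadBuffer(GL_BACK);                  // or GL_FRONT, depending on what you want
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}
```

The asynchronous variant alluded to above binds a GL_PIXEL_PACK_BUFFER (a PBO) first, issues glReadPixels with a null pointer so the driver can DMA into the buffer, and maps the buffer later once the transfer has completed.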

Multi-monitor 3D Application

I've been challenged with a C++ 3D application project that will use 3 displays, each one rendering from a different camera.
Recently I learned about Ogre3D but it's not clear if it supports output of different cameras to different displays/GPUs.
Does anyone have any experience with a similar setup using Ogre or another engine?
At least on most systems (e.g., Windows, MacOS) the windowing system creates a virtual desktop, with different monitors mapped to different parts of the desktop. If you want to, you can (for example) create one big window that will cover all three displays. If you set that window up to use OpenGL, almost anything that uses OpenGL (almost certainly including Ogre3D) will work just fine, though in some cases producing that much output resolution can tax the graphics card to the point that it's a bit slower than usual.
If you want to deal with a separate window on each display, things might be a bit more complex. OpenGL itself doesn't (even attempt to) define how to handle display in multiple windows -- that's up to a platform-specific set of functions. On Windows, for example, you have a rendering context for each window, and have to use wglMakeCurrent to pick which rendering context you draw to at any given time.
If memory serves, the Windows port of Ogre3D supports multiple rendering contexts, so this shouldn't be a problem either. I'd expect it can work with multiple windows on other systems as well, but I haven't used it on any other systems, so I can't say with any certainty.
My immediate guess, however, is that the triple monitor support will be almost inconsequential in your overall development effort. Of course, it does mean that you (can tell your boss) need a triple monitor setup for development and testing, which certainly isn't a bad thing! :-)
Edit: OpenGL itself doesn't specify anything about full-screen windows vs. normal windows. If memory serves, at least on Windows, to get a full-screen application you use ChangeDisplaySettings with CDS_FULLSCREEN. After that, it treats essentially the entire virtual desktop as a single window. I don't recall having done that with multiple monitors though, so I can't say much with any great certainty. A minimal sketch of the per-window approach with wglMakeCurrent follows.
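
For the separate-window case, a hedged Windows-only sketch of the pattern described above (context and window creation are omitted; the MonitorView struct is my own):

```cpp
// Hedged sketch: render to one window per monitor, switching contexts
// with wglMakeCurrent before drawing each view.
#include <windows.h>
#include <GL/gl.h>

struct MonitorView {
    HDC   hdc;     // device context of the window on one monitor
    HGLRC hglrc;   // GL rendering context created for that window
};

void RenderAll(MonitorView* views, int count)
{
    for (int i = 0; i < count; ++i) {
        wglMakeCurrent(views[i].hdc, views[i].hglrc);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... set up this display's camera and draw the scene here ...
        SwapBuffers(views[i].hdc);
    }
    wglMakeCurrent(nullptr, nullptr);   // release the last context
}
```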
There are several things to be said about multihead support in the case of OGRE3D. In my experience, a working solution is to use the source version of Ogre 1.6.1 and apply this patch.
Using this patch, users have managed to render an Ogre application on a six-monitor configuration.
Personally, I've successfully applied this patch and used it with the StereoManager plugin to hook up Ogre applications with a 3D projector. I only used the Direct3D9 backend. The StereoManager plugin comes with a modified demo (Fresnel_Demo), which can help you set up your first multihead application.
I should also add that the multihead patch is now part of the Ogre core as of version 1.7. Ogre 1.7 was recently released as an RC1, so this might be the quickest and easiest way to get it working.