glScissor behaving strangely in my code - opengl

A laptop user running an ATI Radeon HD 5800 Series just showed me a video of my 2D game running on his machine, and it would appear my glScissor code was not working as intended on it (despite working on so many other computers).
Whenever I want to restrict rendering to a certain rectangle on the screen, I call my SetClip function with either the particular rectangle, or with Rectangle.Empty to reset the scissor.
public void SetClip(Rectangle theRect)
{
    if (theRect.IsEmpty)
    {
        // An empty rectangle means "no clipping": turn the scissor test off.
        OpenGL.glDisable(OpenGL.EnableCapp.ScissorTest);
    }
    else
    {
        if (!OpenGL.glIsEnabled(OpenGL.EnableCapp.ScissorTest))
            OpenGL.glEnable(OpenGL.EnableCapp.ScissorTest);

        // glScissor expects the lower-left corner, so flip the Y coordinate
        // from the window's top-left origin.
        OpenGL.glScissor(theRect.Left, myWindow.clientSizeHeight - theRect.Bottom,
                         theRect.Width, theRect.Height);
        CheckError();
    }
}
Is this approach wrong? For instance, I have a feeling that glEnable / glDisable might require a glFlush or glFinish to guarantee they're executed in the order in which I call them.

I'd put my money on a driver bug. Your code looks fine.
The only thing I suggest is changing this:
if (!OpenGL.glIsEnabled(OpenGL.EnableCapp.ScissorTest))
    OpenGL.glEnable(OpenGL.EnableCapp.ScissorTest);
into a mere
OpenGL.glEnable(OpenGL.EnableCapp.ScissorTest);
Changing glEnable state is practically free, so doing that test is a micro-optimization. In fact, testing with glIsEnabled first probably costs as much as the overhead caused by redundantly setting the state. It also happens that some drivers are buggy in what they report through glIsEnabled, so I'd remove the check to cut out another potential error source.
I have a feeling that glEnable / glDisable might require a glFlush or glFinish to guarantee it's executed in the order in which I call them.
No, they don't. In fact, glFlush and glFinish are not required by most programs; if you're using a double-buffered context you don't need them at all.
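For illustration, a minimal C-style sketch of the simplified helper (the Rect type, the window_height parameter and the top-left coordinate convention are assumptions mirroring the wrapper above, not part of the original code):
#include <GL/gl.h>

typedef struct { int left, top, width, height; } Rect;   /* hypothetical rectangle type */

void set_clip(Rect rect, int window_height)
{
    if (rect.width == 0 || rect.height == 0) {
        glDisable(GL_SCISSOR_TEST);               /* empty rect: no clipping */
        return;
    }
    glEnable(GL_SCISSOR_TEST);                    /* redundant enables are cheap */
    glScissor(rect.left,
              window_height - (rect.top + rect.height),   /* flip Y to GL's lower-left origin */
              rect.width, rect.height);
}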

Related

intercept the opengl calls and make multi-viewpoints and multi-viewports

I want to create a layer between any OpenGL-based application and the original OpenGL library. It seamlessly intercepts the OpenGL calls made by the application, and either renders and sends images to the display or sends the OpenGL stream to a rendering cluster.
I have completed my opengl32.dll to replace the original library, but I don't know what to do next.
How do I convert OpenGL calls to images, and what is an OpenGL stream?
For an accurate description, visit the OpenGL Wrapper.
First and foremost, OpenGL is not a library. It's an API. The opengl32.dll you have on your system is a library that provides the API and acts as an anchoring point for the actual graphics driver to attach to programs.
Next, it's a terrible idea to intercept OpenGL calls and turn them into something different, like multiple viewports. It may work for the fixed-function pipeline, but as soon as shaders get involved it will break the program you hooked into. OpenGL is designed as an API to draw things to the screen; it's not a scene graph. Programs expect that when they make OpenGL calls they will produce an image in a pixel buffer according to their drawing commands. Now, if you hook into that process and wildly alter the outcome, any graphics algorithm that relies on the visual outcome of a previous rendering pass for its following steps will break. For example, any form of shadow mapping will be broken by what you do.
Also things like multiple viewport hacks will likely not work if the program does things like frustum culling internally, before making the actual OpenGL calls. Again this is because OpenGL is a drawing API, not a scene graph.
In the end, yes, you can hook into OpenGL, but whatever you do, you must make sure that the OpenGL calls made by the application get executed according to the specification. There is an authoritative OpenGL specification for a reason, namely that programs rely on it to have predictable results.
OpenGL almost certainly lets you do the things you want without crazy modifications to it. Multiple viewpoints can be achieved in your render function by doing the following:
glViewport(0, 0, window_width, window_height / 2);                  // view 1: bottom half of the window
// Do all of your rendering for the first camera.
glViewport(0, window_height / 2, window_width, window_height / 2);  // view 2: top half of the window
glMatrixMode(GL_MODELVIEW);
// Redo your modelview matrix for a different viewpoint here, then re-render it all.
It's as simple as rendering twice into two areas that you specify with glViewport; if you Google around you can find a more detailed tutorial. I highly recommend not messing with OpenGL itself, as a good deal of it is implemented by the graphics driver and hardware, and you should really just use what you're given. Chances are that if you're modifying it, you're doing it wrong; OpenGL probably already lets you do it in a far better way.
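As a rough, self-contained sketch of that idea (fixed-function GL; gluLookAt and draw_scene() are placeholders for the application's own camera and drawing code, and the projection matrix is assumed to be set up already):
#include <GL/gl.h>
#include <GL/glu.h>

void draw_scene(void);   /* the application's existing drawing code */

void render_two_views(int window_width, int window_height)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* bottom half of the window: camera 1 */
    glViewport(0, 0, window_width, window_height / 2);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 10.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
    draw_scene();

    /* top half of the window: camera 2, looking from a different position */
    glViewport(0, window_height / 2, window_width, window_height / 2);
    glLoadIdentity();
    gluLookAt(10.0, 0.0, 0.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
    draw_scene();
}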
Good luck!

double buffering with FBO+RBO and glFinish()

I am using an FBO+RBO, and instead of regular double buffering on the default framebuffer, I am drawing to the RBO and then blitting directly to the GL_FRONT buffer of the default framebuffer (0) in a single-buffered OpenGL context.
It is fine and I don't get any flickering, but if the scene gets a bit complex, I experience a HUGE drop in fps, something so weird that I knew something had to be wrong. And I don't mean going from 1/60 to 1/30 because of a skipped sync; I mean a sudden 90% fps drop.
I tried a glFlush() after the blit - no difference; then I tried a glFinish() after the blit, and I got a 10x fps boost.
So I went back to regular double buffering on the default framebuffer with swapbuffers(), and the fps got the same boost as when using glFinish().
I cannot figure out what is happening. Why does glFinish() make so much of a difference when it shouldn't? And is it OK to use an RBO and blit directly to the front buffer, instead of using a swapbuffers call in a double-buffered context? I know I'm missing vsync, but the compositor will sync anyway (in fact I'm not seeing any tearing); it is just as if the monitor is missing 9 out of 10 frames.
And just out of curiosity, does a native swapbuffers() use glFinish() on either Windows or Linux?
I believe it is a sync-related issue.
When rendering directly to the RBO and blitting to the front buffer, there is simply no sync whatsoever. Thus on complex scenes the GPU command queue fills quite quickly, then the CPU-side driver queue fills as well, until the driver is forced to sync the CPU during some OpenGL command. At that point the CPU thread is halted.
What I mean is that, without any form of sync, complex renderings (renderings for which one or more OpenGL commands have to be queued) will always cause the CPU thread to be halted at some point, since the CPU keeps issuing more and more commands while the queues fill.
In order to get a smooth (more constant) user interaction, a sync is needed (either with a platform-specific swapbuffers() or with a glFinish()), so that the CPU stops making things worse by issuing more and more commands (which in turn would bring the CPU thread to a stop later).
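As a concrete illustration of where that sync would go in the question's setup (a minimal sketch; fbo, width and height stand for the question's FBO handle and window size):
/* ... render the scene into the user FBO ... */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);   /* source: the offscreen RBO */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);     /* destination: default framebuffer */
glDrawBuffer(GL_FRONT);                        /* single-buffered front blit */
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glFinish();   /* the sync point: block the CPU until the GPU has caught up */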
reference: OpenGL Synchronization
There are several separate issues here, which are also a little bit connected.
1) Re-implementing double buffering yourself, while on spec the same thing, is not the same thing to the driver. Drivers are highly optimized for the common case. For example, many chips have distinct 2D and 3D units. The swap in SwapBuffers is often handled by the 2D unit, while blitting a buffer is probably still done with the 3D unit.
2) glFlush (and glFinish) are ignored by many drivers. Flush is a relic of client-server rendering. Finish was intended for profiling, but it got abused to work around driver bugs, so drivers now often ignore it to improve the performance of legacy code that used Finish as a workaround.
3) Just don't do single buffering. There is no performance benefit, and you are working off the "good" path of the driver. Window managers are heavily optimized for double-buffered OpenGL.
4) What you are seeing looks a lot like you are simply leaking resources. Do you allocate buffers without freeing them? A quick and dirty way to check is whether any glGen* functions return ever-increasing IDs.
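For point 4, a rough way to probe for that kind of leak (not part of the original answer; glGenBuffers is used here merely as one example of a glGen* call) is to generate and immediately delete a throwaway object each frame and watch the name it returns:
#include <stdio.h>
/* If 'probe' keeps growing frame after frame, buffer objects are being
   generated somewhere without ever being deleted. */
GLuint probe;
glGenBuffers(1, &probe);
printf("lowest free buffer name this frame: %u\n", probe);
glDeleteBuffers(1, &probe);   /* release the probe object again */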

How to do exactly one render per vertical sync (no repeating, no skipping)?

I'm trying to do vertical synced renders so that exactly one render is done per vertical sync, without skipping or repeating any frames. I would need this to work under Windows 7 and (in the future) Windows 8.
It would basically consist of drawing a sequence of QUADS that fit the screen, so that each pixel of the original images maps 1:1 to a pixel on the screen. The rendering part is not a problem, either with OpenGL or DirectX. The problem is the correct syncing.
I previously tried using OpenGL, with the WGL_EXT_swap_control extension, by drawing and then calling
SwapBuffers(g_hDC);
glFinish();
I tried all combinations and permutations of these two instructions along with glFlush(), and it was not reliable.
I then tried with Direct3D 10, by drawing and then calling
g_pSwapChain->Present(1, 0);
pOutput->WaitForVBlank();
where g_pSwapChain is a IDXGISwapChain* and pOutput is the IDXGIOutput* associated to that SwapChain.
Both versions, OpenGL and Direct3D, result in the same thing: the first sequence of, say, 60 frames doesn't last what it should (instead of about 1000ms at 60Hz, it lasts something like 1030 or 1050ms); the following ones seem to work fine (about 1000.40ms), but every now and then one seems to skip a frame. I do the measuring with QueryPerformanceCounter.
On Direct3D, trying a loop of just the WaitForVBlank, the duration of 1000 iterations is consistently 1000.40 with little variation.
So the trouble here is not knowing exactly when each of the called functions returns, and whether the swap is done during the vertical sync (not earlier, to avoid tearing).
Ideally (if I'm not mistaken), what I want would be to perform one render, wait until the sync starts, swap during the sync, and then wait until the sync is done. How can I do that with OpenGL or DirectX?
Edit:
A test loop of just WaitForVSync 60x takes consistently from 1000.30ms to 1000.50ms.
The same loop with Present(1,0) before WaitForVSync, with nothing else, no rendering, takes the same time, but sometimes it fails and takes 1017ms, as if a frame had been repeated. There's no rendering, so something is wrong here.
I have the same problem in DX11. I want to guarantee that my frame rendering code takes an exact multiple of the monitor's refresh interval, to avoid multi-buffering latency.
Just calling pSwapChain->Present(1, 0) is not sufficient. That will prevent tearing in fullscreen mode, but it does not wait for the vblank to happen. The Present call is asynchronous and returns right away if there are frame buffers remaining to be filled. So if your render code produces a new frame very quickly (say 10ms to render everything) and the user has set the driver's "Maximum pre-rendered frames" to 4, then you will be rendering four frames ahead of what the user sees. This means 4*16.7 = 67ms of latency between mouse action and screen response, which is unacceptable. Note that the driver's setting wins - even if your app asks for a maximum frame latency of 1 (via IDXGIDevice1::SetMaximumFrameLatency), you'll get 4 frames regardless. So the only way to guarantee no mouse lag regardless of the driver setting is for your render loop to voluntarily wait until the next vertical refresh interval, so that you never use those extra frame buffers.
IDXGIOutput::WaitForVBlank() is intended for this purpose. But it does not work! When I call the following
<render something in ~10ms>
pSwapChain->Present(1, 0);
pOutput->WaitForVBlank();
and I measure the time it takes for the WaitForVBlank() call to return, I see it alternate between roughly 6ms and 22ms.
How can that happen? How could WaitForVBlank() ever take longer than 16.7ms to complete? In DX9 we solved this problem by using GetRasterState() to implement our own, much more accurate version of WaitForVBlank, but that call was deprecated in DX11.
Is there any other way to guarantee that my frame is exactly aligned with the monitor's refresh rate? Is there another way to spy the current scanline like getRasterState used to do?
I previously tried using OpenGL, with the WGL_EXT_swap_control extension, by drawing and then calling
SwapBuffers(g_hDC);
glFinish();
That glFinish() or glFlush is superfluous. SwapBuffers implies a glFinish.
Could it be that in your graphics driver settings you have set "force V-Blank / V-Sync off"?
We currently use DX9 and want to switch to DX11. We use GetRasterState() to manually sync to the screen. That goes away in DX11, but I've found that creating a DirectDraw7 device doesn't seem to disrupt DX11. So just add this to your code and you should be able to get the scanline position.
IDirectDraw7* ddraw = nullptr;
// Create a legacy DirectDraw 7 object alongside the DX11 device.
DirectDrawCreateEx(NULL, reinterpret_cast<LPVOID*>(&ddraw), IID_IDirectDraw7, NULL);
// Ask which scanline the monitor is currently refreshing.
DWORD scanline = -1;
ddraw->GetScanLine(&scanline);
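A hedged usage note (assuming the ddraw object from the snippet above, ddraw.h included and ddraw.lib/dxguid.lib linked): besides polling GetScanLine yourself, IDirectDraw7 can also block the calling thread until the next vertical blank starts:
// Block until the next vertical blank begins, then render/present the frame
// that should land in this refresh interval.
ddraw->WaitForVerticalBlank(DDWAITVB_BLOCKBEGIN, NULL);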
On Windows 8.1 and Windows 10, you can make use of the DXGI 1.3 DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT. See MSDN. The sample there is for Windows 8 Store apps, but it should be adaptable to classic Win32 window swap chains as well.
You may find this video useful as well.
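A hedged sketch of that pattern (Windows 8.1+, <dxgi1_3.h>): swapChain is assumed to be an IDXGISwapChain1* created with DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT in its Flags, and running / RenderFrame() are placeholders for the application's loop and drawing code.
IDXGISwapChain2* swapChain2 = nullptr;
swapChain->QueryInterface(IID_PPV_ARGS(&swapChain2));
swapChain2->SetMaximumFrameLatency(1);                     // at most one queued frame
HANDLE frameWaitable = swapChain2->GetFrameLatencyWaitableObject();

while (running)
{
    // Block until the swap chain can accept a new frame; this keeps the render
    // loop locked to the presentation rate instead of racing ahead of it.
    WaitForSingleObjectEx(frameWaitable, 1000, TRUE);
    RenderFrame();                                         // app-specific drawing
    swapChain2->Present(1, 0);                             // vsynced present
}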
When creating a Direct3D device, set the PresentationInterval member of the D3DPRESENT_PARAMETERS structure to D3DPRESENT_INTERVAL_DEFAULT.
If you run in kernel mode or ring 0, you can attempt to read bit 3 of the VGA input status register (03BAh/03DAh). The information is quite old, and although it was hinted here that the bit might have changed location or been obsoleted in later versions of Windows 2000 and up, I actually doubt this. The second link has some very old source code that attempts to expose the vblank signal for old Windows versions. It no longer runs, but in theory rebuilding it with the latest Windows SDK should fix this.
The difficult part is building and registering a device driver that exposes this information reliably and then fetching it from your application.

Is double buffering needed any more

As today's cards seem to keep a list of render commands and flush only on a call to glFlush or glFinish, is double buffering really needed any more? An OpenGL game I am developing on Linux (ATI Mobility Radeon card) with SDL/OpenGL actually flickers less when SDL_GL_swapbuffers() is replaced by glFinish() and with SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0) in the init code. Is this a particularity of my card, or are such things likely on all cards?
EDIT: I've discovered that the cause of this is KWin. It appears that, as datenwolf said, compositing without sync was the cause. When I switched off KWin compositing, the game works fine without ANY source code patches.
Double buffering and glFinish are two very different things.
glFinish blocks the program, until all drawing operations are completed.
Double buffering is used to hide the rendering process from the user. Without double buffering, each and every single drawing operation would become visible immediately, assuming that the display refresh frequency is infinitely high. In practice you will get some display artifacts, like parts of the scene visible in one state, the rest not visible or in some other state, the picture could be incomplete, etc. Double buffering avoids this by first rendering into a back buffer, and only after the rendering has been finished swapping this back with the front buffer, that gets sent to the display device.
Nowadays compositing window management is becoming prevalent: Windows has Aero, MacOS X has Quartz Extreme, and on Linux at least Unity and the GNOME 3 shell use compositing if available. The point is: compositing technically introduces double buffering of its own: windows draw to offscreen buffers, and the final screen is composited from these. So if you're running on a machine with compositing, double buffering in your own program is somewhat redundant, and all it would take is some kind of synchronization mechanism to tell the compositor when the next frame is ready. MacOS X has this. X11 still lacks a proper synchronization scheme; see this post on the mailing list: http://lists.freedesktop.org/archives/xorg/2004-May/000607.html
TL;DR: Double buffering and glFinish are different things, and you need double buffering (of some sort) to make things look good.
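For the SDL 1.2 setup in the question, the conventional double-buffered path looks roughly like this (window size and loop structure are placeholders):
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

int main(int argc, char** argv)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);    /* request a back buffer */
    SDL_SetVideoMode(800, 600, 0, SDL_OPENGL);
    for (;;) {
        /* ... handle events and draw the frame into the back buffer ... */
        SDL_GL_SwapBuffers();                       /* present; no explicit glFinish needed */
    }
    return 0;
}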
I would expect that it has more to do with what you're rendering or your hardware than anything that could be generalized to something not on your machine. So no: don't try to do this.
Oh, and don't forget multisampling. Many implementations only multisample the back buffer; the front buffer is not multisampled. Doing a swap will downsample from the multisampled buffer.

OpenGL Error underflow turns into overflow?

I am working on a project which uses OpenGL only (it's supposed to become a game at some point, to be specific). Now, after some weeks of development, I stumbled across the possibility of catching OpenGL errors with GL.GetError().
I dislike that it only tells me what went wrong but not where; still, I want to get the error that occurs fixed.
So here is what happens:
When launching the app there are a few frames (three or four) with StackUnderflow, then it switches to StackOverflow and stays that way.
I checked my matrix push/pop consistency and didn't find any unclosed matrices. It might be interesting to know that, from what I can see, lighting doesn't work (all faces of the various objects have the very same brightness).
Is there any other possible cause?
(If you want to see source, there is plenty at: http://galwarcom.svn.sourceforge.net/viewvc/galwarcom/trunk/galwarcom/ )
You need to set the matrix mode before popping since each mode has a separate stack. If you do something like this, it will underflow:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// ... stuff with the modelview matrix ...
glMatrixMode(GL_PROJECTION);
glPushMatrix();
// ... stuff with the projection matrix ...
glPopMatrix();  // pops the projection stack (mode is still GL_PROJECTION)
glPopMatrix();  // pops projection again - underflow! - while modelview is never popped
You are doing something like this in drawHUD(), and probably in other places as well.
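A corrected version pops each stack under its own matrix mode, for example (a sketch of a hypothetical HUD pass):
glMatrixMode(GL_PROJECTION);
glPushMatrix();
// ... set up the HUD projection ...

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// ... draw the HUD ...
glPopMatrix();               // pops the modelview stack

glMatrixMode(GL_PROJECTION);
glPopMatrix();               // pops the projection stack
glMatrixMode(GL_MODELVIEW);  // leave modelview as the active mode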