Unhandled exception using glGenBuffers in Release mode only - Qt - C++

I'm having some trouble with my project on Windows 7 using Qt 4.8 when it's built in Release mode. Everything works fine in Debug, but in Release I get an unhandled exception: 0xC0000005: Access violation.
I narrowed it down to the line where it happens, which is where I generate my pixel buffer object.
My first guess was that the wrong DLLs were being loaded, but I checked the executable with Dependency Walker and every DLL it loads is correct.
Here's some of my code:
class CameraView : public QGLWidget, protected QGLFunctions { ... };
void CameraView::initializeGL()
{
initializeGLFunctions(this->context());
glGenBuffers(1, &pbo_); //<<<<< This is where I get the unhandled exception on Release mode
glBindBuffer(QGLBuffer::PixelUnpackBuffer, pbo_);
glBufferData(QGLBuffer::PixelUnpackBuffer, 3 * sizeof(BYTE) * image_width_ * image_height_, NULL, GL_STREAM_DRAW);
...
}
Again, this works fine in Debug. Why would this only happen in Release?

I got it.
Seems like this issue is related to this one:
https://forum.qt.io/topic/12492/qt-4-8-qglfunctions-functions-crash-in-release-build
and there's a bug report that may be related also:
https://bugreports.qt.io/browse/QTBUG-5729
Perhaps the initializeGLFunctions() method is not resolving all of the function pointers for the GL extension functions; I don't really know why, but this seems to be the cause.
The solution for me was to stop using Qt's GL extension wrappers and start using GLEW.
So, here's what worked for me:
#include <QtGui/QtGui>
#include <GL/glew.h>
#include <QtOpenGL/QGLWidget>
class CameraView : public QGLWidget { ... };
void CameraView::initializeGL()
{
//initializeGLFunctions(this->context());
GLenum init = glewInit();
// Create buffers
glGenBuffers(1, &pbo_);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_);
glBufferData(GL_PIXEL_UNPACK_BUFFER, 3 * sizeof(BYTE) * image_width_ * image_height_, NULL, GL_STREAM_DRAW);
// Set matrices
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glOrtho(0, this->width(), 0, this->height(), 0, 1);
glClearColor(0.0, 0.0, 0.0, 0.0);
glShadeModel(GL_FLAT);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
}
Be sure that glew.h is included before any QtOpenGL headers, or else you'll get a compilation error.
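One addition of my own, not part of the original fix: glewInit() returns a status code, and checking it (plus the relevant extension flag) makes a bad context much easier to diagnose than a later access violation. A minimal sketch of the same initialization with that check added, reusing the member names from the code above:
void CameraView::initializeGL()
{
    // glewInit() must run with the GL context current, which is the case inside initializeGL().
    GLenum status = glewInit();
    if (status != GLEW_OK) {
        std::cerr << "glewInit failed: " << glewGetErrorString(status) << std::endl; // needs <iostream>
        return;
    }
    if (!GLEW_ARB_pixel_buffer_object) {
        std::cerr << "Pixel buffer objects are not supported by this context" << std::endl;
        return;
    }
    glGenBuffers(1, &pbo_);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, 3 * sizeof(BYTE) * image_width_ * image_height_, NULL, GL_STREAM_DRAW);
}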

Related

OpenEXR + OpenGL unhandled exception

I've had quite a lot of problems trying to run an example from the OpenGL SuperBible, 5th ed. (Chapter09/hdr_bloom).
The problems were caused by linking the OpenEXR libs, so I built them manually and used them in place of the ones provided by the authors.
Right now I can run the program, but I get an unhandled exception when I try to load the HDR image that is used as a texture.
This is the piece of code used to load the HDR texture; if I comment it all out, the program runs without problems, but there's just no texture on my object.
bool LoadOpenEXRImage(char *fileName, GLint textureName, GLuint &texWidth, GLuint &texHeight)
{
// The OpenEXR uses exception handling to report errors or failures
// Do all work in a try block to catch any thrown exceptions.
try
{
Imf::Array2D<Imf::Rgba> pixels;
Imf::RgbaInputFile file(fileName); // UNHANDLED EXCEPTION
Imath::Box2i dw = file.dataWindow();
texWidth = dw.max.x - dw.min.x + 1;
texHeight = dw.max.y - dw.min.y + 1;
pixels.resizeErase(texHeight, texWidth);
file.setFrameBuffer(&pixels[0][0] - dw.min.x - dw.min.y * texWidth, 1, texWidth);
file.readPixels(dw.min.y, dw.max.y);
GLfloat* texels = (GLfloat*)malloc(texWidth * texHeight * 3 * sizeof(GLfloat));
GLfloat* pTex = texels;
// Copy OpenEXR into local buffer for loading into a texture
for (unsigned int v = 0; v < texHeight; v++)
{
for (unsigned int u = 0; u < texWidth; u++)
{
Imf::Rgba texel = pixels[texHeight - v - 1][u];
pTex[0] = texel.r;
pTex[1] = texel.g;
pTex[2] = texel.b;
pTex += 3;
}
}
// Bind texture, load image, set tex state
glBindTexture(GL_TEXTURE_2D, textureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, texWidth, texHeight, 0, GL_RGB, GL_FLOAT, texels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
free(texels);
}
catch (Iex::BaseExc & e)
{
std::cerr << e.what() << std::endl;
//
// Handle exception.
//
}
return true;
}
It is called like this:
LoadOpenEXRImage("window.exr", windowTexture, texWidth, texHeight);
Please note my comment, which shows exactly where the unhandled exception happens.
If I try to run it, I get this error:
Unhandled exception at 0x77938E19 (ntdll.dll) in hdr_bloom.exe:
0xC0000005: Access violation writing location 0x00000014.
My debugger points to this piece of code:
virtual void __CLR_OR_THIS_CALL _Lock()
{ // lock file instead of stream buffer
if (_Myfile)
_CSTD _lock_file(_Myfile); // here
}
This is part of the fstream implementation.
My declarations look like this:
#include <ImfRgbaFile.h> // OpenEXR headers
#include <ImfArray.h>
#ifdef _WIN32
#pragma comment (lib, "half.lib")
#pragma comment (lib, "Iex.lib")
#pragma comment (lib, "IlmImf.lib")
#pragma comment (lib, "IlmThread.lib")
#pragma comment (lib, "Imath.lib")
#pragma comment (lib, "zlib.lib")
#endif
#pragma warning( disable : 4244)
I have no idea if this matters, but when I tried to build it the first time I got SAFESEH errors about my zlib.lib, so I turned SAFESEH off under Linker -> Advanced.
Also, the project provided by the authors was created in Visual Studio 2008; I'm using a newer version, so it was converted when I opened it.
I'm using Windows 7 64-bit and Microsoft Visual Studio 2013 Ultimate.
If anything else is needed, let me know and I will post more detailed information; I've tried to keep this as short as possible.
I've finally found the issue, although I'm not sure exactly how it happens.
To solve it I had to create a brand new project and copy everything over from the original one, so my guess is that some project properties changed during the conversion of the original project, and that caused the errors.
I found out that such a converted project may not have permission to read or write files in some directories, and that's why I got the unhandled exception from fstream.
So, for future readers with similar problems: instead of converting your project, create a brand new one and just copy what you need. In my case, I only had to copy the Library and Include directories. :)
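One extra suggestion from me, not part of the original answer: since the crash happens inside fstream before OpenEXR does any real work, a cheap sanity check is to confirm the process can actually open the file from its current working directory before handing the path to Imf::RgbaInputFile. A minimal sketch (the helper name is mine):
#include <fstream>
#include <iostream>
// Hypothetical helper: verify the file is reachable from the working directory
// the exe is launched from. If this fails, the problem is the path or the
// project's working/output directory settings, not OpenEXR itself.
static bool FileIsReadable(const char* fileName)
{
    std::ifstream probe(fileName, std::ios::binary);
    if (!probe.is_open())
    {
        std::cerr << "Cannot open \"" << fileName << "\"" << std::endl;
        return false;
    }
    return true;
}
// Usage, before constructing the RgbaInputFile:
// if (FileIsReadable("window.exr"))
//     LoadOpenEXRImage("window.exr", windowTexture, texWidth, texHeight);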

OpenGL 3.3/GLSL & C++ error: "must write to gl_Position"

I'm currently trying to get a triangle to render using OpenGL 3.3 and C++ with the GLM, GLFW3 and GLEW libraries, but get an error when trying to create my shaderprogram.
Vertex info
(0) : error C5145: must write to gl_Position
I already tried to find out why this happens and asked on other forums, but no one knew the reason. There are three possible places where this error could originate. First, my main.cpp, where I create the window, the context, the program, the VAO, etc.:
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <iostream>
#include <string>
#include "util/shaderutil.hpp"
#define WIDTH 800
#define HEIGHT 600
using namespace std;
using namespace glm;
GLuint vao;
GLuint shaderprogram;
void initialize() {
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glClearColor(0.5, 0.7, 0.9, 1.0);
string vShaderPath = "shaders/shader.vert";
string fShaderPath = "shaders/shader.frag";
shaderprogram = ShaderUtil::createProgram(vShaderPath.c_str(), fShaderPath.c_str());
}
void render() {
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(shaderprogram);
glDrawArrays(GL_TRIANGLES, 0, 3);
}
void clean() {
glDeleteProgram(shaderprogram);
}
int main(int argc, char** argv) {
if (!glfwInit()) {
cerr << "GLFW ERROR!" << endl;
return -1;
}
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
GLFWwindow* win = glfwCreateWindow(WIDTH, HEIGHT, "Rendering a triangle!", NULL, NULL);
glfwMakeContextCurrent(win);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) {
cerr << "GLEW ERROR!" << endl;
return -1;
} else {
glGetError();
//GLEW BUG: SETTING THE ERRORFLAG TO INVALID_ENUM; THEREFORE RESET
}
initialize();
while (!glfwWindowShouldClose(win)) {
render();
glfwPollEvents();
glfwSwapBuffers(win);
}
clean();
glfwDestroyWindow(win);
glfwTerminate();
return 0;
}
...the ShaderUtil class, where I read in the shader files, compile them, do error checking and return a final program...
#include "shaderutil.hpp"
#include <iostream>
#include <string>
#include <fstream>
#include <vector>
using namespace std;
GLuint ShaderUtil::createProgram(const char* vShaderPath, const char* fShaderPath) {
/*VARIABLES*/
GLuint vertexShader;
GLuint fragmentShader;
GLuint program;
ifstream vSStream(vShaderPath);
ifstream fSStream(fShaderPath);
string vSCode, fSCode;
/*CREATING THE SHADER AND PROGRAM OBJECTS*/
vertexShader = glCreateShader(GL_VERTEX_SHADER);
fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
program = glCreateProgram();
/*READING THE SHADERCODE*/
/*CONVERTING THE SHADERCODE TO CHAR POINTERS*/
while (vSStream.is_open()) {
string line = "";
while (getline(vSStream, line)) {
vSCode += "\n" + line;
}
vSStream.close();
}
const char* vSCodePointer = vSCode.c_str();
while (fSStream.is_open()) {
string line = "";
while (getline(fSStream, line)) {
fSCode += "\n" + line;
}
fSStream.close();
}
const char* fSCodePointer = fSCode.c_str();
/*COMPILING THE VERTEXSHADER*/
glShaderSource(vertexShader, 1, &vSCodePointer, NULL);
glCompileShader(vertexShader);
/*VERTEXSHADER ERROR CHECKING*/
GLint vInfoLogLength;
glGetShaderiv(vertexShader, GL_INFO_LOG_LENGTH, &vInfoLogLength);
if (vInfoLogLength > 0) {
vector<char> vInfoLog(vInfoLogLength + 1);
glGetShaderInfoLog(vertexShader, vInfoLogLength, &vInfoLogLength, &vInfoLog[0]);
for(int i = 0; i < vInfoLogLength; i++) {
cerr << vInfoLog[i];
}
}
/*COMPILING THE FRAGMENTSHADER*/
glShaderSource(fragmentShader, 1, &fSCodePointer, NULL);
glCompileShader(fragmentShader);
/*FRAGMENTSHADER ERROR CHECKING*/
GLint fInfoLogLength;
glGetShaderiv(fragmentShader, GL_INFO_LOG_LENGTH, &fInfoLogLength);
if (fInfoLogLength > 0) {
vector<char> fInfoLog(fInfoLogLength + 1);
glGetShaderInfoLog(fragmentShader, fInfoLogLength, &fInfoLogLength, &fInfoLog[0]);
for(int i = 0; i < fInfoLogLength; i++) {
cerr << fInfoLog[i];
}
}
/*LINKING THE PROGRAM*/
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glLinkProgram(program);
//glValidateProgram(program);
/*SHADERPROGRAM ERROR CHECKING*/
GLint programInfoLogLength;
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &programInfoLogLength);
if (programInfoLogLength > 0) {
vector<char> programInfoLog(programInfoLogLength + 1);
glGetProgramInfoLog(program, programInfoLogLength, &programInfoLogLength, &programInfoLog[0]);
for(int i = 0; i < programInfoLogLength; i++) {
cerr << programInfoLog[i];
}
}
/*CLEANUP & RETURNING THE PROGRAM*/
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
return program;
}
...and the vertex shader itself, which is nothing special. I just create an array of vertices and push them into gl_Position.
#version 330 core
void main() {
const vec3 VERTICES[3] = vec3[3] {
0.0, 0.5, 0.5,
0.5,-0.5, 0.5,
-0.5,-0.5, 0.5
};
gl_Position.xyz = VERTICES;
gl_Position.w = 1.0;
}
The fragment shader just outputs a vec4 called color, which is set to (1.0, 0.0, 0.0, 1.0). The compiler doesn't show me any errors, but when I try to execute the program, I just get a window without the triangle and the error message shown above.
There are a few things I already tried to solve this problem, but none of them worked:
I tried creating the vertices inside my main.cpp and pushing them into the vertex-shader via a vertex buffer object; I changed some code inspired by opengl-tutorials.org and finally got a triangle to show up, but the shaders weren't applied; I only got the vertices inside my main.cpp to show up on the screen, but the "must write to gl_Position" problem remained.
I tried using glGetError() on different places and got 2 different error-codes: 1280 and 1282; the first one was caused by a bug inside GLEW, which causes the state to change from GL_NO_ERROR to GL_INVALID_ENUM or something like that. I was told to ignore this one and just change the state back to GL_NO_ERROR by using glGetError() after initializing GLEW. The other error code appeared after using glUseProgram() in the render-function. I wanted to get some information out of this, but the gluErrorString() function is deprecated in OpenGL 3.3 and I couldn't find an alternative provided by any of my libraries.
I tried validating my program via glValidateProgram() after linking it. When I did this, the gl_Position error message didn't show up anymore, but the triangle didn't either, so I assumed that this function just clears the info log to put in some new information about the validation process.
So right now, I have no idea what causes this error.
The problem got solved! I tried to print the source that OpenGL tries to compile and saw that there was no source loaded by the ifstream. Things I had to change:
Change the "while (vVStream.is_open())" to "if (vVStream.is_open())".
Error check, if the condition I listed first is executed (add "else {cerr << "OH NOES!" << endl}
Add a second parameter to the ifstreams I'm creating: change "ifstream(path)" to "ifstream(path, ios::in)"
Change the path I'm passing from a relative path (e.g "../shaders/shader.vert") to an absolute path (e.g "/home/USERNAME/Desktop/project/src/shaders/shader.vert"); this somehow was necessary, because the relative path wasn't understood; using an absolute one isn't a permanent solution though, but it fixes the problem of not finding the shader.
Now it actually loads and compiles the shaders; there are still some errors to fix, but if someone has the same "must write to gl_Position" problem, double, no, triple-check if the source you're trying to compile is actually loaded and if the ifstream is actually open.
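To tie those changes together, here is a minimal sketch of the kind of loader they amount to; the function name and signature are mine, not the original ShaderUtil code:
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
// Hypothetical helper illustrating the fixes above: open the stream explicitly
// for reading, test it with if (not while), and fail loudly when the path is
// wrong, so an empty string can never reach glShaderSource unnoticed.
static bool loadShaderSource(const std::string& path, std::string& outSource)
{
    std::ifstream stream(path.c_str(), std::ios::in);
    if (!stream.is_open()) {
        std::cerr << "OH NOES! Could not open shader file: " << path << std::endl;
        return false;
    }
    std::stringstream buffer;
    buffer << stream.rdbuf();   // read the whole file in one go
    outSource = buffer.str();
    return true;
}
Printing the loaded source just before glShaderSource(), as described above, is what makes an empty or missing file obvious immediately.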
I thank everyone who tried to help me, especially @MtRoad. This problem almost made me go bald.
Vertex shaders run on each vertex individually, so gl_Position is the output position of the single vertex currently being processed, after whatever transforms you wish to apply; trying to emit multiple vertices from one invocation doesn't make sense. Geometry shaders can emit additional geometry on the fly and can be used for that, for example to create motion blur.
For typical drawing, you bind a vertex array object like you did, but put the data into buffers called vertex buffer objects and tell OpenGL how to interpret the data's "attributes" using glVertexAttribPointer, so you can read them in your shaders.
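To illustrate that pattern, here is a minimal sketch of uploading the question's three vertices into a VBO and describing them as attribute 0 (the variable names are illustrative); the vertex shader would then declare "layout(location = 0) in vec3 position;" and write "gl_Position = vec4(position, 1.0);" instead of hard-coding an array:
// Triangle positions, uploaded once into a vertex buffer object.
static const GLfloat positions[] = {
     0.0f,  0.5f, 0.5f,
     0.5f, -0.5f, 0.5f,
    -0.5f, -0.5f, 0.5f
};
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
// With the VAO bound, describe attribute 0 as three floats per vertex.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
// glDrawArrays(GL_TRIANGLES, 0, 3) in render() will then pull vertices from the VBO.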
I recently encountered this issue and I suspect the cause may be the same as yours.
I'm not familiar with g++; however, on VS your build environment and the location your .exe runs from while debugging can affect this. For example, one such setting:
Project Properties -> General -> Output Directory
Visual Studio Express - change debug output directory
And another similar issue here: "The system cannot find the file specified" when running C++ program
If you've changed the build environment and you're debugging from a different output directory, you need to make sure that all the relevant files can be found relative to where the .exe is being executed from.
This would explain why you had to resort to "if (vSStream.is_open())", which I suspect fails, and then use the full file path of the shaders, since the originally referenced files were not found relative to the executable.
My issue was exactly like yours, but only in Release mode. Once I copied my shaders into the Release folder, where the .exe could access them, the problem went away.

Can't generate mipmaps with off-screen OpenGL context on Linux

This question is a continuation of the problem I described here. This is one of the weirdest bugs I have ever seen. My engine runs in two modes: display and off-screen. The OS is Linux. I generate mipmaps for the textures, and in display mode it all works fine; in that mode I use GLFW3 for context creation. Now the funny part: in off-screen mode, for which I create the context manually with the code below, the mipmap generation fails OCCASIONALLY! That is, on some runs the resulting output looks OK, and on others the missing levels are clearly visible, as the frame is full of texture junk data or entirely empty.
At first I thought my mipmap generation routine was wrong. It goes like this:
glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_2D, textureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, imageInfo.Width, imageInfo.Height, 0, imageInfo.Format, imageInfo.Type, imageInfo.Data);
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0 );
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
I also tried to play with this param:
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, XXX);
including the max-level detection formula:
int numMipmaps = 1 + floor(log2(glm::max(imageInfoOut.width, imageInfoOut.height)));
But none of this worked consistently. Out of 10-15 runs, 3-4 come out with broken mipmaps. What I then found was that switching to GL_LINEAR solved it; also, in mipmap mode, setting just one level worked as well. Finally I started thinking there could be a problem at the context level, because in screen mode it works! I switched context creation to GLFW3 and it works. So I wonder what's going on here. Am I missing something in the pbuffer setup that breaks mipmap generation? I doubt it, because AFAIK that is done by the driver.
Here is my custom off-screen context creation setup:
int visual_attribs[] = {
GLX_RENDER_TYPE,
GLX_RGBA_BIT,
GLX_RED_SIZE, 8,
GLX_GREEN_SIZE, 8,
GLX_BLUE_SIZE, 8,
GLX_ALPHA_SIZE, 8,
GLX_DEPTH_SIZE, 24,
GLX_STENCIL_SIZE, 8,
None
};
int context_attribs[] = {
GLX_CONTEXT_MAJOR_VERSION_ARB, vmaj,
GLX_CONTEXT_MINOR_VERSION_ARB, vmin,
GLX_CONTEXT_FLAGS_ARB,
GLX_CONTEXT_ROBUST_ACCESS_BIT_ARB
#ifdef DEBUG
| GLX_CONTEXT_DEBUG_BIT_ARB
#endif
,
GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
None
};
_xdisplay = XOpenDisplay(NULL);
int fbcount = 0;
_fbconfig = NULL;
// _render_context
if (!_xdisplay) {
throw();
}
/* get framebuffer configs, any is usable (might want to add proper attribs) */
if (!(_fbconfig = glXChooseFBConfig(_xdisplay, DefaultScreen(_xdisplay), visual_attribs, &fbcount))) {
throw();
}
/* get the required extensions */
glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc) glXGetProcAddressARB((const GLubyte *) "glXCreateContextAttribsARB");
glXMakeContextCurrentARB = (glXMakeContextCurrentARBProc) glXGetProcAddressARB((const GLubyte *) "glXMakeContextCurrent");
if (!(glXCreateContextAttribsARB && glXMakeContextCurrentARB)) {
XFree(_fbconfig);
throw();
}
/* create a context using glXCreateContextAttribsARB */
if (!(_render_context = glXCreateContextAttribsARB(_xdisplay, _fbconfig[0], 0, True, context_attribs))) {
XFree(_fbconfig);
throw();
}
// GLX_MIPMAP_TEXTURE_EXT
/* create temporary pbuffer */
int pbuffer_attribs[] = {
GLX_PBUFFER_WIDTH, 128,
GLX_PBUFFER_HEIGHT, 128,
None
};
_pbuff = glXCreatePbuffer(_xdisplay, _fbconfig[0], pbuffer_attribs);
XFree(_fbconfig);
XSync(_xdisplay, False);
/* try to make it the current context */
if (!glXMakeContextCurrent(_xdisplay, _pbuff, _pbuff, _render_context)) {
/* some drivers does not support context without default framebuffer, so fallback on
* using the default window.
*/
if (!glXMakeContextCurrent(_xdisplay, DefaultRootWindow(_xdisplay),
DefaultRootWindow(_xdisplay), _render_context)) {
throw();
}
}
Almost forgot: my system and hardware:
Kubuntu 13.04 64bit. GPU: NVidia Geforce GTX 680 . The engine uses OpenGL 4.2 API
Full OpenGL info:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 680/PCIe/SSE2
OpenGL version string: 4.4.0 NVIDIA 331.49
OpenGL shading language version string: 4.40 NVIDIA via Cg compiler
Btw, I also tried older drivers and it makes no difference.
UPDATE:
Seems like my assumption regarding GLFW was wrong. When I compile the engine and run it from the terminal, the same thing happens. BUT, if I run the engine from the IDE (Debug or Release), there are no issues with the mipmaps. Is it possible the standalone app links against different SOs?
To make it clear, I don't use pbuffers to render into; I render into custom framebuffers.
UPDATE1:
I have read that auto-generating mipmaps for non-power-of-two textures can be tricky, and that if OpenGL fails to generate all the levels it turns off texture usage. Is it possible that's what I am experiencing here? Because once the mipmapped texture goes wrong, the rest of the textures (non-mipmapped) disappear too. But if that's the case, why is the behaviour inconsistent?
Uh, why are you using pbuffers in the first place? Pbuffers have far too many caveats for there to be any valid reason to use them in a new project.
You want offscreen rendering? Then use framebuffer objects (FBOs).
You need a purely off-screen context? Then create a normal window which you simply don't show, and render into an FBO in its context.
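A minimal sketch of such an FBO with a texture colour attachment and a depth renderbuffer (sizes and names are illustrative, assuming a context that provides GL 3.0 / GL_ARB_framebuffer_object):
GLuint fbo = 0, colorTex = 0, depthRbo = 0;
const GLsizei width = 1024, height = 1024;   // illustrative off-screen size
// Colour attachment: an ordinary texture you can later sample or read back.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Depth attachment as a renderbuffer.
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
// Assemble the FBO and verify completeness before rendering into it.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // Handle the error: the chosen attachments are not usable on this driver.
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);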

glMapBufferRange Access Violation

I want to store some particles in a shader storage buffer. I use the glMapBufferRange() function to set the particles' values, but I always get an access violation error whenever this function is called.
glGenBuffers(1, &bufferID);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, bufferID);
glBufferData(GL_SHADER_STORAGE_BUFFER, numParticles*sizeof(Particle), NULL ,GL_STATIC_DRAW);
struct Particle* particles = (struct Particle*) glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, numParticles*sizeof(Particle), GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
for(int i = 0; i < numParticles; ++i){
//.. Do something with particles..//
}
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
When I use glMapBuffer() instead, everything works fine. I already made sure that I have created an OpenGL context with GLFW and initialized GLEW properly.
OK, I finally found the problem. When I designed my GLFW window class I used the GLFW_OPENGL_FORWARD_COMPAT hint to create a forward-compatible OpenGL context. I don't know why I did this, but when I don't use this hint, everything works fine. :)
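One note of my own on top of that: glMapBufferRange() returns NULL on failure and records a GL error rather than throwing, so writing through an unchecked pointer turns any mapping failure (incompatible context, bad range, missing extension) into exactly this kind of access violation. A minimal defensive sketch of the same mapping code:
// Same mapping as above, but never write through an unchecked pointer.
struct Particle* particles = (struct Particle*) glMapBufferRange(
    GL_SHADER_STORAGE_BUFFER, 0, numParticles * sizeof(Particle),
    GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
if (particles == NULL) {
    // Mapping failed: inspect glGetError() and confirm the context actually
    // offers GL 4.3 / ARB_shader_storage_buffer_object.
    fprintf(stderr, "glMapBufferRange failed, GL error 0x%04X\n", glGetError()); // needs <cstdio>
} else {
    for (int i = 0; i < numParticles; ++i) {
        // ... fill particles[i] ...
    }
    glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
}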

Windows OpenGL implementation bug?

I am having a very tough time sorting out this strange clipping bug in my app.
Basically, for some reason, OpenGL is clipping (using the scissor test) my call to glClear(), but not the rendering I do afterwards.
The real problem, however, is that it goes away when I resize my window. I can guarantee that resizing the window doesn't change anything in my app or run any code. It is very strange. Worse still, simply putting
glDisable(GL_SCISSOR_TEST);
glDisable(GL_SCISSOR_TEST);
where I need to disable the scissor test, instead of having just one call to glDisable(), solves the problem. So does removing the code altogether (the scissor test is already disabled in this test case, but the code is there for when it wasn't already disabled by previous code). It even solves the problem to put:
glEnable(GL_SCISSOR_TEST);
glDisable(GL_SCISSOR_TEST);
There are only two explanations I can think of. Either I am somehow invoking UB (which I doubt, because OpenGL doesn't have UB AFAIK), or there is an implementation bug, because calling glDisable() twice consecutively with the same parameter SHOULD be the same as calling it once... if I'm not mistaken.
Just in case it is of interest, here is the function where the problem happens:
void gle::Renderer::setup3DCamera(gle::CameraNode& cam, gle::Colour bkcol,
int clrmask, int skymode, gle::Texture* skytex, bool uselight) {
// Viewport
Rectangle wr(cam.getViewport()?*cam.getViewport():Rectangle(0,0,1,1));
if (cam.isRatioViewport()||(!cam.getViewport())) {
if (i_frameBind==NULL)
wr.scale(selectedWindow->getWidth(),selectedWindow->getHeight());
else wr.scale(i_frameBind->getWidth(),i_frameBind->getHeight());
}
gle::Rectangle_t<int> iport; iport.set(wr);
int winHei;
if (i_frameBind==NULL)
winHei = selectedWindow->getHeight();
else
winHei = i_frameBind->getHeight();
glViewport(iport.x1(),winHei-iport.y2(),iport.wid(),iport.hei());
// Viewport Clipping
if (cam.isClipping()) {
/* This is never executed in the test case */
glEnable(GL_SCISSOR_TEST);
glScissor(iport.x1(),winHei-iport.y2(),iport.wid(),iport.hei());
} else {
/* This is where I disable the scissor test */
glDisable(GL_SCISSOR_TEST);
glDisable(GL_SCISSOR_TEST);
}
float w=wr.wid()/2, h=wr.hei()/2;
// Projection
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
Projection proj = cam.getProjection();
gluPerspective(proj.fov,proj.aspect*(w/h),proj.cnear,proj.cfar);
// Camera
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
float m[] = { 1,0,0,0, 0,0,-1,0, 0,1,0,0, 0,0,0,1 };
glMultMatrixf(m);
static gle::Mesh *skyBox = NULL;
// Screen Clearing
switch (clrmask&GLE_CLR_COLOUR&0x00F?skymode:GLE_SKYNONE) {
case GLE_SKYNONE:
clear(clrmask&(~GLE_CLR_COLOUR)); break;
case GLE_SKYCOLOUR:
clearColour(clrmask,bkcol); break;
case GLE_SKYBOX:
glDisable(GL_DEPTH_TEST);
if (!(clrmask&GLE_CLR_DEPTH&0x00F)) glDepthMask(0);
float m = (cam.getProjection().cnear+cam.getProjection().cfar)/2.0f;
if (skyBox==NULL) skyBox = gle::createStockMesh(GLE_MESHSKYBOX,GLE_WHITE,0,m);
glEnable(GL_TEXTURE_2D);
glDisable(GL_CULL_FACE);
skytex->flush();
glBindTexture(GL_TEXTURE_2D,skytex->getID());
glDisable(GL_LIGHTING);
glPushMatrix();
float m3[16];
Orientation::matrixSet(m3,cam.pos().getMatrix(GLE_ROTMATRIX));
Orientation::matrixTranspose(m3);
glMultMatrixf(m3);
if (i_reflectionOn) glMultMatrixf(Orientation::matrixGet3x3(i_reflectionTransform));
renderMesh(*skyBox,NULL,1);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
if (clrmask&GLE_CLR_DEPTH) glClear(GL_DEPTH_BUFFER_BIT);
else glDepthMask(1);
glAble(GL_DEPTH_TEST,depthmode!=GLE_ALWAYS);
break;
}
// Camera
glMultMatrixf(cam.getAbsInverseMatrix());
if (i_reflectionOn) glMultMatrixf(i_reflectionTransform);
// Lighting
i_lightOn = uselight;
glAble(GL_LIGHTING,i_lightOn);
}
This looks like a driver bug to me. However, there are two cases where this may actually be a bug in your code.
First, you might be in the middle of a glBegin() / glEnd() block when calling that glDisable(), causing an error and also ending the block, effectively making the second call to glDisable() the legit and effective one. Note that glBegin() / glEnd() is just a simple example; it could be pretty much any case where an OpenGL error is raised. Insert glGetError() calls throughout your code to be sure. My guess is that the first call to glDisable() generates GL_INVALID_OPERATION.
Second, you are not scissor testing, but you are still calling glViewport() with the same values. This would have the opposite effect (not clipping glClear() but clipping the drawing) on NVIDIA, but it might very well do the opposite on some other driver / GL implementation.
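If it helps, that error sweep is usually done with a small helper wrapped around the suspect calls; a minimal sketch (the macro name is mine, not a standard OpenGL facility, and it assumes the GL headers are already included):
#include <cstdio>
// Hypothetical helper: drain and report every pending GL error, tagged with the
// location it was detected at, so you can bisect which call raises GL_INVALID_OPERATION.
static void checkGLError(const char* file, int line)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::fprintf(stderr, "GL error 0x%04X at %s:%d\n", err, file, line);
}
#define GL_CHECK() checkGLError(__FILE__, __LINE__)
// Usage in the function above:
// glDisable(GL_SCISSOR_TEST); GL_CHECK();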