glMapBuffer causes glDrawElementsInstanced to fail with GL_INVALID_OPERATION error - C++

I have an issue with glMapBuffer on Windows.
The following code works fine, and the scene renders:
glBufferSubData(GL_ARRAY_BUFFER, from, to, bufferData)
But if I try to map the buffer and replace glBufferSubData with memcpy like this:
mappedBuffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
memcpy(mappedBuffer, (char *)bufferData + from, to);
then I get this error:
OpenGL error
Debug message (1000): glDrawElementsInstanced has generated an error (GL_INVALID_OPERATION)
Source: API
Type: Error
Severity: high
On macOS this works just fine, so I wonder whether I'm really making a mistake.
I also find it quite strange that the error happens when glDrawElementsInstanced is called, instead of the glMapBuffer call failing.

I had simply forgotten to call glUnmapBuffer after the memcpy; for some reason, on macOS it is not needed.
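For reference, a minimal sketch of the corrected path, reusing bufferData, from and to from the question (assuming the GL headers and buffer binding from the original code):
void *mappedBuffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (mappedBuffer != nullptr)
{
    memcpy(mappedBuffer, (char *)bufferData + from, to);
    // Unmapping releases the buffer so GL can source vertex data from it again;
    // drawing while the buffer is still mapped is what triggers GL_INVALID_OPERATION.
    glUnmapBuffer(GL_ARRAY_BUFFER);
}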

Related

glGenTextures gives GL_INVALID_OPERATION despite valid OpenGL context

I get a GL_INVALID_OPERATION error when calling glGenTextures, and I have no idea what could be responsible for it.
I am using QOpenGLWidget to retrieve the context, and it looks valid at the time I call glGenTextures() (at least I have one, since glGetString(GL_VERSION) and glXGetCurrentContext() both return something that is not garbage).
The faulty code is called from the QOpenGLWidget::resizeGL() method. In QOpenGLWidget::initializeGL() I successfully compile some shader programs and create / upload data to VAOs / VBOs.
So my questions are:
What are the common failure cases of glGenTextures(), other than not having an OpenGL context at all?
Can an OpenGL context be invalid or messed up, and in such a case,
how can I check that my OpenGL context is valid?
EDIT: Since I strongly believe this is related to the fact that my machine has no proper GPU, here is the output of glxinfo.
I don't think it is the glGenTextures() call that causes the error. This call can only throw GL_INVALID_ENUM and GL_INVALID_VALUE. Most likely some other call is wrong, and the bind() of the new texture is the invalid call.
Try to localise the offending call with glGetError().
You can check the documentation for possible failures of GL calls here:
https://www.opengl.org/sdk/docs/man4/html/glCreateTextures.xhtml
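A minimal sketch of such a localisation helper (assuming a GL header/loader is already included; the name checkGlError is mine):
#include <cstdio>

static void checkGlError(const char *where)
{
    GLenum err;
    // Drain the error queue so a stale error isn't blamed on the wrong call.
    while ((err = glGetError()) != GL_NO_ERROR)
        std::fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
}

// Usage:
// glGenTextures(1, &texId);              checkGlError("glGenTextures");
// glBindTexture(GL_TEXTURE_2D, texId);   checkGlError("glBindTexture");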
OK, found the problem. It turned out a shader was silently failing to compile (a hardcoded shader, never checked for successful compilation, with an incorrect #version), and that was messing up the next OpenGL error check, which happened to be the one around glGenTextures().
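For anyone hitting the same issue, a minimal sketch of the compile check that was missing (the source variable and the shader stage are placeholders):
GLuint shader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(shader, 1, &source, nullptr);
glCompileShader(shader);

// Check the compile status explicitly instead of letting the failure
// surface later at an unrelated call such as glGenTextures().
GLint compiled = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (compiled != GL_TRUE)
{
    GLchar log[1024];
    glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
    std::fprintf(stderr, "shader compile failed: %s\n", log);
}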

Strange SDL getAttribute() behaviour

I instructed SDL to use at least 8 bits for the stencil buffer:
if (SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8) < 0)
{
    printf("couldn't set stencil size: %s\n", SDL_GetError());
}
That appears to work, since it reports no error.
But later in the code, I try to get the stencil size value:
int rc, i;
rc = SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &i);
printf("stencil buffer size: (returns %i):%i sdl-error:%s\n", rc, i, SDL_GetError());
That returns -1, and outputs this:
stencil buffer size: (returns -1):0 sdl-error:OpenGL error: GL_INVALID_ENUM
I had cleared any pending SDL error using SDL_ClearError() beforehand, so this must be the actual cause. But I have no idea why.
Maybe there is a bigger underlying error, since glGetError() returns GL_INVALID_ENUM right after GLEW initialization, the same error that SDL reports.
Note: Thanks @Nicol Bolas, I edited the wrong call.
EDIT:
I tried changing the context version; the highest version that works is 3.1, with which GLEW generates no error and SDL returns 8 as the stencil size.
But why? I read the GLEW changelog, and it says that my GLEW version (1.13.0) should be able to handle OpenGL 4: http://glew.sourceforge.net/
So, what's wrong?
Since the stencil stuff works now, I just cleared the error generated by GLEW by calling glGetError(). For those who are interested, here is the paragraph concerning the error (from https://www.opengl.org/wiki/OpenGL_Loading_Library):
GLEW has a problem with core contexts. It calls glGetString(GL_EXTENSIONS), which causes GL_INVALID_ENUM on a GL 3.2+ core context as soon as glewInit() is called. It also doesn't fetch the function pointers. The solution is for GLEW to use glGetStringi instead. The current version of GLEW is 1.10.0, but they still haven't corrected it. The only fix is to use glewExperimental for now. glewExperimental is a variable that is already defined by GLEW. You must set it to GL_TRUE before calling glewInit(). You might still get GL_INVALID_ENUM (depending on the version of GLEW you use), but at least GLEW ignores glGetString(GL_EXTENSIONS) and gets all function pointers.
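For completeness, a minimal sketch of that workaround, assuming GLEW with a core-profile context already made current:
glewExperimental = GL_TRUE;               // let GLEW fetch core-profile entry points
GLenum glewStatus = glewInit();
if (glewStatus != GLEW_OK)
    std::fprintf(stderr, "glewInit failed: %s\n",
                 reinterpret_cast<const char *>(glewGetErrorString(glewStatus)));

// glewInit() may still have raised GL_INVALID_ENUM via glGetString(GL_EXTENSIONS);
// drain it so later checks (including the SDL_GL_GetAttribute path above) start clean.
while (glGetError() != GL_NO_ERROR) {}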

OpenGL: INVALID_OPERATION following glEnableVertexAttribArray

I'm porting a functioning OpenGL app from Windows to OS X, and keep getting an "invalid operation" (code 1282) error after calling glEnableVertexAttribArray(). Here's the render method:
gl::Disable(gl::DEPTH_TEST);
gl::Disable(gl::CULL_FACE);
gl::PolygonMode(gl::FRONT_AND_BACK,gl::FILL);
/// render full-screen quad
gl::UseProgram(m_program);
check_gl_error();
gl::BindBuffer(gl::ARRAY_BUFFER, m_vertexBuffer);
gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER, m_indexBuffer);
check_gl_error();
GLint positionLocation = -1;
positionLocation = gl::GetAttribLocation(m_program,"Position");
check_gl_error();
/// positionLocation now == 0
gl::EnableVertexAttribArray(positionLocation);
//// ************ ERROR RETURNED HERE **********************
check_gl_error();
gl::VertexAttribPointer(positionLocation,3,gl::FLOAT,false,3 * sizeof(GLfloat),(const GLvoid*)0);
check_gl_error();
gl::DrawElements(gl::TRIANGLES,m_indexCount,gl::UNSIGNED_SHORT,0);
check_gl_error();
gl::BindBuffer(gl::ARRAY_BUFFER,0);
check_gl_error();
gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER,0);
check_gl_error();
check_gl_error() just gets the last GL error and returns a somewhat-readable description thereof.
This code works fine under Windows. But, as I'm rapidly learning, that doesn't necessarily mean that it is correct. I've verified that all of the previously-bound objects (program, vertex buffer, index buffer) are valid handles. glGetAttribLocation() returns a valid location (0 in this case) for the Position attribute.
What are the failure cases for glEnableVertexAttribArray()? Is there some state that I've not set before this?
If I comment out the draw code, the window is cleared to my test color (red) on every frame (from a method not shown in the snippet) and everything else works fine, which suggests the rest of the setup is correct.
Suggestions?
Oh, for a GL state machine simulator that would tell me why it is an "invalid operation." (Or a reference to some mystical, magical documentation that describes required input state for each gl* call.)
You're seeing this error on OS X because it only supports the OpenGL Core Profile if you're using OpenGL 3.x or higher. Your code is not Core Profile compliant. You were most likely using the Compatibility Profile on Windows.
Specifically, the Core Profile requires a Vertex Array Object (VAO) to be bound for all vertex related calls. So before calling glEnableVertexAttribArray(), or other similar functions, you will need to create and bind a VAO:
GLuint vaoId = 0;
glGenVertexArrays(1, &vaoId);
glBindVertexArray(vaoId);
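(The VAO only needs to be created once during initialization; binding it before the vertex attribute setup and draw calls each frame is then sufficient.)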
On how to find out the error conditions: In this case, it's not nearly as easy as it should be. Let's say you work with a GL3 level feature set. In an ideal world, you would go to www.opengl.org, pull down the "Documentation" menu close to the top-left corner, choose "OpenGL 3.3 Reference Pages", click on glEnableVertexAttribArray in the left pane, and look at the "Errors" section on the page. Then you see that... GL_INVALID_OPERATION is not listed as a possible error.
Next, you might want to check if there's anything better in the latest version. You do the same, but choose "OpenGL 4 Reference Pages" instead. The error condition is still not listed.
By now you realize, like many before you, that these man pages are often faulty. So you go to the ultimate source: the specs. This time you choose "OpenGL Registry" in the Documentation menu. This gives you links to all the spec documents in PDF format. Again, let's try 3.3 first. Search for "EnableVertexAttribArray" in the document and there is... still no GL_INVALID_OPERATION documented as a possible error.
Last resort: checking the very latest spec document, which is 4.4. Again searching for "EnableVertexAttribArray", it's time for a eureka:
An INVALID_OPERATION error is generated if no vertex array object is bound.
I'm quite certain that the error also applies to GL3. While it's reasonably common for the man pages to be incomplete, it's much rarer for the spec documents to be missing things. The very closely related glVertexAttribPointer() call has this error condition documented in GL3 already.

Error when creating FrameBuffer: GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS

I'm using libgdx to create a program, and I need to perform an operation in a framebuffer. For this I create a new FrameBuffer, and after the operation I call dispose() on it. After creating the framebuffer about 10 times, the program crashes with the error: frame buffer couldn't be constructed: incomplete dimensions. Looking at the libgdx code, I can see this corresponds to the GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS framebuffer status. Why does this happen? What must I do to fix this problem?
Code:
if (maskBufferer != null) {
    maskBufferer.dispose();
}
maskBufferer = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, true);
mask = createMaskImageMask(aspectRatioCrop, maskBufferer);
...
private Texture createMaskImageMask(boolean aspectRatioCrop, FrameBuffer maskBufferer) {
    maskBufferer.begin();
    Gdx.gl.glClearColor(COLOR_FOR_MASK, COLOR_FOR_MASK, COLOR_FOR_MASK, ALPHA_FOR_MASK);
    Gdx.graphics.getGL20().glClear(GL20.GL_COLOR_BUFFER_BIT);
    float[] coord = null;
    ...
    PolygonRegion polyReg = new PolygonRegion(new TextureRegion(new Texture(Gdx.files.internal(texturePolygon))),
            coord);
    PolygonSprite poly = new PolygonSprite(polyReg);
    PolygonSpriteBatch polyBatch = new PolygonSpriteBatch();
    polyBatch.begin();
    poly.draw(polyBatch);
    polyBatch.end();
    maskBufferer.end();
    return maskBufferer.getColorBufferTexture();
}
EDIT
To summarize, GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS can occur in libgdx when too many FrameBuffer objects are created (without calling .dispose()), possibly because OpenGL runs out of FBO or texture/renderbuffer handles.
If no handle is returned by glGenFramebuffers, then no FBO will be bound when attaching targets or checking the status. Likewise, an attempt to attach an invalid target (from a failed call to glGenTextures) will cause the FBO to be incomplete. Though it seems incorrect to report GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS in either case.
One possibility is that the call to allocate memory for the target, such as glTexImage2D or glRenderbufferStorage, has failed (out of memory). This leaves the dimensions of the target unequal to those of targets already successfully attached to the FBO, which could then produce the error.
It's pretty standard to create a framebuffer once, attach your render targets, and reuse it each frame. By dispose do you mean glDeleteFramebuffers?
It looks like there should be a delete maskBufferer; as well as maskBufferer.dispose(); (EDIT: if it were C++).
Given that this error only happens after a number of frames, it could be many things. Double-check that you aren't creating framebuffers or attachments each frame without deleting them, and running out of objects/handles.
It also looks like GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS is no longer used (see the specs), presumably because mixed attachment dimensions are now allowed. It still seems worth checking that your attachments are all the same size, though.
A quick way to narrow down which attachment is causing issues is to comment out half of them and see when the error occurs (or check the status after each attach).
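A minimal sketch of that per-attachment status check at the raw GL level (names and sizes are placeholders, not libgdx code):
GLuint fbo = 0, colorTex = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Checking right after each attachment makes it obvious which one breaks completeness.
GLenum fboStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (fboStatus != GL_FRAMEBUFFER_COMPLETE)
    std::fprintf(stderr, "FBO incomplete after color attach: 0x%04X\n", fboStatus);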
I resolved the problem. Dispose helped. I was re-creating the class every time, so dispose() was never called.

glBufferData fails silently with overly large sizes

I just noticed that glBufferData fails silently when I try to call it with size 1085859108 and data NULL.
Following calls to glBufferSubData then fail with an OUT_OF_MEMORY 'exception'. This is on Windows XP 32-bit, an NVIDIA GeForce 9500 GT (1024 MB), and 195.62 drivers.
Is there any way to determine whether a buffer was created successfully? (Something like a proxy texture, for example?)
Kind regards,
Florian
I doubt that it's really silent. I'd guess that glGetError would return GL_OUT_OF_MEMORY after that attempt.
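A minimal sketch of that check (buffer name and usage flag are placeholders):
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 1085859108, NULL, GL_STATIC_DRAW);

// An allocation this large can fail: glGetError() then reports GL_OUT_OF_MEMORY,
// in which case the buffer's data store was not created.
if (glGetError() == GL_OUT_OF_MEMORY)
    fprintf(stderr, "glBufferData could not allocate the data store\n");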