glEnableVertexArrayAttrib throws invalid size - C++

I have two computers, both with modern graphics cards that support OpenGL 4.6. I have this code:
GLuint VAO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
//Setting up the VAO Attribute format
glVertexArrayAttribFormat(VAO, 0, 3 * sizeof(float), GL_FLOAT, GL_FALSE, 0);
glEnableVertexArrayAttrib(VAO, 0);
For some reason, this code works fine on one of my computers (the triangle is drawn correctly), although OpenGL reports an error; the program keeps running normally without crashing.
On my other computer (running the exact same code) I get the error when glEnableVertexArrayAttrib(VAO, 0) is called, and my program crashes.
I have no idea why either error occurs; the VAO is generated correctly on both computers. I checked that the OpenGL context exists before calling this code, and it is fine.
If needed, I can share the GLFW and GLEW init code and my shader creation code (neither raises any error, and both have been used in other test code without any problem).

Your size is indeed incorrect; it is not in bytes. The size denotes the number of components per vertex attribute, e.g. a vec3 corresponds to 3:
glVertexArrayAttribFormat(VAO, 0, 3, GL_FLOAT, GL_FALSE, 0);
As for why it works on one PC, well, get used to it. OpenGL implementations in GPU drivers are usually quite magical and forgiving, until they are not.
The actual sizes in bytes come into play in the relativeoffset argument and in glVertexArrayVertexBuffer, where you also specify the offset and stride.
Consider using glCreateVertexArrays with the direct state access (DSA) functions. The reason is that glGenVertexArrays only reserves a VAO name but does not create the object (on some drivers it does). Only the first glBindVertexArray(VAO) actually creates the object. This is dangerous, since any DSA calls made before that bind might fail, and there is no reason to bind at all if you use DSA in the first place. I suspect you are doing it already, but other people who see your code and the "wasteful" bind might not know this.
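As a minimal hedged sketch of that DSA path (the buffer name vbo, the vertices container, and the draw call are placeholders for whatever your actual data uses):
// DSA sketch (OpenGL 4.5+), assuming 'vertices' is a std::vector<float> of packed xyz positions
GLuint vbo, vao;
glCreateBuffers(1, &vbo);
glNamedBufferData(vbo, vertices.size() * sizeof(float), vertices.data(), GL_STATIC_DRAW);
glCreateVertexArrays(1, &vao); // creates the object immediately, no bind needed
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 3 * sizeof(float)); // binding index 0, offset 0, stride in bytes
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);  // size = component count, not bytes
glVertexArrayAttribBinding(vao, 0, 0); // attribute 0 reads from binding index 0
glEnableVertexArrayAttrib(vao, 0);
// later, for drawing: glBindVertexArray(vao); glDrawArrays(GL_TRIANGLES, 0, vertexCount);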

Related

Is glDrawElements supposed to be working in Emscripten's current release? (v1.37.1)

Is glDrawElements() supposed to be working in Emscripten's current release (v1.37.1)? Because no matter what I do, calling glDrawElements() gives me error 1282 and, of course, nothing is rendered in the browser.
Important: the program runs perfectly after compiling with VS for PC, even with the shaders written for WebGL. Everything works as expected and no errors are produced. But on the web: error 1282.
Main loop:
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VaoId);
glGetError(); // Clear any previous errors
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
int error = glGetError(); if (error != 0) printf("Error: %i\n", error);
glBindVertexArray(0);
glfwPollEvents();
glfwSwapBuffers(m_Instance);
I'm only trying to render a quad: one VBO in a VAO, with indices and positions both stored in that one VBO, indices first, starting at offset 0. The vertex attribute pointers are set correctly. The shaders compile for the web browser without errors. Literally the only time glGetError() produces an error code is straight after the glDrawElements() call.
Is this an emscripten bug, or a WebGL bug?
[EDIT]
Compiling using:
em++ -std=c++11 -s USE_GLFW=3 -s FULL_ES3=1 -s ALLOW_MEMORY_GROWTH=1 --emrun main.cpp -o t.html
Is this an emscripten bug, or a WebGL bug?
It's probably a bug in your code.
I find NVIDIA's drivers to be more forgiving than AMD's drivers, leading them to execute code which the latter would violently reject. It may be that something similar is happening here: your code works fine as a native application, but in a browser environment the bug becomes a critical issue. This could be explained, for example, by Chrome's use of ANGLE, which implements OpenGL on top of Direct3D (on Windows) and can therefore behave differently from your native graphics driver. Of course, this is just speculation; nevertheless, it's unlikely that a function as essential as glDrawElements would be broken.
Error code 1282 is GL_INVALID_OPERATION, which for glDrawElements "is [in this case most likely] generated if a non-zero buffer object name is bound to an enabled array or the element array and the buffer object's data store is currently mapped" (source).
Alternatively, it is possible that your shader might be causing the error, in which case, it would be useful if you shared its source with us.
For anyone who might be having issues with the glDrawElements() call specifically on the Web:
My problem turned out to be the fact that I was storing the indices in one buffer together with all the vertex attributes. This works fine on PC, but for the web you need to create two separate buffers: one for the indices, and another for all the positions/UVs/normals etc., and then set the vertex attribute pointers appropriately on that VBO. In the browser, a single buffer cannot be bound both as a vertex attribute array and as an element array buffer, which desktop OpenGL will let you do with no errors or warnings.
Make sure to bind both the VBO and the IBO to the VAO during initialization. Then just rebind the VAO, draw, unbind the VAO, and go render the next object. Job done.
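For reference, a hedged sketch of that two-buffer setup (the buffer names and the vertices/indices containers are placeholders, not the asker's actual code):
// Sketch: separate vertex and index buffers recorded into one VAO
GLuint vao, vbo, ibo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(float), vertices.data(), GL_STATIC_DRAW);
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo); // the element array binding is stored in the VAO
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLushort), indices.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0); // positions only, tightly packed
glBindVertexArray(0);
// per frame: glBindVertexArray(vao); glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0); glBindVertexArray(0);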

Transform feedback without a framebuffer?

I'm interested in using a vertex shader to process a buffer without producing any rendered output. Here's the relevant snippet:
glUseProgram(program);
GLuint tfOutputBuffer;
glGenBuffers(1, &tfOutputBuffer);
glBindBuffer(GL_ARRAY_BUFFER, tfOutputBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(double)*4*3, NULL, GL_STATIC_READ);
glEnable(GL_RASTERIZER_DISCARD_EXT);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfOutputBuffer);
glBeginTransformFeedbackEXT(GL_TRIANGLES);
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 4, GL_FLOAT, GL_FALSE, sizeof(double)*4, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementBuffer);
glDrawElements(GL_TRIANGLES, 1, GL_UNSIGNED_INT, 0);
This works fine up until the glDrawElements() call, which results in GL_INVALID_FRAMEBUFFER_OPERATION. And glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT); returns GL_FRAMEBUFFER_UNDEFINED.
I presume this is because my GL context does not have a default framebuffer, and I have not bound another FBO. But, since I don't care about the rendered output and I've enabled GL_RASTERIZER_DISCARD_EXT, I thought a framebuffer shouldn't be necessary.
So, is there a way to use transform feedback without a framebuffer, or do I need to generate and bind a framebuffer even though I don't care about its contents?
This is actually perfectly valid behavior, as-per the specification.
OpenGL 4.4 Core Specification - 9.4.4 Effects of Framebuffer Completeness on Framebuffer Operations
A GL_INVALID_FRAMEBUFFER_OPERATION error is generated by attempts to render to or read from a framebuffer which is not framebuffer complete. This error is generated regardless of whether fragments are actually read from or written to the framebuffer. For example, it is generated when a rendering command is called and the framebuffer is incomplete, even if GL_RASTERIZER_DISCARD is enabled.
What you need to do to work around this is create an FBO with a 1-pixel color attachment and bind that. You must have a complete FBO bound or you get GL_INVALID_FRAMEBUFFER_OPERATION, and one of the rules for completeness is that at least one complete image is attached.
OpenGL 4.3 actually allows you to skirt around this issue by defining an FBO with no attachments of any sort (see: GL_ARB_framebuffer_no_attachments). However, because you are using the EXT form of FBOs and Transform Feedback, I doubt you have a 4.3 implementation.
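For illustration, a hedged sketch of the 1x1 workaround FBO using a renderbuffer as the dummy color attachment (one of several ways to make the FBO complete; all names here are placeholders):
// Sketch: minimal complete FBO so rendering commands stop raising
// GL_INVALID_FRAMEBUFFER_OPERATION; its pixels are never actually read.
GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 1, 1); // 1x1 dummy color buffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
// check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE before drawing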

Can OpenGL be used to draw real valued triangles into buffer?

I need to implement an image reconstruction which involves drawing triangles into a buffer representing the pixels of an image. Each triangle is assigned some floating-point value to fill it with. If triangles are drawn such that they overlap, the values of the overlapping regions must be added together.
Is it possible to accomplish this with OpenGL? I would like to take advantage of the fact that rasterizing triangles is a basic graphics task that can be accelerated on the graphics card. I already have a CPU-only implementation of this algorithm, but it is not fast enough for my purposes due to the huge number of triangles that need to be drawn.
Specifically my questions are:
Can I draw triangles with a real value using OpenGL? (Or can I come up with a hack using color etc.?)
Can OpenGL add the values where triangles overlap? (Once again I could deal with a hack, like color mixing)
Can I recover the real values for the pixels as an array of floats or similar to be further processed?
Do I have misconceptions about the idea that drawing in OpenGL -> using GPU to draw -> likely faster execution?
Additionally, I would like to run this code on a virtual machine so getting acceleration to work with OpenGL is more feasible than rolling my own implementation in something like Cuda as far as I understand. Is this true?
EDIT: Is an accumulation buffer an option here?
If 32-bit floats are sufficient then it looks like the answer is yes: http://www.opengl.org/wiki/Image_Format#Required_formats
Even under the old fixed pipeline you could use a blending mode with the function GL_FUNC_ADD, though I'm sure fragment shaders can do it more easily now.
glReadPixels() will get you the data back out of the buffer after drawing.
There are software implementations of OpenGL, but you get to choose when you set up the context. Using the GPU should be much faster than the CPU.
No idea. I've never used OpenGL or CUDA on a VM. I've never used CUDA at all.
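To make the blending and read-back points concrete, here is a hedged sketch; it assumes a single-channel float color buffer of size width x height is already set up as the render target:
// Sketch: additive blending into a float buffer, then reading it back (requires <vector>)
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD); // the default equation, shown for clarity
glBlendFunc(GL_ONE, GL_ONE);  // overlapping fragments simply add up
// ...draw the triangles here...
std::vector<float> pixels(width * height);
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, pixels.data()); // single-channel float read-back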
I guess giving pieces of code as an answer wouldn't be entirely appropriate here, as your question is extremely broad, so I'll simply answer your questions individually with a few hints.
Yes, drawing triangles with OpenGL is a piece of cake. You provide three vertices per triangle and, with the proper shaders, you can draw triangles filled or as edges only, whatever you want. You seem to require a large range of values (bigger than [0, 255]), since a lot of triangles may overlap and the value of each may be bigger than one. This is not a problem: you can render into a one-channel framebuffer with 32-bit float precision. In your case a single channel should suffice.
Yes, blending has existed in OpenGL since forever, so whatever version of OpenGL you choose, there will be a way to add up the values of overlapping triangles.
Yes, depending on how you implement it you may have to use glGetBufferSubData(), glReadPixels(), or something else. However, depending on the size of the buffer you're filling, downloading the full buffer may take a while (2000x1000 pixels of one 32-bit channel would be around 4-5 ms). It may be more efficient to do all your processing on the GPU and extract only a few valuable results, instead of continuing the processing on the CPU.
Execution will undoubtedly be faster. However, downloading data from GPU memory is often not optimized (upload is), so the time you win on processing may be lost on the download. I've never worked with OpenGL on a VM, so the additional loss of performance there is unknown to me.
//Struct definition
struct Triangle {
float position[2];
float intensity;
};
//Init
glGenBuffers(1, &m_buffer);
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glBufferData(GL_ARRAY_BUFFER,
triangleVector.size() * sizeof(Triangle),
triangleVector.data(),
GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenVertexArrays(1, &m_vao);
glBindVertexArray(m_vao);
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
//Describe and enable the two attributes stored in each Triangle struct
glEnableVertexAttribArray(POSITION);
glVertexAttribPointer(
POSITION,
2,
GL_FLOAT,
GL_FALSE,
sizeof(Triangle),
reinterpret_cast<void*>(0));
glEnableVertexAttribArray(INTENSITY);
glVertexAttribPointer(
INTENSITY,
1,
GL_FLOAT,
GL_FALSE,
sizeof(Triangle),
reinterpret_cast<void*>(sizeof(float) * 2));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_R32F,
width,
height,
0,
GL_RED,
GL_FLOAT,
NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glGenFramebuffers(1, &m_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glFramebufferTexture(
GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
m_texture,
0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
After that you just need to write your render function; a simple glDraw*() call should be enough (a sketch follows below). Just remember to bind your buffers correctly and to enable blending with the proper blend function. You might also want to disable anti-aliasing for your case. At first glance I'd say you need an orthographic projection, but I don't have all the elements of your problem, so that's up to you.
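As a hedged sketch (not a drop-in implementation), such a render function could look like this; it reuses m_frameBuffer and m_vao from the code above, while m_program, width and height are assumptions, with a fragment shader assumed to write the intensity into the red channel:
//Render sketch: accumulate triangle intensities into the float texture
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glViewport(0, 0, width, height);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_MULTISAMPLE); //no anti-aliasing, we want plain coverage
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE); //overlapping triangles add their values
glUseProgram(m_program); //assumed shader program
glBindVertexArray(m_vao);
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)triangleVector.size()); //assuming one Triangle struct per vertex
glBindVertexArray(0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);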
Long story short, if you have never worked with OpenGL, the code above will only make sense after you read some documentation/tutorials on OpenGL/GLSL.

How does glDrawArrays know what to draw?

I am following some beginner OpenGL tutorials, and am a bit confused about this snippet of code:
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject); //Bind GL_ARRAY_BUFFER to our handle
glEnableVertexAttribArray(0); //?
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); //Information about the array: 3 components per vertex, of type float, don't normalize, no stride, and an offset of 0. I don't know what the first parameter does, however, and how does this function know which array to deal with (does it always assume we're talking about GL_ARRAY_BUFFER?)
glDrawArrays(GL_POINTS, 0, 1); //Draw the vertices. Once again, how does this know which vertices to draw? (Does it always use the ones in GL_ARRAY_BUFFER?)
glDisableVertexAttribArray(0); //?
glBindBuffer(GL_ARRAY_BUFFER, 0); //Unbind
I don't understand how glDrawArrays knows which vertices to draw, and what all the stuff to do with glEnableVertexAttribArray is. Could someone shed some light on the situation?
The call to glBindBuffer tells OpenGL to use vertexBufferObject whenever it needs the GL_ARRAY_BUFFER.
glEnableVertexAttribArray means that you want OpenGL to use vertex attribute arrays; without this call the data you supplied will be ignored.
glVertexAttribPointer, as you said, tells OpenGL what to do with the supplied array data, since OpenGL doesn't inherently know what format that data will be in.
glDrawArrays uses all of the above data to draw points.
Remember that OpenGL is a big state machine. Most calls to OpenGL functions modify a global state that you can't directly access. That's why the code ends with glDisableVertexAttribArray and glBindBuffer(..., 0): you have to put that global state back when you're done using it.
glDrawArrays takes its data from GL_ARRAY_BUFFER.
The data are 'mapped' according to your setup in glVertexAttribPointer, which tells OpenGL what the layout of your vertex is.
In your example you have one vertex attribute (glEnableVertexAttribArray) at position 0 (you can normally have 16 vertex attributes, each up to 4 floats).
Then you tell it that each attribute value is obtained by reading 3 GL_FLOATs from the buffer, starting at offset 0.
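For context, here is a hedged sketch of how that attribute index lines up with the vertex shader (the shader input and the name "position" are assumptions, not part of the tutorial):
// GLSL vertex shader input matching attribute index 0:
//   layout(location = 0) in vec3 position;
// On the C++ side the location can also be looked up by name instead of hard-coding 0:
GLint posLoc = glGetAttribLocation(shaderProgram, "position"); // assumed program/attribute names
if (posLoc >= 0) {
    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
}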
Complementary to the other answers, here are some pointers to OpenGL documentation. According to Wikipedia [1], development of OpenGL ceased in 2016 in favor of the successor API Vulkan [2,3]. The latest OpenGL specification is 4.6, from 2017, but it has only a few additions over 3.2 [1].
The code snippet in the original question does not require the full OpenGL API, but only a subset that is codified as OpenGL ES (originally intended for embedded systems) [4]. For instance, the widely used GUI development framework Qt uses OpenGL ES 3.x [5].
OpenGL is maintained by the Khronos consortium [1,6]. The reference for the latest OpenGL release is at [7], but it has some inconsistencies (4.6 links to 4.5 pages). If in doubt, use the 3.2 reference at [8].
A collection of tutorials is at [9].
[1] https://en.wikipedia.org/wiki/OpenGL
[2] https://en.wikipedia.org/wiki/Vulkan
[3] https://vulkan.org
[4] https://en.wikipedia.org/wiki/OpenGL_ES
[5] see links in function references such as https://doc.qt.io/qt-6/qopenglfunctions.html#glVertexAttribPointer
[6] https://registry.khronos.org
[7] https://www.khronos.org/opengl
[8] https://registry.khronos.org/OpenGL-Refpages/es3
[9] http://www.opengl-tutorial.org

glDrawElements emits GL_INVALID_OPERATION when using AMD driver on Linux

I have a bit of OpenGL 2.1 rendering code that works great when using an nVidia card/driver or the open-source AMD driver, but doesn't work when using the official fglrx driver. It just displays a grey screen (the glClear colour) and doesn't draw anything.
gDEBugger shows that glDrawElements is giving the error GL_INVALID_OPERATION. According to this page (What can cause glDrawArrays to generate a GL_INVALID_OPERATION error?) there are a lot of half-documented possible causes of this error. The shader compiles fine, the buffer sizes should be good too, and I am not using geometry shaders (obviously). It's just a simple draw call for a cube, with only one vertex attribute. The code is below.
glUseProgram(r->program->getProgram());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, r->texture->glID );
glUniform1i(r->program->getUniform("texture").location, GL_TEXTURE0);
glUniform4f(r->program->getUniform("colour").location, r->colour.x, r->colour.y, r->colour.z, r->colour.w);
glBindBuffer(GL_ARRAY_BUFFER, r->vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, r->ibo);
glVertexAttribPointer(
r->program->getAttribute("position").location, // attribute
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
sizeof(GLfloat)*3, // stride
reinterpret_cast<void*>(0) // array buffer offset
);
glEnableVertexAttribArray(r->program->getAttribute("position").location);
glUniformMatrix4fv(r->program->getUniform("modelToCameraMatrix").location, 1, GL_FALSE, glm::value_ptr(modelToCameraMatrix));
glDrawElements(
r->mesh->mode, // mode
r->mesh->nrOfInds, // count
GL_UNSIGNED_SHORT, // type
reinterpret_cast<void*>(0) // element array buffer offset
);
I have no idea what is going on or what might be causing this error. If anyone has any pointers as to what might be causing this to happen with the fglrx driver and not with any other driver, I would be glad to hear it. If you need more code, I'll happily provide it of course.
Let's dissect this:
glUniform1i(r->program->getUniform("texture").location, GL_TEXTURE0 + i);
The OpenGL 2.1 Spec states in section 2.15:
Setting a sampler’s value to i selects texture image unit number i. The values of i range from zero to the implementation-dependent maximum supported number of texture image units.
Setting the value of a sampler is only legal with calls to glUniform1i{v}.
If an implementation takes the liberty to allow statements like the above, it has to do some subtraction under the covers - i.e. it has to map (GL_TEXTURE0 + i) to some value in [0, MAX_UNITS - 1]. (Note: MAX_UNITS is just symbolic here, it's not an actual GL constant!).
This is simply non-conformant behavior, since GL_TEXTURE0 is a very large value (0x84C0) that is far out of range of the number of texture units available on any ancient or modern GPU.
BTW, GL_TEXTURE0 is not a prefix, it's a constant.
The GL 4.4 core specification is much more clear about this in section 7.10:
An INVALID_VALUE error is generated if Uniform1i{v} is used to set a sampler to a value less than zero or greater than or equal to the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS.
Regarding glActiveTexture(): this function takes a value in [GL_TEXTURE0, GL_TEXTURE0 + MAX_UNITS - 1]. It probably doesn't have to be like this, but that's the way it is defined.
It's no secret among the OpenGL community, that NVIDIA is often more lenient and their drivers sometimes take stuff that simply doesn't work on other hardware and drivers because it's simply not supposed to work.
As a general rule: never rely on implementation-specific behaviour. Rely only on the specification. If conformant and correct application code does not work with a specific implementation, it's a bug in the implementation.
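To make the conformant pattern explicit, here is a minimal hedged sketch (samplerLocation and textureId are placeholders for the values the question obtains from its program and texture objects):
// Sketch: bind a texture to image unit 'unit' and point the sampler uniform at it
const GLint unit = 0;                    // plain texture image unit index, NOT GL_TEXTURE0
glActiveTexture(GL_TEXTURE0 + unit);     // glActiveTexture expects the GL_TEXTURE0-based enum
glBindTexture(GL_TEXTURE_2D, textureId); // placeholder texture name
glUniform1i(samplerLocation, unit);      // glUniform1i expects the plain unit index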
I think I have found out what is wrong with the above code. The faulty line is
glUniform1i(r->program->getUniform("texture").location, GL_TEXTURE0 + i);
which should be
glUniform1i(r->program->getUniform("texture").location, i);
I'm not really able to find out exactly why it should be like this (glActiveTexture does need the GL_TEXTURE0 offset added), or why only the AMD driver craps out over it, so if anyone can elaborate on that, I would love to hear it. :)