I'm having an issue loading/assigning interleaved vertex data in OpenGL.
I keep getting an INVALID_OPERATION when setting the second attribute.
EDIT: It turns out this only happens on Mac; on Windows I don't get an INVALID_OPERATION error. I have updated the code below to show what it looks like now. It still errors out on Mac.
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.VertexAttribPointer(shader.GetAttribLocation("position"), 3, VertexAttribPointerType.Float, false, _vertexStride, 0);
REngine.CheckGLError();
GL.VertexAttribPointer(shader.GetAttribLocation("normal"), 3, VertexAttribPointerType.Float, false, _vertexStride, 12);
REngine.CheckGLError();
GL.VertexAttribPointer(shader.GetAttribLocation("texcoord"), 2, VertexAttribPointerType.Float, false, _vertexStride, 24);
REngine.CheckGLError();
GL.EnableVertexAttribArray(shader.GetAttribLocation("position"));
REngine.CheckGLError();
GL.EnableVertexAttribArray(shader.GetAttribLocation("normal"));
REngine.CheckGLError();
GL.EnableVertexAttribArray(shader.GetAttribLocation("texcoord"));
REngine.CheckGLError();
Any idea why? Others seem to do it and it works great, but I can't seem to get it to work.
Here is my GLSL for this:
layout(location=0) in vec3 position;
layout(location=1) in vec3 normal;
layout(location=2) in vec2 texcoord;
out vec4 out_position;
out vec4 out_normal;
out vec2 out_texcoord;
void main() {
out_normal = vec4(normal,1.0f);
out_position = vec4(position,1.0f);
out_texcoord = texcoord;
}
and the frag:
out vec4 color;
void main()
{
color = vec4(1.0f,1.0f,1.0f,1.0f);
}
EDIT
It turns out I had stale glErrors in the queue from earlier in the pipeline. Checking further back, I had a stray call to glEnableClientState, which isn't supported on Mac with the 4.2 core context. I removed it, as it was no longer necessary with a fully shader-based approach. This fixed the error and my glorious white mesh was displayed.
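For anyone else caught out by this: glGetError only reports errors that are still queued, so errors from earlier calls can be misattributed to later ones. A minimal sketch (plain C API) of flushing the queue before the calls you actually want to check:
/* Drain any errors left over from earlier GL calls so that the next
   glGetError() reflects only the calls made after this point. */
while (glGetError() != GL_NO_ERROR) {
    /* discard stale error */
}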
Only active attributes have a location. Your normal attribute is not active, because it is not used (the fact that you forward it to out_normal is irrelevant, as out_normal itself is not used). glGetAttribLocation will return -1 for it, but the attribute index for glVertexAttribPointer is a GLuint, and (GLuint)-1 is far outside the range of allowed attribute indices. You should get the same error for texcoord too.
Please also note that using sizeof(float) as the size parameter for glVertexAttribPointer is wrong too. That parameter is the number of components in the attribute vector (1 for a scalar, or 2, 3, or 4), not a number of bytes.
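To illustrate the point about inactive attributes, here is a hedged sketch in the plain C API (program and stride are placeholders) that only sets up the pointer when the queried location is valid:
GLint loc = glGetAttribLocation(program, "normal");
if (loc >= 0) {
    /* -1 means the attribute is not active in the linked program;
       casting it to GLuint would produce an invalid attribute index. */
    glVertexAttribPointer((GLuint)loc, 3, GL_FLOAT, GL_FALSE, stride, (const void *)12);
    glEnableVertexAttribArray((GLuint)loc);
}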
I've found a handful of similar problems posted around the web, and it would appear that I'm already doing what the solutions suggest.
To summarize the problem: despite the compute shader running and no errors being reported, no change is made to the texture it is supposedly writing to.
The compute shader code. It was intended to do something else but for the sake of troubleshooting it simply fills the output texture with ones.
#version 430 core
layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
layout(r32f) uniform readonly image3D inputDensityField;
layout(r32f) uniform writeonly image3D outputDensityField;
uniform vec4 paintColor;
uniform vec3 paintPoint;
uniform float paintRadius;
uniform float paintDensity;
void main()
{
ivec3 cellIndex = ivec3(gl_GlobalInvocationID);
imageStore(outputDensityField, cellIndex, vec4(1.0, 1.0, 1.0, 1.0));
}
I'm binding the textures to the compute shader like so.
s32 uniformID = glGetUniformLocation(programID, name);
u32 bindIndex = 0; // 1 for the other texture.
glUseProgram(programID);
glUniform1i(uniformID, bindIndex);
glUseProgram(0);
The dispatch looks something like this.
glUseProgram(programID);
glBindImageTexture(0, inputTexID, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
glDispatchCompute(groupDim.x, groupDim.y, groupDim.z);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
glUseProgram(0);
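For reference, a hedged sketch of how groupDim would typically relate to the texture size, given the 4x4x4 local size declared in the shader (texWidth, texHeight, and texDepth are placeholders):
/* One work group covers a 4x4x4 block of texels, so round up. */
GLuint groupsX = (texWidth  + 3) / 4;
GLuint groupsY = (texHeight + 3) / 4;
GLuint groupsZ = (texDepth  + 3) / 4;
glDispatchCompute(groupsX, groupsY, groupsZ);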
Inspecting with RenderDoc does not reveal any errors. The textures seem to have been bound correctly, although they are both displayed in RenderDoc as outputs, which I assume is an error on RenderDoc's part?
Whichever texture was the output of the last glDispatchCompute will later be sampled in a fragment shader.
(Screenshot: order of operations)
(Screenshot: listed images)
The red squares are test fills made with glTexSubImage3D. Again for troubleshooting purposes.
I've made sure that I'm passing the correct texture format.
(Screenshot: example in RenderDoc)
Additionally, I'm using glDebugMessageCallback, which usually catches all errors, so I assume there's no problem with the creation code.
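For completeness, a minimal sketch of that debug-callback setup, assuming a 4.3+ (or KHR_debug) context; the callback name is a placeholder:
void APIENTRY debugCallback(GLenum source, GLenum type, GLuint id, GLenum severity,
                            GLsizei length, const GLchar *message, const void *userParam)
{
    fprintf(stderr, "GL debug: %s\n", message);
}
/* ... during initialization ... */
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); /* report messages on the offending call */
glDebugMessageCallback(debugCallback, NULL);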
Apologies if the information provided is a bit incoherent. Showing everything would make a very long post and I'm unsure which parts are the most relevant to show.
I've found a solution! Apparently, in the case of a 3D texture, you need to pass GL_TRUE for layered in glBindImageTexture.
https://www.khronos.org/opengl/wiki/Image_Load_Store
Image bindings can be layered or non-layered, which is determined by layered. If layered is GL_TRUE, then texture must be an Array Texture (of some type), a Cubemap Texture, or a 3D Texture. If a layered image is being bound, then the entire mipmap level specified by level is bound.
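Applied to the dispatch code above, the fix looks like this (a sketch using the question's texture handles):
/* For a 3D texture the whole mipmap level must be bound as a layered image,
   so pass GL_TRUE for the 'layered' parameter. */
glBindImageTexture(0, inputTexID,  0, GL_TRUE, 0, GL_READ_ONLY,  GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R32F);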
I tried to draw a bunch of lines (a very basic and simple thing) using a VBO as follows:
struct VertexColor
{
public Vector2d vertex;
public uint color;
//...
}
//...
Draw(){
GL.EnableClientState(ArrayCap.ColorArray);
GL.EnableClientState(ArrayCap.VertexArray);
GL.BindBuffer(BufferTarget.ArrayBuffer, lineVbo.VboID);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, lineVbo.EboID);
GL.VertexPointer(2, VertexPointerType.Double, BlittableValueType.StrideOf(lineList.ToArray()), 0);
GL.ColorPointer(4, ColorPointerType.UnsignedByte, BlittableValueType.StrideOf(lineList.ToArray()), 16);
GL.DrawElements(PrimitiveType.Lines, lineVbo.NumElements, DrawElementsType.UnsignedInt, 0);
}
It worked well on my PC, but on another one DrawElements threw a "memory access violation", which was likely because of the use of glEnableClientState (according to similar questions on SO).
I replaced it with a new Draw (via a shader, though I don't actually need any shader in my program):
Shader.Bind(shader); //Shader is a helper class that works correctly
GL.BindBuffer(BufferTarget.ArrayBuffer, lineVbo.VboID);
GL.VertexAttribPointer(0, 2, VertexAttribPointerType.Double, false, BlittableValueType.StrideOf(lineList.ToArray()), 0);
GL.EnableVertexAttribArray(0);
GL.VertexAttribPointer(1, 4, VertexAttribPointerType.UnsignedByte, false, BlittableValueType.StrideOf(lineList.ToArray()), 16);
GL.EnableVertexAttribArray(1);
GL.BindBuffer(BufferTarget.ArrayBuffer, lineVbo.EboID);
GL.DrawElements(PrimitiveType.LineStrip, lineVbo.NumElements, DrawElementsType.UnsignedInt, (IntPtr)0);
The fragment shader gives the error "ERROR: 29718:48252: '' : storage qualifier not valid with layout qualifier id":
#version 330 core
layout(location = 1) in vec4 fragmentColor;
out vec4 color;
void main(){
color = fragmentColor;
}
How can I overcome this error or replace EnableClientState in another way?
You are using OpenGL version 3.3, which uses GLSL v3.30. That's what you said in your fragment shader with #version 330.
layout(location) for the interface between shader stages was not added until GLSL version 4.10, so you can't use it. Version 3.30 added layout(location) for the outputs of the fragment shader and the inputs of the vertex shader, that is, the top and bottom of the shader pipeline.
But not for the interfaces between shader stages. So remove the layout(location) designation.
I'm currently working on AR with the Oculus Rift HMD.
I'm not an OpenGL pro, and I'm sure that's the source of my problem.
I get this error:
Unhandled exception at 0x064DBD07 (nvoglv32.dll) in THING.exe: 0xC0000005: Access violation reading location 0x00000000.
It happens during drawing, in quadGeometry_supcam->draw(), which is called from renderCamera(eye):
glDrawElements(elementType, elements * verticesPerElement,
GL_UNSIGNED_INT, (void*) 0); // The debugger stops at this line
Here's the drawing code
frameBuffer.activate();
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl::Stacks::with_push(pr, mv, [&]{
mv.preMultiply(eyeArgs.modelviewOffset);
pr.preMultiply(eyeArgs.projectionOffset);
//renderCamera(eye); // If I uncomment this line, it crashes
renderScene(eye);
//renderCamera(eye); // If I uncomment this line instead, it works, but it's not what I want
});
frameBuffer.deactivate();
glDisable(GL_DEPTH_TEST);
viewport(eye);
distortProgram->use();
glActiveTexture(GL_TEXTURE1);
eyeArgs.distortionTexture->bind();
glActiveTexture(GL_TEXTURE0);
frameBuffer.color->bind();
quadGeometry->bindVertexArray();
quadGeometry->draw();
gl::VertexArray::unbind();
gl::Program::clear();
//////////////////////////////////////////////////////////////
void renderScene(StereoEye eye) {
GlUtils::renderSkybox(Resource::IMAGES_SKY_CITY_XNEG_PNG);
GlUtils::renderFloorGrid(player);
gl::MatrixStack & mv = gl::Stacks::modelview();
gl::Stacks::with_push(mv, [&]{
mv.translate(glm::vec3(0, eyeHeight, 0)).scale(ipd);
GlUtils::drawColorCube(true); // renderCamera crashes if called before this, but works if called after
});
}
void renderCamera(StereoEye eye) {
gl::ProgramPtr program;
program = GlUtils::getProgram(
Resource::SHADERS_TEXTURED_VS,
Resource::SHADERS_TEXTURED_FS);
program->use();
textures[eye]->bind();
quadGeometry_supcam->bindVertexArray();
quadGeometry_supcam->draw();
}
If I call renderCamera before GlUtils::drawColorCube(true), it crashes, but if I call it after, it works.
But I need to draw the camera before the rest.
I'm not going to go into detail on drawColorCube because it uses another shader.
I suppose the problem comes from something missing for the shader program, so here are the vertex and fragment shaders.
Vertex:
uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);
layout(location = 0) in vec4 Position;
layout(location = 1) in vec2 TexCoord0;
out vec2 vTexCoord;
void main() {
gl_Position = Projection * ModelView * Position;
vTexCoord = TexCoord0;
}
Fragment:
uniform sampler2D sampler;
in vec2 vTexCoord;
out vec4 vFragColor;
void main() {
vFragColor = texture(sampler, vTexCoord);
}
(I want to draw the scene on top of the camera image.)
Any idea?
You aren't cleaning up the GL state after you call quadGeometry_supcam->draw(). Try adding these lines after that call:
gl::VertexArray::unbind();
gl::Program::clear();
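Assuming those helpers simply wrap the raw GL calls, the equivalent cleanup in the plain C API would be:
glBindVertexArray(0); /* unbind the quad's vertex array object */
glUseProgram(0);      /* unbind the shader program */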
(Edited)
I thought the problem was NULL being passed as the indices parameter to glDrawElements() here:
glDrawElements(elementType, elements * verticesPerElement,
GL_UNSIGNED_INT, (void*) 0);
The last parameter, indices, used to be mandatory, and passing in 0 or NULL wouldn't work, because GL would then try to read the indices from that address and produce exactly that error: Access violation reading location 0x00000000.
But as Jherico pointed out, it is possible to have a buffer with indices bound to GL_ELEMENT_ARRAY_BUFFER, in which case the indices parameter is an offset into that buffer, not a pointer. Passing 0 then simply means 'no offset'. The Oculus Rift SDK apparently uses this method.
The program crashing with that invalid access to address 0x0 at the glDrawElements() line therefore does not indicate that 0 should not be used, but rather that the GL_ELEMENT_ARRAY_BUFFER binding is not set up correctly, probably because of an improper cleanup, as Jherico pointed out in his better answer.
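A short sketch of that pattern in the plain C API (indexData and indexCount are hypothetical placeholders):
GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLuint), indexData, GL_STATIC_DRAW);
/* With an element array buffer bound, the last argument is a byte offset
   into that buffer, so 0 simply means "start at the first index". */
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (const void *)0);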
I'm having an issue where my GLSL 130 code won't run properly on my somewhat modern (ATI 5850) hardware, while the identical code runs perfectly fine on an older laptop with an NVIDIA card. This is the case no matter what OpenGL context I use. What happens is that the vectors in_position, in_colour and in_normal don't bind properly on the new hardware. It appears I am forced to a newer version of GLSL (330) on the newer hardware.
Here is the glsl code for the vertex shader. Its fairly simple and basic.
#version 130
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
uniform mat4 normalMatrix;
in vec4 in_position;
in vec4 in_colour;
in vec3 in_normal;
out vec4 pass_colour;
smooth out vec3 vNormal;
void main()
{
gl_Position = projectionMatrix * viewMatrix * modelMatrix * in_position;
vec4 vRes = normalMatrix*vec4(in_normal, 0.0);
vNormal = vRes.xyz;
pass_colour = in_colour;
}
What happens is data for:
in vec4 in_position;
in vec4 in_colour;
in vec3 in_normal;
doesn't bind, or doesn't bind fully. The values are oddly distorted. From my testing, everything else works properly. Changing the version to 330 and using the location keyword fixes the issue, but that also makes the code incompatible with older versions of OpenGL...
Here is a sample of the code I use to specify these locations.
for the program:
glBindAttribLocation(LD_standard_program, 0, "in_position");
glBindAttribLocation(LD_standard_program, 1, "in_colour");
glBindAttribLocation(LD_standard_program, 2, "in_normal");
and later for the data itself:
--- code to buffer vertex data
glEnableVertexAttribArray(0);
glVertexAttribPointer((GLuint) 0, 4, GL_FLOAT, GL_FALSE, 0, 0);
--- code to buffer colour data
glEnableVertexAttribArray(1);
glVertexAttribPointer((GLuint) 1, 4, GL_FLOAT, GL_FALSE, 0, 0);
--- code to buffer normal data
glEnableVertexAttribArray(2);
glVertexAttribPointer((GLuint) 2, 3, GL_FLOAT, GL_FALSE, 0, 0);
My question is: isn't OpenGL supposed to be backwards compatible? I'm starting to be afraid that I'll have to write separate shaders for every single version of OpenGL to make my program run on different hardware... Since binding these attributes is very basic functionality, I doubt it's a bug in the ATI implementation...
Are you calling glBindAttribLocation before glLinkProgram? Calling it afterwards has no effect, because vertex attribute indices are assigned only during glLinkProgram.
In GLSL 3.30+ there is a better way of specifying attribute indices directly in the GLSL code:
layout(location=0) in vec4 in_position;
layout(location=1) in vec4 in_colour;
layout(location=2) in vec3 in_normal;
Edit: oh, I missed the part where you said you had already tried the layout keyword.
I have the following vertex shader:
uniform mat4 uMVP;
attribute vec4 aPosition;
attribute vec4 aNormal;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
varying vec4 vPrimaryColor;
void main() {
gl_Position = uMVP * aPosition;
vPrimaryColor = vec4(1.0, 1.0, 1.0, 1.0);
vTexCoord = aTexCoord;
}
And the following fragment shader:
uniform sampler2D sTex;
varying vec2 vTexCoord;
varying vec4 vPrimaryColor;
void main() {
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
Note that I have a vTexCoord and a vPrimaryColor, neither of which is used in the fragment shader. (The reason they are there is that they eventually will be.)
I also set uMVP to the identity matrix for now, and draw using the following code:
// Load the matrix
glUniformMatrix4fv(gvMVPHandle, 1, false, &mvPMatrix.matrix[0][0]);
// Draw the square to be textured
glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glEnableVertexAttribArray(gvPositionHandle);
glVertexAttribPointer(gvTexCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glDrawArrays(GL_QUADS, 0, 4);
where the square is:
const GLfloat PlotWidget::gFullScreenQuad[] = { -1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f};
So, when I run this program, I get a black screen, which is not what you would expect. However, when I change this line in the shader:
vTexCoord = aTexCoord;
To
vTexCoord = vec2(1.0, 1.0);
It works perfectly. So I would assume the problem is with that line of code, but I can't think of anything in OpenGL that would cause this. Also, I'm using Qt for this project, which means this class is using QGLWidget. I've never had this issue with OpenGL ES 2.0.
Any suggestions?
I'm sorry for the vague title, but I don't even know what class of problem this would be.
Are you checking glGetShaderInfoLog and glGetProgramInfoLog during your shader compilation? If not, then I would recommend that as the first port of call.
The next thing to check would be the binding for the texture coordinates. Are the attributes being set up correctly? Is the data valid?
Finally, start stepping through your code with a liberal spraying of glGetError calls. It will almost certainly fail on glDrawArrays, which won't help you much, but that's usually when the desperation sets in for me!
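A minimal sketch of those checks in the plain C API (shader and program are placeholders for your handles):
GLint ok = GL_FALSE;
char log[1024];
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE) {
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    fprintf(stderr, "compile failed: %s\n", log);
}
glGetProgramiv(program, GL_LINK_STATUS, &ok);
if (ok != GL_TRUE) {
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    fprintf(stderr, "link failed: %s\n", log);
}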
OR
You could try gDEBugger. I use it mainly to look for bottlenecks and to make sure I'm releasing OpenGL resources properly, so I can't vouch for it as a debugger, but it's worth a shot.
Apparently you need to actually call glEnableVertexAttribArray for an attribute if it's getting passed into the fragment shader. I have no idea why, though. But changing the drawing code to this:
glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glEnableVertexAttribArray(gvPositionHandle);
glVertexAttribPointer(gvTexCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glEnableVertexAttribArray(gvTexCoordHandle);
glDrawArrays(GL_QUADS, 0, 4);
made it work.
Same problem, different cause.
For some devices the automatic variable linking in glLinkProgram does not work as specified.
Make sure things are done in the following order:
glCreateProgram
glCreateShader && glCompileShader for both shaders
glBindAttribLocation for all attributes
glLinkProgram
Step 3 can be repeated later at any time to rebind variables to different slots; however, the changes only become effective after another call to glLinkProgram.
Or, in short: whenever you call glBindAttribLocation, make sure a glLinkProgram call comes after it.
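A condensed sketch of that order in the plain C API (vertexSrc and fragmentSrc are placeholder source strings):
GLuint program = glCreateProgram();
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertexSrc, NULL);
glCompileShader(vs);
glAttachShader(program, vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragmentSrc, NULL);
glCompileShader(fs);
glAttachShader(program, fs);
/* Bind attribute indices BEFORE linking; bindings made after
   glLinkProgram only take effect on the next link. */
glBindAttribLocation(program, 0, "aPosition");
glBindAttribLocation(program, 1, "aTexCoord");
glLinkProgram(program);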