I was playing around with gl-rs, and in the original OpenGL tutorial they set the vertex attribute pointer and its offset with:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3* sizeof(float)));
With gl-rs I can't figure out how to set an offset like (void*)(3 * sizeof(float)). I can set (void*)0 with:
gl::VertexAttribPointer(
    0,
    3,
    gl::FLOAT,
    gl::FALSE,
    (6 * std::mem::size_of::<f32>()) as gl::types::GLint,
    std::ptr::null(), // offset
);
How do I set a different offset, like (void*)(3 * sizeof(float))? I am not familiar with C, so an explanation would be appreciated.
The last parameter (offset) has to be cast to *const gl::types::GLvoid:
gl::VertexAttribPointer(
    1,
    3,
    gl::FLOAT,
    gl::FALSE,
    (6 * std::mem::size_of::<f32>()) as gl::types::GLint,
    (3 * std::mem::size_of::<f32>()) as *const gl::types::GLvoid,
);
See also
glVertexAttribPointer
Rust and OpenGL from scratch - Vertex Attribute Format
This works but also results in a "Don't use reinterpret_cast (type.1)" warning:
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 8,
                      reinterpret_cast<void*>(sizeof(GLfloat) * 3));
This doesn't compile:
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 8,
                      static_cast<void*>(sizeof(GLfloat) * 3));
This doesn't compile:
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 8,
                      dynamic_cast<void*>(sizeof(GLfloat) * 3));
This obviously works, but it seems to be a big no-no in C++ ("Don't use C-style casts (type.4)"):
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 8,
                      (void*)(sizeof(GLfloat) * 3));
Should I just ignore the warning about the reinterpret_cast?
When you're doing low-level programming, you will occasionally have to do things that the C++ core guidelines say you shouldn't do. So just do them and either live with the "warning" or turn off that specific guideline (possibly on a per-file basis).
That having been said, the need for this particular bit of low-level fudgery is solely because of OpenGL's stupidity in its vertex specification API. That value ought to be an integer byte offset of some sort, not an offset cast to a pointer which the other side will cast back to an offset.
So it would be better to just avoid the bad API altogether. Use separate attribute format specification rather than the old-style glVertexAttribPointer. It's superior in pretty much every way. It will turn your code from:
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 8,
                      reinterpret_cast<void*>(sizeof(GLfloat) * 3));
to
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3); //Offset specified as integer.
glVertexAttribBinding(1, 0); //You seem to be using multiple attributes with a stride, so they should use the same buffer binding.
//Some later point when you're ready to provide a buffer.
glBindVertexBuffer(0, buffer_obj, 0, sizeof(GLfloat) * 8); //Stride goes into the buffer binding.
See, no casting at all.
Unfortunately, there are no alternatives for the glDrawElements family of functions, so you're still going to get this warning.
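If you do have to keep calling the pointer-taking functions (glVertexAttribPointer, glDrawElements), one common way to contain the damage is to confine the cast to a single helper. The sketch below is illustrative; the name buffer_offset is not part of any OpenGL header:

#include <cstddef>

// Illustrative helper (an assumption, not OpenGL API): performs the
// offset-to-pointer conversion in exactly one place, so only this line
// triggers the reinterpret_cast guideline.
inline const void* buffer_offset(std::size_t byte_offset)
{
    return reinterpret_cast<const void*>(byte_offset);
}

// Usage with the pointer-taking calls would then look like:
//   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 8,
//                         buffer_offset(sizeof(GLfloat) * 3));
//   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT,
//                  buffer_offset(firstIndex * sizeof(GLushort)));

That way the guideline warning, or its suppression, lives in one deliberate place rather than being scattered across every draw call.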
Load to VAO function:
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &modelVertexVBO);
glGenBuffers(1, &sphereTransformVBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, modelVertexVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * (sphereModel->numVertices * 3), &(sphereModel->vertices[0]), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), NULL);
glBindBuffer(GL_ARRAY_BUFFER, sphereTransformVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * (maxSphereStorage * 4 * 4), NULL, GL_STATIC_DRAW);
glVertexAttribPointer(0, 4 * 4, GL_FLOAT, GL_FALSE, 4 * 4 * sizeof(GLfloat), NULL);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribDivisor(sphereTransformVBO, 1);
glBindVertexArray(0);
Geometry drawing function:
glBindVertexArray(VAO);
glDrawArraysInstanced(sphereModel->mode, 0, sphereModel->numVertices, sphereCount);
When I try running this code, it crashes with the following error:
Exception thrown at 0x0000000068F4EDB4 (nvoglv64.dll) in Engine.exe: 0xC0000005: Access violation reading location 0x0000000000000000.
When I remove the second VBO it works for some reason.
glVertexAttribPointer(0, 4 * 4, GL_FLOAT, GL_FALSE, 4 * 4 * sizeof(GLfloat), NULL);
Your crash is the result of a simple copy-and-paste bug. You use attribute 0 here, which means you never called glVertexAttribPointer for attribute 1. Therefore, it uses the default attribute state, thus leading to a crash.
However, I strongly suspect that you are attempting to pass a 4x4 matrix as a single attribute. That won't work; OpenGL will give you a GL_INVALID_VALUE error if you try to set the attribute's size to be more than 4.
Matrices are treated as arrays of (column) vectors. And each vector takes up a separate attribute index. So if you want to pass a matrix, you will have to use 4 attribute indices (starting with the one provided by your shader). And each one will have to have the divisor set for it as well.
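For illustration, here is a rough sketch of what that could look like, assuming the shader declares the matrix as layout (location = 1) in mat4 instanceTransform; (that declaration, and the loop itself, are assumptions rather than code from the question):

glBindBuffer(GL_ARRAY_BUFFER, sphereTransformVBO);
for (int col = 0; col < 4; ++col)
{
    GLuint attrib = 1 + col;                         // the mat4 occupies locations 1..4, one per column
    glEnableVertexAttribArray(attrib);
    glVertexAttribPointer(attrib, 4, GL_FLOAT, GL_FALSE,
                          4 * 4 * sizeof(GLfloat),   // stride: one mat4 per instance
                          (void*)(col * 4 * sizeof(GLfloat)));   // byte offset of this column
    glVertexAttribDivisor(attrib, 1);                // advance once per instance, not per vertex
}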
Why are you remapping the vertex attribute pointer to your second VBO? Note the '0' index in both cases:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), NULL);
...
glVertexAttribPointer(0, 4 * 4, GL_FLOAT, GL_FALSE, 4 * 4 * sizeof(GLfloat), NULL);
Refer to https://www.opengl.org/sdk/docs/man/html/glVertexAttribPointer.xhtml for more info on glVertexAttribPointer().
I am guessing your draw call
glDrawArraysInstanced(sphereModel->mode, 0, sphereModel->numVertices, sphereCount);
is exceeding the size of your second VBO, hence the error. Is it intentional that you remap the vertex attribute pointer to the second VBO? Doing so overwrites the first mapping; in other words, your first VBO is not being used.
I suggest storing the index in a variable, e.g. GLuint attribute1;, and mapping accordingly to avoid such problems in the future. Using a raw number like 0 for the attribute index is an easy way to make mistakes like this.
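If the locations aren't fixed with layout(location = ...) in the shader, one way to follow that advice is to query them from the linked program. A sketch, where shaderProgram and the attribute names are placeholders rather than names from the question:

GLint vertexPosAttrib = glGetAttribLocation(shaderProgram, "vertexPosition");     // placeholder name
GLint transformAttrib = glGetAttribLocation(shaderProgram, "instanceTransform");  // placeholder name
// These values then replace the literal 0 and 1 in the glVertexAttribPointer,
// glEnableVertexAttribArray and glVertexAttribDivisor calls.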
I'm writing an OpenGL application where I have a GrassPatch class that represents patches of grass in the scene. I don't want to provide any unnecessary details, so GrassPatch.cpp looks roughly like this:
GrassPatch::GrassPatch(GLuint density)
{
    m_density = density;
    generateVertices();
}

void GrassPatch::generateVertices()
{
    const int quadVertexCount = 64;
    GLfloat bladeWidth, bladeHeight, r;
    GLfloat randomX, randomZ;
    m_vertices = new GLfloat[quadVertexCount * m_density];
    srand(time(NULL));
    for (int i = 0; i < m_density; i++)
    {
        // generate 64 float values and put them into their respective indices in m_vertices
    }

    glGenBuffers(1, &m_VBO);
    glGenVertexArrays(1, &m_VAO);
    glBindVertexArray(m_VAO);
    glBindBuffer(GL_ARRAY_BUFFER, m_VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * m_density * quadVertexCount, m_vertices, GL_STATIC_DRAW);

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 16 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 16 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 16 * sizeof(GLfloat), (GLvoid*)(5 * sizeof(GLfloat)));
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(3, 8, GL_FLOAT, GL_FALSE, 16 * sizeof(GLfloat), (GLvoid*)(8 * sizeof(GLfloat)));
    glEnableVertexAttribArray(3);

    glBindVertexArray(0);
}

void GrassPatch::draw()
{
    glBindVertexArray(m_VAO);
    glPatchParameteri(GL_PATCH_VERTICES, 4);
    glDrawArrays(GL_PATCHES, 0, 4 * m_density);
    glBindVertexArray(0);
}
In short, the vertex array object (VAO) for each grass patch is generated inside generateVertices(). My data is tightly packed, and the attributes for each vertex are at indices 0, 3, 5, 8, where each vertex is composed of 16 floats. Each grass blade consists of 4 vertices, hence quadVertexCount is set to 64. The vertex shader I use is pretty straightforward and looks like this:
#version 440 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoord;
layout (location = 2) in vec3 centerPos;
layout (location = 3) in float randomValues[8];
out vec2 TexCoord_CS;
void main()
{
    TexCoord_CS = texCoord;
    gl_Position = vec4(position, 1.0f);
}
The problem here is, when I try to draw each grass blade using the draw() method, I get an access violation error. However, if I slightly change the attribute indices to 0, 4, 8, 12 and make the necessary variable type changes in the vertex shader, the problem disappears and everything renders fine.
What am I missing here, what would cause a problem like this? I've spent hours on the Internet, trying to find the reason but couldn't come up with anything yet. I'm working with Visual Studio 2015 Community Edition. The graphics card I use is NVIDIA GTX 770 and all drivers are up to date.
This is not a valid call:
glVertexAttribPointer(3, 8, GL_FLOAT, GL_FALSE, 16 * sizeof(GLfloat), (GLvoid*)(8 * sizeof(GLfloat)));
The second argument (size) needs to be 1, 2, 3, or 4. If you call glGetError(), you should see a GL_INVALID_VALUE error code from this call.
Vertex attributes can only have up to 4 components, matching a vec4 type in the shader code. If you need 8 values for an attribute, you'll have to split it into 2 attributes of 4 values each, or use uniforms instead of attributes.
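As a quick sanity check, something along these lines right after the attribute setup would surface that error (a sketch, assuming <cstdio> is included; GL_INVALID_VALUE prints as 0x501):

GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "GL error after vertex attribute setup: 0x%x\n", err);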
layout (location = 3) in float randomValues[8];
This is not a single input value. This is an array of input values. While this is perfectly legal, it does change what this means.
In particular, it means that this input array is filled in by eight separate attributes. Yes, each one of those floats is a separate attribute, from the OpenGL side. They are assigned locations sequentially, starting with the location you specified. So the input randomValues[4] comes from attribute location 7 (3 + 4).
So your attempt to provide 8 values with one glVertexAttribPointer call will not work. Well, it was never going to work, since the number of components per attribute must be on the range [1, 4]. But it double-doesn't work, since you're not filling in the other 7.
If you want to pass these 8 elements as 8 attributes like this, you therefore need eight independent calls to glVertexAttribPointer:
for (int ix = 0; ix < 8; ++ix)
    glVertexAttribPointer(3 + ix, 1, GL_FLOAT, GL_FALSE, 16 * sizeof(GLfloat), (GLvoid*)((8 + ix) * sizeof(GLfloat)));
But quite frankly, you shouldn't do that. Instead of passing 8 independent attributes, you should pass two vec4s:
layout (location = 3) in vec4 randomValues[2];
That way, you only need 2 attributes in your OpenGL code:
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, 16 * sizeof(GLfloat), (GLvoid*)(8 * sizeof(GLfloat)));
glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, 16 * sizeof(GLfloat), (GLvoid*)(12 * sizeof(GLfloat)));
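Since randomValues now spans locations 3 and 4, both of them also need to be enabled; the code in the question only enabled locations 0 through 3:

glEnableVertexAttribArray(3);
glEnableVertexAttribArray(4);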
Let's say I want to upload unsigned integer and float data to the graphics card in a single draw call. I use standard VBOs (no VAO; I'm using OpenGL 2.0), with the various vertex attribute arrays combined into a single GL_ARRAY_BUFFER and pointed to individually using glVertexAttribPointer(...), so:
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferId);
glEnableVertexAttribArray(positionAttributeId);
glEnableVertexAttribArray(myIntAttributeId);
glVertexAttribPointer(positionAttributeId, 4, GL_FLOAT, false, 0, 0);
glVertexAttribPointer(colorAttributeId, 4, GL_UNSIGNED_INT, false, 0, 128);
glClear(...);
glDraw*(...);
The problem I have here is that my buffer (ref'ed by vertexBufferId) has to be created as a FloatBuffer in LWJGL so that it can hold the GL_FLOAT attribute, and that would seem to preclude the use of GL_UNSIGNED_INT here (or else the other way around; it's either one or the other, since the buffer cannot be of two types).
Any ideas? How would this be handled in native C code?
This would be handled in C (in a safe way) by doing this:
GLfloat *positions = malloc(sizeof(GLfloat) * 4 * numVertices);
GLuint *colors = malloc(sizeof(GLuint) * 4 * numVertices);
//Fill in data here.
//Allocate buffer memory
glBufferData(..., (sizeof(GLfloat) + sizeof(GLuint)) * 4 * numVertices, NULL, ...);
//Upload arrays
glBufferSubData(..., 0, sizeof(GLfloat) * 4 * numVertices, positions);
glBufferSubData(..., sizeof(GLfloat) * 4 * numVertices, sizeof(GLuint) * 4 * numVertices, colors);
free(positions);
free(colors);
There are other ways of doing this in C as well, which involve a lot of casting and so forth. But this code emulates what you'll have to do in LWJGL.
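For completeness, the matching glVertexAttribPointer setup for that two-block layout might look roughly like this (a sketch reusing the handles from the question; the color block simply starts at the byte offset where the position block ends):

glBindBuffer(GL_ARRAY_BUFFER, vertexBufferId);
glEnableVertexAttribArray(positionAttributeId);
glEnableVertexAttribArray(colorAttributeId);
glVertexAttribPointer(positionAttributeId, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glVertexAttribPointer(colorAttributeId, 4, GL_UNSIGNED_INT, GL_FALSE, 0,
                      (void*)(sizeof(GLfloat) * 4 * numVertices));  // byte offset of the color block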