Does this look like a vertex attribute/layout issue? - c++

I have a project that's been working fine using floats. I've changed it to use doubles instead, and it no longer works. I have a feeling it's the layout of my Vertex: it now has 3 position doubles, 3 normal doubles, and 2 texCoord floats. Does the following image look like a vertex layout/stride/size issue? It looks strange to me.
Here is my Vertex struct:
struct Vertex
{
    glm::dvec3 position; // 24 bytes
    glm::dvec3 normal;   // 24 bytes
    glm::vec2 texCoords; // 8 bytes
};
On the CPU there is no padding. Shader-side there would be for a uniform block, but for vertex attributes I don't think it matters.
My vertex shader looks like this:
layout (location = 0) in dvec3 position;
layout (location = 2) in dvec3 vertNormal;
layout (location = 4) in vec2 vertTexCoords;
layout (location = 0) out dvec3 fragWorldPos;
layout (location = 2) out dvec3 fragNormal;
layout (location = 4) out vec2 fragTexCoords;
My fragment shader:
layout (location = 0) flat in dvec3 fragWorldPos;
layout (location = 2) flat in dvec3 fragNormal;
layout (location = 4) in vec2 fragTexCoords;
layout (location = 5) out vec4 outFragColour;
And my vertex attributes:
glVertexAttribPointer(0, 3, GL_DOUBLE, GL_FALSE, 56, (void*)nullptr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_DOUBLE, GL_FALSE, 56, (void*)(3 * sizeof(double))); // should be 2?
glEnableVertexAttribArray(1); // should be 2?
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 56, (void*)(48)); // should be 4?
glEnableVertexAttribArray(2); // should be 4?
It basically looks like what happens when your graphics card is about to die; it flickers a lot.

The location in the vertex shader must match the index you pass to glVertexAttribPointer and glEnableVertexAttribArray, so yes: either the vertex shader must be edited, or the attribute setup should use 2 and 4 as you suspected.
Also, as far as I know you don't have to specify locations on the vertex shader's out variables or on the fragment shader's in variables; they can simply be matched by name.

Related

Vector 4 not representing the colors of all the vertices

I'm trying to have 4 integers represent the color of all the vertices in a VBO by using the stride on the color vertex attribute pointer. However, it seems to take the value only once, and as a result the rest of the vertices are drawn black, as in the attached picture. The expected result is that all the vertices are white.
Here is the relevant pieces of code:
int triangleData[18] =
{
    2147483647, 2147483647, 2147483647, 2147483647, // opaque white
    0, 100,    // top
    100, -100, // bottom right
    -100, -100 // bottom left
};
unsigned int colorVAO, colorVBO;
glGenVertexArrays(1, &colorVAO);
glGenBuffers(1, &colorVBO);
glBindVertexArray(colorVAO);
glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleData), triangleData, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_INT, GL_FALSE, 2 * sizeof(int), (void*)(4*sizeof(int)));
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 4, GL_INT, GL_TRUE, 0, (void*)0);
glEnableVertexAttribArray(1);
Vertex shader:
#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec4 aColor;
out vec4 Color;
uniform mat4 model;
uniform mat4 view;
uniform mat4 ortho;
void main()
{
gl_Position = ortho * view * model * vec4(aPos, 1.0, 1.0);
Color = aColor;
}
Fragment shader:
#version 330 core
out vec4 FragColor;
in vec4 Color;
void main()
{
FragColor = Color;
}
From the documentation of glVertexAttribPointer:
stride
Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array.
Setting the stride to 0 does not mean that the same data is read for each vertex. It means that the data is packed one after the other in the buffer.
If you want all the vertices to use the same data, you can either disable the attribute and use glVertexAttrib, or you can use the separate vertex format (available starting from OpenGL 4.3 or with ARB_vertex_attrib_binding) similar to:
glBindVertexBuffer(index, buffer, offset, 0);
where a stride of 0 really means no stride.

OpenGL pass integer array with GL_ARRAY_BUFFER

I am trying to pass some integer values to the Vertex Shader along with the vertex data.
I generate a buffer while the vertex array is bound and then try to attach it to a location, but in the vertex shader the value is always 0.
Here is the part of the code that generates the buffer, and its usage in the shader.
glm::vec3 materialStuff = glm::vec3(31, 32, 33);
glGenBuffers(1, &materialBufferIndex);
glBindBuffer(GL_ARRAY_BUFFER, materialBufferIndex);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3), &materialStuff, GL_STATIC_DRAW);
glEnableVertexAttribArray(9);
glVertexAttribIPointer(9, 3, GL_INT, sizeof(glm::vec3), (void*)0);
And here is part of the shader that suppose to receive the integer values
// Some other locations
layout (location = 0) in vec3 vertex_position;
layout (location = 1) in vec2 vertex_texcoord;
layout (location = 2) in vec3 vertex_normal;
layout (location = 3) in vec3 vertex_tangent;
layout (location = 4) in vec3 vertex_bitangent;
layout (location = 5) in mat4 vertex_modelMatrix;
// layout (location = 6) in_use...
// layout (location = 7) in_use...
// layout (location = 8) in_use...
// The location I am attaching my integer buffer to
layout (location = 9) in ivec3 vertex_material;
// I also tried with these variations
//layout (location = 9) in int vertex_material[3];
//layout (location = 9) in int[3] vertex_material;
// and then in vertex shader I try to retrieve the int value by doing something like this
diffuseTextureInd = vertex_material[0];
That diffuseTextureInd should go to fragment shader through
out flat int diffuseTextureInd;
And I am planning to use this to index into an array of bindless textures that I already have set up and working. The issue is that it seems like vertex_material just contains 0s since my fragment shader always displays the 0th texture in the array.
Note: I know that my fragment shader is fine since if I do
diffuseTextureInd = 31;
in the vertex shader, the fragment shader correctly receives the correct index and displays the correct texture. But when I try to use the value from the layout location 9, it seems like I always get a 0. Any idea what I am doing wrong here?
The following definitions:
glm::vec3 materialStuff = glm::vec3(31, 32, 33);
glVertexAttribIPointer(9, 3, GL_INT, sizeof(glm::vec3), (void*)0);
...
layout (location = 9) in ivec3 vertex_material;
practically mean that:
glm::vec3 declares a vector of 3 floats rather than integers; glm::ivec3 should be used for a vector of integers.
An ivec3 vertex attribute means a vector of 3 integer values is expected for each vertex. At the same time, materialStuff defines values for only a single vertex (which makes no sense for a triangle, which would require at least 3 glm::ivec3).
What is supposed to be declared for passing a single integer vertex attribute:
layout (location = 9) in int vertex_material;
(without any array qualifier)
GLint materialStuff[3] = { 31, 32, 33 }; // one GLint per vertex of the triangle
glVertexAttribIPointer(9, 1, GL_INT, sizeof(GLint), (void*)0); // stride = one tightly packed GLint
It should be noted, though, that passing a different per-vertex integer to the fragment shader only makes sense with the flat keyword, which I suppose you already use. The existing pipeline defines only per-vertex inputs, not per-triangle ones. There is glVertexAttribDivisor(), which defines the vertex attribute rate, but it applies only to instanced rendering via glDrawArraysInstanced()/glDrawElementsInstanced() (a given attribute may advance per instance), not per triangle.
There are ways to handle per-triangle inputs: this could be done with a Uniform Buffer Object or a Texture Buffer Object (like a 1D texture, but accessed by index without interpolation) instead of a generic vertex attribute. Tricks are still necessary to determine the triangle index into this array: again, from a vertex attribute, or from built-in variables like gl_VertexID in the vertex shader, gl_PrimitiveIDIn in the geometry shader, or gl_PrimitiveID in the fragment shader (I cannot say, though, how these counters are affected by culling).

Does it matter if there are gaps between uniform locations in OpenGL/GLSL shaders?

In a GLSL shader, if I have the following layout specifications:
layout (location = 0) uniform mat4 modelMatrix;
layout (location = 1) uniform mat4 viewMatrix;
layout (location = 5) uniform mat4 projMatrix;
layout (location = 30) uniform vec3 diffuseColor;
layout (location = 40) uniform vec3 specularColor;
void main()
{
...
}
Does it matter that there are gaps between the locations? Do these gaps have any impacts in terms of actual memory layout of the data or performance?
Whether it affects performance cannot be known without testing on various implementations. However, as far as the OpenGL specification is concerned, uniform locations are just numbers; they do not represent anything specific about the hardware. So gaps in locations are fine, from a standardization point of view.
Most OpenGL implementations do have an upper limit on the number of binding points afforded to attributes, uniforms, etc. So if you specify a number above that limit, the GL might not handle it correctly.
But a lot of it is implementation-specific. An implementation might, for example, only allow up to 16* attribute locations, but have no problem indexing any valid integer value as long as the number of unique locations doesn't exceed 16.
More importantly, there's no limit on simply skipping locations:
layout(location = 0) in vec2 vertex;
layout(location = 1) in vec4 color;
layout(location = 3) in uint indicator;
layout(location = 7) in vec2 tex;
Which, of course, you bind as expected:
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(3);
glEnableVertexAttribArray(7);
// Assuming all the data is interleaved tightly in a single Array Buffer:
// vec2 + vec4 + uint + vec2 = 8 + 16 + 4 + 8 = a 36-byte stride
glVertexAttribPointer(0, 2, GL_FLOAT, false, 36, (void*)(0));
glVertexAttribPointer(1, 4, GL_FLOAT, false, 36, (void*)(8));
glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, 36, (void*)(24));
glVertexAttribPointer(7, 2, GL_FLOAT, false, 36, (void*)(28));
*OpenGL guarantees that implementations support at least some number of attribute and uniform locations: GL_MAX_VERTEX_ATTRIBS must be at least 16, and where explicit uniform locations are available (GL 4.3+), GL_MAX_UNIFORM_LOCATIONS must be at least 1024.

OpenGL - vertex color in shader gets swapped

I'm trying to send colors to the shader, but the colors get swapped:
I send 0xFF00FFFF (magenta) but I get 0xFFFF00FF (yellow) in the shader.
From experimenting, I think something like a byte swap is happening.
My vertex shader:
#version 330 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec4 color;
uniform mat4 pr_matrix;
uniform mat4 vw_matrix = mat4(1.0);
uniform mat4 ml_matrix = mat4(1.0);
out DATA
{
vec4 position;
vec3 normal;
vec4 color;
} vs_out;
void main()
{
gl_Position = pr_matrix * vw_matrix * ml_matrix * position;
vs_out.position = position;
vs_out.color = color;
vs_out.normal = normalize(mat3(ml_matrix) * normal);
}
And the fragment shader:
#version 330 core
layout(location = 0) out vec4 out_color;
in DATA
{
vec3 position;
vec3 normal;
vec4 color;
} fs_in;
void main()
{
out_color = fs_in.color;
//out_color = vec4(fs_in.color.y, 0, 0, 1);
//out_color = vec4((fs_in.normal + 1 / 2.0), 1.0);
}
Here is how I set up the mesh:
struct Vertex_Color {
Vec3 vertex;
Vec3 normal;
GLint color; // GLuint tested
};
std::vector<Vertex_Color> verts = std::vector<Vertex_Color>();
[loops]
int color = 0xFF00FFFF; // magenta, uint tested
verts.push_back({ vert, normal, color });
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex_Color), &verts[0], GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex_Color), (const GLvoid*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex_Color), (const GLvoid*)(offsetof(Vertex_Color, normal)));
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex_Color), (const GLvoid*)(offsetof(Vertex_Color, color)));
glEnableVertexAttribArray(2);
I can't figure out what's wrong. Thanks in advance.
Your code is reinterpreting an int as 4 consecutive bytes in memory. The internal encoding of int (and all other types) is machine-specific; in your case, you have 32-bit integers stored in little-endian byte order, which is the typical case for PC environments.
You could use an array like GLubyte color[4] to explicitly get a defined memory layout.
If you really want to use an integer type, you could send the data as an integer attribute with glVertexAttribIPointer (note the I there) and use unpackUnorm4x8 in the shader to get a normalized float vector. However, that requires at least GLSL 4.10, and might be less efficient than the standard approach.

OpenGL - GL_LINE_STRIP acts like GL_LINE_LOOP

Im using OpenGL 3.3 with GLFW.
The problem is that GL_LINE_STRIP and GL_LINE_LOOP give the same result.
Here is the array of 2D coordinates:
GLfloat vertices[] =
{
    0, 0,
    1, 1,
    1, 2,
    2, 2,
    3, 1,
};
The attribute pointer:
// Position attribute 2D
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
And finally:
glDrawArrays(GL_LINE_STRIP, 0, sizeof(vertices)/4);
Vertex shader:
#version 330 core
layout (location = 0) in vec2 position;
layout (location = 1) in vec3 color;
out vec3 ourColor;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(position, 0.0f, 1.0f);
ourColor = color;
}
Fragment shader:
#version 330 core
in vec3 ourColor;
out vec3 color;
void main()
{
color = vec3(ourColor);
}
The Color attrib. is disabled (lines are black and visible)
Any idea?
You have only 5 pairs of floats, so 5 vertices. The total size of your array is 10 floats of 4 bytes each, i.e. 40 bytes.
Your expression for the count, sizeof(vertices)/4 = 40/4, gives 10, so glDrawArrays reads twice as many vertices as you have. sizeof(array) / (sizeof(array[0]) * dimensionality) would be the correct expression there.