Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 9 years ago.
I have a file from which I read vertex positions, UVs, normals and also indices. I want to render them in the most efficient way possible. That's not the problem. I also want to use a vertex shader to displace the vertices, driven by bones and animations.
Of course I want to achieve this in the most efficient way possible. The texture is bound externally; that is nothing I need to care about.
My first idea was to use glVertexAttrib* and glBindBuffer* etc., but I can't figure out a way to get my normals through; when I use glNormal and glTexCoord they get processed by OpenGL automatically.
Like I said, I can ONLY use vertex shaders; fragment etc. is already "blocked".
What version of GLSL are you using?
This probably will not answer your question, but it shows how to properly set up generic vertex attributes without relying on non-standard attribute aliasing.
The general idea is the same for all versions (you use generic vertex attributes), but the syntax for declaring them in GLSL differs. Regardless of which version you are using, you need to tie the named attributes in your vertex shader to the same index you pass to glVertexAttribPointer (...).
Pre-GLSL 1.30 (GL 2.0/2.1):
#version 110
attribute vec4 vtx_pos_NDC;
attribute vec2 vtx_tex;
attribute vec3 vtx_norm;
varying vec2 texcoords;
varying vec3 normal;
void main (void)
{
gl_Position = vtx_pos_NDC;
texcoords = vtx_tex;
normal = vtx_norm;
}
GLSL 1.30 (GL 3.0):
#version 130
in vec4 vtx_pos_NDC;
in vec2 vtx_tex;
in vec3 vtx_norm;
out vec2 texcoords;
out vec3 normal;
void main (void)
{
gl_Position = vtx_pos_NDC;
texcoords = vtx_tex;
normal = vtx_norm;
}
For both of these shaders, you can set the attribute location for each of the inputs (before linking) like so:
glBindAttribLocation (<GLSL_PROGRAM>, 0, "vtx_pos_NDC");
glBindAttribLocation (<GLSL_PROGRAM>, 1, "vtx_tex");
glBindAttribLocation (<GLSL_PROGRAM>, 2, "vtx_norm");
If you are lucky enough to be using an implementation that supports
GL_ARB_explicit_attrib_location (or GLSL 3.30), you can also do this:
GLSL 3.30 (GL 3.3):
#version 330
layout (location = 0) in vec4 vtx_pos_NDC;
layout (location = 1) in vec2 vtx_tex;
layout (location = 2) in vec3 vtx_norm;
out vec2 texcoords;
out vec3 normal;
void main (void)
{
gl_Position = vtx_pos_NDC;
texcoords = vtx_tex;
normal = vtx_norm;
}
Here's an example of how to set up a vertex buffer in GL 3 with packed (interleaved) vertex data: position, color, normal, and one set of texture coordinates.
typedef struct ccqv {
GLfloat Pos[3];
unsigned int Col;
GLfloat Norm[3];
GLfloat Tex2[4];
} Vertex;
...
glGenVertexArrays( 1, &_vertexArray );
glBindVertexArray(_vertexArray);
glGenBuffers( 1, &_vertexBuffer );
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, _arraySize*sizeof(Vertex), NULL, GL_DYNAMIC_DRAW );
glEnableVertexAttribArray(0); // vertex
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Pos));
glEnableVertexAttribArray(3); // primary color (4 packed bytes, normalized to [0,1])
glVertexAttribPointer( 3, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Col));
glEnableVertexAttribArray(2); // normal
glVertexAttribPointer( 2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Norm));
glEnableVertexAttribArray(8); // texcoord0
glVertexAttribPointer( 8, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Tex2));
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
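Since every stride and offset above is derived from the Vertex struct, it is worth sanity-checking that layout once, outside of GL. A minimal standalone sketch (plain-C stand-ins for the GL types; it assumes 4-byte floats/ints and no padding, which holds on common ABIs):

```c
#include <assert.h>
#include <stddef.h>

/* Plain-C stand-in for the Vertex struct above:
   GLfloat is a 4-byte float, the packed color is a 4-byte unsigned int. */
typedef struct ccqv {
    float        Pos[3];
    unsigned int Col;
    float        Norm[3];
    float        Tex2[4];
} Vertex;

/* The stride and offsets handed to glVertexAttribPointer, checked at
   compile time (3*4 + 4 + 3*4 + 4*4 = 44 bytes, no padding expected). */
_Static_assert(sizeof(Vertex) == 44,         "unexpected stride");
_Static_assert(offsetof(Vertex, Pos)  == 0,  "position offset");
_Static_assert(offsetof(Vertex, Col)  == 12, "color offset");
_Static_assert(offsetof(Vertex, Norm) == 16, "normal offset");
_Static_assert(offsetof(Vertex, Tex2) == 28, "texcoord offset");
```

If a compiler did insert padding, the _Static_asserts would fail loudly instead of producing garbage vertices at runtime.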
The first parameter to the Attrib functions is the index of the attribute. For simplicity, I'm using the aliases defined for NVIDIA Cg: http://http.developer.nvidia.com/Cg/gp4gp.html
If you're using GLSL shaders, you'll need to use glBindAttribLocation() to define these indices, as explained in Andon M. Coleman's answer.
I'm trying to learn OpenGL and GLSL. I'm trying to draw an imported model, which is stored in three arrays (vertices: an array of TVector3, a record/struct with X, Y, Z: single/float; normals: an array of TVector3; UVs: an array of TVector2). The model was drawn fine without shaders, using old calls such as glTexCoord, glNormal and glVertex. Since everything there is deprecated, I switched to glDrawArrays and tried to use shaders, but glTexCoordPointer didn't work: layout (location = 2) contained either incorrect UV mapping or none at all (the texture is still there, because the mesh is given coloring), and glTexCoordPointer didn't affect it at all. However, when trying the glVertexAttribPointer approach found in this tutorial, nothing is drawn at all. I convert the three arrays into one array of single, but still to no avail. Trying to use glDrawElements results in a SIGSEGV, because I have no indices to provide (I've also read that it's slower than glDrawArrays).
I'm lost; is there anything I'm doing wrong? Maybe I'm missing something? Is there any way to pass arrays of TVectorX without combining them all into one?
My code (Object Pascal, OpenGL 4.3):
type
TVector2 = record
public
X, Y: single;
{...}
end;
TVector3 = record
private
{...}
public
X, Y, Z: single;
{...}
end;
var
MeshArray: array of single;
VertexArray, VertexBuffer: longword;
{Mesh initialization code:}
SetLength(MeshArray, Length(Vertices)*8);
for i:=1 to Length(Vertices) do begin
j := (i-1)*8;
MeshArray[ j ] := Vertices[i-1].X;
MeshArray[j+1] := Vertices[i-1].Y;
MeshArray[j+2] := Vertices[i-1].Z;
MeshArray[j+3] := Normals[i-1].X;
MeshArray[j+4] := Normals[i-1].Y;
MeshArray[j+5] := Normals[i-1].Z;
MeshArray[j+6] := UVs[i-1].X;
MeshArray[j+7] := UVs[i-1].Y;
end;
glGenVertexArrays(1, @VertexArray);
glGenBuffers(1, @VertexBuffer);
glBindVertexArray(VertexArray);
glBindBuffer(GL_ARRAY_BUFFER, VertexBuffer);
glBufferData(GL_ARRAY_BUFFER, SizeOf(MeshArray), @MeshArray[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, SizeOf(single) * 8, PChar(0));
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, SizeOf(single) * 8, PChar(3 * SizeOf(single)));
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, SizeOf(single) * 8, PChar(6 * SizeOf(single)));
glEnableVertexAttribArray(2);
{Drawing code:}
glActiveTexture(GL_TEXTURE0); //Not necessary
glBindTexture(GL_TEXTURE_2D, mat.Albedo.Data);
glUseProgram(mat.ShaderProgram);
//Draws the model correctly, but without UV, when using glVertexPointer(3, GL_FLOAT, 0, @Vertices[0]);
glBindVertexArray(VertexArray);
glDrawArrays(GL_TRIANGLES, 0, Length(Vertices));
Vertex shader (MatVertex is just the object and camera matrix):
#version 430 core
layout (location = 0) in vec3 Vertex;
layout (location = 1) in vec3 Normal;
layout (location = 2) in vec2 UV;
out vec3 outVertex;
out vec3 outNormal;
out vec2 outUV;
uniform mat4 MatVertex;
void main(){
gl_Position = vec4(Vertex, 1.0) * MatVertex;
outVertex = Vertex;
outNormal = Normal;
outUV = UV;
}
Fragment shader:
#version 430 core
in vec3 outVertex;
in vec3 outNormal;
in vec2 outUV;
out vec3 color;
uniform sampler2D albedoTex;
void main(){
color = texture(albedoTex, outUV).rgb;
}
The vector has to be multiplied to the matrix from the right (see GLSL Programming/Vector and Matrix Operations). Instead of
gl_Position = vec4(Vertex, 1.0) * MatVertex;
it has to be:
gl_Position = MatVertex * vec4(Vertex, 1.0);
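The two orders are genuinely different operations: in GLSL, v * M treats v as a row vector, which is the same as multiplying by the transposed matrix, so with the usual column-major OpenGL conventions the translation part ends up in the wrong place. A standalone C sketch of both products (helper names are made up for illustration; matrices are stored column-major, m[col*4 + row], as OpenGL does):

```c
#include <assert.h>

/* GLSL "M * v": v is a column vector (the correct order here). */
static void mat_vec(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; ++row) {
        out[row] = 0.0f;
        for (int col = 0; col < 4; ++col)
            out[row] += m[col * 4 + row] * v[col];
    }
}

/* GLSL "v * M": v is treated as a row vector (equivalent to transpose(M) * v). */
static void vec_mat(const float v[4], const float m[16], float out[4])
{
    for (int col = 0; col < 4; ++col) {
        out[col] = 0.0f;
        for (int row = 0; row < 4; ++row)
            out[col] += v[row] * m[col * 4 + row];
    }
}
```

For a pure translation by (1, 2, 3), M * (0,0,0,1) correctly yields (1, 2, 3, 1), while (0,0,0,1) * M leaves the point at the origin — exactly the kind of silent misplacement that makes a model disappear from view.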
I'm trying to have 4 integers represent the color of all the vertices in a VBO by setting the stride on the color vertex attribute pointer. However, it seems to take the value for the color only once and, as a result, renders the rest of the vertices black, as in this picture: picture. The expected result is that all the vertices are white.
Here are the relevant pieces of code:
int triangleData[18] =
{
2147483647,2147483647,2147483647,2147483647,//opaque white
0,100, //top
100,-100, //bottom right
-100,-100 //bottom left
};
unsigned int colorVAO, colorVBO;
glGenVertexArrays(1, &colorVAO);
glGenBuffers(1, &colorVBO);
glBindVertexArray(colorVAO);
glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleData), triangleData, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_INT, GL_FALSE, 2 * sizeof(int), (void*)(4*sizeof(int)));
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 4, GL_INT, GL_TRUE, 0, (void*)0);
glEnableVertexAttribArray(1);
Vertex shader:
#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec4 aColor;
out vec4 Color;
uniform mat4 model;
uniform mat4 view;
uniform mat4 ortho;
void main()
{
gl_Position = ortho * view * model * vec4(aPos, 1.0, 1.0);
Color = aColor;
}
Fragment shader:
#version 330 core
out vec4 FragColor;
in vec4 Color;
void main()
{
FragColor = Color;
}
From the documentation of glVertexAttribPointer:
stride
Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array.
Setting the stride to 0 does not mean that the same data is read for each vertex. It means that the data is packed one after the other in the buffer.
If you want all the vertices to use the same data, you can either disable the attribute and use glVertexAttrib, or you can use the separate vertex format (available starting from OpenGL 4.3 or with ARB_vertex_attrib_binding) similar to:
glBindVertexBuffer(index, buffer, offset, 0);
where a stride of 0 really means no stride.
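To see the numbers behind the symptom: with stride 0 and four GL_INT components, vertex i's color is read from ints 4i .. 4i+3 of the buffer. Vertex 0 happens to read the four INT_MAX values (white), but vertex 1 reads 0, 100, 100, -100, which normalize to essentially black. A standalone sketch of that arithmetic (no GL context; it uses the c / (2^31 - 1) signed-normalization convention of newer GL specs — older specs use (2c+1)/(2^32 - 1), which agrees at these values):

```c
#include <assert.h>
#include <limits.h>
#include <math.h>

/* Signed-int attribute normalization: clamp(c / (2^31 - 1), -1, 1). */
static double snorm32(int c)
{
    double f = (double)c / (double)INT_MAX;
    return f < -1.0 ? -1.0 : f;
}

/* The buffer from the question: 4 color ints, then 3 vec2 positions. */
static const int triangleData[18] = {
    2147483647, 2147483647, 2147483647, 2147483647, /* intended: one white color */
       0,  100,   /* top          */
     100, -100,   /* bottom right */
    -100, -100    /* bottom left  */
};

/* With stride 0, vertex i's 4-int color starts at index 4*i. */
static double color_component(int vertex, int component)
{
    return snorm32(triangleData[4 * vertex + component]);
}
```

color_component(0, n) is 1.0 for every channel, but color_component(1, 0) is 0 and color_component(1, 1) is 100 / 2^31 — visually black, matching the picture.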
I'm trying to get some basic shaders working in OpenGL, and I seem to have hit a roadblock at the very first step. I'm trying to enable some vertex attributes, but I'm getting weird results: I've brought up the draw call in RenderDoc, and only vertex attribute 0 is enabled. Here is my VAO creation code and my render loop. I'm probably overlooking something really obvious. Thanks!
std::vector<float> positions;
std::vector<float> normals;
std::vector<float> texCoords;
for (auto x : model->positions)
{
positions.push_back(x.x);
positions.push_back(x.y);
positions.push_back(x.z);
}
for (auto x : model->normals)
{
normals.push_back(x.x);
normals.push_back(x.y);
normals.push_back(x.z);
}
for (auto x : model->texCoords)
{
texCoords.push_back(x.x);
texCoords.push_back(x.y);
}
GLuint indicesVBO = 0;
GLuint texCoordsVBO = 0;
GLuint vertsVBO = 0;
GLuint normsVBO = 0;
glGenVertexArrays(1, &model->vao);
glBindVertexArray(model->vao);
glGenBuffers(1, &vertsVBO);
glBindBuffer(GL_ARRAY_BUFFER, vertsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * positions.size(), positions.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);
glEnableVertexAttribArray(0);
glGenBuffers(1, &normsVBO);
glBindBuffer(GL_ARRAY_BUFFER, normsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * normals.size(), normals.data(), GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);
glEnableVertexAttribArray(1);
glGenBuffers(1, &texCoordsVBO);
glBindBuffer(GL_ARRAY_BUFFER, texCoordsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * texCoords.size(), texCoords.data(), GL_STATIC_DRAW);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);
glEnableVertexAttribArray(2);
glGenBuffers(1, &indicesVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesVBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, model->indices.size() * sizeof(uint32_t), model->indices.data(), GL_STATIC_DRAW);
glBindVertexArray(0);
My Render Loop is this:
//I'm aware this isn't usually needed but I'm just trying to make sure
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
for (GamePiece * x : gamePieces)
{
glUseProgram(x->program->programID);
glBindVertexArray(x->model->vao);
glBindTexture(GL_TEXTURE_2D, x->texture->texID);
glDrawElements(GL_TRIANGLES, x->model->indices.size(), GL_UNSIGNED_INT,(void*)0);
}
And my vertex shader:
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 texCoord;
out vec2 outUV;
out vec3 outNormal;
void main()
{
outUV = texCoord;
outNormal = normal;
gl_Position = vec4(position, 1.0f);
}
Fragment shader:
#version 330
in vec2 inUV;
in vec3 normal;
out vec4 outFragcolor;
uniform sampler2D colourTexture;
void main()
{
outFragcolor = texture2D(colourTexture, inUV);
}
See OpenGL 4.5 Core Profile Specification - 7.3.1 Program Interfaces, page 96:
[...] When a program is linked, the GL builds a list of active resources for each interface. [...] For example, variables might be considered inactive if they are declared but not used in executable code, [...] The set of active resources for any interface is implementation-dependent because it depends on various analysis and optimizations performed by the compiler and linker
This means that if the compiler and linker determine that an attribute variable is "not used" when the executable code runs, then the attribute is inactive.
Inactive attributes are not active program resources and are thus not visible in RenderDoc.
Furthermore, the output variables of a shader stage are linked to the input variables of the next shader stage by name.
texCoord is not an active program resource because it is only assigned to the output variable outUV, and the fragment shader has no input variable outUV.
Vertex shader:
out vec2 outUV;
out vec3 outNormal;
Fragment shader:
in vec2 inUV;
in vec3 normal;
See Program separation linkage:
Either use the same names for the outputs of the vertex shader and the inputs of the fragment shader, or use layout locations to link the interface variables:
Vertex shader:
layout(location = 0) out vec2 outUV;
layout(location = 1) out vec3 outNormal;
Fragment shader:
layout(location = 0) in vec2 inUV;
layout(location = 1) in vec3 normal;
I am currently trying to render the value of an integer using a bitmap (think scoreboard for invaders) but I'm having trouble changing texture coordinates while the game is running.
I link the shader and data like so:
GLint texAttrib = glGetAttribLocation(shaderProgram, "texcoord");
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE,
4 * sizeof(float), (void*)(2 * sizeof(float)));
And in my shaders I do the following:
Vertex Shader:
#version 150
uniform mat4 mvp;
in vec2 position;
in vec2 texcoord;
out vec2 Texcoord;
void main() {
Texcoord = texcoord;
gl_Position = mvp * vec4(position, 0.0, 1.0) ;
}
FragmentShader:
#version 150 core
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main() {
outColor = texture2D(tex, Texcoord);
}
How would I change this code/implement a function to be able to change the texcoord variable?
If you need to modify the texture coordinates frequently, but the other vertex attributes remain unchanged, it can be beneficial to keep the texture coordinates in a separate VBO. While it's generally preferable to use interleaved attributes, this is one case where that's not necessarily the most efficient solution.
So you would have two VBOs, one for the positions, and one for the texture coordinates. Your setup code will look something like this:
GLuint vboIds[2];
glGenBuffers(2, vboIds);
// Load positions.
glBindBuffer(GL_ARRAY_BUFFER, vboIds[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
// Load texture coordinates.
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords), texCoords, GL_DYNAMIC_DRAW);
Note the different last argument to glBufferData(), which is a usage hint. GL_STATIC_DRAW suggests to the OpenGL implementation that the data will not be modified on a regular basis, while GL_DYNAMIC_DRAW suggests that it will be modified frequently.
Then, any time your texture coordinates change, you can modify them with glBufferSubData():
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(texCoords), texCoords);
Of course if only part of them change, you would only make the call for the part that changes.
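The offset/size arithmetic for such a partial update is simple but easy to get wrong. As a sketch, assume (purely for illustration) that each glyph quad owns 4 vertices with 2 texcoord floats each; updating just quad q then looks like this:

```c
#include <assert.h>
#include <stddef.h>

/* Assumed layout: one quad = 4 vertices * 2 texcoord floats. */
enum { FLOATS_PER_QUAD = 4 * 2 };

static size_t quad_byte_offset(size_t quad)
{
    return quad * FLOATS_PER_QUAD * sizeof(float);
}

static size_t quad_byte_size(void)
{
    return FLOATS_PER_QUAD * sizeof(float);
}

/* Usage against the texcoord VBO from above (GL calls shown as a comment;
 * they require a current context):
 *
 *   glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
 *   glBufferSubData(GL_ARRAY_BUFFER,
 *                   quad_byte_offset(q), quad_byte_size(),
 *                   &texCoords[q * FLOATS_PER_QUAD]);
 */
```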
You did not specify how exactly the texture coordinates change. If it's just something like a simple transformation, it would be much more efficient to apply that transformation in the shader code, instead of modifying the original texture coordinates.
For example, say you only wanted to shift the texture coordinates. You could have a uniform variable for the shift in your vertex shader, and then add it to the incoming texture coordinate attribute:
uniform vec2 TexCoordShift;
in vec2 TexCoord;
out vec2 FragTexCoord;
...
FragTexCoord = TexCoord + TexCoordShift;
and then in your C++ code:
// Once during setup, after linking program.
TexCoordShiftLoc = glGetUniformLocation(program, "TexCoordShift");
// To change transformation, after glUseProgram(), before glDraw*().
glUniform2f(TexCoordShiftLoc, xShift, yShift);
So I make no promises on the efficiency of this technique, but it's what I do and I'll be damned if text rendering is what slows down my program.
I have a dedicated class to store a mesh, which consists of a few vectors of data and a few GLuints storing the names of the buffers I uploaded that data to. I upload data to OpenGL like this:
glBindBuffer(GL_ARRAY_BUFFER, position);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * data.position.size(), &data.position[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, normal);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * data.normal.size(), &data.normal[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, uv);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec2) * data.uv.size(), &data.uv[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * data.index.size(), &data.index[0], GL_DYNAMIC_DRAW);
Then, to draw it I go like this:
glEnableVertexAttribArray(positionBinding);
glBindBuffer(GL_ARRAY_BUFFER, position);
glVertexAttribPointer(positionBinding, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(normalBinding);
glBindBuffer(GL_ARRAY_BUFFER, normal);
glVertexAttribPointer(normalBinding, 3, GL_FLOAT, GL_TRUE, 0, NULL);
glEnableVertexAttribArray(uvBinding);
glBindBuffer(GL_ARRAY_BUFFER, uv);
glVertexAttribPointer(uvBinding, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, NULL);
glDisableVertexAttribArray(positionBinding);
glDisableVertexAttribArray(normalBinding);
glDisableVertexAttribArray(uvBinding);
This setup is designed for a full fledged 3D engine, so you can definitely tone it down a little. Basically, I have 4 buffers, position, uv, normal, and index. You probably only need the first two, so just ignore the others.
Anyway, each time I want to draw some text, I upload my data using the first code chunk I showed, then draw it using the second chunk. It works pretty well, and it's very elegant. This is my code to draw text using it:
vbo(genTextMesh("some string")).draw(); //vbo is my mesh containing class
I hope this helps, if you have any questions feel free to ask.
I use a uniform vec2 to pass the texture offset into the vertex shader.
I am not sure how efficient that is, but if your texture coordinates keep the same shape and are just moved around, then this is an option.
#version 150
uniform mat4 mvp;
uniform vec2 texOffset;
in vec2 position;
in vec2 texcoord;
out vec2 Texcoord;
void main() {
Texcoord = texcoord + texOffset;
gl_Position = mvp * vec4(position, 0.0, 1.0) ;
}
I have the following extremely simple vertex shader, when I render with it I get a blank screen:
#version 110
layout(location = 1) attribute vec3 position;
uniform mat4 modelview_matrix;
uniform mat4 projection_matrix;
void main() {
vec4 eye = modelview_matrix * vec4(position, 1.0);
gl_Position = projection_matrix * eye;
}
However, changing
layout(location = 1) attribute vec3 position; to
layout(location = 0) attribute vec3 position;
allows me to render correctly. Here's my rendering function:
glUseProgram(program);
GLenum error;
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUniformMatrix4fv(
modelview_uniform, 1, GL_FALSE, glm::value_ptr(modelview));
glUniformMatrix4fv(
projection_uniform, 1, GL_FALSE, glm::value_ptr(projection));
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glVertexAttribPointer(
position_attribute,
3,
GL_FLOAT,
GL_FALSE,
0,
(void*)0);
glEnableVertexAttribArray(position_attribute);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, element_buffer);
glDrawElements(
GL_TRIANGLES,
monkey_mesh.indices.size(),
GL_UNSIGNED_INT,
(void*)0);
glDisableVertexAttribArray(position_attribute);
glutSwapBuffers();
I obtain position_attribute through a call to glGetAttribLocation(program, "position");. It contains the correct value in both cases (1 in the first case, 0 in the second).
Is there something I'm doing wrong? I suspect I'm only able to render when location == 0 by sheer luck, with the data happening to end up there, but I can't figure out for the life of me what step I'm missing.
What you are seeing is not possible. GLSL version 1.10 does not support layout syntax at all. So your compiler should have rejected the shader. Therefore, either your compiler is not rejecting the shader and is therefore broken, or you are not loading the shader you think you are.
If it still doesn't work when using GLSL version 3.30 or higher (the first core version to support layout(location=#) syntax for attribute indices), then what you're seeing is the result of a different bug. Namely, the compatibility profile implicitly states that, to render with vertex arrays, you must either use attribute zero or gl_Vertex. The core profile has no such restrictions. However, this restriction was in GL for a while, so some implementations will still enforce it, even on the core profile where it doesn't exist.
So just use attribute zero. Or possibly switch to the core profile if you're not already using it (though I'd be surprised if an implementation actually implements the distinction correctly. Generally, it'll either be too permissive in compatibility or too restrictive in core).