I have this code:
Upp::Vector<float> verticesTriangle{
1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, -0.5f, -0.5f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f, -0.5f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f, 0.0f,
};
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
//Setting up the VAO Attribute format
glVertexArrayAttribFormat(VAO, 0, 3, GL_FLOAT, GL_FALSE, 0); //Will be colors (R G B in float)
glVertexArrayAttribFormat(VAO, 1, 2, GL_FLOAT, GL_FALSE, 3); //Will be texture coordinates
glVertexArrayAttribFormat(VAO, 2, 3, GL_FLOAT, GL_FALSE, 5); //Normals
glVertexArrayAttribFormat(VAO, 3, 3, GL_FLOAT, GL_FALSE, 8); //Will be my position
glEnableVertexArrayAttrib(VAO, 0);
glEnableVertexArrayAttrib(VAO, 1);
glEnableVertexArrayAttrib(VAO, 2);
glEnableVertexArrayAttrib(VAO, 3);
//Generating a VBO
glGenBuffers(1, &VBOCarre);
glBindBuffer(GL_ARRAY_BUFFER, VBOCarre);
glBufferStorage(GL_ARRAY_BUFFER, sizeof(float) * verticesTriangle.GetCount(), verticesTriangle, GL_MAP_READ_BIT | GL_MAP_WRITE_BIT);
//Binding the VBO to be read by VAO
glVertexArrayVertexBuffer(VAO, 0, VBOCarre, 0 * sizeof(float), 11 * sizeof(float));
glVertexArrayVertexBuffer(VAO, 1, VBOCarre, 3 * sizeof(float), 11 * sizeof(float));
glVertexArrayVertexBuffer(VAO, 2, VBOCarre, 5 * sizeof(float), 11 * sizeof(float));
glVertexArrayVertexBuffer(VAO, 3, VBOCarre, 8 * sizeof(float), 11 * sizeof(float));
//Bind VAO
glBindVertexArray(VAO);
I have no problem retrieving the first attribute in my shader; however, when I try to retrieve the others, it doesn't work. To test it, I have set up a float array and a simple shader program, and I try to retrieve my position to draw a triangle.
My data is ordered as in the vertex array above: for each vertex, 3 color floats, 2 texture-coordinate floats, 3 normal floats, then 3 position floats (11 floats per vertex).
Here is my vertex shader:
#version 400
layout (location = 0) in vec3 colors;
layout (location = 1) in vec2 textCoords;
layout (location = 2) in vec3 normals;
layout (location = 3) in vec3 positions;
out vec3 fs_colors;
void main()
{
gl_Position = vec4(positions.x, positions.y, positions.z, 1.0);
// gl_Position = vec4(colors.x, colors.y, colors.z, 1.0); // This line works, proving my
// first attribute is sent correctly to my shader
fs_colors = colors;
}
The problem is that, except for the first attribute, none of the others seem to be sent to the shader. What am I missing?
You're putting stuff in the wrong place.
glVertexArrayAttribFormat(VAO, 1, 2, GL_FLOAT, GL_FALSE, 3); //Will be texture coordinates
The "3" here is being passed as a byte offset from the start of a vertex in the array to the particular data for that vertex in the attribute. Obviously, your texture coordinate is not 3 bytes from the start of your vertex; it's 3 * sizeof(float) bytes from the start of the vertex.
Similarly:
glVertexArrayVertexBuffer(VAO, 1, VBOCarre, 3 * sizeof(float), 11 * sizeof(float));
This makes no sense either. You're only using a single buffer, and all four attributes read from the same binding. So you should only bind a single buffer.
The offset ought to be 0, because that's where a vertex in the buffer starts. And the stride should be what you wrote.
You also never directly set the association between the attributes and the binding index with glVertexArrayAttribBinding. You probably got things to work by relying on the default, but you shouldn't be using the default here.
The correct code would be:
//Generating a VBO
glCreateBuffers(1, &VBOCarre);
//No need to call glBindBuffer(GL_ARRAY_BUFFER, VBOCarre);, since we're doing DSA.
glNamedBufferStorage(VBOCarre, sizeof(float) * verticesTriangle.GetCount(), verticesTriangle, GL_MAP_READ_BIT | GL_MAP_WRITE_BIT);
glCreateVertexArrays(1, &VAO);
//No need to glBindVertexArray(VAO);, since we're using DSA.
//Setting up the VAO Attribute format
glEnableVertexArrayAttrib(VAO, 0);
glVertexArrayAttribFormat(VAO, 0, 3, GL_FLOAT, GL_FALSE, 0); //Will be colors (R G B in float)
glEnableVertexArrayAttrib(VAO, 1);
glVertexArrayAttribFormat(VAO, 1, 2, GL_FLOAT, GL_FALSE, 3 * sizeof(float)); //Will be texture coordinates
glEnableVertexArrayAttrib(VAO, 2);
glVertexArrayAttribFormat(VAO, 2, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float)); //Normals
glEnableVertexArrayAttrib(VAO, 3);
glVertexArrayAttribFormat(VAO, 3, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float)); //Will be my position
//One buffer, one binding.
glVertexArrayVertexBuffer(VAO, 0, VBOCarre, 0, 11 * sizeof(float));
//Make all attributes read from the same buffer.
glVertexArrayAttribBinding(VAO, 0, 0);
glVertexArrayAttribBinding(VAO, 1, 0);
glVertexArrayAttribBinding(VAO, 2, 0);
glVertexArrayAttribBinding(VAO, 3, 0);
//We can glBindVertexArray(VAO); when we're about to use it, not just because we finished setting it up.
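For completeness, a minimal sketch of the draw call once the VAO is set up (the shaderProgram handle here is a placeholder for whatever program you linked):
glUseProgram(shaderProgram); // placeholder for your linked program
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 3); // 3 vertices of 11 floats each
glBindVertexArray(0);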
Related
I am trying to apply texture mapping to my cubes but am unsure how to proceed. Currently I am using indices to avoid having to repeat vec3s to make a cube, and a vertex array of the points and their normals, like so:
// Cube data as our basic building block
unsigned int indices[] = {
10, 8, 0, 2, 10, 0, 12, 10, 2, 4, 12, 2,
14, 12, 4, 6, 14, 4, 8, 14, 6, 0, 8, 6,
12, 14, 8, 10, 12, 8, 2, 0, 6, 4, 2, 6
};
vec3 vertexArray[] = {
vec3(-0.5f, -0.5f, -0.5f), vec3(-0.408248, -0.816497, -0.408248),
vec3(0.5f, -0.5f, -0.5f), vec3(0.666667, -0.333333, -0.666667),
vec3(0.5f, 0.5f, -0.5f), vec3(0.408248, 0.816497, -0.408248),
vec3(-0.5f, 0.5f, -0.5f), vec3(-0.666667, 0.333333, -0.666667),
vec3(-0.5f, -0.5f, 0.5f), vec3(-0.666667, -0.333333, 0.666667),
vec3(0.5f, -0.5f, 0.5f), vec3(0.666667, -0.666667, 0.333333),
vec3(0.5f, 0.5f, 0.5f), vec3(0.408248, 0.408248, 0.816497),
vec3(-0.5f, 0.5f, 0.5f), vec3(-0.408248, 0.816497, 0.408248),
};
// convert arrays to vectors
std::vector<vec3> vertexArrayVector;
vertexArrayVector.insert(vertexArrayVector.begin(), std::begin(vertexArray), std::end(vertexArray));
std::vector<unsigned int> indicesVector;
indicesVector.insert(indicesVector.begin(), std::begin(indices), std::end(indices));
I now want to apply textures to the cube, but I am not sure how to add the use of a vec2 for UV when using indices. I create my VBOs and VAOs like this, if it helps:
GLuint vertexBufferObject;
GLuint indexBufferObject;
GLuint vertexArrayObject;
glGenVertexArrays(1, &vertexArrayObject);
glGenBuffers(1, &indexBufferObject);
glGenBuffers(1, &vertexBufferObject);
glBindVertexArray(vertexArrayObject);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(vertexIndicesArray[0]) * vertexIndicesArray.size(), &vertexIndicesArray[0], GL_STATIC_DRAW);
// Upload Vertex Buffer to the GPU, keep a reference to it (vertexBufferObject)
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPointsArray[0]) * vertexPointsArray.size(), &vertexPointsArray[0], GL_STATIC_DRAW);
// Teach GPU how to read position data from vertexBufferObject
glVertexAttribPointer(0, // attribute 0 matches aPos in Vertex Shader
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // 0 stride
(void*)0 // array buffer offset
);
glEnableVertexAttribArray(0);
// Teach GPU how to read normals data from vertexBufferObject
glVertexAttribPointer(1, // attribute 1 matches normals in Vertex Shader
3,
GL_FLOAT,
GL_FALSE,
0,
(void*)sizeof(glm::vec3) // normal is offset by one vec3 (comes after position)
);
glEnableVertexAttribArray(1);
The vertex coordinate and the texture coordinates form a tuple with 5 components (x, y, z, u, v). If a vertex coordinate is shared by faces but associated with different texture coordinates, you need to duplicate the vertex coordinate. You must specify one attribute tuple for each combination of vertex coordinate and texture coordinate required in your mesh.
It is not possible to specify different indices for the vertex coordinates and texture coordinates. See Rendering meshes with multiple indices and Why does OpenGL not support multiple index buffering?.
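As a rough illustration (the UV values here are made up, not taken from your data), one face of the cube expanded into 5-component (x, y, z, u, v) tuples looks like this; the corner positions are simply repeated for each face that uses them:
// One cube face as two triangles, 6 vertices of 5 floats each.
// Positions come from the shared corner list; the UVs are hypothetical.
float frontFace[6 * 5] = {
//    x      y     z     u     v
    -0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
     0.5f, -0.5f, 0.5f, 1.0f, 0.0f,
     0.5f,  0.5f, 0.5f, 1.0f, 1.0f,
     0.5f,  0.5f, 0.5f, 1.0f, 1.0f,
    -0.5f,  0.5f, 0.5f, 0.0f, 1.0f,
    -0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
};
With this layout you would draw with glDrawArrays instead of glDrawElements, or rebuild an index list over the expanded vertices.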
So I'm pretty new to OpenGL. I was trying to create an orthographic projection, and the problem is that when I do
glm::mat4 ortho;
ortho = glm::ortho(-(float)WINDOW_WIDTH / 2.0f, (float)WINDOW_WIDTH / 2.0f, -(float)WINDOW_HEIGHT / 2.0f, (float)WINDOW_HEIGHT / 2.0f, -1.f, 1.f);
it works just fine, but the (0, 0) point is in the middle of the screen.
What I want is to have the (0, 0) point in the bottom-left corner of the window, but when I do
glm::mat4 ortho;
ortho = glm::ortho(0.0f, (float)WINDOW_WIDTH, 0.0f, (float)WINDOW_HEIGHT, -1.0f, 1.0f);
it ends up rendering incorrectly. I've been searching for a long time, so by now I'm just asking for help.
These are the vertex positions of my rectangle:
float positions[8] =
{
-100.0f, -100.0f,
100.0f, -100.0f,
100.0f, 100.0f,
-100.0f, 100.0f,
};
I also use an index buffer to draw it:
unsigned indices[6] =
{
0, 1, 2,
2, 3, 0
};
These are my buffers:
unsigned buffer = 0;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(float), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 2, 0);
unsigned ibo = 0;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6 * sizeof(unsigned), indices, GL_STATIC_DRAW);
And it's drawn using:
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
What I had done wrong: in the shader, I multiplied the position by the ortho projection instead of the ortho projection by the position. So do not do this:
gl_Position = position * u_MVP;
Do this instead:
gl_Position = u_MVP * position;
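For reference, a minimal sketch of the whole corrected vertex shader, assuming (as your code suggests) a position attribute and a u_MVP uniform; the version directive is an assumption:
#version 330 core
layout(location = 0) in vec4 position;
uniform mat4 u_MVP;
void main()
{
    // GLM matrices are column-major, so the matrix goes on the left.
    gl_Position = u_MVP * position;
}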
I am trying to use two groups of shaders: ImageShader draws a bigger square, and GridShader draws a smaller square. Inside the init function I declare the programs (inside new OpenGL::OpenGLShader), and after that I insert the buffer with the positions.
I get the following result (the bright square is the ImageShader, declared second).
Here is the code for the init() function:
gridShader = new OpenGL::OpenGLShader(Common::GetShaderResource(IDR_SHADERS_GRID_SQUARE_VERTEX), Common::GetShaderResource(IDR_SHADERS_GRID_SQUARE_FRAGMENT));
gridShader->bind();
//3x positions
float verticesSquare[6][3] = {
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
// ... (the remaining vertices and setup are cut off in the post)
};
imageShader->unbind();
And here are the render functions:
void OpenglRenderer::RenderScene() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
visualize_image();
visualize_grid();
renderToImage();
SwapBuffers(hdc);
}
void OpenglRenderer::visualize_image()
{
imageShader->bind();
GLint position = glGetAttribLocation(imageShader->shader_id, "position");
GLint uvPos = glGetAttribLocation(imageShader->shader_id, "uvPos");
glEnableVertexAttribArray(position);
glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), 0);
glEnableVertexAttribArray(uvPos);
glVertexAttribPointer(uvPos, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(2 * sizeof(float)));
glDrawArrays(GL_TRIANGLES, 0, 6);
glDisableVertexAttribArray(position);
glDisableVertexAttribArray(uvPos);
imageShader->unbind();
}
void OpenglRenderer::visualize_grid()
{
gridShader->bind();
GLint position = glGetAttribLocation(gridShader->shader_id, "position");
glEnableVertexAttribArray(position);
glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), 0);
glDrawArrays(GL_TRIANGLES, 0, 6);
glDisableVertexAttribArray(position);
gridShader->unbind();
}
Since you want to draw from two different buffers, you have to make sure the correct one is bound in the rendering method (or, more precisely, when setting up the vertex attribute pointer). At the moment, all data is taken from vboIndexImage, because that is the buffer bound when you call glVertexAttribPointer. From your setup code, I'd guess you shouldn't set up the vertex attribute pointers in the render methods at all and should only bind the correct VAO instead:
Setup:
glGenVertexArrays(1, &vaoIndexImage);
glGenBuffers(1, &vboIndexImage);
glBindVertexArray(vaoIndexImage);
glBindBuffer(GL_ARRAY_BUFFER, vboIndexImage);
glBufferData(GL_ARRAY_BUFFER, sizeof(verticesImage), &verticesImage[0][0], GL_STATIC_DRAW);
GLint position = glGetAttribLocation(imageShader->shader_id, "position");
GLint uvPos = glGetAttribLocation(imageShader->shader_id, "uvPos");
glEnableVertexAttribArray(position);
glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), 0);
glEnableVertexAttribArray(uvPos);
glVertexAttribPointer(uvPos, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(2 * sizeof(float)));
Rendering:
glBindVertexArray(vaoIndexImage);
glDrawArrays(GL_TRIANGLES, 0, 6);
Similar code should be used for the grid.
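A sketch of what that would look like (vaoIndexGrid and vboIndexGrid are placeholder names; verticesSquare is the array from your init code):
Setup:
glGenVertexArrays(1, &vaoIndexGrid);
glGenBuffers(1, &vboIndexGrid);
glBindVertexArray(vaoIndexGrid);
glBindBuffer(GL_ARRAY_BUFFER, vboIndexGrid);
glBufferData(GL_ARRAY_BUFFER, sizeof(verticesSquare), &verticesSquare[0][0], GL_STATIC_DRAW);
GLint position = glGetAttribLocation(gridShader->shader_id, "position");
glEnableVertexAttribArray(position);
glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), 0);
Rendering:
gridShader->bind();
glBindVertexArray(vaoIndexGrid);
glDrawArrays(GL_TRIANGLES, 0, 6);
gridShader->unbind();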
I ran into an issue where my vertices were being drawn offscreen. I changed the stride to 0 for all my vertex attribute pointers and now they draw at the correct location.
Here is some code to start this off:
glGenVertexArrays(1, &vertexID);
glBindVertexArray(vertexID);
glGenBuffers(1, &bufferID);
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
GLfloat verts[4 * 2 * 3] = { -0.5, -0.5, 0.0, 1.0, // bottom left
.5, -.5, 0.0, 1.0, // bottom right
-.5, .5, 0.0, 1.0, // top left
0.5, 0.5, 0.0, 1.0, // top right
.5, -.5, 0.0, 1.0, // bottom right
-.5, .5, 0.0, 1.0, // top left
};
GLfloat color[4 * 3 * 2] = {
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f };
GLfloat tex[8] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f
};
glBufferData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(color) + sizeof(tex), nullptr, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(verts), verts);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(verts), sizeof(color), color);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(color), sizeof(tex), tex);
glClearColor(0.05f, 0.05f, 0.05f, 1.0f);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 10, NULL);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 10, (const GLvoid *)(sizeof(GLfloat) * 4));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 10, (const GLvoid *)(sizeof(GLfloat) * 8));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
GLuint indices[4] = {
0,2,1,3
};
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_DYNAMIC_DRAW);
And then here is the rendering code:
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, texture);
glBindVertexArray(vertexID);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_INT, 0);
//glDrawArrays(GL_TRIANGLES, 0, 6);
glFlush();
SDL_GL_SwapWindow(window);
Okay so here is where I am confused:
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 10, NULL);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 10, (const GLvoid *)(sizeof(GLfloat) * 4));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 10, (const GLvoid *)(sizeof(GLfloat) * 8));
With the above code, the program doesn't render the proper square I am trying to render. If I change the stride to 0 for all of these, then it renders in the correct position. My understanding was that I sub-buffered 3 sets of information: position, color, and tex coordinates. Therefore I thought the data looked like the following:
<vert 0><color 0><tex 0><vert 1><color 1><tex 1>...
Therefore I set the stride to sizeof(GLfloat) * 10; however, this doesn't work. This makes me assume I also don't have the offset values set correctly. So why is my stride messing up the vertex position?
Therefore I thought the data looked like the following:
But that's not what you told OpenGL.
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(verts), verts);
This tells OpenGL to take the array verts and copy it into the start of the buffer object.
glBufferSubData(GL_ARRAY_BUFFER, sizeof(verts), sizeof(color), color);
This tells OpenGL to take the array color and copy it into the buffer object, after all of verts.
There's no interleaving here. Your buffer stores all of verts, followed by all of color, followed by all of tex. It does not store 4 floats of verts, followed by 4 floats of color, followed by 2 floats of tex.
glBufferSubData cannot interleave data for you (well, you could do it in a long series of calls, but that'd be ridiculous). If you want to upload interleaved vertex data, you have to interleave it on the CPU, then upload it.
And setting the strides to 0 doesn't make this work. Well, it doesn't make it work correctly. Your base offsets are still wrong, relative to the data you actually uploaded. You'll get the correct position data, but the colors and texture coordinates will be wrong.
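A minimal sketch of interleaving on the CPU before the upload, using your arrays (numVerts is 4 here on the assumption that you only need the 4 vertices your indexed strip draws, since tex only holds 4 UV pairs):
// Build one 10-float-per-vertex array: 4 position, 4 color, 2 tex.
const int numVerts = 4;
GLfloat interleaved[numVerts * 10];
for (int i = 0; i < numVerts; ++i) {
    for (int j = 0; j < 4; ++j) interleaved[i * 10 + j]     = verts[i * 4 + j];
    for (int j = 0; j < 4; ++j) interleaved[i * 10 + 4 + j] = color[i * 4 + j];
    for (int j = 0; j < 2; ++j) interleaved[i * 10 + 8 + j] = tex[i * 2 + j];
}
glBufferData(GL_ARRAY_BUFFER, sizeof(interleaved), interleaved, GL_STATIC_DRAW);
With that layout, your original stride of sizeof(GLfloat) * 10 and offsets of 0, 4 and 8 floats are correct.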
You're uploading your vertex data in blocks:
glBufferData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(color) + sizeof(tex), nullptr, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(verts), verts);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(verts), sizeof(color), color);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(color), sizeof(tex), tex);
So it ends up like:
<vert 0, ... vert N><color 0, ..., color N><tex 0, ..., tex N>
But your glVertexAttribPointer() calls are claiming the buffer is interleaved like this:
<vert 0><color 0><tex 0><vert 1><color1><tex 1>...
Either interleave the data at upload or adjust your glVertexAttribPointer() calls to take into account the block layout.
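If you keep the block layout instead, the pointer setup would look something like this (stride 0 works here because each block is tightly packed):
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)sizeof(verts));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)(sizeof(verts) + sizeof(color)));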
Here are the pastebins for the code: main.cpp and the shaders. It uses DevIL, glload and GLFW. It runs on Windows and Linux; any PNG named pic.png will load.
I buffer my data in a fairly normal way. Just a simple triangle.
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
//vX vY vZ vW nX nY nZ U V
float bufferDataThree[9*3] = {
-1.0f, -1.0f, 0.0f,1.0f, 0.0f,1.0f,0.0f, 0.0f,0.0f,
1.0f, -1.0f, 0.0f,1.0f, 0.0f,1.0f,0.0f, 0.0f,1.0f,
1.0f, 1.0f, 0.0f,1.0f, 0.0f,1.0f,0.0f, 1.0f,1.0f};
//TOTAL 4 + 3 + 2 = 9;
glBufferData(GL_ARRAY_BUFFER, (9*3)*4, bufferDataThree, GL_STATIC_DRAW); // Doesn't work
//glBufferData(GL_ARRAY_BUFFER, (10*3)*4, bufferDataThree, GL_STATIC_DRAW); // Works
There are 9*3 = 27 floats, therefore 108 bytes. If I buffer 108 bytes, it screws up the texture coords. If I buffer 116 bytes (2 floats more), it renders fine.
My display method.
void display()
{
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,tbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, (4 + 3 + 2)*sizeof(float), 0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, (4 + 3 + 2)*sizeof(float),(void*) (4*sizeof(float)));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, (4 + 3 + 2)*sizeof(float),(void*) ((4+3)*sizeof(float)));
glDrawArrays(GL_TRIANGLES,0,3);
glDisableVertexAttribArray(0);
glUseProgram(0);
glfwSwapBuffers();
}
How can this be happening?
The second argument to glVertexAttribPointer is the number of components: it should be 2 for the texture coordinates and 3 for the normal, but you pass 4 for every attribute. Reading 4 floats for the last vertex's texture coordinate runs 2 floats past the end of your 108-byte buffer, which is exactly why allocating 2 extra floats "fixes" it.
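Concretely, keeping your 9-float stride, the last two pointers in display() would become:
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, (4 + 3 + 2)*sizeof(float), (void*) (4*sizeof(float)));     // 3-component normal
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, (4 + 3 + 2)*sizeof(float), (void*) ((4+3)*sizeof(float))); // 2-component texcoord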