Not able to get output with glDrawElements() & glMultiDrawElements() - OpenGL

I'm in the process of building a graphics app where the user specifies vertices by clicking on a canvas, and the vertices are then used to draw polygons.
The app supports line, triangle and polygon modes. Drawing a line or a triangle is done by counting the number of clicks; vertex arrays are then created, the data is bound to buffers and rendered using glDrawArrays(). The tricky one is polygon mode: the user can specify any number of vertices, and a right mouse click triggers the drawing. I initially planned to use glMultiDrawElements(), but somehow I wasn't getting any output, so I tried calling glDrawElements() in a loop, still with no luck. I searched a lot and read plenty of documentation about using glDrawElements()/glMultiDrawElements() with VBOs and VAOs, and also with glVertexPointer() and glColorPointer(). Still no luck.
I have used the following for keeping track of vertex attributes:
GLfloat ** polygonVertices; //every polygon vertex list goes into this..
GLuint * polygonIndicesCounts; //pointer to hold the number of vertices each polygon has
GLuint ** polygonIndices; //array of pointers to hold indices of vertices corresponding to polygons
GLfloat * polygonColors; //for every mouse click, colors are randomly generated.
and the code for rendering:
glVertexPointer(4, GL_FLOAT, 0, (GLvoid*)polygonVertices);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_FLOAT, 0, (GLvoid*)polygonColors);
//glMultiDrawElements(GL_POLYGON, polygonIndicesCounts, GL_UNSIGNED_INT, polygonIndices, polygonCount);
for(int i = 0; i < polygonCount; i++)
    glDrawElements(GL_POLYGON, polygonIndicesCounts[i], GL_UNSIGNED_INT, polygonIndices[i]);

Why is polygonVertices a pointer to pointers? If you cast that to (GLvoid*), the only thing OpenGL sees is the values of the pointers it contains, not the vertex data they point to. The vertex data needs to be one flat array, so its type signature should be compatible with float* (not float**). A pointer to a pointer only makes sense for the indices argument of the glMultiDrawElements() call.
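For illustration, a minimal sketch of the flat layout this implies, reusing the question's names (polygonIndicesCounts is declared as GLsizei* here to match the glMultiDrawElements signature):
GLfloat * polygonVertices;      //ONE flat array: x,y,z,w for every vertex of every polygon
GLfloat * polygonColors;        //one flat array: r,g,b,a per vertex, same ordering
GLsizei * polygonIndicesCounts; //number of indices per polygon
GLuint ** polygonIndices;       //per-polygon index lists, indexing into the flat arrays

glVertexPointer(4, GL_FLOAT, 0, polygonVertices);
glColorPointer(4, GL_FLOAT, 0, polygonColors);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glMultiDrawElements(GL_POLYGON, polygonIndicesCounts, GL_UNSIGNED_INT,
                    (const GLvoid **)polygonIndices, polygonCount);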


OpenGL draws weird lines on top of polygons

Let me introduce you to Fishtank:
It's an aquarium simulator I am writing in OpenGL to learn before moving on to Vulkan.
I have drawn many fish like these:
Aquarium
Now I've added the grid functionality, which looks like this:
Grid
But when I let it run for some time, these lines appear:
Weird Lines
I've seen somewhere that I should clear the depth buffer, which I did, but that doesn't resolve the problem.
Here's the code of the function:
void Game::drawGrid()
{
std::vector<glm::vec2> gridVertices;
for (unsigned int x = 1; x < mGameMap.mColCount; x += 1) //Include the last one, as the drawn line is the left edge of the column
{
gridVertices.push_back(glm::vec2(transformToNDC(mWindow, x*mGameMap.mCellSize, mGameMap.mCellSize)));
gridVertices.push_back(glm::vec2(transformToNDC(mWindow, x*mGameMap.mCellSize, (mGameMap.mRowCount-1)*mGameMap.mCellSize)));
}
for (unsigned int y = 1; y < mGameMap.mRowCount; y += 1) //Same here, but special info needed:
// Normally the origin is at the top-left corner and the y-axis points down. However, OpenGL's y-axis is reversed.
// That's why taking mRowCount-1 into account actually draws the very first line.
{
gridVertices.push_back(glm::vec2(transformToNDC(mWindow, mGameMap.mCellSize, y*mGameMap.mCellSize)));
gridVertices.push_back(glm::vec2(transformToNDC(mWindow, (mGameMap.mColCount - 1)*mGameMap.mCellSize, y*mGameMap.mCellSize)));
}
mShader.setVec3("color", glm::vec3(1.0f));
glBufferData(GL_ARRAY_BUFFER, gridVertices.size()*sizeof(glm::vec2), gridVertices.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT_VEC2, GL_FALSE, sizeof(glm::vec2), (void*)0);
glEnableVertexAttribArray(0);
glDrawArrays(GL_LINES, 0, gridVertices.size()*sizeof(glm::vec2));
glClear(GL_DEPTH_BUFFER_BIT);
}
I'd like to get rid of those lines and understand why OpenGL does this (or maybe it's me, but I don't see where I went wrong).
This is the problematic line:
glDrawArrays(GL_LINES, 0, gridVertices.size()*sizeof(glm::vec2));
If you look at the documentation for this function, you will find
void glDrawArrays( GLenum mode, GLint first, GLsizei count);
count: Specifies the number of indices to be rendered
But you are passing the byte size. Hence, you are asking OpenGL to draw more vertices than your vertex buffer contains. The specific OpenGL implementation you are using is probably reading past the end of the grid vertex buffer and finding vertices from the fish vertex buffer to draw (but this behavior is undefined).
So, just change it to
glDrawArrays(GL_LINES, 0, gridVertices.size());
A general comment: do not create vertex buffers every time you want to draw the same thing. Create them once at the beginning of the application and re-use them. You can also change their content if needed, but be careful with that, since it comes at a performance price. Creating new vertex buffers is costlier still.
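For illustration, a minimal sketch of that pattern (mGridVbo and mGridVertexCount are assumed member names; note also that the type argument of glVertexAttribPointer must be GL_FLOAT, while the code in the question passes GL_FLOAT_VEC2, which is not a valid type there):
// Once, at initialization:
glGenBuffers(1, &mGridVbo);
glBindBuffer(GL_ARRAY_BUFFER, mGridVbo);
glBufferData(GL_ARRAY_BUFFER, gridVertices.size() * sizeof(glm::vec2), gridVertices.data(), GL_STATIC_DRAW);
mGridVertexCount = (GLsizei)gridVertices.size();

// Every frame:
glBindBuffer(GL_ARRAY_BUFFER, mGridVbo);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(glm::vec2), (void*)0);
glEnableVertexAttribArray(0);
glDrawArrays(GL_LINES, 0, mGridVertexCount);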

How does OpenGL know how many vertices there are?

I am currently trying to learn OpenGL and, in doing so, I ran into a question that needs clarification. I was told that the vertex shader runs once for every vertex. What I am not sure about is how OpenGL knows how much data constitutes one vertex, because all I am giving the shader is a buffer containing a bunch of floats.
Consider the following code:
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 7 * 3 * sizeof(GLfloat), NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, 3 * 3 * sizeof(GLfloat), vertices);
glBufferSubData(GL_ARRAY_BUFFER, 3 * 3 * sizeof(GLfloat), 4 * 3 * sizeof(GLfloat), colors);
GLuint pos = glGetAttribLocation(programId, "position");
GLuint col = glGetAttribLocation(programId, "color");
glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribPointer(col, 4, GL_FLOAT, GL_FALSE, 0, (void*)(3 * 3 * sizeof(GLfloat)));
glEnableVertexAttribArray(pos);
glEnableVertexAttribArray(col);
glUseProgram(programId);
glDrawArrays(GL_TRIANGLES, 0, 3);
This is just to draw a simple triangle, so the vertex shader merely sets gl_Position and the fragment shader outputs a color. Please assume that my vertex shader has a variable called position and my fragment shader has a variable called color.
Now, the part I don't understand: I never specify an end point for my vertex data in the VBO. So how does OpenGL know that it should only look at the first 9 float values in the VBO and not run over into the RGBA color data?
There are two answers to your question:
In your special case, you tell OpenGL how many vertices to draw in your last line: glDrawArrays takes count as its third argument which, when you are not using index buffers, is exactly the number of vertices that will be drawn.
Because you chose GL_TRIANGLES as your primitive type, every 3 consecutive vertices will be drawn as a single triangle.
If you use an index buffer, you specify the number of indices you want to render, i.e. how many entries of the index buffer should be used for drawing. The largest index in the index buffer in turn tells OpenGL how many vertices will be used at most.
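For illustration, a minimal sketch of the indexed case, drawing a quad as two triangles from 4 vertices and 6 indices (the vertex buffer setup is assumed to be done elsewhere):
GLuint indices[] = { 0, 1, 2,  2, 3, 0 }; // 6 entries, but only 4 distinct vertices
GLuint ebo = 0;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
// count = 6 index entries; the largest index (3) bounds how many vertices are fetched
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);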

How to correctly link normals for OpenGL shaders

I am trying to render a simple map in OpenGL 2.1 and Qt5, but I'm failing on very basic issues. The one I'm presenting here is surface normals.
I have 4 objects, each made of a single triangle geometry. A geometry, to keep it simple, is a dynamically allocated array of Vertex, where a Vertex is a pair of QVector3Ds, a 3D vector class predefined in Qt.
struct Vertex
{
QVector3D position;
QVector3D normal;
};
I'm computing the normal at a vertex as the cross product of the two vectors from that vertex to the next and previous vertices. The normal computation seems fine, judging by debugging and by printing results to the console:
QVector3D(-2, -2, -2) has normal QVector3D(0, 0, 1)
QVector3D(2, -2, -2) has normal QVector3D(0, 0, 1)
QVector3D(-2, 2, -2) has normal QVector3D(0, 0, 1)
...
But when I feed the data to the shaders, the results are absurd! Here is a picture of the polygons colored with the normal value at each position:
As in normal maps, red = x, green = y and blue = z. The top-left corner of the black square is the origin of the world. As you can see, the normal at some points seems to simply be the position at that point, without the z-value. Can you give me a hint as to what might be wrong, given that the painting code is:
glUseProgram(program.programId());
glEnableClientState(GL_NORMAL_ARRAY);
program.setUniformValue("modelViewProjectionMatrix", viewCamera);
program.setUniformValue("entityBaseColor", QColor(0,120,233));
program.setUniformValue("sunColor", QColor("white"));
program.setUniformValue("sunBrightness", 1.0f);
static QVector3D tmpSunDir = QVector3D(0.2,-0.2,1.0).normalized();
program.setUniformValue("sunDir",tmpSunDir);
for( size_t i = 0; i < m_numberOfBoundaries; ++i)
{
glBindBuffer(GL_ARRAY_BUFFER, m_bufferObjects[i]);
int vertexLocation = program.attributeLocation("vertexPosition");
program.setAttributeArray( vertexLocation, GL_FLOAT, &(m_boundaries[i].data->position), sizeof(Vertex) );
program.enableAttributeArray(vertexLocation);
glVertexAttribPointer( vertexLocation, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0 );
int vertexNormal = program.attributeLocation("vertexNormal");
program.setAttributeArray( vertexNormal, GL_FLOAT, &(m_boundaries[i].data->normal), sizeof(Vertex) );
program.enableAttributeArray(vertexNormal);
glVertexAttribPointer( vertexNormal, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0 );
glDrawArrays( GL_POLYGON, 0, m_boundaries[i].sizeOfData );
}
glDisableClientState(GL_NORMAL_ARRAY);
where a boundary is a geometrically connected component of the polygon. program is a QOpenGLShaderProgram, Qt's abstraction for shader programs. Each boundary is bound to a buffer object; the buffer object names are stored in the array m_bufferObjects. Polygon "boundaries" are stored as structs in the array m_boundaries. They have two fields: data, a pointer to the start of the array of vertices for the loop, and sizeOfData, the number of points of the polygon.
Until I get to your real problem, here's something that is probably unrelated but just as wrong:
glEnableClientState(GL_NORMAL_ARRAY);
/*...*/
glDisableClientState(GL_NORMAL_ARRAY);
You're using self-defined vertex attributes, so it makes absolutely no sense to use those old fixed-function pipeline client state arrays. Use glEnableVertexAttribArray(location_index) instead.
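Applied to this code, that would look something like:
// instead of glEnableClientState(GL_NORMAL_ARRAY) / glDisableClientState(GL_NORMAL_ARRAY):
glEnableVertexAttribArray(vertexNormal);  // before drawing
/* ... draw calls ... */
glDisableVertexAttribArray(vertexNormal); // after drawing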
Update
So I finally came around to taking a closer look at your code, and your problem is the mix of Qt's abstraction layer with raw OpenGL commands. Essentially it boils down to the fact that you have a VBO bound when making calls to QOpenGLShaderProgram::setAttributeArray followed by a call to glVertexAttribPointer.
One problem is that setAttributeArray internally makes the call to glVertexAttribPointer for you, so your own call is redundant and overwrites whatever Qt's call set up. The more severe problem is that you have a VBO bound by glBindBuffer, so calls to glVertexAttribPointer actually take a byte offset into the VBO data instead of a pointer (in fact, with a VBO bound, passing 0, which in pointer terms would be a null pointer, yields a perfectly valid data offset). See this answer of mine on why this is all a bit misleading and actually violates the C specification: https://stackoverflow.com/a/8284829/524368
Recent OpenGL versions actually have a new API for specifying attrib array offsets that conforms to the C language specification.
The correct Qt method to use would be QOpenGLShaderProgram::setAttributeBuffer. Unfortunately your code does not show the exact definition of m_boundaries or your calls to glBufferData or glBufferSubData; with those, I could give you instructions on how to alter your code.
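To sketch it anyway: assuming each VBO in m_bufferObjects holds the interleaved Vertex array for one boundary, the loop body could look roughly like this (the offsetof offsets are an assumption about the struct layout):
glBindBuffer(GL_ARRAY_BUFFER, m_bufferObjects[i]);
int vertexLocation = program.attributeLocation("vertexPosition");
program.setAttributeBuffer(vertexLocation, GL_FLOAT, offsetof(Vertex, position), 3, sizeof(Vertex));
program.enableAttributeArray(vertexLocation);
int vertexNormal = program.attributeLocation("vertexNormal");
program.setAttributeBuffer(vertexNormal, GL_FLOAT, offsetof(Vertex, normal), 3, sizeof(Vertex));
program.enableAttributeArray(vertexNormal);
glDrawArrays(GL_POLYGON, 0, m_boundaries[i].sizeOfData);
This way Qt itself interprets the third argument as an offset into the bound VBO, and there are no redundant glVertexAttribPointer calls to overwrite it.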

OpenGL color buffers per frame adding too much frame time

I have written a simple Particle class that stores positions, directions, velocities, colors, etc. for a particle demo. Each particle is a pyramid (4 triangles) and has a single color across all its vertices.
Every frame I loop over all the particles to compute their new positions etc., and I need each one rendered on screen with its specific color. The only way I know how to do this so far is to fill a color buffer with each particle's vertex colors, bind it to an attribute and send it to the shaders.
This is how I do it:
GLfloat g_color_buffer_data[3*3*4];
for (int j = 0; j < 3*4 ; j++)
{
g_color_buffer_data[j*3]=particles[i].color.r;
g_color_buffer_data[j*3+1]=particles[i].color.g;
g_color_buffer_data[j*3+2]=particles[i].color.b;
}
// 2nd attribute buffer : colors
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_color_buffer_data), g_color_buffer_data, GL_STATIC_DRAW);
glVertexAttribPointer(
1, // attribute. No particular reason for 1, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
This means I have to send a color for all 12 vertices of each particle, where ideally I would send only 1 RGB triplet to color the entire particle. The vertex shader receives a color for each vertex and passes it on to the fragment shader.
Now, this code inserted into my per-particle loop slows my frames down by about 40 ms: without it I have a frame time of 6 ms, which rises to 45 ms with it.
Questions begin:
Is there a way to send a single color per-primitive/particle to color the entire thing inside the shaders (possibly by modifying the data structure of the color buffer or something else)?
Is it normal for this code to be causing such a big performance hit?
Since I'm new to all of this, can you point me to the call that accounts for this massive execution time?
Maybe this happens because I'm using 4 samples per fragment, on top of sending big color buffers to the shaders for each particle?
Note: I am testing with 512 particles. I've been changing the code around for hours, but at some point earlier I think I did have a frame time of 6 ms WITH 512 particles, and something like 50 ms with 2048, using the same color-buffer and attribute-binding code provided above.
Yes, you could use a 3D vector (such as glm::vec3) and use the x, y, z components as the RGB values. If you use a glm::vec4 you can also have an alpha value.
You can then send it to your shader as a uniform, declared as uniform vec3 in_colours; where in_colours matches the glm::vec3 in the code below.
glm::vec3 in_colours= glm::vec3(r, g, b); // Will need to do in a loop or something for each particle, or add to a std::vector and send that to shader
glUniform3fv(glGetUniformLocation(shader_program, "in_colours"), 1, glm::value_ptr(in_colours));
Then in the shader you can access the x, y, z components individually if needed, or use the vector as a whole.
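For illustration, a minimal sketch of the per-particle draw loop under this approach (it assumes all pyramids sit consecutively in one vertex buffer, 12 vertices each; particle_count is an assumed name):
GLint colourLoc = glGetUniformLocation(shader_program, "in_colours"); // query once, outside the loop
for (size_t i = 0; i < particle_count; i++)
{
    glm::vec3 c(particles[i].color.r, particles[i].color.g, particles[i].color.b);
    glUniform3fv(colourLoc, 1, glm::value_ptr(c));   // one RGB triplet per particle
    glDrawArrays(GL_TRIANGLES, (GLint)(i * 12), 12); // 4 triangles = 12 vertices per pyramid
}
This removes the 36-float color upload per particle; the remaining per-particle cost is one uniform update and one draw call (instanced rendering could reduce even that, but it is a bigger change).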

Techniques for drawing spritesheets in OpenGL with shaders

I'm learning OpenGL right now and I'd like to draw some sprites to the screen from a sprite sheet, but I'm not sure if I'm doing this the right way.
What I want to do is build a world out of tiles, à la Terraria. That means all the tiles that build my world are 1x1, but later I might want things like entities that are 2x1, 1x2, 2x2, etc.
What I do right now is that I have a class named "Tile" which contains the tile's transform matrix and a pointer to its buffer. Very simple:
Tile::Tile(glm::vec2 position, GLuint* vbo)
{
transformMatrix = glm::translate(transformMatrix, glm::vec3(position, 0.0f));
buffer = vbo;
}
Then when I draw a tile, I just bind its buffer and update the shader's UV coords and vertex positions. After that I pass the tile's transform matrix to the shader and draw it using glDrawElements:
glEnableVertexAttribArray(positionAttrib);
glEnableVertexAttribArray(textureAttrib);
for(int i = 0; i < 5; i++)
{
glBindBuffer(GL_ARRAY_BUFFER, *tiles[i].buffer);
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), 0);
glVertexAttribPointer(textureAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));
glUniformMatrix4fv(transformMatrixLoc, 1, GL_FALSE, value_ptr(tiles[i].transformMatrix));
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_INT, 0);
}
glDisableVertexAttribArray(positionAttrib);
glDisableVertexAttribArray(textureAttrib);
Could I do this more efficiently? I was thinking that I could have one buffer for 1x1 tiles, one buffer for 2x1 tiles, etc., and then just have the Tile class contain UVpos and UVsize and send those to the shader, but I'm not sure how I'd do that.
I think what I described, with one buffer for 1x1 and one for 2x1, sounds like it would be a lot faster.
Could I do this more efficiently?
I don't think you could do it less efficiently. You are binding a whole buffer object for each quad. You are then uploading a matrix. For each quad.
The way tilemap drawing (and only the map, not the entities) normally works is that you build a buffer object that contains some portion of the visible screen's tiles. Empty space is rendered as a transparent tile. You then render all of the tiles for that region of the screen, all in one drawing call. You provide one matrix for all of the tiles in that region.
Normally, you'll have some number of such visible regions, to make it easy to update the tiles for that region when they change. Regions that go off-screen are re-used for regions that come on-screen, so you fill them with new tile data.
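For illustration, a minimal sketch of batching one region's tiles into a single buffer (the Tile fields, regionTiles, regionVbo and regionMatrix are assumptions, with positions in tile units and UVs taken from the atlas):
std::vector<GLfloat> verts; // x, y, u, v per vertex; 6 vertices (two triangles) per tile
for (const Tile& t : regionTiles)
{
    const GLfloat x0 = t.x, y0 = t.y, x1 = t.x + 1.0f, y1 = t.y + 1.0f;
    const GLfloat u0 = t.uvPos.x, v0 = t.uvPos.y;
    const GLfloat u1 = u0 + t.uvSize.x, v1 = v0 + t.uvSize.y;
    const GLfloat quad[] = { x0,y0,u0,v0,  x1,y0,u1,v0,  x1,y1,u1,v1,
                             x0,y0,u0,v0,  x1,y1,u1,v1,  x0,y1,u0,v1 };
    verts.insert(verts.end(), quad, quad + 24);
}
glBindBuffer(GL_ARRAY_BUFFER, regionVbo);
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLfloat), verts.data(), GL_DYNAMIC_DRAW);
glUniformMatrix4fv(transformMatrixLoc, 1, GL_FALSE, value_ptr(regionMatrix)); // one matrix for the whole region
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(verts.size() / 4));                   // one draw call
The buffer is only rebuilt when a tile in the region changes, not every frame.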