OpenGL draws weird lines on top of polygons - c++

Let me introduce you to Fishtank:
It's an aquarium simulator I am writing in OpenGL to learn the basics before moving on to Vulkan.
I have drawn many fish like these:
Aquarium
Now I added the grid functionality, which looks like this:
Grid
But when I let it run for some time, these lines appear:
Weird Lines
I've read somewhere that I should clear the depth buffer, which I did, but that doesn't resolve the problem.
Here's the code of the function:
void Game::drawGrid()
{
    std::vector<glm::vec2> gridVertices;
    for (unsigned int x = 1; x < mGameMap.mColCount; x += 1) // Include the last one, as the drawn line is the left edge of the column
    {
        gridVertices.push_back(glm::vec2(transformToNDC(mWindow, x*mGameMap.mCellSize, mGameMap.mCellSize)));
        gridVertices.push_back(glm::vec2(transformToNDC(mWindow, x*mGameMap.mCellSize, (mGameMap.mRowCount-1)*mGameMap.mCellSize)));
    }
    for (unsigned int y = 1; y < mGameMap.mRowCount; y += 1) // Same here, but one extra detail:
    // Normally the origin is at the top-left corner and the y-axis points down. However, OpenGL's y-axis is reversed.
    // That's why taking mRowCount-1 into account actually draws the very first line.
    {
        gridVertices.push_back(glm::vec2(transformToNDC(mWindow, mGameMap.mCellSize, y*mGameMap.mCellSize)));
        gridVertices.push_back(glm::vec2(transformToNDC(mWindow, (mGameMap.mColCount - 1)*mGameMap.mCellSize, y*mGameMap.mCellSize)));
    }

    mShader.setVec3("color", glm::vec3(1.0f));
    glBufferData(GL_ARRAY_BUFFER, gridVertices.size()*sizeof(glm::vec2), gridVertices.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT_VEC2, GL_FALSE, sizeof(glm::vec2), (void*)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_LINES, 0, gridVertices.size()*sizeof(glm::vec2));
    glClear(GL_DEPTH_BUFFER_BIT);
}
I'd like to get rid of those lines and understand why OpenGL does this (or maybe it's my mistake, but I don't see where).

This is the problematic line:
glDrawArrays(GL_LINES, 0, gridVertices.size()*sizeof(glm::vec2));
If you look at the documentation for this function, you will find
void glDrawArrays( GLenum mode, GLint first, GLsizei count);
count: Specifies the number of indices to be rendered
But you are passing the byte size. Hence, you are asking OpenGL to draw more vertices than there are in your vertex buffer. The specific OpenGL implementation you are using is probably reading past the end of the grid vertex buffer and picking up vertices from the fish vertex buffer to draw (but this behavior is undefined).
So, just change it to
glDrawArrays(GL_LINES, 0, gridVertices.size());
A general comment: do not create vertex buffers every time you want to draw the same thing. Create them at the beginning of the application and re-use them. You can also change their content if needed, but be careful with that, since it comes at a performance cost. Creating vertex buffers is costlier still.
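For illustration, here is a minimal sketch of that approach for the grid. The member names mGridVao, mGridVbo, mGridVertexCount and the buildGridVertices() helper are made up for this sketch; the helper would contain the same two loops as drawGrid(). Note also that the type parameter of glVertexAttribPointer must be GL_FLOAT here, not GL_FLOAT_VEC2.
// One-time setup, e.g. called once from the Game constructor.
void Game::initGridBuffer()
{
    std::vector<glm::vec2> gridVertices = buildGridVertices(); // hypothetical helper with the same loops as drawGrid()

    glGenVertexArrays(1, &mGridVao);
    glGenBuffers(1, &mGridVbo);

    glBindVertexArray(mGridVao);
    glBindBuffer(GL_ARRAY_BUFFER, mGridVbo);
    glBufferData(GL_ARRAY_BUFFER, gridVertices.size() * sizeof(glm::vec2), gridVertices.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(glm::vec2), (void*)0); // GL_FLOAT, not GL_FLOAT_VEC2
    glEnableVertexAttribArray(0);
    glBindVertexArray(0);

    mGridVertexCount = static_cast<GLsizei>(gridVertices.size());
}

// Per-frame drawing: no buffer creation or upload, just bind and draw.
void Game::drawGrid()
{
    mShader.setVec3("color", glm::vec3(1.0f));
    glBindVertexArray(mGridVao);
    glDrawArrays(GL_LINES, 0, mGridVertexCount); // vertex count, not byte size
}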

Related

openGL-drawing using unsigned int coordinates

I have been trying to draw points with unsigned integer coordinates
However, whenever I do, one point is drawn at the center and the next at the far right; I cannot see any of the other points (except the first and second from the next line). Is there any way to fix this without converting to floats?
My draw code is as follows:
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
SDL_GL_SwapWindow(_window);
glPointSize(1);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_UNSIGNED_INT, GL_TRUE, 0, 0);
glDrawArrays(GL_POINTS, 0, _dataCount);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
EDIT:
I have determined that it is related to the gl_Position.w value in the vertex shader, can anyone tell me how to use this correctly?
EDIT 2:
I have distributed the values between 0 and _maxValue (the largest unsigned int for the moment) and set that to be gl_Position.w. For the moment it draws to the screen, but only in the upper-left quadrant. I would assume this is because the coordinates are being mapped to 0.0-1.0 instead of -1.0-1.0. How do you fix this issue?
Alternatively, I feel I should be doing something with the projection matrix. I haven't needed to use it before, as I'm relatively new to OpenGL, so I'm not sure exactly what I need if this is the correct approach; any help would be appreciated.
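For reference, a minimal sketch of the projection-matrix route the last edit points towards: with normalized set to GL_TRUE, the unsigned-int coordinates reach the vertex shader in the 0.0-1.0 range, and an orthographic projection over that range remaps them to the -1.0-1.0 clip space. The names shaderProgram and "projection" are illustrative, not taken from the question.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Maps x and y from [0, 1] to [-1, 1]; z values in [0, 1] stay inside the clip range.
glm::mat4 projection = glm::ortho(0.0f, 1.0f,    // left, right
                                  0.0f, 1.0f,    // bottom, top
                                  -1.0f, 1.0f);  // near, far
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
// In the vertex shader: gl_Position = projection * vec4(position, 1.0);
// i.e. leave gl_Position.w at 1.0 rather than setting it by hand.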

How to correctly link OpenGL normals for OpenGL shaders

I am trying to do a simple map rendered in OpenGL 2.1 and Qt5, but I'm failing on very basic issues. The one I'm presenting here is surface normals.
I have 4 objects, each made of a single triangle geometry. A geometry, to keep it simple, is a dynamically allocated array of Vertex, where a Vertex is a pair of QVector3D (a 3D position class predefined in Qt).
struct Vertex
{
    QVector3D position;
    QVector3D normal;
};
I'm computing the normal at a vertex as the cross product of the two vectors from that vertex to the next and previous vertices. Normal computation for the structure seems fine, judging by debugging and printing the results to the console.
QVector3D(-2, -2, -2) has normal QVector3D(0, 0, 1)
QVector3D(2, -2, -2) has normal QVector3D(0, 0, 1)
QVector3D(-2, 2, -2) has normal QVector3D(0, 0, 1)
...
But when I feed the data to the shaders, the results are absurd! Here is a picture of the polygons colored with the normal value at each position:
As in normal maps, red = x, green = y and blue = z. The top-left corner of the black square is the origin of the world. As you can see, the normal at some points seems to simply be the position at that point, without the z-value. Can you give me a hint about what might be wrong? The painting code is:
glUseProgram(program.programId());
glEnableClientState(GL_NORMAL_ARRAY);
program.setUniformValue("modelViewProjectionMatrix", viewCamera);
program.setUniformValue("entityBaseColor", QColor(0,120,233));
program.setUniformValue("sunColor", QColor("white"));
program.setUniformValue("sunBrightness", 1.0f);
static QVector3D tmpSunDir = QVector3D(0.2,-0.2,1.0).normalized();
program.setUniformValue("sunDir",tmpSunDir);
for (size_t i = 0; i < m_numberOfBoundaries; ++i)
{
    glBindBuffer(GL_ARRAY_BUFFER, m_bufferObjects[i]);

    int vertexLocation = program.attributeLocation("vertexPosition");
    program.setAttributeArray(vertexLocation, GL_FLOAT, &(m_boundaries[i].data->position), sizeof(Vertex));
    program.enableAttributeArray(vertexLocation);
    glVertexAttribPointer(vertexLocation, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);

    int vertexNormal = program.attributeLocation("vertexNormal");
    program.setAttributeArray(vertexNormal, GL_FLOAT, &(m_boundaries[i].data->normal), sizeof(Vertex));
    program.enableAttributeArray(vertexNormal);
    glVertexAttribPointer(vertexNormal, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);

    glDrawArrays(GL_POLYGON, 0, m_boundaries[i].sizeOfData);
}
glDisableClientState(GL_NORMAL_ARRAY);
where a boundary is a geometrically connected component of the polygon. program is a QOpenGLShaderProgram, a Qt abstraction for shader programs. Each boundary is bound to a buffer object. The buffer object numbers are stored in the array m_bufferObjects. Polygon “boundaries” are stored as structs in the array m_boundaries. They have two fields: data, a pointer to the start of the array of vertices for the loop, and sizeOfData, the number of points for the polygon.
Before I get to your real problem, here's something that's probably unrelated but just as wrong:
glEnableClientState(GL_NORMAL_ARRAY);
/*...*/
glDisableClientState(GL_NORMAL_ARRAY);
You're using self-defined vertex attributes, so it makes absolutely no sense to use those old fixed-function-pipeline client-state locations. Use glEnableVertexAttribArray(location_index) instead.
Update
So I finally came around to taking a closer look at your code, and your problem is the mix of Qt's abstraction layer and raw OpenGL commands. Essentially your problem boils down to the fact that you have a VBO bound when making calls to QOpenGLShaderProgram::setAttributeArray followed by a call to glVertexAttribPointer.
One problem is that setAttributeArray internally makes the call to glVertexAttribPointer for you, so your own call to it is redundant and overwrites whatever Qt's call did. The more severe problem is that you do have a VBO bound by glBindBuffer, so calls to glVertexAttribPointer actually take a byte offset into the VBO data instead of a pointer (in fact, with a VBO bound, passing a 0, which in pointer terms would be a null pointer, yields a perfectly valid data offset). See this answer of mine for why this is all a bit misleading and actually violates the C specification: https://stackoverflow.com/a/8284829/524368
Recent OpenGL versions actually have a new API for specifying attrib array offsets that conforms to the C language specification.
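For completeness, a sketch of what that newer API looks like (OpenGL 4.3 / ARB_vertex_attrib_binding, so not available in the question's 2.1 context); the binding index 0 and the attribute locations 0 and 1 are illustrative:
// Bind the VBO to vertex-buffer binding point 0 with an integer byte offset and stride.
glBindVertexBuffer(0, m_bufferObjects[i], 0, sizeof(Vertex));

// Describe the attributes relative to that binding; offsetof requires <cstddef>.
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, position));
glVertexAttribBinding(0, 0); // attribute 0 reads from binding 0
glEnableVertexAttribArray(0);

glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, normal));
glVertexAttribBinding(1, 0); // attribute 1 reads from the same binding
glEnableVertexAttribArray(1);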
The correct Qt method to use would be QOpenGLShaderProgram::setAttributeBuffer. Unfortunately your code does not show the exact definition of m_boundaries or your calls to glBufferData or glBufferSubData; if I had those, I could give you instructions on how to alter your code.
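Without them, here is only a rough sketch of what the attribute setup could look like with setAttributeBuffer, assuming the currently bound VBO holds a contiguous array of Vertex as defined above:
glBindBuffer(GL_ARRAY_BUFFER, m_bufferObjects[i]);

// setAttributeBuffer takes a byte offset and stride into the bound VBO,
// so no raw glVertexAttribPointer calls are needed.
program.enableAttributeArray("vertexPosition");
program.setAttributeBuffer("vertexPosition", GL_FLOAT, offsetof(Vertex, position), 3, sizeof(Vertex));

program.enableAttributeArray("vertexNormal");
program.setAttributeBuffer("vertexNormal", GL_FLOAT, offsetof(Vertex, normal), 3, sizeof(Vertex));

glDrawArrays(GL_POLYGON, 0, m_boundaries[i].sizeOfData);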

Not able to get output with glDrawElements() & glMultiDrawElements()

I'm in the process of building a graphics app where the user can specify vertices by clicking on a canvas and then the vertices are used to draw polygons.
The app supports line, triangle and polygon modes. Drawing a line or a triangle is done by counting the number of clicks. Then vertex arrays are created and the data is bound to buffers and rendered using glDrawArrays(). The tricky one is the polygon mode. The user can specify any number of vertices, and clicking the right mouse button triggers drawing. I initially planned to use glMultiDrawElements(), but somehow I wasn't getting any output, so I tried calling glDrawElements() in a loop, still with no luck. I searched a lot and read a lot of documentation about using glDrawElements()/glMultiDrawElements() with VBOs and VAOs, and also with glVertexPointer() and glColorPointer(). Still no luck.
I have used the following for keeping track of vertex attributes:
GLfloat ** polygonVertices; //every polygon vertex list goes into this..
GLuint * polygonIndicesCounts; //pointer to hold the number of vertices each polygon has
GLuint ** polygonIndices; //array of pointers to hold indices of vertices corresponding to polygons
GLfloat * polygonColors; //for every mouse click, colors are randomly generated.
and the code for rendering:
glVertexPointer(4, GL_FLOAT, 0, (GLvoid*)polygonVertices);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_FLOAT, 0, (GLvoid*)polygonColors);
//glMultiDrawElements(GL_POLYGON, polygonIndicesCounts, GL_UNSIGNED_INT, polygonIndices, polygonCount);
for (int i = 0; i < polygonCount; i++)
    glDrawElements(GL_POLYGON, polygonIndicesCounts[i], GL_UNSIGNED_INT, polygonIndices[i]);
Why is polygonVertices a pointer to pointers? If you cast that to (void*), the only thing OpenGL sees is the value of the outer pointer, not the vertex data the inner pointers point to. You want the vertex data to be a flat array, so its type should be compatible with float* (not float**). A pointer to a pointer makes sense only for the indices argument of the glMultiDrawElements call.
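For illustration, a sketch of what that layout could look like. flatPolygonVertices is a hypothetical flat GLfloat array holding all vertex positions back to back (declaring polygonIndicesCounts as GLsizei* would also avoid the cast below); the other names come from the question:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

glVertexPointer(4, GL_FLOAT, 0, flatPolygonVertices); // GLfloat*, all vertex positions in one array
glColorPointer(4, GL_FLOAT, 0, polygonColors);        // GLfloat*, one RGBA color per vertex

// The indices stay as one index list per polygon, which is exactly the
// pointer-to-pointer shape glMultiDrawElements expects.
glMultiDrawElements(GL_POLYGON,
                    (const GLsizei*)polygonIndicesCounts,  // vertices per polygon
                    GL_UNSIGNED_INT,
                    (const GLvoid* const*)polygonIndices,  // one index array per polygon
                    polygonCount);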

Techniques for drawing spritesheets in OpenGL with shaders

I'm learning OpenGL right now and I'd like to draw some sprites to the screen from a sprite sheet. I'm not sure if I'm doing this the right way, though.
What I want to do is to build a world out of tiles à la Terraria. That means that all tiles that build my world are 1x1, but I might want things later like entities that are 2x1, 1x2, 2x2 etc.
What I do right now is that I have a class named "Tile" which contains the tile's transform matrix and a pointer to its buffer. Very simple:
Tile::Tile(glm::vec2 position, GLuint* vbo)
{
    transformMatrix = glm::translate(transformMatrix, glm::vec3(position, 0.0f));
    buffer = vbo;
}
Then when I draw the tile I just bind the buffer and update the shader's UV-coords and vertex position. After that I pass the tile's transform matrix to the shader and draw it using glDrawElements:
glEnableVertexAttribArray(positionAttrib);
glEnableVertexAttribArray(textureAttrib);
for (int i = 0; i < 5; i++)
{
    glBindBuffer(GL_ARRAY_BUFFER, *tiles[i].buffer);
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), 0);
    glVertexAttribPointer(textureAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));
    glUniformMatrix4fv(transformMatrixLoc, 1, GL_FALSE, value_ptr(tiles[i].transformMatrix));
    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_INT, 0);
}
glDisableVertexAttribArray(positionAttrib);
glDisableVertexAttribArray(textureAttrib);
Could I do this more efficiently? I was thinking that I could have one buffer for 1x1 tiles, one buffer for 2x1 tiles etc. etc. and then just have the Tile class contain UVpos and UVsize and then just send those to the shader, but I'm not sure how I'd do that.
I think what I described with one buffer for 1x1 and one for 2x1 sounds like it would be a lot faster.
Could I do this more efficiently?
I don't think you could do it less efficiently. You are binding a whole buffer object for each quad. You are then uploading a matrix. For each quad.
The way tilemap drawing (and only the map, not the entities) normally works is that you build a buffer object that contains some portion of the visible screen's tiles. Empty space is rendered as a transparent tile. You then render all of the tiles for that region of the screen, all in one drawing call. You provide one matrix for all of the tiles in that region.
Normally, you'll have some number of such visible regions, to make it easy to update the tiles for that region when they change. Regions that go off-screen are re-used for regions that come on-screen, so you fill them with new tile data.
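To make that concrete, here's a rough sketch of filling one such region buffer. The Region and Tile fields used here (tiles, vbo, transform, position, id) and the tileUV() sprite-sheet lookup are made-up names, not from the question's code:
// Build one interleaved (x, y, u, v) buffer for every tile of the region.
std::vector<GLfloat> regionVertices;
for (const Tile& tile : region.tiles)
{
    glm::vec2 p = tile.position;     // world-space corner of the 1x1 tile
    glm::vec4 uv = tileUV(tile.id);  // hypothetical: (u0, v0, u1, v1) inside the sprite sheet
    const GLfloat quad[] = {
        p.x,     p.y,     uv.x, uv.y, // two triangles per tile
        p.x + 1, p.y,     uv.z, uv.y,
        p.x + 1, p.y + 1, uv.z, uv.w,
        p.x,     p.y,     uv.x, uv.y,
        p.x + 1, p.y + 1, uv.z, uv.w,
        p.x,     p.y + 1, uv.x, uv.w,
    };
    regionVertices.insert(regionVertices.end(), quad, quad + 24);
}
glBindBuffer(GL_ARRAY_BUFFER, region.vbo);
glBufferData(GL_ARRAY_BUFFER, regionVertices.size() * sizeof(GLfloat), regionVertices.data(), GL_STATIC_DRAW);

// Per frame: one uniform and one draw call cover the whole region.
glUniformMatrix4fv(transformMatrixLoc, 1, GL_FALSE, value_ptr(region.transform));
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(regionVertices.size() / 4)); // 4 floats per vertex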

OpenGL Batching: Why does my draw call exceed array buffer bounds?

I'm trying to implement some relatively simple 2D sprite batching in OpenGL ES 2.0 using vertex buffer objects. However, my geometry is not drawing correctly, and some error I can't seem to locate is causing the GL ES analyzer in Instruments to report:
Draw Call Exceeded Array Buffer Bounds
A draw call accessed a vertex outside the range of an array buffer in use. This is a serious error, and may result in a crash.
I've tested my drawing with the same vertex layout by drawing single quads at a time instead of batching and it draws as expected.
// This technique doesn't necessarily result in correct layering,
// but for this game it is unlikely that the same texture will
// need to be drawn both in front of and behind other images.
while (!renderQueue.empty())
{
    vector<GLfloat> batchVertices;
    GLuint texture = renderQueue.front()->textureName;

    // find all the draw descriptors with the same texture as the first
    // item in the vector and batch them together, back to front
    for (int i = 0; i < renderQueue.size(); i++)
    {
        if (renderQueue[i]->textureName == texture)
        {
            for (int vertIndex = 0; vertIndex < 24; vertIndex++)
            {
                batchVertices.push_back(renderQueue[i]->vertexData[vertIndex]);
            }
            // Remove the item as it has been added to the batch to be drawn
            renderQueue.erase(renderQueue.begin() + i);
            i--;
        }
    }

    int elements = batchVertices.size();
    GLfloat *batchVertArray = new GLfloat[elements];
    memcpy(batchVertArray, &batchVertices[0], elements * sizeof(GLfloat));

    // Draw the batch
    bindTexture(texture);
    glBufferData(GL_ARRAY_BUFFER, elements, batchVertArray, GL_STREAM_DRAW);
    prepareToDraw();
    glDrawArrays(GL_TRIANGLES, 0, elements / BufferStride);

    delete [] batchVertArray;
}
Other info of plausible relevance: renderQueue is a vector of DrawDescriptors. BufferStride is 4, as my vertex buffer format is interleaved position2, texcoord2: X,Y,U,V...
Thank you.
glBufferData expects its second argument to be the size of the data in bytes. The correct way to copy your vertex data to the GPU would therefore be:
glBufferData(GL_ARRAY_BUFFER, elements * sizeof(GLfloat), batchVertArray, GL_STREAM_DRAW);
Also make sure that the correct vertex buffer is bound when calling glBufferData.
On a performance note, allocating a temporary array is absolutely unnecessary here. Just use the vector directly:
glBufferData(GL_ARRAY_BUFFER, batchVertices.size() * sizeof(GLfloat), &batchVertices[0], GL_STREAM_DRAW);