So I'm writing a chunk-based, procedurally generated terrain game and am running into two errors.
Basically, the way it works is that chunks are generated around the player's position, once per game loop:
for (int i = RENDER_RADIUS; i >= 0; i--)
{
    for (int j = RENDER_RADIUS; j >= 0; j--)
    {
        terr.renderChunk(glm::ivec2(c->getXOff() + i, c->getZOff() + j), cubeShader);
        terr.renderChunk(glm::ivec2(c->getXOff() - i, c->getZOff() - j), cubeShader);
        terr.renderChunk(glm::ivec2(c->getXOff() - i, c->getZOff() + j), cubeShader);
        terr.renderChunk(glm::ivec2(c->getXOff() + i, c->getZOff() - j), cubeShader);
    }
}
In terr.renderChunk I'm using an unordered_map keyed by the chunk's position, with the chunk as the value. If the unordered_map doesn't find the chunk, the position gets added to terr.updateList.
Then, back in the game loop:
if (!terr.updateList.empty())
{
    terr.updateChunk(terr.updateList[terr.updateList.size() - 1]);
    terr.world[terr.updateList[terr.updateList.size() - 1]]->render(cubeShader);
    terr.updateList.pop_back();
}
On a separate line, I'm ensuring that the player's current chunk is loaded as well.
To generate a chunk's VBO, I expand the indices into the chunk's points vector and then build it like so:
glGenVertexArrays(1, &this->VAO);
glBindVertexArray(this->VAO);
// vertex VBO
glGenBuffers(1, &this->VBO_VERT);
glBindBuffer(GL_ARRAY_BUFFER, this->VBO_VERT);
glBufferData(GL_ARRAY_BUFFER, this->points.size() * sizeof(glm::vec3), &this->points[0][0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
// texture coords
glGenBuffers(1, &this->VBO_UV);
glBindBuffer(GL_ARRAY_BUFFER, this->VBO_UV);
glBufferData(GL_ARRAY_BUFFER, this->uvs.size() * sizeof(glm::vec2), &this->uvs[0][0], GL_STATIC_DRAW);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)(0));
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
Now, it all generally works, but I randomly get a segmentation fault and have debugged it down to my render function:
void Chunk::render(Shader shader)
{
    shader.setMat4("transform", offsetMatrix);
    shader.setFloat("transparency", 1.0f);
    glBindVertexArray(VAO);
    cout << "size " << points.size() << endl;
    glDrawArrays(GL_TRIANGLES, 0, points.size()); // RIGHT HERE CAUSES THE SEGFAULTS
    cout << "TEST2" << endl;
}
The segmentation fault seems to happen randomly; however, I believe it doesn't happen on new chunks, but rather when going back over old ones.
My question is: is there anything specific to OpenGL/C++ that I'm unaware of that could be causing it?
The other error I'm getting, which may be related but which I've debugged less, is a chunk rendering glitch: random terrain gets rendered, but when I walk into it, collision still behaves as if the terrain were where it should be.
I realize this is a long question, but any support is really appreciated!
Switching your render call to a renderChunk call looks like a safer alternative. I'm not sure it would fix the segfault, but from what I can see it's a safer, and not much slower, bet.
I'm using a library called rapidobj and attempting to load an OBJ file into the game. The problem is that whenever I try, I get a distorted model that doesn't look right.
Even with the index loop's start value changed to "1" (the starting point of indices in the OBJ format), it still doesn't retrieve the data correctly.
auto objFile = rapidobj::ParseFile(location);
for (unsigned int i = 0; i < objFile.attributes.positions.size(); i += 3)
    this->bufferVertices.push_back(glm::vec3(objFile.attributes.positions[i],
                                             objFile.attributes.positions[i + 1],
                                             objFile.attributes.positions[i + 2]));
for (unsigned int i = 0; i < objFile.shapes.size(); i++) {
    for (unsigned e = 0; e < objFile.shapes[i].mesh.indices.size(); e++)
        this->indices.push_back(objFile.shapes[i].mesh.indices[e].position_index);
}
for (unsigned i = 0; i < indices.size(); i++)
    this->vertices.push_back(this->bufferVertices[this->indices[i]]);
What I attempt to do with this code is load the vertices into an array, then use the indices to "duplicate" the referenced vertices into the main vertices array (supposed to work, but it doesn't). I have done this before, but this is the first time I'm using this library, and for some reason I just can't get the model displayed correctly. What could I possibly be doing wrong here?
When checked through renderdoc it can be seen model is partially loaded correctly but then a value keeps repeating for no reason, breaking the model.
Here are my buffers:
glGenVertexArrays(1, &(this->vao));
glBindVertexArray(this->vao);
glEnableVertexAttribArray(0);
glGenBuffers(1, &(this->vbo[0]));
glBindBuffer(GL_ARRAY_BUFFER, this->vbo[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(this->vertices.data())*sizeof(glm::vec3), vertices.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
glEnableVertexAttribArray(1);
glGenBuffers(1, &(this->vbo[1]));
glBindBuffer(GL_ARRAY_BUFFER, this->vbo[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(this->position.data()) * sizeof(glm::vec3), position.data(), GL_DYNAMIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*) 0);
glVertexAttribDivisor(1, 1);
Turns out the size passed to the buffer is wrong: sizeof(this->vertices.data()) is just the size of a pointer, not of the vertex data. It should be changed to
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), vertices.data(), GL_STATIC_DRAW);
During small-size tests the old code didn't have any issues, essentially hiding this bug and causing a four-day headache. Rubber-duck effect: after posting it to Stack Overflow I magically found the solution.
I have a bunch of code (copied from various tutorials) that is supposed to draw a random color-changing cube that the camera shifts around every second or so (driven by a variable; I'm not using timers yet). It worked before I reorganized my code into distinct classes and moved it out of my main function, but now I can't see anything on the main window other than a blank background. I cannot pinpoint any particular issue, as I'm getting no errors or exceptions, and my own code checks out: when I debugged, every variable had the value I expected, and the shaders I use (in string form) worked before the reorganization. I can also print out the vertices of the cube in the same scope as the glDrawArrays() call, and they have the correct values too. Basically, I have no idea what's wrong with my code that is causing nothing to be drawn.
My best guess is that I called (or forgot to call) some OpenGL function improperly, or with the wrong data, in one of the three methods of my Model class. In my program, I create a Model object (after GLFW and GLAD are initialized, which calls the Model constructor), update it every once in a while (timing doesn't matter) through the update() function, then draw it to my screen on every pass of my main loop through the draw() function.
Possible locations of code faults:
Model::Model(std::vector<GLfloat> vertexBufferData, std::vector<GLfloat> colorBufferData) {
    mVertexBufferData = vertexBufferData;
    mColorBufferData = colorBufferData;
    // Generate 1 buffer, put the resulting identifier in vertexbuffer
    glGenBuffers(1, &VBO);
    // The following commands will talk about our 'vertexbuffer' buffer
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    // Give our vertices to OpenGL.
    glBufferData(GL_ARRAY_BUFFER, sizeof(mVertexBufferData), &mVertexBufferData.front(), GL_STATIC_DRAW);
    glGenBuffers(1, &CBO);
    glBindBuffer(GL_ARRAY_BUFFER, CBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(mColorBufferData), &mColorBufferData.front(), GL_STATIC_DRAW);
    // Create and compile our GLSL program from the shaders
    programID = loadShaders(zachos::DATA_DEF);
    glUseProgram(programID);
}

void Model::update() {
    for (int v = 0; v < 12 * 3; v++) {
        mColorBufferData[3 * v + 0] = (float)std::rand() / RAND_MAX;
        mColorBufferData[3 * v + 1] = (float)std::rand() / RAND_MAX;
        mColorBufferData[3 * v + 2] = (float)std::rand() / RAND_MAX;
    }
    glBufferData(GL_ARRAY_BUFFER, sizeof(mColorBufferData), &mColorBufferData.front(), GL_STATIC_DRAW);
}

void Model::draw() {
    // Setup some 3D stuff
    glm::mat4 mvp = Mainframe::projection * Mainframe::view * model;
    GLuint MatrixID = glGetUniformLocation(programID, "MVP");
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &mvp[0][0]);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(
        0,        // attribute 0. No particular reason for 0, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );
    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, CBO);
    glVertexAttribPointer(
        1,        // attribute. No particular reason for 1, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );
    // Draw the array
    glDrawArrays(GL_TRIANGLES, 0, mVertexBufferData.size() / 3);
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
}
My question is simple: how come my program won't draw a cube on my screen? Is the issue within these three functions or elsewhere? I can provide more general information about the drawing process if needed, though I believe the code I provided is enough, since I literally just call model.draw().
sizeof(std::vector) will usually just be 24 bytes (since the struct typically contains 3 pointers). So basically both of your buffers have 6 floats loaded in them, which is not enough vertices for a single triangle, let alone a cube!
You should instead call size() on the vector when loading the data into the vertex buffers:
glBufferData(GL_ARRAY_BUFFER,
mVertexBufferData.size() * sizeof(float), ///< this!
mVertexBufferData.data(), ///< prefer calling data() here!
GL_STATIC_DRAW);
Well, the origins of this question lie here:
https://stackoverflow.com/questions/20820456/strange-behavior-in-application-using-glfw
But I decided to simplify the question, with less text and fewer pictures (now I use triangles).
I've got two objects; both use VBOs. I initialize every object using the constructor and an init method:
Character::Character() {
    glm::vec3 vert[] = {
        glm::vec3(-.5f, -.5f, 0.0f),
        glm::vec3(.5f, -.5f, 0.0f),
        glm::vec3(0.0f, .5f, 0.0f)
    };
    vertices.insert(vertices.begin(), vert, vert + 3);
}

void Character::init() {
    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
}

void Character::draw() {
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glDrawArrays(GL_TRIANGLES, 0, vertices.size());
    glDisableVertexAttribArray(0);
}
In another class, glfwinitializer, I keep both objects in a std::vector<Character>. So in main I create two objects and then push_back them into the vector.
The draw loop is simple: I iterate through the vector and call each object's draw method.
while (!glfwWindowShouldClose(window))
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (int i = 0; i < scene_items.size(); i++) {
        scene_items[i]->draw();
    }
    this->navigator();
    glfwSwapBuffers(window);
    glfwPollEvents();
}
The navigator method calculates the new position of objects when they are selected. In every method where I update the vertices vector, I call:
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
For example(method move_offset is called in navigator)
void Character::move_offset(double x_offset, double y_offset) {
    for (int i = 0; i < vertices.size(); i++) {
        vertices[i].x += x_offset;
        vertices[i].y += y_offset;
    }
    glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
}
But the two triangles are not shown on the screen at the same time: when I select one object, the other disappears. When I click on the disappeared triangle, it appears and the other disappears. (There is also one triangle at the initial location, but it cannot be moved.)
Why? Is there a problem with the buffers?
EDIT: project reproducing the problem, with additional libs:
visual studio 2010 project (9 mb)
project with libs
I haven't looked at the full source code in the linked project, but from what you have pasted here, you seem not to call glBindBuffer() in your move_offset() method - thus overwriting the buffer of whatever object was last bound (probably the last one drawn in your loop).
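For illustration, a sketch of move_offset with the bind added (assuming vertexbuffer is the same member used in init(); this is an OpenGL fragment, not runnable on its own):

```cpp
void Character::move_offset(double x_offset, double y_offset) {
    for (std::size_t i = 0; i < vertices.size(); i++) {
        vertices[i].x += static_cast<float>(x_offset);
        vertices[i].y += static_cast<float>(y_offset);
    }
    // Bind this object's buffer first; otherwise glBufferData targets
    // whichever buffer happens to be bound (likely the last one drawn).
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * vertices.size(),
                 &vertices[0], GL_STATIC_DRAW);
}
```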
If I do this:
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions1)+sizeof(vertexPositions2), vertexPositions1, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(vertexPositions1), sizeof(vertexPositions2), vertexPositions2);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
I get correct shapes displayed. Now, if I replace these lines by:
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions1)+sizeof(vertexPositions2), 0, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
I still get two correct shapes (while nothing has been uploaded into the buffer). I suppose this is because the memory allocated for the buffer is the same in both cases, so case 2 actually reuses the vertices stored during case 1.
To check that, I commented out the line glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); and the program crashed. Then I uncommented it and got a black screen, as expected.
So, is it really the memory initialized in case 1 that is being used in case 2 (even though I did not initialize my buffer with any vertex data)? And how could I avoid such side effects, so that uninitialized memory is detected sooner?
EDIT 1: using GL_STREAM_DRAW produces the same behavior
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions1)+sizeof(vertexPositions2), vertexPositions1, GL_STREAM_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(vertexPositions1), sizeof(vertexPositions2), vertexPositions2);
EDIT 2: a "similar" use of uninitialized memory on the CPU (I am not asking why, or about the differences between CPU and GPU; I'm just noting that random uninitialized memory would help here too, if that is indeed the actual problem):
int a[2];
for (unsigned int i = 0; i < 2; ++i)
{
    std::cout << a[i] << std::endl;
}
a[0] = 1234; a[1] = 5678;
for (unsigned int i = 0; i < 2; ++i)
{
    std::cout << a[i] << std::endl;
}
2 executions in a row will produce:
-858993460
-858993460
1234
5678
Using a debugger tool might help you (gDEBugger, maybe). Uninitialized memory is pretty much the same on the graphics card as it is in main RAM: you can get the same kind of artifacts from reading random memory.
I'm attempting to add a vertex buffer to the Mesh.cpp file of the pixel city procedural city-generating program. Part of the current code looks like this:
for (qsi = _quad_strip.begin(); qsi < _quad_strip.end(); ++qsi) {
    glBegin(GL_QUAD_STRIP);
    for (n = qsi->index_list.begin(); n < qsi->index_list.end(); ++n) {
        glTexCoord2fv(&_vertex[*n].uv.x);
        glVertex3fv(&_vertex[*n].position.x);
    }
    glEnd();
}
This draws textures onto the rectangular sides of some of the buildings. Just going off the VBO tutorials I've found online, I attempt to convert this to use a vertex buffer like so (I store vboId in Mesh.h):
for (qsi = _quad_strip.begin(); qsi < _quad_strip.end(); ++qsi) {
    void* varray = (char*)malloc(sizeof(GLfloat) * 5 * qsi->index_list.size());
    GLfloat* p = (GLfloat*)varray;
    int counter = 0;
    for (n = qsi->index_list.begin(); n < qsi->index_list.end(); ++n) {
        memcpy(&p[counter + 0], &_vertex[*n].uv.x, sizeof(GLfloat) * 2);
        memcpy(&p[counter + 2], &_vertex[*n].position.x, sizeof(GLfloat) * 3);
        counter += 5;
    }
    glGenBuffersARB(1, &vboId);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboId);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * 5 * qsi->index_list.size(), p, GL_STATIC_DRAW_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, sizeof(GLfloat) * 5, (GLfloat*)0);
    glVertexPointer(3, GL_FLOAT, sizeof(GLfloat) * 5, (GLfloat*)2);
    glDrawArrays(GL_QUAD_STRIP, 0, qsi->index_list.size());
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    free(varray);
}
However, this code simply doesn't work. Nothing is rendered. Sometimes, when I mess with the parameters to glTexCoordPointer or glVertexPointer, I get really skewed/garbage data drawn on the screen (or the program crashes), but nothing that has come even remotely close to working.
Your vertex pointer is wrong. When using VBOs, the pointer is interpreted as a byte offset relative to the currently bound buffer. So you need sizeof(GLfloat)*2 bytes as the offset:
glVertexPointer(3, GL_FLOAT, sizeof(GLfloat)*5, (char*)0 +sizeof(GLfloat)*2);
As a side note: you could save the additional data copy if you created the VBO with the correct size but NULL as the data pointer (which allocates the data storage) and memory-mapped it with glMapBuffer(), completely avoiding the malloc()ed temporary buffer.