I am trying to render a large dataset of ~100 000 values in OpenGL, right now only as points, later using sprites.
My vector "positions" is ordered like this:
+-------------------------------------------------
| x | y | z | w | x | y | z | w | x | y | z | ...
+-------------------------------------------------
where the fourth component (w) is a scaling factor that is used in the vertex/fragment shaders.
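For context, the shaders consume w roughly like this (a simplified sketch only, since the actual shaders aren't shown here; it assumes w drives the point size, which needs GL_PROGRAM_POINT_SIZE enabled):
// Hypothetical vertex shader, embedded as a C++ string:
// in_position.xyz is the point position, in_position.w scales the point.
const char* vertexShaderSrc = R"GLSL(
    #version 330 core
    in vec4 in_position;
    uniform mat4 u_mvp; // assumed model-view-projection matrix
    void main() {
        gl_Position  = u_mvp * vec4(in_position.xyz, 1.0);
        gl_PointSize = in_position.w;
    }
)GLSL";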
VBO creation [ EDIT ]
...
v_size = positions.size();
GLint positionAttrib = _programObject->attributeLocation("in_position");
glGenVertexArrays(1, &_vaoID);
glGenBuffers(1, &_vboID);
glBindVertexArray(_vaoID);
glBindBuffer(GL_ARRAY_BUFFER, _vboID);
// one glBufferData call uploads the whole xyzw array (v_size floats)
glBufferData(GL_ARRAY_BUFFER, v_size*sizeof(GLfloat), &positions[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(positionAttrib);
// 4 floats per vertex; the data is tightly packed, so a stride of 0 would also work
glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0);
glBindVertexArray(0);
Render-stage: [ EDIT ]
This works now, but I am not sure if it is 100% correct; feel free to criticize:
GLint vertsToDraw = v_size / 4;
GLint positionAttrib = _programObject->attributeLocation("in_position");
// edit 1. added vao bind
glBindVertexArray(_vaoID);
glEnableVertexAttribArray(positionAttrib);
glBindBuffer(GL_ARRAY_BUFFER, _vboID);
//glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), (void*)0);
// edit 2. no stride
glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_POINTS, 0, vertsToDraw);
glDisableVertexAttribArray(positionAttrib);
glBindVertexArray(0);
Please let me know if any more code is needed.
Fixed everything as suggested by derhass and keltar, see the comments on the post. Everything works now.
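For reference, since the VAO stores the attribute pointer and enable state that were set during creation, the render stage can be reduced to roughly this (assuming nothing else touches the attribute state in between):
GLint vertsToDraw = v_size / 4;
// the attribute setup was captured in the VAO during creation,
// so binding the VAO is enough; no glVertexAttribPointer needed here
glBindVertexArray(_vaoID);
glDrawArrays(GL_POINTS, 0, vertsToDraw);
glBindVertexArray(0);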
Related
I have a lot of points that I need to draw in a batch. I have been trying for two days and I can't seem to get any progress with glDrawArrays. I have tried DrawNode and drawing each individual point for testing, and it works correctly... but I can't seem to get glDrawArrays to give any visual result.
Here is my drawing code (I changed a few variable names):
auto glProgram = getGLProgram();
if (glProgram == nullptr) {
    setGLProgramState(GLProgramState::getOrCreateWithGLProgramName(
        GLProgram::SHADER_NAME_POSITION_COLOR));
    glProgram = getGLProgram();
    if (glProgram == nullptr) {
        return;
    }
}
glProgram->use();
glProgram->setUniformsForBuiltins();
GL::enableVertexAttribs(GL::VERTEX_ATTRIB_FLAG_POSITION | GL::VERTEX_ATTRIB_FLAG_COLOR);
GLfloat *vertices = new GLfloat[myStruct->data.size()*2];
GLfloat *colors = new GLfloat[myStruct->data.size()*4];
int vIndex = 0;
int cIndex = 0;
for (std::vector<myPointStruct*>::iterator it = myStruct->data.begin(); it != myStruct->data.end(); ++it) {
    vertices[vIndex++] = (*it)->pos.x;
    vertices[vIndex++] = (*it)->pos.y;
    colors[cIndex++] = (*it)->color.r;
    colors[cIndex++] = (*it)->color.g;
    colors[cIndex++] = (*it)->color.b;
    colors[cIndex++] = (*it)->color.a;
}
glLineWidth(10);
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat), &vertices[0]);
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_COLOR, 4, GL_FLOAT, GL_FALSE, sizeof(GLfloat), &colors[0]);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDrawArrays(GL_POINTS, 0, (GLsizei) myStruct->data.size());
CC_INCREMENT_GL_DRAWN_BATCHES_AND_VERTICES(1, (GLsizei) myStruct->data.size());
And here is how I call the method:
_renderTexture->begin();
myMethodForDrawing();
_renderTexture->end();
Director::getInstance()->getRenderer()->render();
I have also tried:
_renderTexture->begin();
_customCommand.init(_renderTexture->getGlobalZOrder());
_customCommand.func = CC_CALLBACK_0(MyClass::myMethodForDrawing, this);
auto renderer = Director::getInstance()->getRenderer();
renderer->addCommand(&_customCommand);
_renderTexture->end();
The 5th parameter of glVertexAttribPointer specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array.
Since your vertices and colors are tightly packed, you do not need to set the stride parameter. Note that sizeof(GLfloat) is wrong anyway; in your case it would be 2 * sizeof(GLfloat) for vertices and 4 * sizeof(GLfloat) for colors.
Change your code like this (focus on the 0 for the 5th parameter):
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, &vertices[0]);
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_COLOR, 4, GL_FLOAT, GL_FALSE, 0, &colors[0]);
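Applied to the draw method from the question, the relevant part would then read (a sketch reusing the question's names; the delete[] calls are an addition, since the original never frees the arrays and would leak them every frame):
glLineWidth(10);
// stride 0: positions (2 floats) and colors (4 floats) are tightly packed
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, &vertices[0]);
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_COLOR, 4, GL_FLOAT, GL_FALSE, 0, &colors[0]);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDrawArrays(GL_POINTS, 0, (GLsizei) myStruct->data.size());
CC_INCREMENT_GL_DRAWN_BATCHES_AND_VERTICES(1, (GLsizei) myStruct->data.size());
delete[] vertices; // not in the original: free the per-frame heap arrays
delete[] colors;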
I'm having some difficulty mixing instanced data with non-instanced data.
What I have is an array of GLfloats and an array of GLuints.
The GLuints are my index data for rendering elements; the GLfloats are the vertex, texel and depth information.
What I'm doing is collecting all the data required for rendering a series of quads and then submitting it all at once:
For each Quad there are:
Four vertices:
2 floats for the vertex position
2 floats for the texel position
One depth float
6 indices submitted to the index buffer
They are in that order. So after one blit of a texture that I want stretched to the extents of the screen, I would expect to see the following (top-right, bottom-right, bottom-left, top-left; the texture coords may seem flipped, but they're correct):
Float Buffer
[ 1 | -1 | 1 | 0 ] [ 1 | 1 | 1 | 1 ] [ -1 | 1 | 0 | 1 ] [ -1 | -1 | 0 | 0 ] [ 0.9 ]
[ pos | tex ] [ pos | tex ] [ pos | tex ] [ pos | tex ] [depth]
Uint Buffer
[ 0 | 1 | 3 | 3 | 1 | 2 ]
In case it's not clear, each of the four vertices for a quad are to use the same depth value in their frags.
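To spell out the arithmetic of that layout: each quad occupies 4 vertices x 4 floats + 1 depth float = 17 floats, i.e. 68 bytes:
// Per-quad layout, assuming tightly packed GLfloats:
// position: byte offset 0 within each vertex (2 floats)
// texel:    byte offset 8 within each vertex (2 floats)
// depth:    byte offset 64 within each quad  (1 float, shared by all 4 vertices)
constexpr size_t floatsPerQuad = 4 * 4 + 1;                       // 17 floats
constexpr size_t bytesPerQuad  = floatsPerQuad * sizeof(GLfloat); // 68 bytes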
And so, I setup the buffers like so:
uint32_t maxNum = 32; // this is the max amount I can submit
glGenBuffers(1, &m_vbo); // to store my vertex positions, texels and depth
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, 17 * maxNum, nullptr, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenBuffers(1, &m_indicesBuffer); // to store the indices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indicesBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6 * maxNum, nullptr, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glGenVertexArrays(1, &m_vertexArrayObject);
glBindVertexArray(m_vertexArrayObject);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 17, 0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 17, (void*)(sizeof(GLfloat)*2));
glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, 17, (void*)(sizeof(GLfloat)*16));
glVertexAttribDivisor(0,0);
glVertexAttribDivisor(1,0);
glVertexAttribDivisor(2,4);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indicesBuffer);
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
Populating the buffers and drawing looks like this. vertexAndTexelData and indicesData are the GLfloat and GLuint buffers discussed above; numSubmitted is how many quads I'm actually drawing.
glBindTexture(GL_TEXTURE_2D, texture);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(GLfloat) * vertexAndTexelData.size(), vertexAndTexelData.data());
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indicesBuffer);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(GLuint) * indicesData.size(), indicesData.data());
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(m_vertexArrayObject);
glDrawElementsInstanced(GL_TRIANGLES, indicesData.size(), GL_UNSIGNED_INT, (const void*)0, numSubmitted);
glBindVertexArray(0);
When I draw all this, I'm getting a black screen. If I take out the depth data and the instanced drawing stuff, everything works perfectly fine. So I'm guessing it's to do with the instanced drawing.
If I put this through RenderDoc I see the following:
Buffer Bindings
So I can see my three vertex attributes appear to be set correctly.
Vertex Attribute Formats
So these appear to be in the right layout and the correct data types.
Mesh Output
However, if I look at the Mesh data being submitted...
Something is clearly wrong. The Depth values all appear to be correct, so it appears the instanced drawing is working there.
But obviously the positions and texels are busted. Interestingly, where a value isn't garbage (i.e. it is a -1, 0 or 1), it is correct for that position.
I suspect it's the stride or offset or something... but I can't see the issue.
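Note that glVertexAttribDivisor(2, 4) does not make attribute 2 advance once every 4 vertices. The divisor counts instances: a non-zero divisor N advances the attribute once every N instances, and within a single instance an instanced attribute never advances at all. In sketch form:
// divisor 0: the attribute advances per vertex (ordinary attribute)
glVertexAttribDivisor(0, 0);
// divisor 1: the attribute advances once per instance
glVertexAttribDivisor(2, 1);
// divisor 4: the attribute advances once every 4 *instances*;
// no divisor setting advances an attribute every 4 vertices
glVertexAttribDivisor(2, 4);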
I followed a tutorial to load and draw a .obj model with OpenGL.
I have only one problem: at the end, we have a vector<glm::vec3> to draw.
In the tutorial they said to use "glBufferData()"
Then I wrote this:
float* _vertices = new float[vertices.size() * 3];
for (int i = 0; i < vertices.size(); ++i)
{
    float* _t = glm::value_ptr(vertices[i]);
    for (int j = 0; j < 3; ++j)
        _vertices[i + j*(vertices.size() - 1)] = _t[j];
}
(I converted my vector into a float*.)
Then I load it:
mat4 projection;
mat4 modelview;
projection = perspective(70.0, (double)800 / 600, 1.0, 100.0);
modelview = mat4(1.0);
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(_vertices), _vertices, GL_DYNAMIC_DRAW);
And I finally draw it in my main loop:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
modelview = lookAt(vec3(3, 1, 3), vec3(0, 0, 0), vec3(0, 1, 0));
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, vertices.size());
glDisableVertexAttribArray(0);
But it does not work... (I have a black screen)
sizeof(_vertices) does not give you what you expect. It returns the size of float*, which is a pointer, and not the number of bytes of the data behind the pointer.
Use vertices.data() for the pointer to the first element of the std::vector, and 3 * vertices.size() * sizeof(float) as the number of bytes (glm::vec3 contains 3 floats).
Together that looks like:
glBufferData(GL_ARRAY_BUFFER, 3 * vertices.size() * sizeof(float), vertices.data(), GL_DYNAMIC_DRAW);
You can also substitute sizeof(glm::vec3) for 3 * sizeof(float).
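With that substitution the call becomes:
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), vertices.data(), GL_DYNAMIC_DRAW);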
Also check whether your glm::perspective function expects the field of view in degrees or radians; you currently pass 70.0, intending degrees.
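If it expects radians (the default in recent GLM versions), the fix is a one-liner with glm::radians:
// convert the 70 degree field of view to radians before passing it on
projection = perspective(radians(70.0), (double)800 / 600, 1.0, 100.0);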
I have a working Vertex-Buffer-Object but I need to add the normals.
The normals are stored in the same array as the vertex positions; they are interleaved:
Vx Vy Vz Nx Ny Nz
This is my code so far:
GLfloat values[NUM_POINTS*3 + NUM_POINTS*3];
void initScene() {
    for (int i = 0; i < NUM_POINTS; i = i + 6) {
        values[i+0] = bunny[i];
        values[i+1] = bunny[i+1];
        values[i+2] = bunny[i+2];
        values[i+3] = normals[i];
        values[i+4] = normals[i+1];
        values[i+5] = normals[i+2];
    }
    glGenVertexArrays(1, &bunnyVAO);
    glBindVertexArray(bunnyVAO);
    glGenBuffers(1, &bunnyVBO);
    glBindBuffer(GL_ARRAY_BUFFER, bunnyVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(bunny), bunny, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    glGenBuffers(1, &bunnyIBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bunnyIBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(triangles), triangles, GL_STATIC_DRAW);
    // unbind active buffers //
    glBindVertexArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}

void renderScene() {
    if (bunnyVBO != 0) {
        // x: bind VAO //
        glEnableClientState(GL_VERTEX_ARRAY);
        glBindVertexArray(bunnyVAO);
        glDrawElements(GL_TRIANGLES, NUM_TRIANGLES, GL_UNSIGNED_INT, NULL);
        glDisableClientState(GL_VERTEX_ARRAY);
        // unbind active buffers //
        glBindVertexArray(0);
    }
}
I can see something on the screen, but it is not right, as the normals are not used correctly...
How can I correctly hook up the values array with my code so far?
You need to call glVertexAttribPointer two times: once for the vertices and once for the normals. This is how you tell OpenGL how your data is laid out inside your vertex buffer.
// Vertices consist of 3 floats, occurring every 24 bytes (6 floats),
// starting at byte 0.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 24, (void*)0);
// Normals consist of 3 floats, occurring every 24 bytes, starting at byte 12.
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 24, (void*)12);
This is assuming that your normal attribute in your shader has an index of 1.
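Tying that back to initScene from the question: upload the interleaved values array rather than bunny, and enable both attributes. A sketch, assuming values has been filled correctly with interleaved positions and normals:
glBindBuffer(GL_ARRAY_BUFFER, bunnyVBO);
// upload the interleaved array, not the positions-only bunny array
glBufferData(GL_ARRAY_BUFFER, sizeof(values), values, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);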
I generated a model (Suzie) in Blender and exported it to a .obj file with normals. While loading the model into my app I noticed that the numbers of vertices and normals differ (2012 and 1967).
I am trying to implement simple cel shading. The problem is in passing the normals to the shader. For storing vertex data I use vectors from glm.
std::vector<unsigned int> face_indices;
std::vector<unsigned int> normal_indices;
std::vector<glm::vec3> geometry;
std::vector<glm::vec3> normals;
The result I've got so far:
Buffers Layout
glBindVertexArray(VAO);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glBufferData(GL_ARRAY_BUFFER, geometry.size() * sizeof(glm::vec3), &geometry[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, NormalVBOID);
glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(glm::vec3), &normals[0], GL_DYNAMIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, VIndexVBOID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, face_indices.size() * sizeof(unsigned int), &face_indices[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
Rendering fragment
glBindVertexArray(VAO);
glPolygonMode(GL_FRONT_AND_BACK, GL_QUADS);
glDrawElements(GL_QUADS, face_indices.size(), GL_UNSIGNED_INT, (void*)0);
glBindVertexArray(0);
The reason I had such a weird problem was that some normals were used more than once to save disk space, so I had to rearrange them into the proper order. The solution is pretty trivial.
geometry.clear();
normals.clear();
geometry.resize(vv.size());
normals.resize(vv.size());
for (unsigned int i = 0; i < face_indices.size(); i++)
{
    // look up the position and normal referenced by this face corner
    int vi = face_indices[i];
    int ni = normal_indices[i];
    glm::vec3 v = vv[vi];
    glm::vec3 n = vn[ni];
    // store the normal at the same index as its vertex, so a single
    // index buffer can address both arrays
    geometry[vi] = v;
    normals[vi] = n;
    indices.push_back(vi);
}
You should also keep in mind that using the smooth modifier in Blender before export will in some cases help ensure that you have 1 normal per vertex (you may or may not also need to switch to per-vertex normal view instead of face-normal view; I can't remember, so you'll have to test). This is because, by default, Blender uses per-face normals; the smooth modifier ("w" hotkey menu) switches it to per-vertex normals. Then when you export, you export verts and norms as usual, and the numbers should match. It doesn't always work, but this has worked for me in the past.
This could mean less unnecessary juggling of your data during import.