We are migrating our OpenGL code to a newer version (ES 2.0). The application renders vector images (i.e. CGM files). I have successfully rendered the graphics from vertices using a VBO, but the problem is performance: the display list path performs far better than the VBO path. So I am thinking of using VBO indexing.
How do I come up with the indices array? Will indexing improve performance?
Please find my code below:
//This is my data structure
struct DisplayIndexID {
    int idx;
    DrawStateT drawState;
    //Every display Index ID has its own draw models.
    std::vector<std::unique_ptr<vertexModel>> readytoDrawModels;
};

//Initializing the VBO
void initVbo(std::vector<DisplayIndexID> & v)
{
    glBindVertexArray(geomVAO);
    glBindBuffer(GL_ARRAY_BUFFER, geomVBO);

    std::vector<QVector3D> vecToDraw;
    std::vector<QVector3D> finalVecToDraw;

    for (int j = 0; j < v.size(); j++)
    {
        for (auto& vModel : v[j].readytoDrawModels)
        {
            if (vModel) {
                vecToDraw = vModel->getVertices();
                finalVecToDraw.insert(finalVecToDraw.end(), vecToDraw.begin(), vecToDraw.end());
            }
        }
    }

    glBufferData(GL_ARRAY_BUFFER, sizeof(QVector3D) * finalVecToDraw.size(), &finalVecToDraw[0], GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
//Rendering function
void drawDisplayLists(std::vector<DisplayIndexID> & v)
{
    GLintptr offset = 0;
    initVbo(v);

    for (int i = 0; i < v.size(); i++)
    {
        //***********PRINT AREA***********************/
        for (auto& vModel : v[i].readytoDrawModels)
        {
            glBindBuffer(GL_ARRAY_BUFFER, geomVBO);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(QVector3D), (GLvoid*)offset);

            switch (vModel->getDrawMode())
            {
            case 0: //GL_POINTS
                glDrawArrays(GL_POINTS, 0, vModel->getVertices().size());
                break;
            case 1: //GL_LINES
                glDrawArrays(GL_LINES, 0, vModel->getVertices().size());
                break;
            case 3: // GL_TRIANGLES
                ...
            }

            offset += sizeof(QVector3D) * vModel->getVertices().size();
        }
    }
}
I have never worked with vector graphics, but here is the general approach for creating indices from a set of vertices. If I read your code right, your vertices are stored in the finalVecToDraw vector. First, create a vector for the indices; the supported index types range from unsigned char up to unsigned int.
Then resize that vector to the size of your vertex array (this avoids repeated reallocations while filling it).
Fill the vector so that the element at index n has the value n, for example with this code:
for(int i = 0; i < indices.size(); i++) {
indices[i] = i;
}
This will do what you already do, but using indices.
Now we need to remove the duplicate entries in the vertices vector and adjust the indices accordingly. Which algorithm is best for finding the duplicates depends on your data, but the bookkeeping is always the same. Say a duplicate vertex sits at index d and the first occurrence of that value is at index o (d must be bigger than o):
Delete the vertex at index d.
Iterate over your indices and subtract one from every element whose value is bigger than d.
Set the index at position d to o.
A sketch of one way to do the whole deduplication in a single pass is shown below.
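Rather than deleting vertices and shifting indices one duplicate at a time, it is usually simpler to build the deduplicated vertex buffer and the index buffer together in one pass. A minimal sketch, assuming QVector3D vertices as in your code (buildIndexedMesh is just an illustrative name, and the comparator exists only because QVector3D has no operator<):
#include <map>
#include <vector>
#include <QVector3D>

// Build an indexed mesh from the flat vertex list produced in initVbo().
void buildIndexedMesh(const std::vector<QVector3D>& finalVecToDraw,
                      std::vector<QVector3D>& uniqueVertices,
                      std::vector<unsigned int>& indices)
{
    // Lexicographic comparison so QVector3D can be used as a map key.
    auto less = [](const QVector3D& a, const QVector3D& b) {
        if (a.x() != b.x()) return a.x() < b.x();
        if (a.y() != b.y()) return a.y() < b.y();
        return a.z() < b.z();
    };
    std::map<QVector3D, unsigned int, decltype(less)> firstIndex(less);

    indices.reserve(finalVecToDraw.size());
    for (const QVector3D& vertex : finalVecToDraw)
    {
        auto it = firstIndex.find(vertex);
        if (it == firstIndex.end())
        {
            // First occurrence: append the vertex and remember its index.
            unsigned int newIndex = static_cast<unsigned int>(uniqueVertices.size());
            uniqueVertices.push_back(vertex);
            firstIndex[vertex] = newIndex;
            indices.push_back(newIndex);
        }
        else
        {
            // Duplicate: reuse the index of the first occurrence.
            indices.push_back(it->second);
        }
    }
}
uniqueVertices then goes into GL_ARRAY_BUFFER, indices into GL_ELEMENT_ARRAY_BUFFER, and you draw with glDrawElements instead of glDrawArrays. Whether that improves performance depends on how much vertex sharing your CGM data actually has; indexing mainly pays off when many primitives reuse the same vertices.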
I'm trying to pass a vector as the last argument of glDrawElements(). When I used an array, it worked fine; however, when I switched to a vector, only a portion of the object was rendered.
This worked just fine:
//TA_CartesianSys.h
class TA_CartesianSys
{
private:
int drawOrder[86];
//the rest of the class
}
//---------------------------------
//TA_CartesianSys.cpp
TA_CartesianSys::TA_CartesianSys()
{
GLfloat CartesianVertices[] = { ... };
int tempOrder[] = { ... };
for(int i = 0; i < sizeof(tempOrder) / sizeof(int); i++)
{
drawOrder[i] = tempOrder[i];
}
//Code to generate and bind vertex array and buffer
glDrawElements(GL_LINES, sizeof(drawOrder)/sizeof(int), GL_UNSIGNED_INT, drawOrder);
}
It worked as expected, and this is how it looked:
Now I decided to use a vector instead of an array for drawOrder[]. This is the new code:
//TA_CartesianSys.h
class TA_CartesianSys
{
private:
std::vector<int> drawOrder; //***NOTE: This is the change
//the rest of the class
}
//---------------------------------
//TA_CartesianSys.cpp
TA_CartesianSys::TA_CartesianSys()
{
GLfloat CartesianVertices[] = { ... };
int tempOrder[] = { ... };
drawOrder.resize(sizeof(tempOrder) / sizeof(int));
for(int i = 0; i < (sizeof(tempOrder) / sizeof(int)); i++)
{
drawOrder[i] = tempOrder[i];
}
//Code to generate and bind vertex array and buffer - Same as above
glDrawElements(GL_LINES, sizeof(drawOrder)/sizeof(int), GL_UNSIGNED_INT, &drawOrder[0]);
}
And this was what I got when I ran the program:
NOTE: the square in the middle was not part of this object. It belonged to a totally different class.
So, basically, when I changed drawOrder[] from an array to a vector, only a small part of my object was rendered (the 2 lines). The rest was not visible.
I put a breakpoint right at the draw() function, and it showed that the drawOrder vector was initialized properly, with exactly the same values as its array counterpart.
So, why am I only getting 2 lines instead of the whole grids? What am I missing?
sizeof on a vector object just tells you the size of the vector class instance itself, not the number of elements it holds. What you want is the size member function of std::vector, which returns the number of elements (i.e. you must not divide the value returned by std::vector::size by the element's sizeof).
glDrawElements(
GL_LINES,
drawOrder.size(), // <<<<<
GL_UNSIGNED_INT,
&drawOrder[0] );
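As a side note, if you can use C++11, std::vector::data() expresses the same thing as &drawOrder[0] and is also well defined when the vector happens to be empty:
glDrawElements(
    GL_LINES,
    static_cast<GLsizei>(drawOrder.size()),
    GL_UNSIGNED_INT,
    drawOrder.data() );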
I have written a simple OBJ parser in C++ that loads the vertices, indices and texture coordinates (that's all the data I need).
Here is the function:
Model* ModelLoader::loadFromOBJ(string objFile, ShaderProgram *shader, GLuint texture)
{
fstream file;
file.open(objFile);
if (!file.is_open())
{
cout << "ModelLoader: " << objFile << " was not found";
return NULL;
}
int vertexCount = 0;
int indexCount = 0;
vector<Vector3> vertices;
vector<Vector2> textureCoordinates;
vector<Vector2> textureCoordinatesFinal;
vector<unsigned int> vertexIndices;
vector<unsigned int> textureIndices;
string line;
while (getline(file, line))
{
vector<string> splitLine = Common::splitString(line, ' ');
// v - vertex
if (splitLine[0] == "v")
{
Vector3 vertex(stof(splitLine[1]), stof(splitLine[2]), stof(splitLine[3]));
vertices.push_back(vertex);
vertexCount++;
}
// vt - texture coordinate
else if (splitLine[0] == "vt")
{
Vector2 textureCoordinate(stof(splitLine[1]), 1 - stof(splitLine[2]));
textureCoordinates.push_back(textureCoordinate);
}
// f - face
else if (splitLine[0] == "f")
{
vector<string> faceSplit1 = Common::splitString(splitLine[1], '/');
vector<string> faceSplit2 = Common::splitString(splitLine[2], '/');
vector<string> faceSplit3 = Common::splitString(splitLine[3], '/');
unsigned int vi1 = stoi(faceSplit1[0]) - 1;
unsigned int vi2 = stoi(faceSplit2[0]) - 1;
unsigned int vi3 = stoi(faceSplit3[0]) - 1;
unsigned int ti1 = stoi(faceSplit1[1]) - 1;
unsigned int ti2 = stoi(faceSplit2[1]) - 1;
unsigned int ti3 = stoi(faceSplit3[1]) - 1;
vertexIndices.push_back(vi1);
vertexIndices.push_back(vi2);
vertexIndices.push_back(vi3);
textureIndices.push_back(ti1);
textureIndices.push_back(ti2);
textureIndices.push_back(ti3);
indexCount += 3;
}
}
// rearranging textureCoordinates into textureCoordinatesFinal based on textureIndices
for (int i = 0; i < indexCount; i++)
textureCoordinatesFinal.push_back(textureCoordinates[textureIndices[i]]);
Model *result = new Model(shader, vertexCount, &vertices[0], NULL, texture, indexCount, &textureCoordinatesFinal[0], &vertexIndices[0]);
models.push_back(result);
return result;
}
As you can see, I take the 1 - texCoord.y flip into account (because Blender and OpenGL use different texture coordinate conventions).
I also arrange the texture coordinates based on the texture indices after the while loop.
However, the models I try to render have their textures messed up. Here is an example:
Texture messed up
I even tried it with a single cube which I unwrapped myself in Blender and applied a very simple brick texture to. On 1 or 2 faces the texture was fine, but on the other faces one of the triangles had the correct texture and the other appeared stretched out (same as in the picture above).
A mesh is defined with a single index list that indexes all the vertex attributes at once. The vertex attributes (in your case the vertex positions and the texture coordinates) form a record set which is referenced by these indices.
As a consequence, each vertex coordinate may occur several times in the list and each texture coordinate may occur several times in the list, but each combination of vertex coordinate and texture coordinate is unique.
Take the vertexIndices and textureIndices and create unique pairs of vertices and texture coordinates (verticesFinal, textureCoordinatesFinal).
Create a new index list attribute_indices, which indexes these pairs.
Use a temporary container attribute_pairs to manage the unique pairs and to look up their indices:
#include <vector>
#include <map>
// input
std::vector<Vector3> vertices;
std::vector<Vector2> textureCoordinates;
std::vector<unsigned int> vertexIndices;
std::vector<unsigned int> textureIndices;
std::vector<unsigned int> attribute_indices; // final indices
std::vector<Vector3> verticesFinal; // final vertices buffer
std::vector<Vector2> textureCoordinatesFinal; // final texture coordinate buffer
// map a pair of indices to the final attribute index
std::map<std::pair<unsigned int, unsigned int>, unsigned int> attribute_pairs;
// vertexIndices.size() == textureIndices.size()
for ( size_t i = 0; i < vertexIndices.size(); ++ i )
{
// pair of vertex index and texture index
auto attr = std::make_pair( vertexIndices[i], textureIndices[i] );
// check if the pair is already a member of "attribute_pairs"
auto attr_it = attribute_pairs.find( attr );
if ( attr_it == attribute_pairs.end() )
{
// "attr" is a new pair
// add the attributes to the final buffers
verticesFinal.push_back( vertices[attr.first] );
textureCoordinatesFinal.push_back( textureCoordinates[attr.second] );
// the new final index is the next index
unsigned int new_index = (unsigned int)attribute_pairs.size();
attribute_indices.push_back( new_index );
// add the new map entry
attribute_pairs[attr] = new_index;
}
else
{
// the pair "attr" already exists: add the index which was found in the map
attribute_indices.push_back( attr_it->second );
}
}
Note that the number of vertex coordinates (verticesFinal.size()) is equal to the number of texture coordinates (textureCoordinatesFinal.size()), but the number of indices (attribute_indices.size()) is in general something completely different.
// verticesFinal.size() == textureCoordinatesFinal.size()
Model *result = new Model(
shader,
verticesFinal.size(),
verticesFinal.data(),
NULL, texture,
attribute_indices.size(),
textureCoordinatesFinal.data(),
attribute_indices.data() );
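If your Model class does not already handle it, the GL side of this is the usual element-buffer setup. A rough sketch (buffer handle names are made up, Vector3 and Vector2 are assumed to be tightly packed structs of 3 and 2 floats, and the attribute pointer setup is left out):
GLuint vbo = 0, tbo = 0, ibo = 0;

// Deduplicated positions.
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, verticesFinal.size() * sizeof(Vector3),
             verticesFinal.data(), GL_STATIC_DRAW);

// Deduplicated texture coordinates.
glGenBuffers(1, &tbo);
glBindBuffer(GL_ARRAY_BUFFER, tbo);
glBufferData(GL_ARRAY_BUFFER, textureCoordinatesFinal.size() * sizeof(Vector2),
             textureCoordinatesFinal.data(), GL_STATIC_DRAW);

// The single shared index list.
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, attribute_indices.size() * sizeof(unsigned int),
             attribute_indices.data(), GL_STATIC_DRAW);

// ... glVertexAttribPointer / glEnableVertexAttribArray for both attributes ...

glDrawElements(GL_TRIANGLES, (GLsizei)attribute_indices.size(), GL_UNSIGNED_INT, (void*)0);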
I am currently working on a game and I want to know if there is any way of managing the elements I am drawing. For example: if I draw 100 cubes in a loop, how can I show/hide cube number 15 or 63 or n? I thought that putting the elements in a display list would work, but I didn't find any property of it that could help.
GLuint cube;
cube = glGenLists(1);
glNewList(cube,GL_COMPILE);
for(int i = -30; i < 3; i++) {
for(int j = -30; j < 3; j++) {
glPushMatrix();
glTranslatef(i*2.0,0,j * 2.0);
Dcube();
glPopMatrix();
}
}
glEndList();
//something like glDeleteLists(cube, 1); but that only works on the entire list, not on individual objects
You have a display list, very good. So now you're back to using regular language constructs to decide when to call it.
#include <algorithm>
#include <array>

std::array<bool, 100> cubes;
std::fill(cubes.begin(), cubes.end(), true);
cubes[15] = false;
cubes[63] = false;
for (bool drawCube : cubes) {
if (drawCube) {
// move a bit, perhaps using glTranslate
glCallList(cube);
}
}
OpenGL isn't your statekeeper. It just draws what you tell it to; you're responsible for keeping track of your objects.
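For the grid from your question, a fuller sketch could look like the following. It assumes the display list cube compiles a single Dcube() at the origin (rather than the whole translated grid, as in your current code), so the per-cube translation stays outside the list and hidden cubes can simply be skipped:
#include <vector>

const int gridMin = -30, gridMax = 3;
const int gridSize = gridMax - gridMin;            // 33 cells per side

// One visibility flag per grid cell, all visible by default.
std::vector<bool> visible(gridSize * gridSize, true);

// "cube" is assumed to be a display list that draws ONE cube at the origin.
void drawCubes(GLuint cube)
{
    for (int i = gridMin; i < gridMax; i++) {
        for (int j = gridMin; j < gridMax; j++) {
            int cell = (i - gridMin) * gridSize + (j - gridMin);
            if (!visible[cell])
                continue;                          // skip hidden cubes
            glPushMatrix();
            glTranslatef(i * 2.0f, 0.0f, j * 2.0f);
            glCallList(cube);
            glPopMatrix();
        }
    }
}

// Somewhere in your game logic:
//   visible[15] = false;   // hide cube 15
//   visible[63] = false;   // hide cube 63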
I've been working on skeletal animation using OpenGL, sending bone weight influences to the GPU in a struct containing a pair of arrays. However, changing these arrays to vectors doesn't seem to work (as in, the model ceases to render, as if the bone information were missing or wrong in some manner).
Aside from the declaration of the vectors within the struct, the code path is identical. I've debugged through and ensured the sizes/values of the elements are okay. The only thing I can think of is that C++ is freeing the contents of the vectors before they can be uploaded to the GPU, but I've seen no evidence to support that claim.
It's worth noting that the resizes used are to make the structure compatible with the existing array functionality and will be removed to allow for dynamic scaling after this issue is resolved.
Here's the structure as it stands, using vectors:
struct VertexBoneData
{
std::vector<glm::uint> IDs;
std::vector<float> weights;
VertexBoneData()
{
Reset();
IDs.resize(NUM_BONES_PER_VEREX);
weights.resize(NUM_BONES_PER_VEREX);
};
void Reset()
{
IDs.clear();
weights.clear();
}
void AddBoneData(glm::uint p_boneID, float p_weight);
};
Edit: Additional information.
Here's the loop to get the bone information in to the struct:
void Skeleton::LoadBones(glm::uint baseVertex, const aiMesh* mesh, std::vector<VertexBoneData>& bones)
{
for (glm::uint i = 0; i < mesh->mNumBones; i++) {
glm::uint boneIndex = 0;
std::string boneName(mesh->mBones[i]->mName.data);
if (_boneMap.find(boneName) == _boneMap.end()) {
//Allocate an index for a new bone
boneIndex = _numBones;
_numBones++;
BoneInfo bi;
_boneInfo.push_back(bi);
SetMat4x4(_boneInfo[boneIndex].boneOffset, mesh->mBones[i]->mOffsetMatrix);
_boneMap[boneName] = boneIndex;
}
else {
boneIndex = _boneMap[boneName];
}
for (glm::uint j = 0; j < mesh->mBones[i]->mNumWeights; j++) {
glm::uint vertId = baseVertex + mesh->mBones[i]->mWeights[j].mVertexId;
float weight = mesh->mBones[i]->mWeights[j].mWeight;
bones[vertId].AddBoneData(boneIndex, weight);
}
}
}
The code to add the ID/weights to the vertex:
void VertexBoneData::AddBoneData(glm::uint p_boneID, float p_weight)
{
for (glm::uint i = 0; i < ARRAY_SIZE_IN_ELEMENTS(IDs); i++) {
if (weights[i] == 0.0) {
IDs[i] = p_boneID;
weights[i] = p_weight;
return;
}
}
assert(0); //Should never get here - more bones than we have space for
}
Loop to load the Skeleton and insert to the VBO object from the Mesh class:
if (_animator.HasAnimations())
{
bones.resize(totalIndices);
for (unsigned int i = 0; i < _scene->mNumMeshes; ++i){
_animator.LoadSkeleton(_subMeshes[i].baseVertex, _scene->mMeshes[i], bones);
}
_vertexBuffer[2].AddData(&bones);
}
Insert to the VBO object:
void VertexBufferObject::AddData(void* ptr_data)
{
data = *static_cast<std::vector<BYTE>*>(ptr_data);
}
Finally the code to add the information to the GPU from the Mesh class:
_vertexBuffer[2].BindVBO();
_vertexBuffer[2].UploadDataToGPU(GL_STATIC_DRAW);
glEnableVertexAttribArray(BONE_ID_LOCATION);
glVertexAttribIPointer(BONE_ID_LOCATION, 4, GL_INT, sizeof(VertexBoneData), (const GLvoid*)0);
glEnableVertexAttribArray(BONE_WEIGHT_LOCATION);
glVertexAttribPointer(BONE_WEIGHT_LOCATION, 4, GL_FLOAT, GL_FALSE, sizeof(VertexBoneData), (const GLvoid*)(sizeof(VertexBoneData) / 2));
This can't possibly work. A std::vector instance is an object that internally contains a pointer to dynamically allocated data.
So with this definition:
struct VertexBoneData
{
std::vector<glm::uint> IDs;
std::vector<float> weights;
The values for the IDs and weights aren't stored directly in the structure. They are in dynamically allocated heap memory. The structure only contains the vector objects themselves, which hold pointers and some bookkeeping attributes.
You can't store these structs directly in an OpenGL buffer and have the GPU access the data. OpenGL won't know what to do with vector objects.
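A minimal sketch of the usual fix is to keep the per-vertex bone data in fixed-size arrays, so the struct is a single contiguous, tightly packed block that matches the attribute pointer setup above (NUM_BONES_PER_VEREX is the constant from your own code; 4 is assumed here):
#include <cstring>              // std::memset
#include <glm/glm.hpp>

#define NUM_BONES_PER_VEREX 4   // assumed value of your existing constant

struct VertexBoneData
{
    glm::uint IDs[NUM_BONES_PER_VEREX];     // plain C arrays: the data lives inside the struct,
    float     weights[NUM_BONES_PER_VEREX]; // so an array of these structs is one contiguous block

    VertexBoneData() { Reset(); }

    void Reset()
    {
        std::memset(IDs, 0, sizeof(IDs));
        std::memset(weights, 0, sizeof(weights));
    }

    void AddBoneData(glm::uint p_boneID, float p_weight);
};
With this layout sizeof(VertexBoneData) is typically 32 bytes, the IDs occupy the first half and the weights the second half, so the (sizeof(VertexBoneData) / 2) offset used for the weight attribute in your glVertexAttribPointer call actually points at the weights array.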
I am trying to add animation to my program.
I have human model created in Blender with skeletal animation, and I can skip through the keyframes to see the model walking.
Now I've exported the model to an XML (Ogre3D) format, and in this XML file I can see the rotation, translation and scale assigned to each bone at a specific time (t=0.00000, t=0.00040, ... etc.)
What I've done is find which vertices are assigned to each bone. Now I'm assuming all I need to do is apply the transformations defined for the bone to each of these vertices. Is this the correct approach?
In my OpenGL draw() function (rough pseudo-code):
for (Bone b : bones){
gl.glLoadIdentity();
List<Vertex> v= b.getVertices();
rotation = b.getRotation();
translation = b.getTranslation();
scale = b.getScale();
gl.glTranslatef(translation);
gl.glRotatef(rotation);
gl.glScalef(scale);
gl.glDrawElements(v);
}
Vertices are usually affected by more than one bone -- it sounds like you're after linear blend skinning. My code's in C++ unfortunately, but hopefully it'll give you the idea:
void Submesh::skin(const Skeleton_CPtr& skeleton)
{
/*
Linear Blend Skinning Algorithm:
P = (\sum_i w_i * M_i * M_{0,i}^{-1}) * P_0 / (\sum_i w_i)
Each M_{0,i}^{-1} matrix gets P_0 (the rest vertex) into its corresponding bone's coordinate frame.
We construct matrices M_n * M_{0,n}^-1 for each n in advance to avoid repeating calculations.
I refer to these in the code as the 'skinning matrices'.
*/
BoneHierarchy_CPtr boneHierarchy = skeleton->bone_hierarchy();
ConfiguredPose_CPtr pose = skeleton->get_pose();
int boneCount = boneHierarchy->bone_count();
// Construct the skinning matrices.
std::vector<RBTMatrix_CPtr> skinningMatrices(boneCount);
for(int i=0; i<boneCount; ++i)
{
skinningMatrices[i] = pose->bones(i)->absolute_matrix() * skeleton->to_bone_matrix(i);
}
// Build the vertex array.
RBTMatrix_Ptr m = RBTMatrix::zeros(); // used as an accumulator for \sum_i w_i * M_i * M_{0,i}^{-1}
int vertCount = static_cast<int>(m_vertices.size());
for(int i=0, offset=0; i<vertCount; ++i, offset+=3)
{
const Vector3d& p0 = m_vertices[i].position();
const std::vector<BoneWeight>& boneWeights = m_vertices[i].bone_weights();
int boneWeightCount = static_cast<int>(boneWeights.size());
Vector3d p;
if(boneWeightCount != 0)
{
double boneWeightSum = 0;
for(int j=0; j<boneWeightCount; ++j)
{
int boneIndex = boneWeights[j].bone_index();
double boneWeight = boneWeights[j].weight();
boneWeightSum += boneWeight;
m->add_scaled(skinningMatrices[boneIndex], boneWeight);
}
// Note: This is effectively p = m*p0 (if we think of p0 as (p0.x, p0.y, p0.z, 1)).
p = m->apply_to_point(p0);
p /= boneWeightSum;
// Reset the accumulator matrix ready for the next vertex.
m->reset_to_zeros();
}
else
{
// If this vertex is unaffected by the armature (i.e. no bone weights have been assigned to it),
// use its rest position as its real position (it's the best we can do).
p = p0;
}
m_vertArray[offset] = p.x;
m_vertArray[offset+1] = p.y;
m_vertArray[offset+2] = p.z;
}
}
void Submesh::render() const
{
glPushClientAttrib(GL_CLIENT_VERTEX_ARRAY_BIT);
glPushAttrib(GL_ENABLE_BIT | GL_POLYGON_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_DOUBLE, 0, &m_vertArray[0]);
if(m_material->uses_texcoords())
{
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_DOUBLE, 0, &m_texCoordArray[0]);
}
m_material->apply();
glDrawElements(GL_TRIANGLES, static_cast<GLsizei>(m_vertIndices.size()), GL_UNSIGNED_INT, &m_vertIndices[0]);
glPopAttrib();
glPopClientAttrib();
}
Note in passing that, to the best of my knowledge, real-world implementations usually do this sort of skinning on the GPU in a vertex shader.
Your code assumes that each bone has an independent transformation matrix (you reset your matrix at the start of each loop iteration). But in reality, bones form a hierarchical structure that you must preserve when you render. Consider that when your upper arm rotates, your forearm rotates along with it, because it is attached. The forearm may have its own rotation, but that rotation is applied on top of the upper arm's.
The skeleton is then rendered recursively. Here is some pseudo-code:
function renderBone(Bone b) {
setupTransformMatrix(b);
draw(b);
foreach c in b.getChildren()
renderBone(c);
}
main() {
gl.glLoadIdentity();
renderBone(rootBone);
}
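In legacy OpenGL the matrix stack can do the accumulation of parent transforms for you while you recurse. A sketch in C++, where the Bone accessors, the Translation/Rotation types and drawBoneVertices() are placeholders for whatever your own classes provide:
// Each bone's local transform is applied relative to its parent's accumulated matrix.
void renderBone(const Bone& b)
{
    glPushMatrix();                               // remember the parent's accumulated matrix

    const Translation t = b.getTranslation();     // local transform, relative to the parent bone
    const Rotation    r = b.getRotation();
    glTranslatef(t.x, t.y, t.z);
    glRotatef(r.angle, r.x, r.y, r.z);

    drawBoneVertices(b);                          // e.g. glDrawElements over this bone's vertices

    for (const Bone& child : b.getChildren())     // children start from this bone's matrix
        renderBone(child);

    glPopMatrix();                                // restore the parent's matrix
}
Because the push/pop brackets the children as well, every bone ends up transformed by the matrices of all its ancestors, which is exactly the hierarchy described above.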
I hope this helps.