Draw polygon wire in Maya using OpenGL - C++

I'm looking for a fast way of drawing a polygon wireframe in Maya using OpenGL. I have a working solution, but it's very slow for complex scenes.
I also have a fast solution using MGeometry and MGeometryPrimitive; however, it gives me triangles and I can't see a way to get the polygon definition.
I am only interested in points and the polygon definition; I don't care about normals, UVs and such.
Here's my working slow solution:
MPointArray points;
for (MItMeshPolygon oPolyIter(object); !oPolyIter.isDone(); oPolyIter.next())
{
    gGLFT->glBegin(MGL_LINE_LOOP);
    oPolyIter.getPoints(points);
    for (unsigned int i = 0; i < points.length(); i++)
        gGLFT->glVertex3d(points[i].x, points[i].y, points[i].z);
    gGLFT->glEnd();
}
Any ideas or pointers?

After some research, I came up with this solution, which runs considerably faster.
gGLFT->glPolygonMode(MGL_FRONT_AND_BACK, MGL_LINE);
MIntArray verts;
UintArray vertIds;   // helper holding the indices as unsigned ints for glDrawElements
for (int i = 0; i < mesh.numPolygons(); i++)
{
    mesh.getPolygonVertices(i, verts);   // mesh is an MFnMesh
    vertIds.convert(verts);
    gGLFT->glDrawElements(MGL_POLYGON, verts.length(), MGL_UNSIGNED_INT, vertIds.data());
}
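Note that glDrawElements reads from the currently bound vertex array, which the snippet above doesn't show being set up. A minimal sketch of that setup, assuming mesh is an MFnMesh and using its getRawPoints() float array (not part of the original post):
// Hedged sketch: bind the mesh's raw point array once, before the per-polygon
// glDrawElements calls above.
MStatus status;
const float* rawPoints = mesh.getRawPoints(&status);   // x,y,z triples, one per vertex

gGLFT->glEnableClientState(MGL_VERTEX_ARRAY);
gGLFT->glVertexPointer(3, MGL_FLOAT, 0, rawPoints);

// ... loop over polygons and call glDrawElements as in the snippet above ...

gGLFT->glDisableClientState(MGL_VERTEX_ARRAY);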

Related

How do I draw an OBJ file in OpenGL using tinyobjloader?

I am trying to draw this free airwing model from Starfox 64 in OpenGL. I converted the .fbx file to .obj in Blender and am using tinyobjloader to load it (all requirements for my university subject).
I pretty much slapped the example code (with the modern API) into my program, replaced the file name, and grabbed the attrib.vertices and attrib.normals vectors to draw the airwing.
I can view the vertices with GL_POINTS:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0]);
glDrawArrays(GL_POINTS, 0, vertices.size() / 3);
glDisableClientState(GL_VERTEX_ARRAY);
Which looks correct (I ... think?):
But I'm not sure how to render a solid model. Simply replacing GL_POINTS with GL_TRIANGLES (shown) or GL_QUADS doesn't work:
I am using OpenGL 1.1 w/ GLUT (again, university). I think I just don't know what I'm doing, really. Help?
Edit: When I wrote this answer originally I had only worked with vertices and normals. I've figured out how to get materials and textures working, but don't have time to write that out at the moment. I will add that in when I have some time, but it's largely the same logic if you wanna poke around the tinyobj header yourselves in the meantime. :-)
I've learned a lot about TinyOBJLoader in the last day so I hope this helps someone in the future. Credit goes to this GitHub repository which uses TinyOBJLoader very clearly and cleanly in fileloader.cpp.
To summarise what I learned studying that code:
Shapes are of type shape_t. For a single model OBJ, the size of shapes is 1. I'm assuming OBJ files can contain multiple objects but I haven't used the file format much to know.
shape_t's have a member mesh of type mesh_t. This member stores the information parsed from the face rows of the OBJ. You can figure out the number of faces your object has by checking the size of the material_ids member.
The vertex, texture coordinate and normal indices of each face are stored in the indices member of the mesh, which is a std::vector<index_t>. It is a flattened vector: each triangulated face f1, f2 ... fi contributes three index_t entries (one per corner), and each index_t holds a vertex, texture coordinate and normal index. Remember that these indices refer to whole vertices, texture coordinates or normals, not to individual float components. Personally I triangulated my model by importing it into Blender and exporting it with triangulation turned on. TinyOBJ has its own triangulation algorithm you can turn on by setting the reader_config.triangulate flag.
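As a concrete illustration (a minimal sketch following the tinyobjloader API, not code from the repository mentioned above), the position of corner number corner of triangulated face f in shape s is looked up like this:
// Hedged sketch: addressing one corner of one face via tinyobjloader's index_t.
const tinyobj::index_t idx = shapes[s].mesh.indices[3 * f + corner];
float vx = attrib.vertices[3 * idx.vertex_index + 0];
float vy = attrib.vertices[3 * idx.vertex_index + 1];
float vz = attrib.vertices[3 * idx.vertex_index + 2];
float nx = attrib.normals[3 * idx.normal_index + 0];   // likewise for normals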
I've only worked with the vertices and normals so far. Here's how I access and store them to be used in OpenGL:
Convert the flat vertices and normal arrays into groups of 3, i.e. 3D vectors
for (size_t vec_start = 0; vec_start < attrib.vertices.size(); vec_start += 3) {
    vertices.emplace_back(
        attrib.vertices[vec_start],
        attrib.vertices[vec_start + 1],
        attrib.vertices[vec_start + 2]);
}
for (size_t norm_start = 0; norm_start < attrib.normals.size(); norm_start += 3) {
    normals.emplace_back(
        attrib.normals[norm_start],
        attrib.normals[norm_start + 1],
        attrib.normals[norm_start + 2]);
}
This way the index of the vertices and normals containers will correspond with the indices given by the face entries.
Loop over every face, and store the vertex and normal indices in a separate object
for (auto shape = shapes.begin(); shape < shapes.end(); ++shape) {
    const std::vector<tinyobj::index_t>& indices = shape->mesh.indices;
    const std::vector<int>& material_ids = shape->mesh.material_ids;
    for (size_t index = 0; index < material_ids.size(); ++index) {
        // offset by 3 because each triangulated face contributes three index_t entries,
        // one per corner
        triangles.push_back(Triangle(
            { indices[3 * index].vertex_index, indices[3 * index + 1].vertex_index, indices[3 * index + 2].vertex_index },
            { indices[3 * index].normal_index, indices[3 * index + 1].normal_index, indices[3 * index + 2].normal_index })
        );
    }
}
Drawing is then quite easy:
glBegin(GL_TRIANGLES);
for (auto triangle = triangles.begin(); triangle != triangles.end(); ++triangle) {
    glNormal3f(normals[triangle->normals[0]].X, normals[triangle->normals[0]].Y, normals[triangle->normals[0]].Z);
    glVertex3f(vertices[triangle->vertices[0]].X, vertices[triangle->vertices[0]].Y, vertices[triangle->vertices[0]].Z);
    glNormal3f(normals[triangle->normals[1]].X, normals[triangle->normals[1]].Y, normals[triangle->normals[1]].Z);
    glVertex3f(vertices[triangle->vertices[1]].X, vertices[triangle->vertices[1]].Y, vertices[triangle->vertices[1]].Z);
    glNormal3f(normals[triangle->normals[2]].X, normals[triangle->normals[2]].Y, normals[triangle->normals[2]].Z);
    glVertex3f(vertices[triangle->vertices[2]].X, vertices[triangle->vertices[2]].Y, vertices[triangle->vertices[2]].Z);
}
glEnd();
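One assumption worth stating (it isn't shown in the answer above): for the glNormal3f calls to have any visible effect in fixed-function OpenGL, lighting and depth testing need to be enabled somewhere in the setup code. A minimal sketch:
// Hedged sketch of typical GLUT / GL 1.1 state setup; adjust to your own init code.
glEnable(GL_DEPTH_TEST);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);       // default white light
glEnable(GL_NORMALIZE);    // keep normals unit-length if the model is scaled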

Issues turning loaded meshes into cloth simulation

I'm having a bit of an issue trying to get meshes I import into my program to have cloth simulation physics using a particle/spring system. I'm kind of a beginner at graphics programming, so sorry if this is super obvious and I'm just missing something. I'm using C++ with OpenGL, as well as Assimp to import the models. I'm fairly sure my code to calculate the constraints/springs and step each particle is correct, as I tested it out with generated meshes (with quads instead of triangles) and it looked fine, but I don't know for sure.
I've been using this link to study up on how to actually do this: https://nccastaff.bournemouth.ac.uk/jmacey/MastersProjects/MSc2010/07LuisPereira/Thesis/LuisPereira_Thesis.pdf
What it looks like in-engine: https://www.youtube.com/watch?v=RyAan27wryU
I'm pretty sure it's an issue with the connections/springs, as the imported model that's just a flat plane seems to work fine, for the most part. The other model, though, seems to just fall apart. I keep looking at papers on this, and from what I understand everything should be working right: I connect the edge/bend springs seemingly correctly, and the physics side seems to work for the flat planes. I really can't figure it out for the life of me! Any tips/help would be GREATLY appreciated! :)
Code for processing Mesh into Cloth:
// Container to temporarily hold faces while we process springs
std::vector<Face> faces;
// Go through indices and take the ones making a triangle.
// Indices come from Assimp, so I think this is the right thing to do to get each face?
for (int i = 0; i < this->indices.size(); i += 3)
{
    std::vector<unsigned int> faceIds = { this->indices.at(i), this->indices.at(i + 1), this->indices.at(i + 2) };
    Face face;
    face.vertexIDs = faceIds;
    faces.push_back(face);
}
// Iterate through faces and add constraints when needed.
for (int l = 0; l < faces.size(); l++)
{
    // Adding edge springs.
    Face temp = faces[l];
    makeConstraint(particles.at(temp.vertexIDs[0]), particles.at(temp.vertexIDs[1]));
    makeConstraint(particles.at(temp.vertexIDs[0]), particles.at(temp.vertexIDs[2]));
    makeConstraint(particles.at(temp.vertexIDs[1]), particles.at(temp.vertexIDs[2]));
    // We need to get the bending springs as well, and I've just written a function to do that.
    for (int x = 0; x < faces.size(); x++)
    {
        Face temp2 = faces[x];
        if (l != x)
        {
            verticesShared(temp, temp2);
        }
    }
}
And here's the code where I process the bending springs as well:
// Container for any indices the two faces have in common.
std::vector<glm::vec2> traversed;
// Loop through both faces' indices, to see if they match each other.
for (int i = 0; i < a.vertexIDs.size(); i++)
{
    for (int k = 0; k < b.vertexIDs.size(); k++)
    {
        // If we do get a match, we push a vector into the container containing the two indices
        // of the faces so we know which ones are equal.
        if (a.vertexIDs.at(i) == b.vertexIDs.at(k))
        {
            traversed.push_back(glm::vec2(i, k));
        }
    }
    // If we're here, it means we have an edge in common, aka that we have two vertices
    // shared between the two faces.
    if (traversed.size() == 2)
    {
        // Get the adjacent vertices.
        int face_a_adj_ind = 3 - ((traversed[0].x) + (traversed[1].x));
        int face_b_adj_ind = 3 - ((traversed[0].y) + (traversed[1].y));
        // Turn the stored ones from earlier and just get the ACTUAL indices from the face.
        // Indices of indices, eh.
        unsigned int adj_1 = a.vertexIDs[face_a_adj_ind];
        unsigned int adj_2 = b.vertexIDs[face_b_adj_ind];
        // And finally, make a bending spring between the two adjacent particles.
        makeConstraint(particles.at(adj_1), particles.at(adj_2));
    }
}

Marching Cubes Issues

I've been trying to implement the marching cubes algorithm with C++ and Qt. So far all the steps have been written, but I'm getting a really bad result. I'm looking for orientation or advice about what could be going wrong. I suspect one of the problems may be with the voxel conception, specifically about which vertex goes in which corner (0, 1, ..., 7). Also, I'm not 100% sure about how to interpret the input for the algorithm (I'm using datasets). Should I read it in ZYX order and move the marching cube in the same way, or does it not matter at all? (Leaving aside the fact that not every dimension has to be the same size.)
Here is what I'm getting against what it should look like...
http://i57.tinypic.com/2nb7g46.jpg
http://en.wikipedia.org/wiki/Marching_cubes
http://en.wikipedia.org/wiki/Marching_cubes#External_links
Paul Bourke. "Overview and source code".
http://paulbourke.net/geometry/polygonise/
Qt_MARCHING_CUBES.zip: Qt/OpenGL example courtesy Dr. Klaus Miltenberger.
http://paulbourke.net/geometry/polygonise/Qt_MARCHING_CUBES.zip
The example requires boost, but looks like it probably should work.
In his example, in marchingcubes.cpp, there are a few different methods for calculating the marching cubes: vMarchCube1 and vMarchCube2.
In the comments it says vMarchCube2 performs the Marching Tetrahedrons algorithm on a single cube by making six calls to vMarchTetrahedron.
Below is the source for the first one vMarchCube1:
//vMarchCube1 performs the Marching Cubes algorithm on a single cube
GLvoid GL_Widget::vMarchCube1(const GLfloat &fX, const GLfloat &fY, const GLfloat &fZ, const GLfloat &fScale, const GLfloat &fTv)
{
    GLint iCorner, iVertex, iVertexTest, iEdge, iTriangle, iFlagIndex, iEdgeFlags;
    GLfloat fOffset;
    GLvector sColor;
    GLfloat afCubeValue[8];
    GLvector asEdgeVertex[12];
    GLvector asEdgeNorm[12];

    //Make a local copy of the values at the cube's corners
    for(iVertex = 0; iVertex < 8; iVertex++)
    {
        afCubeValue[iVertex] = (this->*fSample)(fX + a2fVertexOffset[iVertex][0]*fScale,
                                                fY + a2fVertexOffset[iVertex][1]*fScale,
                                                fZ + a2fVertexOffset[iVertex][2]*fScale);
    }

    //Find which vertices are inside of the surface and which are outside
    iFlagIndex = 0;
    for(iVertexTest = 0; iVertexTest < 8; iVertexTest++)
    {
        if(afCubeValue[iVertexTest] <= fTv) iFlagIndex |= 1<<iVertexTest;
    }

    //Find which edges are intersected by the surface
    iEdgeFlags = aiCubeEdgeFlags[iFlagIndex];

    //If the cube is entirely inside or outside of the surface, then there will be no intersections
    if(iEdgeFlags == 0)
    {
        return;
    }

    //Find the point of intersection of the surface with each edge
    //Then find the normal to the surface at those points
    for(iEdge = 0; iEdge < 12; iEdge++)
    {
        //if there is an intersection on this edge
        if(iEdgeFlags & (1<<iEdge))
        {
            fOffset = fGetOffset(afCubeValue[ a2iEdgeConnection[iEdge][0] ],
                                 afCubeValue[ a2iEdgeConnection[iEdge][1] ], fTv);

            asEdgeVertex[iEdge].fX = fX + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][0] + fOffset * a2fEdgeDirection[iEdge][0]) * fScale;
            asEdgeVertex[iEdge].fY = fY + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][1] + fOffset * a2fEdgeDirection[iEdge][1]) * fScale;
            asEdgeVertex[iEdge].fZ = fZ + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][2] + fOffset * a2fEdgeDirection[iEdge][2]) * fScale;

            vGetNormal(asEdgeNorm[iEdge], asEdgeVertex[iEdge].fX, asEdgeVertex[iEdge].fY, asEdgeVertex[iEdge].fZ);
        }
    }

    //Draw the triangles that were found. There can be up to five per cube
    for(iTriangle = 0; iTriangle < 5; iTriangle++)
    {
        if(a2iTriangleConnectionTable[iFlagIndex][3*iTriangle] < 0) break;

        for(iCorner = 0; iCorner < 3; iCorner++)
        {
            iVertex = a2iTriangleConnectionTable[iFlagIndex][3*iTriangle+iCorner];

            vGetColor(sColor, asEdgeVertex[iVertex], asEdgeNorm[iVertex]);
            glColor4f(sColor.fX, sColor.fY, sColor.fZ, 0.6);
            glNormal3f(asEdgeNorm[iVertex].fX, asEdgeNorm[iVertex].fY, asEdgeNorm[iVertex].fZ);
            glVertex3f(asEdgeVertex[iVertex].fX, asEdgeVertex[iVertex].fY, asEdgeVertex[iVertex].fZ);
        }
    }
}
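For context, here is a sketch of the outer loop that drives vMarchCube1, following the structure of the Bourke example rather than code copied from it; the member names iDataSetSizeX/Y/Z and fStepSize are assumptions:
// Hedged sketch: march one cube per grid cell. Because each cell is processed
// independently, the order of the loops (here ZYX) only affects traversal order,
// not the resulting surface.
GLvoid GL_Widget::vMarchingCubes(const GLfloat &fTv)
{
    for (GLint iZ = 0; iZ < iDataSetSizeZ; iZ++)
        for (GLint iY = 0; iY < iDataSetSizeY; iY++)
            for (GLint iX = 0; iX < iDataSetSizeX; iX++)
                vMarchCube1(iX * fStepSize, iY * fStepSize, iZ * fStepSize, fStepSize, fTv);
}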
UPDATE: Github working example, tested
https://github.com/peteristhegreat/qt-marching-cubes
Hope that helps.
Finally, I found what was wrong.
I use a VBO indexer class to reduce the amount of duplicated vertices and make the render faster. This class is implemented with a std::map to find and discard already-existing vertices, using a tuple of <vec3, unsigned short>. As you may imagine, a marching cubes algorithm generates structures with thousands, if not millions, of vertices. The highest number a common unsigned short can hold is 65535 (2^16 - 1). So, when the output geometry had more than that, the map index started to overflow and the result was a mess, since it started to overwrite vertices with the new ones. I just changed my implementation to draw with a plain, non-indexed VBO while I fix my class to support millions of vertices.
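A minimal sketch of what such an indexer can look like with 32-bit indices (illustrative only, not the poster's actual class; the Vec3 key and comparator are assumptions):
// Hedged sketch: vertex de-duplication keyed on position, handing out 32-bit
// indices so meshes with more than 65535 unique vertices don't wrap around.
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };

struct Vec3Less {
    bool operator()(const Vec3 &a, const Vec3 &b) const {
        if (a.x != b.x) return a.x < b.x;
        if (a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    }
};

struct VboIndexer {
    std::map<Vec3, unsigned int, Vec3Less> seen;  // unsigned int instead of unsigned short
    std::vector<Vec3> uniqueVerts;                // upload as the VBO
    std::vector<unsigned int> indices;            // draw with GL_UNSIGNED_INT

    void addVertex(const Vec3 &v) {
        std::map<Vec3, unsigned int, Vec3Less>::iterator it = seen.find(v);
        if (it != seen.end()) {
            indices.push_back(it->second);        // already stored, reuse its index
        } else {
            unsigned int idx = static_cast<unsigned int>(uniqueVerts.size());
            seen.insert(std::make_pair(v, idx));
            uniqueVerts.push_back(v);
            indices.push_back(idx);
        }
    }
};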
The result, with some minor vertex normal issues, speaks for itself:
http://i61.tinypic.com/fep2t3.jpg

Is using vectors to store Vertices for DirectX9 slow?

Over the past few days I made my first "engine" thingy. A central object with a window object, graphics object, and an input object - all nice and encapsulated and happy.
In this setup I also included some objects in the graphics object that handle some 'utility' functions, like a camera and a 'vindex' manager.
The Vertex/Index Manager stores all vertices and indices in std::vectors, that are called upon and sent to graphics when it's time to create the buffers.
The only problem is that I get ~8 frames a second with only 8-10 rectangles.
I think the problem is in the 'Vindex' object (my shader is nothing spectacular, and the pipeline is pretty vanilla).
Is storing vertices in this way a plumb bad idea, or is there just some painfully obvious thing I'm missing?
I did a little evolution sim project a few years ago that was pretty messy code-wise, but it rendered 20,000 vertices at 100s of frames a second on this machine, so it's not my machine that's slow.
I've been kind of staring at this for several hours, any and all input is VERY much appreciated :)
Example from my object that stores my vertices:
for (int i = 0; i < 24; ++i)
{
    mVertList.push_back(Vertex(v[i], n[i], col));
}
For clarity's sake:
std::vector<Vertex> mVertList;
std::vector<int> mIndList;
and
std::vector<Vertex> VindexPile::getVerts()
{
    return mVertList;
}

std::vector<int> VindexPile::getInds()
{
    return mIndList;
}
In my graphics.cpp file:
md3dDevice->CreateVertexBuffer(mVinds.getVerts().size() * sizeof(Vertex), D3DUSAGE_WRITEONLY, 0, D3DPOOL_MANAGED, &mVB, 0);

Vertex *v = 0;
mVB->Lock(0, 0, (void**)&v, 0);
std::vector<Vertex> vList = mVinds.getVerts();
for (int i = 0; i < mVinds.getVerts().size(); ++i)
{
    v[i] = vList[i];
}
mVB->Unlock();

md3dDevice->CreateIndexBuffer(mVinds.getInds().size() * sizeof(WORD), D3DUSAGE_WRITEONLY, D3DFMT_INDEX16, D3DPOOL_MANAGED, &mIB, 0);

WORD *ind = 0;
mIB->Lock(0, 0, (void**)&ind, 0);
std::vector<int> iList = mVinds.getInds();
for (int i = 0; i < mVinds.getInds().size(); ++i)
{
    ind[i] = iList[i];
}
mIB->Unlock();
There is quite a bit of copying going on in here. I cannot tell for sure without running a profiler and seeing some more code, but this seems like the first culprit:
std::vector<Vertex> vList = mVinds.getVerts();
std::vector<int> iList = mVinds.getInds();
Those two calls create copies of your vertex/index buffers, which is most probably not what you want - you most probably want to declare those as const references. You are also ruining cache coherency by doing those copies, which slows down your program more.
mVertList.push_back(Vertex(v[i], n[i], col));
This is moving and resizing the vectors quite a lot as well - you should most probably use reserve or resize before putting stuff in your vectors, to avoid reallocating and moving your data around in memory.
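As a minimal sketch of both suggestions, using the names from your code (exact fit depends on the rest of the class):
// Hedged sketch: return by const reference so no copy is made...
const std::vector<Vertex>& VindexPile::getVerts() const { return mVertList; }
const std::vector<int>&    VindexPile::getInds()  const { return mIndList; }

// ...and reserve capacity up front before pushing vertices in.
mVertList.reserve(mVertList.size() + 24);
for (int i = 0; i < 24; ++i)
{
    mVertList.push_back(Vertex(v[i], n[i], col));
}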
If I have to give you one big advice however, that would be: Profile. I don't know what tools you have access to, however there are plenty of profilers available, pick one and learn it, and it will provide much more valuable insight into why your program is slow.

How to improve speed when drawing over 50k spheres with OpenGL

Right now I use glutSolidSphere to draw multiple spheres, 50k+ of them, and the speed is extremely low.
Is there any method or suggestion to increase the speed?
Below is my code:
void COpenGlWnd::OnPaint()
{
    CPaintDC dc(this);
    ::wglMakeCurrent(m_hDC, m_hRC);
    for (int k = 0; k < m_nCountZ; k++)
    {
        for (int j = 0; j < m_nCountY; j++)
        {
            for (int i = 0; i < m_nCountX; i++)
            {
                ::glPushMatrix();
                ........
                ::glutSolidSphere(Size[i][j][k], 36, 36);
                ........
                ::glPopMatrix();
            }
        }
    }
    ::SwapBuffers(m_hDC);
}
For more information:
The spheres will always be in specific locations, but the user can use the mouse to rotate and see all the spheres from different views.
Here's a couple of suggestions:
Create a vertex buffer object (VBO) containing the sphere and render this instead of using glutSolidSphere.
Look into instancing, that is drawing many spheres with a single draw call.
The following article does almost exactly what you want: http://sol.gfxile.net/instancing.html
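A minimal sketch of the first suggestion: tessellate one unit sphere into a VBO at startup, then draw it once per instance with a per-sphere transform. The names are illustrative, and on Windows the glGenBuffers/glBufferData entry points have to be loaded through an extension loader such as GLEW, since the system headers only expose OpenGL 1.1:
// Hedged sketch: build one unit sphere (positions + normals, interleaved) into a
// VBO/IBO once, then reuse it for every sphere instead of retessellating per call.
#include <cmath>
#include <vector>
#include <GL/glew.h>

struct SphereMesh { GLuint vbo; GLuint ibo; GLsizei indexCount; };

SphereMesh buildUnitSphere(int stacks, int slices)
{
    const float PI = 3.14159265358979f;
    std::vector<float> verts;              // x,y,z,nx,ny,nz per vertex
    std::vector<unsigned int> indices;

    for (int i = 0; i <= stacks; ++i) {
        float phi = PI * i / stacks;                   // latitude
        for (int j = 0; j <= slices; ++j) {
            float theta = 2.0f * PI * j / slices;      // longitude
            float x = std::sin(phi) * std::cos(theta);
            float y = std::cos(phi);
            float z = std::sin(phi) * std::sin(theta);
            float v[6] = { x, y, z, x, y, z };         // position == normal on a unit sphere
            verts.insert(verts.end(), v, v + 6);
        }
    }
    for (int i = 0; i < stacks; ++i) {
        for (int j = 0; j < slices; ++j) {
            unsigned int a = i * (slices + 1) + j;
            unsigned int b = a + slices + 1;
            unsigned int tri[6] = { a, b, a + 1, a + 1, b, b + 1 };
            indices.insert(indices.end(), tri, tri + 6);
        }
    }

    SphereMesh m;
    m.indexCount = (GLsizei)indices.size();
    glGenBuffers(1, &m.vbo);
    glBindBuffer(GL_ARRAY_BUFFER, m.vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float), &verts[0], GL_STATIC_DRAW);
    glGenBuffers(1, &m.ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m.ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), &indices[0], GL_STATIC_DRAW);
    return m;
}

// Per frame: bind once, then draw each sphere with its own translation and radius.
void drawSphereInstance(const SphereMesh &m, float x, float y, float z, float radius)
{
    glBindBuffer(GL_ARRAY_BUFFER, m.vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m.ibo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 6 * sizeof(float), (const void *)0);
    glNormalPointer(GL_FLOAT, 6 * sizeof(float), (const void *)(3 * sizeof(float)));
    glEnable(GL_NORMALIZE);                // normals get rescaled by glScalef below

    glPushMatrix();
    glTranslatef(x, y, z);
    glScalef(radius, radius, radius);
    glDrawElements(GL_TRIANGLES, m.indexCount, GL_UNSIGNED_INT, 0);
    glPopMatrix();

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}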
If you really want efficiency and are only dealing with spheres, you can actually draw a sphere with infinite resolution using only a single quad and a shader. Basically use math to work out the sphere. Start with an untextured circle. Add depth, normals, lighting, texturing and so on.
This calculates the sphere per pixel, making it as high-resolution as required.