I am making an OBJ importer and I am stuck on how to construct the mesh from a set of given vertices. Consider a cube with these vertices (OBJ format, faces are triangles):
v -2.767533 -0.000000 2.927381
v 3.017295 -0.000000 2.927381
v -2.767533 6.311718 2.927381
v 3.017295 6.311718 2.927381
v -2.767533 6.311718 -2.845727
v 3.017295 6.311718 -2.845727
v -2.767533 -0.000000 -2.845727
v 3.017295 -0.000000 -2.845727
I know how to construct meshes using GLUT (i.e. making my calls to glBegin(GL_TRIANGLES), glVertex3f(x, y, z), glEnd(), etc.). It's just that I don't know how to combine the vertices to recreate the object. I thought the idea was to go v1, v2, v3, then v2, v3, v4, etc. until I have made enough triangles (wrapping around at the end, so something like v7, v8, v1 counts too). So 8 vertices gives 12 triangles for the cube, and for, say, a sphere with 56 vertices that would be (56 * 2) - 4 = 108 triangles. For the cube, making the 12 triangles this way looks OK, but for the sphere, making the 108 triangles from the 56 vertices does not work. So how do I combine the vertices in my glVertex calls to make it work for any mesh? Thank you!
There should be a bunch of "face" lines in the file (lines beginning with the letter "f") that tell you how to combine the vertices into an object. For example,
f 1 2 3
would mean a triangle composed of the first three vertices in the file. You might also see something like
f 1/1 2/2 3/3
which is a triangle that also includes texture coordinates,
f 1//1 2//2 3//3
which includes vertex normal vectors, or
f 1/1/1 2/2/2 3/3/3
which is one that includes both.
Wikipedia has an article that includes an overview of the format: https://en.wikipedia.org/wiki/Wavefront_.obj_file
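For illustration, here is a minimal sketch (my own, not part of the original answer) of how those face indices could drive the immediate-mode drawing the question mentions. It assumes the "v" lines have already been read into a vertex array, only handles plain "f a b c" triangles (no texture coordinates or normals), and remembers that OBJ indices are 1-based:

#include <array>
#include <vector>
#include <GL/gl.h>   // or your platform's GL header

struct Vec3 { float x, y, z; };

// vertices: from the "v" lines, in file order
// faces:    1-based index triples from plain "f a b c" lines
void drawMesh(const std::vector<Vec3>& vertices,
              const std::vector<std::array<int, 3>>& faces)
{
    glBegin(GL_TRIANGLES);
    for (const std::array<int, 3>& f : faces)
        for (int idx : f) {
            const Vec3& v = vertices[idx - 1];   // OBJ indices start at 1
            glVertex3f(v.x, v.y, v.z);
        }
    glEnd();
}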
I'm writing a C++ algorithm that returns an X,Y position on a 2D texture. Using the X,Y value I wish to find the u,v texture coordinates of a 3D object (already mapped in software).
I have these calculations:
u = X/texture_width
v = texture_height - Y/texture_height
However, the values I calculate cannot be found under the vt entries in my obj file.
Help would be appreciated, many thanks.
Assuming that your (u,v) coordinates are supposed to be within the range [0,1] x [0,1], your computation is not quite right. It should be
u = X/texture_width
v = 1 - Y/texture_height
Given an image pixel coordinate (X,Y), this will compute the corresponding texture (u,v) coordinate. However, if you pick a random image pixel and convert its (X,Y) coordinate into a (u,v) coordinate, this coordinate will most likely not show up in the list of vt entries in the OBJ file.
The reason is that (u,v) coordinates in the OBJ file are only specified at the corners of the faces of your 3D object. The coordinates that you compute from image pixels likely lie in the interior of the faces.
Assuming your OBJ file represents a triangle mesh with positions and texture coordinates, the entries for faces will look something like this:
f p1/t1 p2/t2 p3/t3
where p1, p2, p3 are position indices and t1, t2, t3 are texture coordinate indices.
To find whether your computed (u,v) coordinate maps to a given triangle, you'll need to
1. find the texture coordinates (u1,v1), (u2,v2), (u3,v3) of the corners by looking up the vt entries with the indices t1, t2, t3,
2. find out whether the point (u,v) lies inside the triangle with corners (u1,v1), (u2,v2), (u3,v3). There are several ways to compute this; one is sketched below.
If you repeat this check for all f entries of the OBJ file, you'll find the triangle(s) which the given image pixel maps to. If you don't find any matches then the pixel does not appear on the surface of the object.
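One common way to do the inside test from step 2 uses the signs of cross products along the edges. A small sketch (the UV struct and function name are my own, not from the original answer):

struct UV { float u, v; };

// True if p lies inside (or on the boundary of) the triangle (a, b, c) in UV space.
// Works for either winding order by checking that p is on the same side of all edges.
bool pointInTriangle(UV p, UV a, UV b, UV c)
{
    auto side = [](UV o, UV e, UV q) {
        // z-component of the cross product (e - o) x (q - o)
        return (e.u - o.u) * (q.v - o.v) - (e.v - o.v) * (q.u - o.u);
    };
    float d1 = side(a, b, p);
    float d2 = side(b, c, p);
    float d3 = side(c, a, p);
    bool hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
    bool hasPos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(hasNeg && hasPos);
}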
I have been using Assimp for a while and now I'm trying to load a .obj file. It loads perfectly, but I would like to manipulate the face data after loading it.
Basically I have this in the simple cube.obj file (Full file - http://pastebin.com/ha3VkZPM)
# 8 Vertices
v -1.0 -0.003248 1.0
v 1.0 -0.003248 1.0
v -1.0 1.996752 1.0
v 1.0 1.996752 1.0
v 1.0 -0.003248 -1.0
v -1.0 -0.003248 -1.0
v 1.0 1.996752 -1.0
v -1.0 1.996752 -1.0
# 36 Texture Coordinates
vt 1.0 0.0
vt 0.0 0.0
...
# 36 Vertex Normals
vn 0.0 0.0 1.0
vn 0.0 0.0 1.0
...
f 1/1/1 2/2/2 3/3/3
f 2/4/4 4/5/5 3/6/6
f 5/7/7 6/8/8 7/9/9
f 6/10/10 8/11/11 7/12/12
f 3/13/13 6/14/14 1/15/15
f 3/16/16 8/17/17 6/18/18
f 7/19/19 2/20/20 5/21/21
f 7/22/22 4/23/23 2/24/24
f 3/25/25 7/26/26 8/27/27
f 3/28/28 4/29/29 7/30/30
f 2/31/31 6/32/32 5/33/33
f 2/34/34 1/35/35 6/36/36
And as I understand it, a face entry is V/T/N (vertex indices, texture coordinate indices and normal indices).
so
f 1/1/1 2/2/2 3/3/3 represents a triangle of vertices (1,2,3) - right?
From this face entry - I want to extract only the vertex indices.
Now enter Assimp - I have this now, where Indices is a std::vector:
for (uint32 i = 0; i < pMesh->mNumFaces; i++) {
    const aiFace& Face = pMesh->mFaces[i];
    if (Face.mNumIndices == 3) {
        Indices.push_back(Face.mIndices[0]);
        Indices.push_back(Face.mIndices[1]);
        Indices.push_back(Face.mIndices[2]);
    }
}
Here pMesh->mNumFaces is 12 - so that's correct.
(For the 1st face)
Face.mIndices[0] should probably point to 1/1/1
Face.mIndices[1] should probably point to 2/2/2
Face.mIndices[2] should probably point to 3/3/3
Now how do I extract only the vertex indices? And when I check the values of Face.mIndices, they are 0, 1, 2... respectively. Why so? All Assimp faces have the indices (0, 1, 2).
I searched on Google and StackOverflow - here are some similar questions, but I can't seem to figure it out.
https://stackoverflow.com/questions/32788756/how-to-keep-vertex-order-from-obj-file-using-assimp-library
Assimp and D3D model loading: Mesh not being displayed in D3D
Assimp not properly loading indices
Please let me know if you need more info. Thanks.
OpenGL and DirectX use a slightly different way of indexing vertex data than the obj format does. In contrast to the file format, where it is possible to use different indices for positions/texcoords etc., the graphics card requires one single index buffer for the whole vertex.
That being said: Assimp parses the obj format and transforms it into a single-index-buffer representation. Basically this means that each unique vertex-texcoord-normal combination gives one vertex, while the index buffer points into this new vertex list.
As far as I know, it is not possible to access the original indices using Assimp.
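So instead of the original OBJ indices, you read the already-combined attributes through that single index. A rough sketch (assuming the mesh actually has normals and texture coordinate set 0; check with HasNormals()/HasTextureCoords(0) in real code):

for (unsigned int i = 0; i < pMesh->mNumFaces; i++) {
    const aiFace& Face = pMesh->mFaces[i];
    for (unsigned int j = 0; j < Face.mNumIndices; j++) {
        unsigned int idx = Face.mIndices[j];                    // index into Assimp's unified vertex list
        const aiVector3D& pos = pMesh->mVertices[idx];
        const aiVector3D& nrm = pMesh->mNormals[idx];           // valid if pMesh->HasNormals()
        const aiVector3D& uv  = pMesh->mTextureCoords[0][idx];  // valid if pMesh->HasTextureCoords(0)
        // pos/nrm/uv belong together; idx no longer matches the OBJ file's 1-based indices
    }
}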
I have 2 frames of shaky video. I applied a homography to all the inlier points. Now the resultant matrices that I get for different frames look like this:
 0.2711  -0.0036   0.853
-0.0002   0.2719  -0.2247
 0.0000  -0.0000   0.2704

 0.4787  -0.0061   0.5514
 0.0007   0.4798  -0.0799
 0.0000  -0.0000   0.4797
What are those similar values on the diagonal, and how can I retrieve the translation component from this matrix?
Start with the following observation: a homography matrix is only defined up to scale. This means that if you divide or multiply all the matrix coefficients by the same number, you obtain a matrix that represent the same geometrical transformation. This is because, in order to apply the homography to a point at coordinates (x, y), you multiply its matrix H on the right by the column vector [x, y, 1]' (here I use the apostrophe symbol to denote transposition), and then divide the result H * x = [u, v, w]' by the third component w. Therefore, if instead of H you use a scaled matrix (s * H), you end up with [s*u, s*v, s*w], which represents the same 2D point.
So, to understand what is going on with your matrices, start by dividing both of them by their bottom-right component:
octave:1> a = [
> 0.2711 -0.0036 0.853
> -0.0002 0.2719 -0.2247
> 0.0000 -0.0000 0.2704
> ];
octave:2> b=[
> 0.4787 -0.0061 0.5514
> 0.0007 0.4798 -0.0799
> 0.0000 -0.0000 0.4797];
octave:3> a/a(3,3)
ans =
1.00259 -0.01331 3.15459
-0.00074 1.00555 -0.83099
0.00000 -0.00000 1.00000
octave:4> b/b(3,3)
ans =
0.99792 -0.01272 1.14947
0.00146 1.00021 -0.16656
0.00000 -0.00000 1.00000
Now suppose, for the moment, that the third column elements in both matrices were [0, 0, 1]'. Then the effect of applying it to any point (x, y) would be to move it by approx 1/100 units (say, pixels). Basically, not changing it by much.
Plugging back the actual values for the third column shows that both matrices are, essentially, translating the whole images by constant amounts.
So, in conclusion, having equal values on the diagonals, and very small values at indices (1,2) and (2,1), means that these homographies are both (essentially) pure translations.
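To answer the "how do I retrieve the translation" part concretely, a small sketch (my own illustration, using a plain row-major 3x3 array rather than any particular library), valid when the homography is essentially a pure translation as concluded above:

// H is a 3x3 homography stored row-major: H[row][col].
// Normalize so H[2][2] == 1, then read the translation from the third column.
void extractTranslation(const double H[3][3], double& tx, double& ty)
{
    double s = H[2][2];    // overall scale factor of the homography
    tx = H[0][2] / s;      // translation in x
    ty = H[1][2] / s;      // translation in y
}

// For the second matrix above this gives tx = 0.5514/0.4797 (about 1.149)
// and ty = -0.0799/0.4797 (about -0.167), matching the normalized matrix b.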
Various transformations involve the elementary operations: addition of variables, multiplication, division, and addition of a constant. Only the first two can be modeled by regular matrix multiplication. Note that addition of a constant and, in the case of a homography, division cannot be represented by matrix multiplication in 2D. Adding a third coordinate (that is, converting points to a homogeneous representation) solves this problem. For example, if you want to add the constant 5 to x, you can do it like this:
1 0 5     x     x+5
0 1 0  *  y  =  y
          1
Note that the matrix is 2x3, not 2x2, and the coordinates have three numbers even though they represent 2D points. Also, the last transition is converting back from the homogeneous to the Euclidean representation. Thus two results are achieved: first, all operations (multiplication, division, addition of variables and addition of constants) can be represented by matrix multiplication; second, we can chain multiple operations (by multiplying their matrices) and still have only a single matrix as the result.
Ok, now let's explain the homography. A homography is best considered in the context of the whole family of transformations, moving from simple ones to complex ones. In other words, it is easier to understand the meaning of the homography coefficients by comparing them to the coefficients of the simpler Euclidean, similarity and affine transforms. The Euclidean transformation is the simplest and represents a rigid rotation and translation in space (note that the matrix is 2x3). For the 2D case:
cos(a) -sin(a) Tx
sin(a) cos(a) Ty
Similarity adds scaling to the rotation coefficients. So now the matrix looks like this:
scl*cos(a) -scl*sin(a) Tx
scl*sin(a)  scl*cos(a) Ty
The affine transformation adds shearing, so the rotation coefficients become unrestricted:
a11 a12 Tx
a21 a22 Ty
Homography adds another row that divides the output x and y (see how we explained the division during the transition from homogeneous to Euclidean coordinates above) and thus introduces projectivity, or non-uniform scaling that is a function of the point coordinates. This is best understood by looking at the transition to Euclidean coordinates.
a11 a12 Tx      x     a11*x+a12*y+Tx       (a11*x+a12*y+Tx)/(a31*x+a32*y+a33)
a21 a22 Ty   *  y  =  a21*x+a22*y+Ty   ->  (a21*x+a22*y+Ty)/(a31*x+a32*y+a33)
a31 a32 a33     1     a31*x+a32*y+a33
Thus a homography has an extra row compared to other transformations such as affine or similarity. This extra row allows scaling objects depending on their coordinates, which is how projectivity arises.
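To make that last step concrete, here is a tiny sketch (my own illustration, plain C++ with a row-major 3x3 array) of applying a homography to a 2D point, including the final division:

// H is a 3x3 homography, row-major. Maps (x, y) to (xOut, yOut).
void applyHomography(const double H[3][3], double x, double y,
                     double& xOut, double& yOut)
{
    double u = H[0][0] * x + H[0][1] * y + H[0][2];
    double v = H[1][0] * x + H[1][1] * y + H[1][2];
    double w = H[2][0] * x + H[2][1] * y + H[2][2];  // the extra row
    xOut = u / w;   // the division that introduces projectivity
    yOut = v / w;
}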
Finally, speaking of your numbers:
0.4787 -0.0061 0.5514
0.0007 0.4798 -0.0799
0.0000 -0.0000 0.4797
This is not a homography! Just look at the last row and you will see that the first two coefficients are 0, so there is no projectivity. Since a11 = a22 (and the off-diagonal terms are tiny), this is not even a general affine transformation; it is rather a similarity transform. The translation is
Tx = 0.5514/0.4797 and Ty = -0.0799/0.4797
Are the amounts of v, vn and vt the same in an .obj model? I ask because I can only use one index per draw, so I have this struct to use for my VBO:
struct VertexCoord
{
    float x, y, z, w;
    float nx, ny, nz;
    float u, v;
};
so I can use one index for all buffers by using stride offsets.
No, the number of v, vt and vn entries can be different.
Notice that there is a list of "v", then a list of "vt", "vn", etc.
At the end there is a list of faces: 1/2/3, 4/5/4, etc.
Faces index vertex positions, texture coords and normals, but since those indices are not related to each other, this also means that the number of each attribute can be different.
Only when the list of faces looks like "1/1/1", "4/4/4", ... do we have the same amount of each attribute.
This is a bit tricky to explain, but I hope you get the point :)
So in general you cannot directly map obj data into your VBO structure.
In OpenGL you can use indexed geometry of course, but that means one index for all attributes of a particular vertex. You cannot index positions and texture coords separately. You have to somehow rearrange the data.
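One common way to do that rearranging (my own sketch, not part of the original answer) is to emit one interleaved vertex per unique v/vt/vn triple and remember the triples you have already seen:

#include <array>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct VertexCoord { float x, y, z, w, nx, ny, nz, u, v; };   // same layout as in the question

std::map<std::tuple<int, int, int>, uint32_t> cache;   // v/vt/vn triple -> final VBO index
std::vector<VertexCoord> vboData;                       // interleaved vertex buffer
std::vector<uint32_t>    indexData;                     // single index buffer

// Call once per face corner, with the 1-based OBJ indices (vi, ti, ni).
void addCorner(int vi, int ti, int ni,
               const std::vector<std::array<float, 3>>& objPositions,
               const std::vector<std::array<float, 2>>& objTexCoords,
               const std::vector<std::array<float, 3>>& objNormals)
{
    auto key = std::make_tuple(vi, ti, ni);
    auto it = cache.find(key);
    if (it == cache.end()) {
        // First time we see this combination: create a new interleaved vertex.
        VertexCoord vert;
        vert.x  = objPositions[vi - 1][0];
        vert.y  = objPositions[vi - 1][1];
        vert.z  = objPositions[vi - 1][2];
        vert.w  = 1.0f;
        vert.nx = objNormals[ni - 1][0];
        vert.ny = objNormals[ni - 1][1];
        vert.nz = objNormals[ni - 1][2];
        vert.u  = objTexCoords[ti - 1][0];
        vert.v  = objTexCoords[ti - 1][1];
        it = cache.emplace(key, static_cast<uint32_t>(vboData.size())).first;
        vboData.push_back(vert);
    }
    indexData.push_back(it->second);   // one index shared by all attributes
}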
here are some links:
http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Load_OBJ
http://xiangchen.wordpress.com/2010/05/04/loading-a-obj-file-in-opengl/
Say I have a sprite. Its axis-aligned bounding box (AABB) is easy to find since I know the width and height. Say I rotate it 45 degrees, I don't think the AABB would be big enough to cover it, so I need a new AABB. How can I calculate the bounding rectangle of a rotated rectangle? (given a center point, an angle, and its width and height).
Note that OpenGL does the rotation so I do not have access to the vertex information.
What I'm trying to do is get AABBs so I can do 2D culling for rendering.
Is there possibly a greedy way of finding the AABB that satisfies any angle?
Thanks
If you want a single box that covers all angles, just take the half-diagonal of your existing box as the radius of a circle. The new box has to contain this circle, so it should be a square with side-length equal to twice the radius (equiv. the diagonal of the original AABB) and with the same center as the original.
In general the object will be rotated around an arbitrary point, so you have to compute the new location of the center and translate this box to the right place.
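If that single all-angles box is enough for your culling, a small sketch of the idea (assuming the rectangle rotates around its own center; struct and function names are my own):

#include <cmath>

struct AABB { float minX, minY, maxX, maxY; };

// Axis-aligned box that contains a w x h rectangle centered at (cx, cy)
// for any rotation angle: a square whose half-side is the half-diagonal.
AABB allAnglesBox(float cx, float cy, float w, float h)
{
    float r = 0.5f * std::sqrt(w * w + h * h);   // half-diagonal = radius of the bounding circle
    return { cx - r, cy - r, cx + r, cy + r };
}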
I don't know if this is the most efficient method, but I would just calculate the new positions of the vertices and, based on that data, find the AABB. So, for example:
Vertex v0, v1, v2, v3;
// v0..v3 are in the local coordinates of the rectangle,
// so for example v0 is always (0, 0) and width/height define the others.
// ... put some values into v0..v3 ...

glLoadIdentity();
glTranslatef(/* the position of the rectangle */);
glTranslatef(/* center_point */);
glRotatef(angle, 0, 0, 1);
glTranslatef(/* -center_point */);

GLfloat matrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);

v0 = multiply_matrix_by_vector(matrix, v0);
v1 = multiply_matrix_by_vector(matrix, v1);
v2 = multiply_matrix_by_vector(matrix, v2);
v3 = multiply_matrix_by_vector(matrix, v3);

AABB = find_the_minimums_and_maximums(v0, v1, v2, v3);
If you don't know how to multiply a matrix by vector, try googling it.
Also note that since the matrix dimensions are 4x4, the vectors for the vertices also need to be 4-dimensional. You can convert a 2D vector to a 4D vector by adding a third component 0 (zero) and a fourth component 1 (one). After the multiplication has been done, you can convert the resulting 4D vector back to 2D by dividing the x and y components by the fourth component and simply by ignoring the third component because you don't need a third dimension.
Since matrix multiplication can be a fairly processor-heavy operation, this approach might only be a good fit if you don't need to update a lot of AABBs very often.
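For reference, a sketch of what multiply_matrix_by_vector could look like for the matrix returned by glGetFloatv, which is stored in column-major order (the Vertex layout here is my own assumption, holding x/y plus the z and homogeneous w components mentioned above):

struct Vertex { float x, y, z, w; };   // use z = 0, w = 1 for 2D points

Vertex multiply_matrix_by_vector(const GLfloat m[16], Vertex v)
{
    // OpenGL matrices are column-major: element (row, col) is m[col * 4 + row].
    Vertex r;
    r.x = m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w;
    r.y = m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w;
    r.z = m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w;
    r.w = m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w;
    return r;
}

Afterwards divide r.x and r.y by r.w (and ignore r.z), as described above, to get the transformed 2D corner.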