Assimp face indices - C++

I have been using Assimp for a while and now I'm trying to load a .obj file. It loads perfectly, but I would like to manipulate the face data after loading it.
Basically I have this in the simple cube.obj file (Full file - http://pastebin.com/ha3VkZPM)
# 8 Vertices
v -1.0 -0.003248 1.0
v 1.0 -0.003248 1.0
v -1.0 1.996752 1.0
v 1.0 1.996752 1.0
v 1.0 -0.003248 -1.0
v -1.0 -0.003248 -1.0
v 1.0 1.996752 -1.0
v -1.0 1.996752 -1.0
# 36 Texture Coordinates
vt 1.0 0.0
vt 0.0 0.0
...
# 36 Vertex Normals
vn 0.0 0.0 1.0
vn 0.0 0.0 1.0
...
f 1/1/1 2/2/2 3/3/3
f 2/4/4 4/5/5 3/6/6
f 5/7/7 6/8/8 7/9/9
f 6/10/10 8/11/11 7/12/12
f 3/13/13 6/14/14 1/15/15
f 3/16/16 8/17/17 6/18/18
f 7/19/19 2/20/20 5/21/21
f 7/22/22 4/23/23 2/24/24
f 3/25/25 7/26/26 8/27/27
f 3/28/28 4/29/29 7/30/30
f 2/31/31 6/32/32 5/33/33
f 2/34/34 1/35/35 6/36/36
As I understand it, each face entry is V/T/N (vertex index / texture coordinate index / normal index),
so
f 1/1/1 2/2/2 3/3/3 represents a triangle made of vertices (1, 2, 3) - right?
From this face entry - I want to extract only the vertex indices.
Now enters Assimp - this is what I have now, where Indices is a std::vector:
for (uint32 i = 0; i < pMesh->mNumFaces; i++) {
    const aiFace& Face = pMesh->mFaces[i];
    if (Face.mNumIndices == 3) {
        Indices.push_back(Face.mIndices[0]);
        Indices.push_back(Face.mIndices[1]);
        Indices.push_back(Face.mIndices[2]);
    }
}
Here pMesh->mNumFaces is 12 - so that's correct.
For the first face:
Face.mIndices[0] should probably point to 1/1/1
Face.mIndices[1] should probably point to 2/2/2
Face.mIndices[2] should probably point to 3/3/3
Now how do I extract only the vertex indices? And when I check the values, Face.mIndices holds 0, 1, 2, ... respectively. Why so? Every Assimp face seems to have indices (0, 1, 2).
I searched on Google and Stack Overflow - here are some similar questions, but I can't seem to figure it out:
https://stackoverflow.com/questions/32788756/how-to-keep-vertex-order-from-obj-file-using-assimp-library
Assimp and D3D model loading: Mesh not being displayed in D3D
Assimp not properly loading indices
Please let me know if you need more info. Thanks.

OpenGL and DirectX use a slightly different way of indexing vertex data than the OBJ format does. In contrast to the file format, where it is possible to use different indices for positions, texcoords, etc., the graphics card requires one single index buffer for the whole vertex.
That being said: Assimp parses the OBJ format and transforms it into a single-index-buffer representation. Basically this means that each unique vertex/texcoord/normal combination yields one vertex, while the index buffer points into this new vertex list.
As far as I know, it is not possible to access the original indices using Assimp.
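To illustrate that transformation, here is a minimal, self-contained sketch (not Assimp's actual code; `Corner` and `buildSingleIndexBuffer` are made-up names) of how V/T/N triples collapse into one vertex list with a single index buffer:

```cpp
#include <map>
#include <tuple>
#include <vector>

// Hypothetical OBJ-style corner: separate indices for position, texcoord, normal.
struct Corner { int v, vt, vn; };

// Collapse V/T/N triples into a single vertex list plus one index buffer,
// the same kind of transformation Assimp applies when importing an OBJ file.
void buildSingleIndexBuffer(const std::vector<Corner>& corners,
                            std::vector<Corner>& outVertices,
                            std::vector<unsigned>& outIndices)
{
    std::map<std::tuple<int, int, int>, unsigned> seen;
    for (const Corner& c : corners) {
        auto key = std::make_tuple(c.v, c.vt, c.vn);
        auto it = seen.find(key);
        if (it == seen.end()) {
            // First time we meet this V/T/N combination: emit a new vertex.
            it = seen.emplace(key, (unsigned)outVertices.size()).first;
            outVertices.push_back(c);
        }
        outIndices.push_back(it->second);
    }
}
```

This is why `Face.mIndices` comes out as 0, 1, 2, ...: the indices refer to the rebuilt vertex list, not to the `v` lines of the original OBJ file.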

Related

How to get KITTI camera calibration file?

I'm working with the KITTI dataset for camera/lidar calibration.
I also have other calibration files, which are formatted differently from the KITTI calibration file.
Here is what I have (calib.yaml):
fx: 1065.575589
fy: 1064.775881
cx: 973.889063
cy: 614.194865
k1: -0.139457
k2: 0.071645
p1/k3: -0.000150
p2/k4: 0.000889
Quaternion: [-0.005329,-0.004166,-0.705518,0.708660]
translation_vector: [0.006560,-0.169077,-0.198306]
I can get the camera intrinsic matrix from fx, fy, cx, cy and the distortion coefficients from k1, k2, p1/k3, p2/k4.
And I got the rotation matrix by converting the quaternion values.
Here's my question.
Is there any way to convert these values to match the KITTI calibration file data, like this?
P2: 1056.437682 0.0 974.398942 0.0 0.0 1024.415886 583.178996 0.0 0.0 0.0 1.0 0.0
R0_rect: 0 1 0 0 0 -1 -1 0 0
Tr_velo_to_cam: -0.999069645887872 0.030746732104378723 -0.030240389058074302 0.9276477826710018 -0.030772133023495043 -0.9995263554968372 0.0003748284867846508 0.058829066929946106 -0.03021454111295518 0.0013050410383379344 0.999542584572174 0.21383619101949458
DistCoeff: -0.149976 0.070503 -0.000917 -0.000333 0.0
I already know that P2 is the camera projection matrix, R0_rect is the rectification rotation matrix, and Tr_velo_to_cam is the transformation from Velodyne coordinates to camera coordinates.
The reason I want to convert is simply that I have calibration reference code for the KITTI dataset, not for calib.yaml.
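Not a complete answer, but the quaternion-to-rotation step can be sketched as below (assuming the YAML quaternion is stored as [x, y, z, w]; `quatToRot` is a made-up name). Stacking R next to the translation vector gives the 3x4 Tr matrix, and fx/fy/cx/cy fill the top-left of the 3x4 projection matrix P:

```cpp
#include <array>
#include <cmath>

// Convert a unit quaternion [x, y, z, w] (the order assumed for the
// calib.yaml above) into a 3x3 rotation matrix -- the rotation part
// of a Tr_velo_to_cam-style transform.
std::array<std::array<double, 3>, 3> quatToRot(double x, double y, double z, double w)
{
    return {{
        {1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)},
        {2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)},
        {2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)}
    }};
}
```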

Jagged lines while handling UV values out of range [0,1]

I am parsing an OBJ file which has texture coordinates greater than 1 and less than 0. I then write it back with the UV values mapped into the range [0,1]. Based on my understanding from another question on SO, I am doing the conversion to the range [0,1] as follows:
if (oldU > 1.0) or (oldU < 0.0):
    oldU = math.modf(oldU)[0]  # Returns the fractional part
    if oldU < 0.0:
        oldU = 1 + oldU
if (oldV > 1.0) or (oldV < 0.0):
    oldV = math.modf(oldV)[0]  # Returns the fractional part
    if oldV < 0.0:
        oldV = 1 + oldV
But I see some jagged lines in my output OBJ file, compared to the original OBJ file, when both are rendered in some software:
Original
Restricted to [0,1]
This may not work as you expect.
Given a triangle edge that starts at U=0.9 and ends at U=1.1, after your UV wrapping you'll get a start at 0.9 but an end at 0.1, so the triangle will use a different part of your texture. I believe this is what happens at the bottom of your mesh.
In general there's no problem with using UVs outside of the 0-1 range, so first try to render the mesh as it is and see if you have any problems.
If you really want to move the UVs into the 0-1 range, then scale and translate the UVs instead of wrapping them per vertex: iterate over all vertices and store the min and max values for U and V, then scale the UV of every vertex so that min becomes 0 and max becomes 1.

How to construct a mesh from a set of vertices

I am making an OBJ importer and I happen to be stuck on how to construct the mesh from a set of given vertices. Consider a cube with these vertices (OBJ format; the faces are triangles):
v -2.767533 -0.000000 2.927381
v 3.017295 -0.000000 2.927381
v -2.767533 6.311718 2.927381
v 3.017295 6.311718 2.927381
v -2.767533 6.311718 -2.845727
v 3.017295 6.311718 -2.845727
v -2.767533 -0.000000 -2.845727
v 3.017295 -0.000000 -2.845727
I know how to construct meshes using GLUT (making my calls to glBegin(GL_TRIANGLES), glVertex3f(x, y, z), glEnd(), etc.). It's just that I don't know how to combine the vertices to recreate the object. I thought the pattern was v1, v2, v3, then v2, v3, v4, and so on until I have made enough triangles (with something like v7, v8, v1 wrapping back to the beginning). So 8 vertices gives 12 triangles for the cube, and for, say, a sphere with 56 vertices, (56 * 2) - 4 = 108 triangles. For the cube the 12 triangles come out fine, but when I build the 108 triangles from the sphere's 56 vertices, it does not work. So how do I combine the vertices in my glVertex calls to make it work for any mesh? Thank you!
There should be a bunch of "face" lines in the file (lines beginning with the letter "f") that tell you how to combine the vertices into an object. For example,
f 1 2 3
would mean a triangle composed of the first three vertices in the file. You might also see something like
f 1/1 2/2 3/3
which is a triangle that also includes texture coordinates,
f 1//1 2//2 3//3
which includes vertex normal vectors, or
f 1/1/1 2/2/2 3/3/3
which is one that includes both.
Wikipedia has an article that includes an overview of the format: https://en.wikipedia.org/wiki/Wavefront_.obj_file
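A minimal sketch of extracting only the vertex indices from a face line, covering all four forms above (`parseFaceLine` is a made-up helper; real OBJ files may also use negative, relative indices, which this skips resolving):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse one OBJ face line and return only the (1-based) vertex indices,
// ignoring any texcoord/normal indices after the first '/'.
// Handles all four forms: "f 1 2 3", "f 1/1 2/2 3/3",
// "f 1//1 2//2 3//3" and "f 1/1/1 2/2/2 3/3/3".
std::vector<int> parseFaceLine(const std::string& line)
{
    std::vector<int> indices;
    std::istringstream in(line);
    std::string token;
    in >> token;                 // skip the leading "f"
    while (in >> token) {
        // Everything before the first '/' is the vertex index.
        indices.push_back(std::stoi(token.substr(0, token.find('/'))));
    }
    return indices;
}
```

Remember that OBJ indices are 1-based, so subtract 1 before using them with a zero-based vertex array.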

OpenGL Clip Space Frustum Culling Wrong Results

I implemented a Clip-Space Frustum Culling on the CPU.
In a simple reduced case, I just create a rectangle based on 4 different points which I'm going to render in GL_LINES modes.
But sometimes, it seems to me, I get wrong results. Here is an example:
In this render pass, my frustum-culling computation concludes that all points lie beyond the positive y bound in NDC coordinates.
Here is the input:
Points:
P1: -5000, 3, -5000
P2: -5000, 3, 5000
P3: 5000, 3, 5000
P4: 5000, 3, -5000
MVP (rounded):
1.0550 0.0000 -1.4521 1138.9092
-1.1700 1.9331 -0.8500 -6573.4885
-0.6481 -0.5993 -0.4708 -2129.3858
-0.6478 -0.5990 -0.4707 -2108.5455
And the calculations (MVP * Position):

        P1      P2      P3      P4
x:    3124  -11397    -847   13674
y:    3532   -4968  -16668   -8168
z:    3463   -1245   -7726   -3018
w:    3482   -1225   -7703   -2996
And finally, after the perspective divide (division by the w component):

        P1      P2      P3      P4
x:   0.897   9.304   0.110  -4.564
y:   1.014   4.056   2.164   2.726
z:   0.995   1.016   1.003   1.007
w:   1.0     1.0     1.0     1.0
As you can see, all transformed points have their y component greater than 1 and should be outside the Viewing Frustum.
I already double-checked my matrices. I also set up transform feedback in the vertex shader to be sure I use the same matrices for the computation on the CPU and the GPU; the result of MVP * Position in the vertex shader matches my CPU computation. My rendering pipeline is as simple as possible.
The vertex Shader is
vColor = aColor;
gl_Position = MVP * aVertexPosition;
//With Transform Feedback enabled
//transOut = MVP * aVertexPosition
And the fragment Shader
FragColor = vColor;
So the Vertex Shader has the same results as my CPU computations.
But still, they are lines drawn on the screen!
Any Ideas why there are lines?
Do I do something wrong with the perspective divide?
What do I have to do to detect that this rectangle should not be culled, given that at least one line is visible (basically a strip of another line is visible in this example as well)?
If it helps: The visible red line is the one between P1 and P2...
[Edit] I have now implemented world-space culling by computing the camera frustum normals and their Hesse normal form equations. That works fine, with correct recognition. Sadly, I do need correct computations in clip space, since I'm going to do further computations with those points. Any ideas?
Here is my computation code:
int outOfBoundArray[6] = {0, 0, 0, 0, 0, 0};
std::vector<glm::dvec4> tileBounds = activeElem->getTileBounds(); // Same as I use for world-space culling
const glm::dmat4& viewProj = cam->getCameraTransformations().viewProjectionMatrix; // Same camera as for world-space culling

for (size_t i = 0; i < tileBounds.size(); i++) {
    // Apply the ModelViewProjection matrix: world space -> clip space
    glm::dvec4 transformVec = viewProj * tileBounds[i];
    // Perspective divide: clip space -> NDC space [-1,1]
    transformVec = transformVec / transformVec[3];
    // Culling test
    if (transformVec.x >  1.0) outOfBoundArray[0]++;
    if (transformVec.x < -1.0) outOfBoundArray[1]++;
    if (transformVec.y >  1.0) outOfBoundArray[2]++;
    if (transformVec.y < -1.0) outOfBoundArray[3]++;
    if (transformVec.z >  1.0) outOfBoundArray[4]++;
    if (transformVec.z < -1.0) outOfBoundArray[5]++;
    // Other computations...
}
for (int i = 0; i < 6; i++) {
    if (outOfBoundArray[i] == (int)tileBounds.size()) {
        return false;
    }
}
return true;
The problem would appear to be that the sign of w (the fourth component) differs between P1 and P2. This causes all kinds of trouble due to the nature of projective geometry.
Even though both divided points end up on the same side outside the NDC range, the line that gets drawn actually goes through infinity: consider what happens when you linearly interpolate between P1 and P2 and do the division by w at each interpolated point separately. As w approaches zero, the y value is not exactly zero, and therefore the line zooms off to infinity - and then wraps around from the other side.
Projective geometry is a weird thing :)
As for a solution: make sure that you clip lines that cross the w=0 plane against the positive side of that plane, and you are set - your code should then work.
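That clipping step might be sketched like this (names such as `Vec4` and `clipSegmentToPositiveW` are made up): clip each segment against the w = epsilon plane in clip space, before doing the perspective divide:

```cpp
#include <cmath>

struct Vec4 { double x, y, z, w; };

// Clip the segment (a, b) in homogeneous clip space against the plane
// w = eps, keeping the part with positive w. Returns false if the whole
// segment lies behind the plane. Do this BEFORE dividing by w.
bool clipSegmentToPositiveW(Vec4& a, Vec4& b, double eps = 1e-5)
{
    if (a.w < eps && b.w < eps) return false;   // fully behind the camera
    if (a.w < eps || b.w < eps) {
        // Interpolation parameter where w crosses eps.
        double t = (eps - a.w) / (b.w - a.w);
        Vec4 hit = { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y),
                     a.z + t*(b.z - a.z), eps };
        if (a.w < eps) a = hit; else b = hit;
    }
    return true;
}
```

Only after this clip is the per-point divide by w (and the [-1,1] bounds test) meaningful for the surviving part of the segment.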

Calculate saw and triangle wave from specific data

I need to calculate a triangle and a saw wave, but it is a little complicated because of my model and the data I'm able to work with (though maybe I'm just confused).
I'm able to calculate my sine wave, but I'm not really using a frame counter. What I do is calculate a theta_increment variable which I can use the next time I need to calculate a sample. It works like this:
float x = note.frequency / AppSettings::sampleRate;
float theta_increment = 2.0f * M_PI * x;
float value = 0;
if (waveType == SINE) {
    value = sin(note.theta) * fixedAmplitude;
}
Now that I have the value of the current frame/sample, I add theta_increment to my note.theta member so I can use it for the next sample:
note.theta += theta_increment;
I've looked at tons of examples of how to calculate a saw or a triangle wave, but I can't figure it out (I only have the data mentioned above at my disposal). This is my last attempt, but it's not working and gives me tons of glitches:
value = 1.0f - (2.0f * ((float)note.theta / (float)44100));
If you have a loop generating your values like this:
for (size_t frame = 0; frame != n_frames; ++frame) {
    float pos = fmod(frequency * frame / sample_rate, 1.0);
    value[frame] = xFunc(pos) * fixedAmplitude;
}
Then you can use these functions for the different types of waves:
float sinFunc(float pos)
{
    return sin(pos * 2 * M_PI);
}
float sawFunc(float pos)
{
    return pos * 2 - 1;
}
float triangleFunc(float pos)
{
    return 1 - fabs(pos - 0.5) * 4;
}
The basic idea is that you want a value (pos) that goes from 0.0 to 1.0 over each cycle. You can then shape this however you want.
For a sine wave, the sin() function does the job, you just need to multiply by 2*PI to convert the 0.0 to 1.0 range into a 0.0 to 2*PI range.
For a sawtooth wave, you just need to convert the 0.0 to 1.0 range into a -1.0 to 1.0 range. Multiplying by two and subtracting one does that.
For a triangle wave, you can use the absolute value function to create the sudden change in direction. First we map the 0.0 to 1.0 range into a -0.5 to 0.5 range by subtracting 0.5. Then we fold this into a 0.5 to 0.0 to 0.5 shape by taking the absolute value. Multiplying by 4 converts this into a 2.0 to 0.0 to 2.0 shape, and finally, subtracting it from one gives a -1.0 to 1.0 to -1.0 shape.
A sawtooth wave could also be calculated like this:
value = x - floor(x);
and a triangle like this:
value = 1.0 - fabs(fmod(x, 2.0) - 1.0);
where x is the phase measured in cycles, i.e. note.theta / (2 * M_PI), for the sawtooth. (The triangle formula expects x to advance by 2 per cycle and yields values in the 0 to 1 range, so scale accordingly.)
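Putting the phase-accumulator idea together, here is a minimal sketch of a sawtooth generator (the `makeSaw` helper is made up) that keeps the phase in cycles rather than radians, avoiding the unit confusion above:

```cpp
#include <cmath>
#include <vector>

// Fill a buffer with a sawtooth wave using a phase accumulator in the
// 0..1 range (cycles), incremented by frequency / sampleRate per sample.
std::vector<float> makeSaw(float frequency, float sampleRate,
                           float amplitude, size_t nFrames)
{
    std::vector<float> out(nFrames);
    double phase = 0.0;                  // current position in the cycle, 0..1
    double inc = frequency / sampleRate; // phase advance per sample
    for (size_t i = 0; i < nFrames; ++i) {
        out[i] = (float)(phase * 2.0 - 1.0) * amplitude; // saw: 0..1 -> -1..1
        phase += inc;
        phase -= std::floor(phase);      // wrap back into 0..1
    }
    return out;
}
```

Swapping the saw expression for the sine or triangle formulas above gives the other wave types from the same phase variable.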