Kinect2 C++: get indices for face triangle vertices

I want to debug the face mesh generated from the Kinect 2 sensor's HD face tracking.
I can get the vertex data using CalculateVerticesForAlignment, but I don't know the indices for these vertices to form triangles.
I tried to use result = GetFaceModelTriangles(TriangleCount, TriangleIndices), but the result it returns is not S_OK and the triangle indices are not a multiple of three, meaning GetFaceModelTriangles does not give the indices.
The C++ Kinect API documentation is very poor, so I don't know if GetFaceModelTriangles even does what I think it does.
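For reference, a minimal sketch of how the call is usually structured, assuming the Kinect.Face.h free functions GetFaceModelTriangleCount() and GetFaceModelTriangles() and assuming the capacity argument counts UINT32 entries (three per triangle); if the capacity passed in is too small, a failed HRESULT like the one described would be plausible:
// Sketch only: verify the exact signatures against Kinect.Face.h.
#include <Kinect.Face.h>
#include <vector>

std::vector<UINT32> GetHdFaceTriangleIndices()
{
    const UINT32 triangleCount = GetFaceModelTriangleCount();
    std::vector<UINT32> indices(triangleCount * 3);            // 3 vertex indices per triangle (assumption)
    HRESULT hr = GetFaceModelTriangles(static_cast<UINT32>(indices.size()), indices.data());
    if (FAILED(hr))
        indices.clear();                                       // caller treats an empty result as failure
    return indices;                                            // indexes the CalculateVerticesForAlignment vertices
}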

Related

Find all neighbours of a certain primitive in a CGAL::Surface_mesh

I wrote a small raytracer (with CGAL::Surface_mesh Mesh) with tree acceleration in CGAL. I would like to find all neighbours of a hit primitive.
Ray_intersection hit = tree.first_intersection(rays[y][x]);
if(hit)
{
    const Point& point = boost::get<Point>(hit->first);
    const Primitive_id& primitive_id = boost::get<Primitive_id>(hit->second);
    // I need the neighbours of the hit primitive
}
How do I do this? I found this documentation, but it seems to work only for points, not primitives:
https://doc.cgal.org/latest/Spatial_searching/index.html
And it searches by Euclidean distance, not by connectivity.
Is there something like:
std::vector<Primitive_id> ids = getNeighboursOfPrimitive(primitive_id);
Like I said, I am using CGAL::Surface_mesh Mesh for my mesh, and there is only one mesh in the scene.
You can use the range returned by vertices_around_face() to get all vertices of a face, then for each vertex you can use the range returned by halfedges_around_target() to get one halfedge per face incident to that vertex (or you can do it by hand using a combination of next and opposite).
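A rough sketch of that idea, assuming Primitive_id is the Surface_mesh face descriptor (as it is for the face-graph AABB tree primitives) and "neighbours" means every face sharing at least one vertex with the hit face:
#include <CGAL/boost/graph/iterator.h>
#include <set>
#include <vector>

typedef Mesh::Face_index     face_descriptor;      // assuming Mesh = CGAL::Surface_mesh<Point>
typedef Mesh::Halfedge_index halfedge_descriptor;

std::vector<face_descriptor> neighbours_of_face(const Mesh& mesh, face_descriptor fd)
{
    std::set<face_descriptor> unique_faces;
    // all vertices of the hit face
    for (auto v : CGAL::vertices_around_face(mesh.halfedge(fd), mesh))
    {
        // one halfedge per face incident to that vertex
        for (halfedge_descriptor h : CGAL::halfedges_around_target(mesh.halfedge(v), mesh))
        {
            face_descriptor nf = mesh.face(h);
            if (nf != Mesh::null_face() && nf != fd)   // skip the border and the hit face itself
                unique_faces.insert(nf);
        }
    }
    return std::vector<face_descriptor>(unique_faces.begin(), unique_faces.end());
}
If only edge-adjacent faces are wanted, iterating CGAL::faces_around_face() over the hit face's halfedge is a simpler alternative.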

How do I remove self-intersecting triangles from a 3D surface mesh?

I have a CGAL surface_mesh of triangles with some self-intersecting triangles which I'm trying to remove to create a continuous 2-manifold shell, ultimately for printing.
I've attempted to use remove_self_intersection() and autorefine_and_remove_self_intersections() from this answer. The first only removes a few self-intersections while the second completely removes my mesh.
So, I'm trying my own approach - I'm finding the self-intersections and then attempting to delete them. I've tried using the low-level remove_face, but the borders are not detectable afterwards, so I'm unable to fill the resulting holes. This answer refers to using the higher-level Euler::remove_face, but this method, and make_hole, seem to discard my mesh entirely.
Here is an extract (I'm using break to see if I can get at least one triangle removed, and I'm just trying with the first of the pair):
vector<pair<face_descriptor, face_descriptor> > intersected_tris;
PMP::self_intersections(mesh, back_inserter(intersected_tris));
for (pair<face_descriptor, face_descriptor> &p : intersected_tris) {
    CGAL::Euler::remove_face(mesh.halfedge(get<0>(p)), mesh);
    break;
}
My approach to removing self-intersecting triangles is to aggressively delete the intersecting faces, along with nearby faces, and fill the resulting holes. Thanks to @sloriot's comment I realised that the Euler::remove_face function was failing due to duplicate faces in the set returned from both the self_intersections and expand_face_selection functions.
A quick way to remove duplicate faces from the vector result of those two functions is:
std::set<face_descriptor> s(selected_faces.begin(), selected_faces.end());
selected_faces.assign(s.begin(), s.end());
This code converts the vector of faces into a set (sets contain no duplicates) and then converts the set back into a vector.
Once the duplicates were removed, the Euler::remove_face function worked correctly, including updating the borders so that the triangulate_hole function could be used on the result producing a final surface with no self-intersections.
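For context, a condensed sketch of that flow under the same assumptions (selected_faces already deduplicated as above, PMP = CGAL::Polygon_mesh_processing); the border iteration and triangulate_hole call may need adjusting for your CGAL version:
#include <CGAL/boost/graph/Euler_operations.h>
#include <CGAL/boost/graph/helpers.h>
#include <CGAL/Polygon_mesh_processing/triangulate_hole.h>
#include <set>
#include <vector>

// 1. remove the (deduplicated) selected faces; Euler::remove_face keeps the borders valid
for (face_descriptor fd : selected_faces)
    CGAL::Euler::remove_face(mesh.halfedge(fd), mesh);

// 2. fill every resulting hole, taking one border halfedge per boundary cycle
std::set<halfedge_descriptor> visited;
for (halfedge_descriptor h : halfedges(mesh))
{
    if (CGAL::is_border(h, mesh) && !visited.count(h))
    {
        for (halfedge_descriptor hc : CGAL::halfedges_around_face(h, mesh))
            visited.insert(hc);                                  // mark the whole cycle as handled
        std::vector<face_descriptor> patch;
        PMP::triangulate_hole(mesh, h, std::back_inserter(patch));
    }
}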

Clip Unstructured grid and keep arrays data

I'm trying to clip a vtkUnstructuredGrid using vtkClipDataSet. The problem is that after I clip, the resulting vtkUnstructuredGrid doesn't have the point/cell data (the arrays).
This is my code:
vtkSmartPointer<vtkUnstructuredGrid> model = reader->GetOutput();
// this shows that model has one point data array called "Displacements" (vectorial of 3 components)
model->Print(std::cout);
// Plane to cut it
vtkSmartPointer<vtkPlane> plane = vtkSmartPointer<vtkPlane>::New();
plane->SetOrigin(0.0,0.0,0.0); plane->SetNormal(1,0,0);
// Clip data
vtkSmartPointer<vtkClipDataSet> clipDataSet = vtkSmartPointer<vtkClipDataSet>::New();
clipDataSet->SetClipFunction(plane);
clipDataSet->SetInputConnection(model->GetProducerPort());
clipDataSet->InsideOutOn();
clipDataSet->GenerateClippedOutputOn();
//PROBLEM HERE. The print shows that there aren't any arrays on the output data
clipDataSet->GetOutput()->Print(std::cout);
I need the output grid to have the arrays, because I would like to display the values on the resulting grid.
For example, if the data are scalars, I would like to display isovalues on the clipped mesh. If the data are vectorial, I would like to deform (warp) the mesh in the direction of the data vectors.
Here I have an example on ParaView of what I would like to do. The solid is the original mesh and the wireframe mesh is the deformed one.
I'm using VTK 5.10 under C++ (Windows 8.1 64 bits, if that helps).
Thank you!
PS: I tried asking this on the VTKusers list, but I got no answer.
Ok, I found the error after the comment of user lib: I was missing the call to Update() after setting the input connection.
Thank you all.
// Clip data
vtkSmartPointer<vtkClipDataSet> clipDataSet = vtkSmartPointer<vtkClipDataSet>::New();
clipDataSet->SetClipFunction(plane);
clipDataSet->SetInputConnection(model->GetProducerPort());
clipDataSet->InsideOutOn();
clipDataSet->GenerateClippedOutputOn();
clipDataSet->Update(); // THIS is the solution
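As a hedged follow-up (the scale factor and the use of vtkWarpVector are illustrative, not from the original post): once Update() has run, the clipped grid should expose the "Displacements" array named in the question, which can then drive a warp filter.
#include <vtkPointData.h>
#include <vtkWarpVector.h>

vtkUnstructuredGrid* clipped = clipDataSet->GetOutput();
clipped->GetPointData()->SetActiveVectors("Displacements");    // array named in the question

vtkSmartPointer<vtkWarpVector> warp = vtkSmartPointer<vtkWarpVector>::New();
warp->SetInputConnection(clipDataSet->GetOutputPort());
warp->SetScaleFactor(1.0);                                     // illustrative scale
warp->Update();                                                // warp->GetOutput() is the deformed grid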

Kinect as Motion Sensor

I'm planning on creating an app that does something like this: http://www.zonetrigger.com/articles/Kinect-software/
That means, I want to be able to set up "Trigger Zones" using the Kinect and its 3D image. Now I know that Microsoft states that the Kinect can detect the skeletons of up to 6 people.
For me however, it would be enough to detect whether something is entering a trigger zone and where.
Does anyone know if the Kinect can be programmed to function as a simple Motion Sensor, so it can detect more than 6 entries?
It is well known that Kinect cannot detect more than 5 entries (just kidding). All you need to do is get a depth map (z-map) from the Kinect and then convert it into a 3D map using these formulas:
X = ((col - cap_width) * Z) / focal_length_X;
Y = ((row - cap_height) * Z) / focal_length_Y;
Z = Z;
where row and col are calculated from the image center (not the upper-left corner!) and focal_length_X/Y is the focal length of the Kinect in pixels (~570). Now you can specify the exact locations in 3D where, if pixels appear, you can do whatever you want to do (see the sketch after these pointers). Here are more pointers:
You can use OpenCV for ease of visualization. To read a frame from the Kinect after it has been initialized, you just need something like this:
Mat inputMat = Mat(h, w, CV_16U, (void*) depth_gen.GetData());
You can easily visualize depth maps using histogram equalization (it will optimally spread the 10000 Kinect levels among your available 255 levels of grey).
It is sometimes desirable to do object segmentation, grouping spatially close pixels with similar depth together. I did this several years ago (see this), but I had to delete the floor and/or the common surface on which objects stood; otherwise all the objects were connected and extracted as a single large segment.
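Here is a rough sketch of the trigger-zone idea on top of those formulas; TriggerZone, the zone bounds, the fixed ~570 px focal length, and cap_width/cap_height being half the image size are all assumptions, and the depth image is taken to be a 16-bit cv::Mat like the one read above.
#include <opencv2/core/core.hpp>

struct TriggerZone { float xmin, xmax, ymin, ymax, zmin, zmax; };   // hypothetical box, in mm

bool ZoneTriggered(const cv::Mat& depth, const TriggerZone& zone, float focal_length = 570.0f)
{
    const float cap_width  = depth.cols / 2.0f;     // assumption: half the image width
    const float cap_height = depth.rows / 2.0f;     // assumption: half the image height
    for (int row = 0; row < depth.rows; ++row)
        for (int col = 0; col < depth.cols; ++col)
        {
            const float Z = depth.at<unsigned short>(row, col);
            if (Z == 0) continue;                                   // 0 means "no depth reading"
            const float X = ((col - cap_width)  * Z) / focal_length;
            const float Y = ((row - cap_height) * Z) / focal_length;
            if (X >= zone.xmin && X <= zone.xmax &&
                Y >= zone.ymin && Y <= zone.ymax &&
                Z >= zone.zmin && Z <= zone.zmax)
                return true;                                        // something entered the zone
        }
    return false;
}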

OBJ, Buffer objects, and face indices

I recently made good progress getting vertex buffer objects to work.
So I moved on to element arrays, and I figured that with those implemented I could load vertices and face data from an OBJ.
I'm not too good at reading files in C++, so I wrote a Python script to parse the OBJ and write two separate text files giving me a vertex array and face indices, which I pasted directly into my code. That's about 6000 lines, but it works (it compiles without errors).
And here's what it looks like (see screenshot).
I think they're wrong, but I'm not sure. The order of the vertices and faces isn't changed, just extracted from the OBJ, because I don't have normals or textures working for buffer objects yet (I sort of do if you look at the cube, but not really).
Here's the render code:
void Mesh_handle::DrawTri(){
    glBindBuffer(GL_ARRAY_BUFFER,vertexbufferid);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,elementbufferid);
    int index1=glGetAttribLocation(bound_program,"inputvertex");
    int index2=glGetAttribLocation(bound_program,"inputcolor");
    int index3=glGetAttribLocation(bound_program,"inputtexcoord");
    glEnableVertexAttribArray(index1);
    glVertexAttribPointer(index1,3,GL_FLOAT,GL_FALSE,9*sizeof(float),0);
    glEnableVertexAttribArray(index2);
    glVertexAttribPointer(index2,4,GL_FLOAT,GL_FALSE,9*sizeof(float),(void*)(3*sizeof(float)));
    glEnableVertexAttribArray(index3);
    glVertexAttribPointer(index3,2,GL_FLOAT,GL_FALSE,9*sizeof(float),(void*)(7*sizeof(float)));
    glDrawArrays(GL_TRIANGLE_STRIP,0,elementcount);
    //glDrawElements(GL_TRIANGLE_STRIP,elementcount,GL_UNSIGNED_INT,0);
}
My Python parser, which just writes the info into a file: source
The object is Ezreal from League of Legends.
I'm not sure if I'm reading the faces wrong or if they're not even what I thought they were. Am I supposed to use GL_TRIANGLE_STRIP or something else? Any hints? Or request more info.
Indices in OBJ files are 1-based, so you have to subtract 1 from all indices in order to use them with OpenGL.
First, as Andreas stated, .obj files use 1-based indices, so you need to convert them to 0-based indices.
Second:
glDrawArrays(GL_TRIANGLE_STRIP,0,elementcount);
//glDrawElements(GL_TRIANGLE_STRIP,elementcount,GL_UNSIGNED_INT,0);
Unless you did some special work to turn the face list you were given in your .obj file into a triangle strip, you don't have triangle strips. You should be rendering GL_TRIANGLES, not strips.
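Putting both points together, a small sketch; objFaceIndices is a hypothetical container holding the 1-based face indices pasted from the parser:
// build a 0-based index buffer from the 1-based OBJ face indices
std::vector<unsigned int> indices;
indices.reserve(objFaceIndices.size());
for (unsigned int objIndex : objFaceIndices)
    indices.push_back(objIndex - 1);                 // OBJ is 1-based, OpenGL is 0-based

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbufferid);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int),
             indices.data(), GL_STATIC_DRAW);

// elementcount should be indices.size(): three indices per triangle, drawn as plain triangles
glDrawElements(GL_TRIANGLES, static_cast<GLsizei>(indices.size()), GL_UNSIGNED_INT, 0);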
From the image, your vertices are definitely messed up. It looks like you specified a stride of 9*sizeof(float) in your glVertexAttribPointer calls, but from what I can tell from your code your array is tightly packed.
glEnableVertexAttribArray(index1);
glVertexAttribPointer(index1,3,GL_FLOAT,GL_FALSE,0,0);
Also remove the stride from the color/texture coordinate attributes.