assimp model loading doesn't load all meshes - c++

I'm struggling to load a model with multiple meshes via Assimp. Here is the code for loading the model:
bool loadAssImp(
    const char * path,
    std::vector<unsigned short> & indices,
    std::vector<glm::vec3> & vertices,
    std::vector<glm::vec2> & uvs,
    std::vector<glm::vec3> & normals
) {
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(path, aiProcess_Triangulate);
    if (!scene) {
        cout << importer.GetErrorString() << endl;
        return false;
    }
    aiMesh* mesh;
    for (unsigned int l = 0; l < scene->mNumMeshes; l++)
    {
        mesh = scene->mMeshes[l];
        cout << l << endl;
        // Fill vertices positions
        vertices.reserve(mesh->mNumVertices);
        for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
            aiVector3D pos = mesh->mVertices[i];
            vertices.push_back(glm::vec3(pos.x, pos.y, pos.z));
        }
        // Fill vertices texture coordinates
        uvs.reserve(mesh->mNumVertices);
        for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
            aiVector3D UVW = mesh->mTextureCoords[0][i]; // Assume only 1 set of UV coords; AssImp supports 8 UV sets.
            uvs.push_back(glm::vec2(UVW.x, UVW.y));
        }
        // Fill vertices normals
        normals.reserve(mesh->mNumVertices);
        for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
            aiVector3D n = mesh->mNormals[i];
            normals.push_back(glm::vec3(n.x, n.y, n.z));
        }
        // Fill face indices
        indices.reserve(3 * mesh->mNumFaces);
        for (unsigned int i = 0; i < mesh->mNumFaces; i++) {
            // Assume the model has only triangles.
            indices.push_back(mesh->mFaces[i].mIndices[0]);
            indices.push_back(mesh->mFaces[i].mIndices[1]);
            indices.push_back(mesh->mFaces[i].mIndices[2]);
        }
    }
    // The "scene" pointer will be deleted automatically by "importer"
    return true;
}
The model I'm trying to load has two meshes: a head and a torso.
If I run the loop like this: for (int l = scene->mNumMeshes - 1; l >= 0; l--), the model loads almost correctly; some of the vertices aren't drawn correctly, but the head and almost all of the torso are drawn.
If I run the loop like this: for (unsigned int l = 0; l < scene->mNumMeshes; l++), only the torso is drawn (but it is drawn fully, without any missing parts).
Strangely, in both cases the vertex and index counts are the same.

You need to apply the transformations described by the aiNode instances as well. So when loading a scene, loop over the nodes, set each node's local transformation, and then draw the meshes assigned to that node.
Without applying this transformation information, the rendering of the model will be broken.
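For illustration, a rough sketch of such a traversal (the helper name and structure are mine, not taken from the question or the answer). Note also that when all meshes are appended into one shared vertex/index buffer, each mesh's indices have to be offset by the number of vertices already stored:

// Rough sketch, not the answer's exact code: recursively visit every aiNode,
// accumulate its transform, and append the meshes it references.
void processNode(const aiScene* scene, const aiNode* node, const aiMatrix4x4& parent,
                 std::vector<unsigned short>& indices, std::vector<glm::vec3>& vertices)
{
    aiMatrix4x4 transform = parent * node->mTransformation;   // global transform of this node

    for (unsigned int m = 0; m < node->mNumMeshes; m++) {
        const aiMesh* mesh = scene->mMeshes[node->mMeshes[m]];
        unsigned short offset = static_cast<unsigned short>(vertices.size()); // indices must be shifted

        for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
            aiVector3D pos = transform * mesh->mVertices[i];   // bake the node transform in
            vertices.push_back(glm::vec3(pos.x, pos.y, pos.z));
        }
        for (unsigned int i = 0; i < mesh->mNumFaces; i++)
            for (unsigned int j = 0; j < mesh->mFaces[i].mNumIndices; j++)
                indices.push_back(offset + static_cast<unsigned short>(mesh->mFaces[i].mIndices[j]));
        // ... uvs and normals would be appended the same way (normals should only use the rotation part) ...
    }

    for (unsigned int c = 0; c < node->mNumChildren; c++)
        processNode(scene, node->mChildren[c], transform, indices, vertices);
}

// Usage: processNode(scene, scene->mRootNode, aiMatrix4x4(), indices, vertices);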

Related

Calculating smooth normals on separate faces

I've been trying to achieve smooth normals on a generated terrain mesh (tile-grid style) to get smooth lighting, however I have only been able to achieve flat-shaded lighting, and I believe this might be because all the faces are separated (though some vertices may share the same position).
I've looked around for a similar question; the closest is this question, however I'm not sure how to go about implementing his solution of taking a percentage.
Currently I have this code, which is based on the typical way to calculate smooth normals, but it only achieves flat shading in my case.
// Reset Normals
for (int i = 0; i < this->vertexCount; i++) {
    this->vertices[i].normal = glm::vec3(0.0f);
}
// For each face
for (int i = 0; i < totalVerts; i += 3) {
    auto index0 = indices ? this->indices[i] : i;
    auto index1 = indices ? this->indices[i + 1] : i + 1;
    auto index2 = indices ? this->indices[i + 2] : i + 2;
    auto vertex0 = this->vertices[index0].position;
    auto vertex1 = this->vertices[index1].position;
    auto vertex2 = this->vertices[index2].position;
    auto normal = glm::cross(vertex1 - vertex0, vertex2 - vertex0);
    this->vertices[index0].normal += normal;
    this->vertices[index1].normal += normal;
    this->vertices[index2].normal += normal;
}
// Normalize
for (int i = 0; i < this->vertexCount; i++) {
    this->vertices[i].normal = glm::normalize(this->vertices[i].normal);
}
How would I go about changing this to smooth across surrounding faces? (As for why each face is separate: some faces may have vertices with specific properties which cannot be shared with other faces.)
For your reference, here is working code to get the face normals, vertices, and vertex normals:
void get_vertices_and_normals_from_triangles(vector<triangle> &t, vector<vec3> &fn, vector<vec3> &v, vector<vec3> &vn)
{
    // Face normals
    fn.clear();
    // Vertices
    v.clear();
    // Vertex normals
    vn.clear();

    if(0 == t.size())
        return;

    cout << "Triangles: " << t.size() << endl;
    cout << "Welding vertices" << endl;

    // Insert unique vertices into set.
    set<indexed_vertex_3> vertex_set;
    for(vector<triangle>::const_iterator i = t.begin(); i != t.end(); i++)
    {
        vertex_set.insert(i->vertex[0]);
        vertex_set.insert(i->vertex[1]);
        vertex_set.insert(i->vertex[2]);
    }

    cout << "Vertices: " << vertex_set.size() << endl;
    cout << "Generating vertex indices" << endl;

    vector<indexed_vertex_3> vv;
    // Add indices to the vertices.
    for(set<indexed_vertex_3>::const_iterator i = vertex_set.begin(); i != vertex_set.end(); i++)
    {
        size_t index = vv.size();
        vv.push_back(*i);
        vv[index].index = index;
    }

    for (size_t i = 0; i < vv.size(); i++)
    {
        vec3 vv_element(vv[i].x, vv[i].y, vv[i].z);
        v.push_back(vv_element);
    }

    vertex_set.clear();

    // Re-insert modified vertices into set.
    for(vector<indexed_vertex_3>::const_iterator i = vv.begin(); i != vv.end(); i++)
        vertex_set.insert(*i);

    cout << "Assigning vertex indices to triangles" << endl;

    // Find the three vertices for each triangle, by index.
    set<indexed_vertex_3>::iterator find_iter;
    for(vector<triangle>::iterator i = t.begin(); i != t.end(); i++)
    {
        find_iter = vertex_set.find(i->vertex[0]);
        i->vertex[0].index = find_iter->index;

        find_iter = vertex_set.find(i->vertex[1]);
        i->vertex[1].index = find_iter->index;

        find_iter = vertex_set.find(i->vertex[2]);
        i->vertex[2].index = find_iter->index;
    }

    vertex_set.clear();

    cout << "Calculating normals" << endl;
    fn.resize(t.size());
    vn.resize(v.size());

    for(size_t i = 0; i < t.size(); i++)
    {
        vec3 v0; // = t[i].vertex[1] - t[i].vertex[0];
        v0.x = t[i].vertex[1].x - t[i].vertex[0].x;
        v0.y = t[i].vertex[1].y - t[i].vertex[0].y;
        v0.z = t[i].vertex[1].z - t[i].vertex[0].z;

        vec3 v1; // = t[i].vertex[2] - t[i].vertex[0];
        v1.x = t[i].vertex[2].x - t[i].vertex[0].x;
        v1.y = t[i].vertex[2].y - t[i].vertex[0].y;
        v1.z = t[i].vertex[2].z - t[i].vertex[0].z;

        fn[i] = cross(v0, v1);
        fn[i] = normalize(fn[i]);

        vn[t[i].vertex[0].index] = vn[t[i].vertex[0].index] + fn[i];
        vn[t[i].vertex[1].index] = vn[t[i].vertex[1].index] + fn[i];
        vn[t[i].vertex[2].index] = vn[t[i].vertex[2].index] + fn[i];
    }

    for (size_t i = 0; i < vn.size(); i++)
        vn[i] = normalize(vn[i]);
}
The code to stuff the vertex data into a vector is as follows. Note that the unwelded vertices of the triangles are reconstructed in the following calls to vertex_data.push_back(v0.x);, etc.
void draw_mesh(void)
{
    glUseProgram(render.get_program());

    glUniformMatrix4fv(uniforms.render.proj_matrix, 1, GL_FALSE, &main_camera.projection_mat[0][0]);
    glUniformMatrix4fv(uniforms.render.mv_matrix, 1, GL_FALSE, &main_camera.view_mat[0][0]);
    glUniform1f(uniforms.render.shading_level, 1.0f);

    vector<float> vertex_data;

    for (size_t i = 0; i < triangles.size(); i++)
    {
        vec3 colour(0.0f, 0.8f, 1.0f);

        size_t v0_index = triangles[i].vertex[0].index;
        size_t v1_index = triangles[i].vertex[1].index;
        size_t v2_index = triangles[i].vertex[2].index;

        vec3 v0_fn(vertex_normals[v0_index].x, vertex_normals[v0_index].y, vertex_normals[v0_index].z);
        vec3 v1_fn(vertex_normals[v1_index].x, vertex_normals[v1_index].y, vertex_normals[v1_index].z);
        vec3 v2_fn(vertex_normals[v2_index].x, vertex_normals[v2_index].y, vertex_normals[v2_index].z);

        vec3 v0(triangles[i].vertex[0].x, triangles[i].vertex[0].y, triangles[i].vertex[0].z);
        vec3 v1(triangles[i].vertex[1].x, triangles[i].vertex[1].y, triangles[i].vertex[1].z);
        vec3 v2(triangles[i].vertex[2].x, triangles[i].vertex[2].y, triangles[i].vertex[2].z);

        vertex_data.push_back(v0.x);
        vertex_data.push_back(v0.y);
        vertex_data.push_back(v0.z);
        vertex_data.push_back(v0_fn.x);
        vertex_data.push_back(v0_fn.y);
        vertex_data.push_back(v0_fn.z);
        vertex_data.push_back(colour.x);
        vertex_data.push_back(colour.y);
        vertex_data.push_back(colour.z);

        vertex_data.push_back(v1.x);
        vertex_data.push_back(v1.y);
        vertex_data.push_back(v1.z);
        vertex_data.push_back(v1_fn.x);
        vertex_data.push_back(v1_fn.y);
        vertex_data.push_back(v1_fn.z);
        vertex_data.push_back(colour.x);
        vertex_data.push_back(colour.y);
        vertex_data.push_back(colour.z);

        vertex_data.push_back(v2.x);
        vertex_data.push_back(v2.y);
        vertex_data.push_back(v2.z);
        vertex_data.push_back(v2_fn.x);
        vertex_data.push_back(v2_fn.y);
        vertex_data.push_back(v2_fn.z);
        vertex_data.push_back(colour.x);
        vertex_data.push_back(colour.y);
        vertex_data.push_back(colour.z);
    }

    GLuint components_per_vertex = 9;
    const GLuint components_per_normal = 3;
    GLuint components_per_position = 3;
    const GLuint components_per_colour = 3;

    GLuint triangle_buffer;
    glGenBuffers(1, &triangle_buffer);

    GLuint num_vertices = static_cast<GLuint>(vertex_data.size()) / components_per_vertex;

    glBindBuffer(GL_ARRAY_BUFFER, triangle_buffer);
    glBufferData(GL_ARRAY_BUFFER, vertex_data.size() * sizeof(GLfloat), &vertex_data[0], GL_DYNAMIC_DRAW);

    glEnableVertexAttribArray(glGetAttribLocation(render.get_program(), "position"));
    glVertexAttribPointer(glGetAttribLocation(render.get_program(), "position"),
        components_per_position,
        GL_FLOAT,
        GL_FALSE,
        components_per_vertex * sizeof(GLfloat),
        NULL);

    glEnableVertexAttribArray(glGetAttribLocation(render.get_program(), "normal"));
    glVertexAttribPointer(glGetAttribLocation(render.get_program(), "normal"),
        components_per_normal,
        GL_FLOAT,
        GL_TRUE,
        components_per_vertex * sizeof(GLfloat),
        (const GLvoid*)(components_per_position * sizeof(GLfloat)));

    glEnableVertexAttribArray(glGetAttribLocation(render.get_program(), "colour"));
    glVertexAttribPointer(glGetAttribLocation(render.get_program(), "colour"),
        components_per_colour,
        GL_FLOAT,
        GL_TRUE,
        components_per_vertex * sizeof(GLfloat),
        (const GLvoid*)(components_per_normal * sizeof(GLfloat) + components_per_position * sizeof(GLfloat)));

    glDrawArrays(GL_TRIANGLES, 0, num_vertices);

    glDeleteBuffers(1, &triangle_buffer);
}
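If you prefer to keep your separated-face layout, the same position-welding idea can be applied in place: accumulate face normals in a map keyed by vertex position, then write the averaged normal back to every duplicate. This is only a sketch built on the Vertex layout implied by your question (position/normal members), not the code above:

#include <map>
#include <vector>
#include <glm/glm.hpp>

// Assumed from the question's code: a vertex with .position and .normal members.
struct Vertex { glm::vec3 position; glm::vec3 normal; /* ...other per-face attributes... */ };

// Strict weak ordering on positions so they can be used as map keys.
// Exact float comparison is fine here because duplicated vertices come from the same grid values.
struct Vec3Less {
    bool operator()(const glm::vec3& a, const glm::vec3& b) const {
        if (a.x != b.x) return a.x < b.x;
        if (a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    }
};

void smoothNormalsByPosition(std::vector<Vertex>& verts) {
    std::map<glm::vec3, glm::vec3, Vec3Less> accum;

    // One triangle per three consecutive vertices (unindexed, as in the question).
    for (size_t i = 0; i + 2 < verts.size(); i += 3) {
        glm::vec3 n = glm::cross(verts[i + 1].position - verts[i].position,
                                 verts[i + 2].position - verts[i].position);
        for (size_t k = 0; k < 3; k++) {
            auto it = accum.emplace(verts[i + k].position, glm::vec3(0.0f)).first;
            it->second += n;   // sum the face normals of every triangle touching this position
        }
    }

    // Every duplicate of a position receives the same averaged normal.
    for (Vertex& v : verts) {
        auto it = accum.find(v.position);
        if (it != accum.end())
            v.normal = glm::normalize(it->second);
    }
}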
I'm not sure if you're getting notifications of my edits.

glDrawElements has stopped working

I had glDrawElements working consistently, initially with a simple box and then with more complex shapes made up of a large number of vertices. Then it simply stopped drawing the mesh. I have taken the code back to its most basic, just drawing 2 triangles to make a 2D square. This also no longer works.
void createMesh(void) {
    float vertices[12];
    vertices[0] = -0.5; vertices[1] = -0.5; vertices[2] = 0.0; // Bottom left corner
    vertices[3] = -0.5; vertices[4] = 0.5; vertices[5] = 0.0;  // Top left corner
    vertices[6] = 0.5; vertices[7] = 0.5; vertices[8] = 0.0;   // Top right corner
    vertices[9] = 0.5; vertices[10] = -0.5; vertices[11] = 0.0; // Bottom right corner
    short indices[] = { 0, 1, 2, 0, 2, 3 };
    glEnableClientState(GL_VERTEX_ARRAY); // Enable Vertex Arrays
    glVertexPointer(3, GL_FLOAT, 0, vertices); // Set The Vertex Pointer To Our Vertex Data
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
}
The more advanced code that used to work is shown below:
void createMesh(void) {
    float vertices[(amountOfHorizontalScans * 480 * 3)]; // Amount of vertices
    // Build the array of vertices from a matrix of data
    int currentVertex = -1;
    std::vector<std::vector<double>> currentPointCloudMatrix = distanceCalculator.getPointCloudMatrix();
    double plotY = 0;
    double plotX = 0;
    for (int j = 0; j < currentPointCloudMatrix.size(); j++){
        std::vector<double> singleDistancesVector = currentPointCloudMatrix.at(j);
        for (int i = 0; i < singleDistancesVector.size(); i++){
            if (singleDistancesVector.at(i) != 0){
                vertices[++currentVertex] = plotX;
                vertices[++currentVertex] = plotY;
                vertices[++currentVertex] = singleDistancesVector.at(i);
            }
            plotX += 0.1;
        }
        plotX = 0;
        plotY += 0.2; // increment y by 0.2
    }
    // Creating the array of indices, 480 is the amount of columns
    int i = 0;
    short indices2[(amountOfHorizontalScans * 480 * 3)];
    for (int row = 0; row < amountOfHorizontalScans - 1; row++) {
        if ((row & 1) == 0) { // even rows
            for (int col = 0; col < 480; col++) {
                indices2[i++] = col + row * 480;
                indices2[i++] = col + (row + 1) * 480;
            }
        }
        else { // odd rows
            for (int col = 480 - 1; col > 0; col--) {
                indices2[i++] = col + (row + 1) * 480;
                indices2[i++] = col - 1 + row * 480;
            }
        }
    }
    glEnableClientState(GL_VERTEX_ARRAY); // Enable Vertex Arrays
    glVertexPointer(3, GL_FLOAT, 0, vertices); // Set The Vertex Pointer To Our Vertex Data
    glDrawElements(GL_TRIANGLE_STRIP, (amountOfHorizontalScans * 480 * 3), GL_UNSIGNED_SHORT, indices2);
    glDisableClientState(GL_VERTEX_ARRAY);
}
I am at a complete loss as to why it has stopped working, as it was working perfectly for a good number of runs and then just completely stopped. I have stepped through in the debugger and all the code is being reached; the vertices and indices are also populated with data. What could cause this to stop working?
EDIT:
So I am really quite confused now. I came back to this issue this morning, and everything worked fine again, as in the meshes would draw with no issues. After doing some tests and running the program a number of times it has simply stopped drawing meshes again!
Could this be something memory related? I am not 100% sure how glDrawElements stores the data passed to it, so could it be that I have to clear something somewhere that I keep filling up with data?
You cannot allocate dynamically sized arrays on the stack:
short indices2[(amountOfHorizontalScans * 480 * 3)];
In code:
short indices2[(amountOfHorizontalScans * 480 * 3)];
for (int row = 0; row < amountOfHorizontalScans - 1; row++) {
    if ((row & 1) == 0) { // even rows
        for (int col = 0; col < 480; col++) {
            indices2[i++] = col + row * 480;
            indices2[i++] = col + (row + 1) * 480;
        }
    }
    else { // odd rows
        for (int col = 480 - 1; col > 0; col--) {
            indices2[i++] = col + (row + 1) * 480;
            indices2[i++] = col - 1 + row * 480;
        }
    }
}
It must be:
short* indices2 = new short[(amountOfHorizontalScans * 480 * 3)];
and then you must free the allocated memory afterwards:
delete [] indices2;
Also, a triangle strip is a pretty tricky mode; did you try working directly with GL_TRIANGLES?
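Alternatively, a std::vector avoids the manual new/delete entirely and sidesteps the non-standard variable-length array; a minimal sketch:

#include <vector>

// The vector owns the allocation, so no delete[] is needed and the size can be
// computed at run time without relying on compiler-specific VLAs.
std::vector<short> indices2(amountOfHorizontalScans * 480 * 3);
// ... fill indices2[i++] exactly as in the loops above ...
// Note: this passes i (the number of indices actually written) rather than the full array size.
glDrawElements(GL_TRIANGLE_STRIP, static_cast<GLsizei>(i), GL_UNSIGNED_SHORT, indices2.data());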

Assimp access violation on existing aiVector3D

I'm using the following code to load an .obj file using Assimp. Something goes wrong with the texture coordinates. The aiVector3D I try to access exists, but as soon as I store its values in a temp variable, my application crashes.
Here's the code I use:
Assimp::Importer importer;
const aiScene* scene = importer.ReadFile(filename,
    aiProcess_CalcTangentSpace |
    aiProcess_Triangulate |
    aiProcess_JoinIdenticalVertices |
    aiProcess_SortByPType);
if (!scene)
{
    printf("[ASSIMP] ");
    printf(importer.GetErrorString());
    printf("\n");
    return nullptr;
}

Mesh* newMesh = new Mesh();
unsigned int vertexCount = scene->mMeshes[0]->mNumVertices;
unsigned int triangleCount = scene->mMeshes[0]->mNumFaces;
bool hasUv = scene->mMeshes[0]->HasTextureCoords(0);

newMesh->vertexCount = vertexCount;
newMesh->triangleCount = triangleCount;
newMesh->m_Vertices = new Vertex[vertexCount];
newMesh->m_Triangles = new Triangle[triangleCount];

for (unsigned int i = 0; i < vertexCount; i++) {
    aiVector3D vertexPosition = scene->mMeshes[0]->mVertices[i];
    aiVector3D vertexNormal = scene->mMeshes[0]->mNormals[i];
    newMesh->m_Vertices[i].pos = glm::vec3(vertexPosition.x, vertexPosition.y, vertexPosition.z);
    newMesh->m_Vertices[i].normal = glm::vec3(vertexNormal.x, vertexNormal.y, vertexNormal.z);
    if (hasUv) {
        aiVector3D uvCoordinates = scene->mMeshes[0]->mTextureCoords[i][0];
        printf("uvCoordinates: %f %f %f\n", uvCoordinates.x, uvCoordinates.y, uvCoordinates.z);
        newMesh->m_Vertices[i].u = uvCoordinates.x;
        newMesh->m_Vertices[i].v = uvCoordinates.y;
    }
}

for (unsigned int i = 0; i < triangleCount; i++) {
    aiFace face = scene->mMeshes[0]->mFaces[i];
    for (int j = 0; j < 3; j++) {
        Triangle* tri = &newMesh->m_Triangles[i];
        tri->vertex[j] = &newMesh->m_Vertices[face.mIndices[j]];
        if (tri->vertex[0]->normal.y == 1.0f || tri->vertex[0]->normal.y == -1.0f) {
            tri->color = glm::vec3(0.2f, 0.2f, 0.2f);
        }
        else
        {
            tri->color = glm::vec3(1.0f, 0.0f, 0.0f);
        }
    }
}
It crashes at the line printf("uvCoordinates: %f %f %f\n", uvCoordinates.x, uvCoordinates.y, uvCoordinates.z);, but if I remove this line, it crashes at newMesh->m_Vertices[i].u = uvCoordinates.x;. If I comment out the printf(), set both the u and v of the vertex to 0.0f and not use uvCoordinates at all, it still crashes on the newMesh->m_Vertices[i].u = uvCoordinates.x; line. If I leave the printf() uncommented, it prints the values of the uvCoordinates, but throws an access violation after.
I'm honestly out of ideas here. Here's a screenshot showing what I explained.
scene->mMeshes[0]->mTextureCoords[i][0] is the first texture coordinate of UV set i. What you want is the i-th coordinate of the first UV set, so it should be scene->mMeshes[0]->mTextureCoords[0][i];
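In context, the fixed part of the loop would look roughly like this (based on the question's code):

// mTextureCoords is indexed as [uvSet][vertex]; set 0 was already checked via HasTextureCoords(0).
if (hasUv) {
    aiVector3D uvCoordinates = scene->mMeshes[0]->mTextureCoords[0][i]; // set 0, vertex i
    newMesh->m_Vertices[i].u = uvCoordinates.x;
    newMesh->m_Vertices[i].v = uvCoordinates.y;
}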

How to draw this object with OpenGL

I am trying to draw something like this in OpenGL 2.1 using GLU and GLUT primitives such as gluCylinder, glutSolidCylinder, etc.
Basically it's a solid cylinder with a hole in the middle. So far I have been unable to achieve it.
I also tried to import a Wavefront .obj from Blender into my C++ program, but what I got was this.
glBegin(GL_TRIANGLES);
for(int i = 0; i < wheel->faces.size(); i++) {
    int nx = atoi(wheel->faces.at(i).at(0).substr(wheel->faces.at(i).at(0).find_last_of('/') + 1).c_str()) - 1;
    int ny = atoi(wheel->faces.at(i).at(1).substr(wheel->faces.at(i).at(1).find_last_of('/') + 1).c_str()) - 1;
    int nz = atoi(wheel->faces.at(i).at(2).substr(wheel->faces.at(i).at(2).find_last_of('/') + 1).c_str()) - 1;
    int vx = atoi(wheel->faces.at(i).at(0).substr(0, wheel->faces.at(i).at(0).find_first_of('/')).c_str()) - 1;
    int vy = atoi(wheel->faces.at(i).at(1).substr(0, wheel->faces.at(i).at(1).find_first_of('/')).c_str()) - 1;
    int vz = atoi(wheel->faces.at(i).at(2).substr(0, wheel->faces.at(i).at(2).find_first_of('/')).c_str()) - 1;
    glNormal3f(wheel->normals.at(nx).x, wheel->normals.at(ny).y, wheel->normals.at(nz).z);
    glVertex3f(wheel->vertices.at(vx).x, wheel->vertices.at(vy).y, wheel->vertices.at(vz).z);
}
glEnd();
The parsing should be correct; I double-checked it line by line. I guess I'm just drawing it the wrong way.
However, I don't really want to use .obj files, since my way of drawing makes animation slow (I am fairly new to OpenGL). And I don't want to use OpenGL 3+ either.
Any suggestions?
In your code you're emitting a single point per face; however, a face (a triangle) is supposed to have three points.
You may try this code:
glBegin(GL_TRIANGLES);
for(int i = 0; i < wheel->faces.size(); i++) {
    int v0 = atoi(wheel->faces.at(i).at(0).substr(0, wheel->faces.at(i).at(0).find_first_of('/')).c_str()) - 1;
    int v1 = atoi(wheel->faces.at(i).at(1).substr(0, wheel->faces.at(i).at(1).find_first_of('/')).c_str()) - 1;
    int v2 = atoi(wheel->faces.at(i).at(2).substr(0, wheel->faces.at(i).at(2).find_first_of('/')).c_str()) - 1;
    glVertex3f(wheel->vertices.at(v0).x, wheel->vertices.at(v0).y, wheel->vertices.at(v0).z);
    glVertex3f(wheel->vertices.at(v1).x, wheel->vertices.at(v1).y, wheel->vertices.at(v1).z);
    glVertex3f(wheel->vertices.at(v2).x, wheel->vertices.at(v2).y, wheel->vertices.at(v2).z);
}
glEnd();
I removed the normal part since I don't think you would need it for now. You may add it later.
EDITED:
This seems better:
glBegin(GL_TRIANGLES);
for(int i = 0; i < wheel->faces.size(); i++)
    for (int j = 0; j < 3; j++)
    {
        int v_id = atoi(wheel->faces.at(i).at(j).substr(0, wheel->faces.at(i).at(j).find_first_of('/')).c_str()) - 1;
        glVertex3f(wheel->vertices.at(v_id).x, wheel->vertices.at(v_id).y, wheel->vertices.at(v_id).z);
    }
glEnd();
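If you later want the normals back, a sketch based on the same parsing used in the question (vertex index before the first '/', normal index after the last '/') might look like:

glBegin(GL_TRIANGLES);
for (int i = 0; i < wheel->faces.size(); i++)
    for (int j = 0; j < 3; j++)
    {
        const std::string& f = wheel->faces.at(i).at(j);
        int v_id = atoi(f.substr(0, f.find_first_of('/')).c_str()) - 1; // vertex index before the first '/'
        int n_id = atoi(f.substr(f.find_last_of('/') + 1).c_str()) - 1; // normal index after the last '/'
        glNormal3f(wheel->normals.at(n_id).x, wheel->normals.at(n_id).y, wheel->normals.at(n_id).z);
        glVertex3f(wheel->vertices.at(v_id).x, wheel->vertices.at(v_id).y, wheel->vertices.at(v_id).z);
    }
glEnd();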
I found this small lib (tinyobjloader) for loading obj files.
I draw the loaded mesh with the following code:
glBegin(GL_TRIANGLES);
for (size_t f = 0; f < shapes[0].mesh.indices.size(); f++) {
    // Look the vertex up through the index buffer rather than using f directly.
    unsigned int idx = shapes[0].mesh.indices[f];
    glVertex3f(shapes[0].mesh.positions[3 * idx + 0],
               shapes[0].mesh.positions[3 * idx + 1],
               shapes[0].mesh.positions[3 * idx + 2]);
}
glEnd();
Load mesh:
std::vector<tinyobj::shape_t> shapes;

int loadMesh(const char *file) {
    const char* basepath = NULL;
    std::string err = tinyobj::LoadObj(shapes, file, basepath);
    if (!err.empty()) {
        std::cerr << err << std::endl;
        return 0;
    }
    std::cout << "# of shapes : " << shapes.size() << std::endl;
    return 1;
}
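For completeness, a rough sketch of how the two pieces might be wired together in a GLUT program (the window setup, the file name "wheel.obj", and the drawLoadedMesh() wrapper are placeholders, not from the original answer):

// Load once at startup, then issue the glBegin/glEnd block above every frame.
void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f); // placeholder camera offset
    drawLoadedMesh();                // the glBegin/glEnd loop shown above, wrapped in a function
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("obj viewer");
    if (!loadMesh("wheel.obj"))      // "wheel.obj" is a placeholder path
        return 1;
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}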

Isolating Player in Kinect Depth Camera

I'm trying to isolate a single player through the Kinect depth camera. I'm opening an NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX stream to process the player/depth info. The code that I'm using to draw a player is this:
if (LockedRect.Pitch != 0) {
    USHORT* curr = (USHORT*) LockedRect.pBits;
    const USHORT* dataEnd = curr + ((width/2)*(height/2));
    index = 0;
    while (curr < dataEnd && playerId != 0) {
        USHORT depth = *curr;
        USHORT realDepth = NuiDepthPixelToDepth(depth);
        BYTE intensity = 255;
        USHORT player = NuiDepthPixelToPlayerIndex(depth);
        // Only colour in the player
        if (player == playerId) {
            for (int i = index; i < index + 4; i++)
                dest[i] = intensity;
        }
        else {
            for (int i = index; i < index + 4; i++)
                dest[i] = 0;
        }
        index += 4;
        curr += 1;
    }
}
dest is an OpenGL texture.
The problem I'm having is that the variable player changes when a second person steps into the frame, which causes the person drawn in the texture to become the new person.
OK, I figured out how to do it.
I needed to get the skeleton ID (0 through 5), which maps to the depth-pixel player index (1 through 6). So when the sensor finds a skeleton, I save its ID and set that as playerId. playerId is only cleared when its associated skeleton is lost by the sensor.
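A minimal sketch of that idea with the Kinect for Windows SDK 1.x skeleton API (my illustration, assuming skeleton tracking has already been enabled on the sensor; the function and variable names are placeholders):

// Lock onto the first tracked skeleton and remember its slot; the player index stored
// in the depth pixels is that slot + 1. Clear playerId when the slot stops being tracked
// so a new player can be acquired.
void updateTrackedPlayer(USHORT& playerId, int& trackedSlot)
{
    NUI_SKELETON_FRAME skeletonFrame = {0};
    if (FAILED(NuiSkeletonGetNextFrame(0, &skeletonFrame)))
        return;

    // If we already have a player, check whether that skeleton slot is still tracked.
    if (playerId != 0) {
        if (skeletonFrame.SkeletonData[trackedSlot].eTrackingState != NUI_SKELETON_TRACKED)
            playerId = 0; // lost the player, allow re-acquisition
        return;
    }

    // Otherwise pick the first tracked skeleton (slots 0..5 map to player indices 1..6).
    for (int slot = 0; slot < NUI_SKELETON_COUNT; slot++) {
        if (skeletonFrame.SkeletonData[slot].eTrackingState == NUI_SKELETON_TRACKED) {
            trackedSlot = slot;
            playerId = static_cast<USHORT>(slot + 1);
            break;
        }
    }
}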