I am using https://github.com/syoyo/tinygltf to load a glTF model. glTF models have a buffer that contains the position data (vertex coordinates). I want to print these coordinates to the screen.
for (size_t i = 0; i < model.meshes.size(); ++i)
{
    tinygltf::Mesh &gltfmesh = model.meshes[i];
    // a mesh can hold several primitives, so don't index them with i
    for (tinygltf::Primitive &prim : gltfmesh.primitives)
    {
        tinygltf::Accessor &acess = model.accessors[prim.attributes["POSITION"]];
        tinygltf::BufferView &bview = model.bufferViews[acess.bufferView];
        tinygltf::Buffer &bfer = model.buffers[bview.buffer];
        // cout << bfer.data ... I need to cout the vertex arrays somehow
    }
}
As you can see, the coordinates are now in Buffer& bfer. I want to extract them into Vec3F structures (a Vec3F struct holds three floats: x, y, z), or into anything I can print.
I should also mention that the buffer does not contain only vertex coordinates; there is an offset to them, given in BufferView& bview (BufferViews), so I will need to use bview.byteOffset (and the accessor's byteOffset) to locate the vertices in the buffer.
Thank you so much if you can help me out !
I created a new class PieSource (based on vtkCylinderSource) which creates a number of triangles together with the colors they should have:
int vtkPieSource::RequestData(vtkInformation *info,
vtkInformationVector **infoV,
vtkInformationVector *outputVector) {
vtkInformation *outInfo = outputVector->GetInformationObject(0);
vtkPolyData *output = vtkPolyData::SafeDownCast(outInfo->Get(vtkDataObject::DATA_OBJECT()));
vtkPoints *newPoints;
vtkFloatArray *newNormals;
vtkUnsignedCharArray *newCols = vtkUnsignedCharArray::New();
newCols->SetNumberOfComponents(3);
newCols->SetName("Colors");
newPoints = vtkPoints::New();
newPoints->SetDataType(VTK_FLOAT);
newPoints->Allocate(iNumVerts);
newPolys = vtkCellArray::New();
newPolys->Allocate(iNumTriangles);
// -- here we create the points/triangles and determine their colors --
for (unsigned int i = 0; i < iNumVerts; i += 3) {
    vtkTriangle *triangle1 = vtkTriangle::New();
    triangle1->GetPointIds()->SetId(0, i);
    triangle1->GetPointIds()->SetId(1, i + 1);
    triangle1->GetPointIds()->SetId(2, i + 2);
    newPolys->InsertNextCell(triangle1);
    triangle1->Delete(); // InsertNextCell copies the ids; avoid leaking
}
output->SetPoints(newPoints);
newPoints->Delete();
output->GetPointData()->SetScalars(newCols);
newCols->Delete();
output->SetPolys(newPolys);
newPolys->Delete();
return 1; // VTK expects RequestData to return 1 on success
}
In my application I create a vtkGlyph3DMapper which uses the output of PieSource:
vtkNew<vtkPieSource> pieSource;
pieSource->SetData(5, testData);
vtkNew<vtkGlyph3DMapper> glyph3Dmapper;
glyph3Dmapper->SetSourceConnection(pieSource->GetOutputPort());
glyph3Dmapper->SetInputData(polydata);
// glyph3Dmapper->SelectColorArray("Colors");
(polydata contains 2 triangles with colors -- without this InputData nothing is drawn)
When I run this, the glyph geometries are drawn in the correct places with the correct orientations, but they don't have the colors I assigned them (all triangles are grey). If I uncomment the line glyph3Dmapper->SelectColorArray("Colors");, the entire glyph is drawn in a single color (the one specified in the array named "Colors").
Is it possible to have a glyph whose triangles are individually colored?
If yes how must i do this?
First, since your newCols array is empty, I don't know which color you are expecting.
Then, if you want a color associated with each triangle, you should add your data array to CellData rather than PointData. The array should have one tuple per cell (a triangle here).
A small style remark: instead of vtkXXX *myObj = vtkXXX::New();, it is recommended to use a locally scoped object instantiated with vtkNew<vtkXXX> myObject;, so you do not need to Delete it.
I am trying to draw this free airwing model from Starfox 64 in OpenGL. I converted the .fbx file to .obj in Blender and am using tinyobjloader to load it (all requirements for my university subject).
I pretty much slapped the example code (with the modern API) into my program, replaced the file name, and grabbed the attrib.vertices and attrib.normals vectors to draw the airwing.
I can view the vertices with GL_POINTS:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0]);
glDrawArrays(GL_POINTS, 0, vertices.size() / 3);
glDisableClientState(GL_VERTEX_ARRAY);
Which looks correct (I ... think?):
But I'm not sure how to render a solid model. Simply replacing GL_POINTS with GL_TRIANGLES (shown) or GL_QUADS doesn't work:
I am using OpenGL 1.1 w/ GLUT (again, university). I think I just don't know what I'm doing, really. Help?
Edit: When I wrote this answer originally I had only worked with vertices and normals. I've since figured out how to get materials and textures working, but don't have time to write that up at the moment. I will add it when I have some time, but it's largely the same logic if you wanna poke around the tinyobj header yourselves in the meantime. :-)
I've learned a lot about TinyOBJLoader in the last day so I hope this helps someone in the future. Credit goes to this GitHub repository which uses TinyOBJLoader very clearly and cleanly in fileloader.cpp.
To summarise what I learned studying that code:
Shapes are of type shape_t. For an OBJ containing a single object, shapes has size 1. OBJ files can contain multiple objects, but I haven't used the format enough to say more.
Each shape_t has a member mesh of type mesh_t. This member stores the information parsed from the face rows of the OBJ. You can find the number of faces your object has by checking the size of the material_ids member (one entry per face).
The vertex, texture-coordinate and normal indices of each face are stored in the indices member of the mesh, of type std::vector<index_t>. This is a flattened vector: each index_t holds a vertex_index, texcoord_index and normal_index, and for triangulated faces f1, f2 ... fi it stores the three index_t entries of f1, then the three of f2, and so on. Remember that these indices refer to whole vertices, texture coordinates or normals. Personally, I triangulated my model by importing it into Blender and exporting with triangulation turned on; TinyOBJ also has its own triangulation algorithm, enabled by setting the reader_config.triangulate flag.
I've only worked with the vertices and normals so far. Here's how I access and store them to be used in OpenGL:
Convert the flat vertices and normal arrays into groups of 3, i.e. 3D vectors
for (size_t vec_start = 0; vec_start < attrib.vertices.size(); vec_start += 3) {
vertices.emplace_back(
attrib.vertices[vec_start],
attrib.vertices[vec_start + 1],
attrib.vertices[vec_start + 2]);
}
for (size_t norm_start = 0; norm_start < attrib.normals.size(); norm_start += 3) {
normals.emplace_back(
attrib.normals[norm_start],
attrib.normals[norm_start + 1],
attrib.normals[norm_start + 2]);
}
This way the index of the vertices and normals containers will correspond with the indices given by the face entries.
Loop over every face, and store the vertex and normal indices in a separate object
for (auto shape = shapes.begin(); shape < shapes.end(); ++shape) {
const std::vector<tinyobj::index_t>& indices = shape->mesh.indices;
const std::vector<int>& material_ids = shape->mesh.material_ids;
for (size_t index = 0; index < material_ids.size(); ++index) {
// offset by 3 because each triangulated face owns three consecutive index_t entries (one per corner)
triangles.push_back(Triangle(
{ indices[3 * index].vertex_index, indices[3 * index + 1].vertex_index, indices[3 * index + 2].vertex_index },
{ indices[3 * index].normal_index, indices[3 * index + 1].normal_index, indices[3 * index + 2].normal_index })
);
}
}
Drawing is then quite easy:
glBegin(GL_TRIANGLES);
for (auto triangle = triangles.begin(); triangle != triangles.end(); ++triangle) {
glNormal3f(normals[triangle->normals[0]].X, normals[triangle->normals[0]].Y, normals[triangle->normals[0]].Z);
glVertex3f(vertices[triangle->vertices[0]].X, vertices[triangle->vertices[0]].Y, vertices[triangle->vertices[0]].Z);
glNormal3f(normals[triangle->normals[1]].X, normals[triangle->normals[1]].Y, normals[triangle->normals[1]].Z);
glVertex3f(vertices[triangle->vertices[1]].X, vertices[triangle->vertices[1]].Y, vertices[triangle->vertices[1]].Z);
glNormal3f(normals[triangle->normals[2]].X, normals[triangle->normals[2]].Y, normals[triangle->normals[2]].Z);
glVertex3f(vertices[triangle->vertices[2]].X, vertices[triangle->vertices[2]].Y, vertices[triangle->vertices[2]].Z);
}
glEnd();
I've got a 2D array of RGB values that represent a plane's painted vertices. I'd like to store the final vertex colours of the plane in a .dds file so I can load the .dds file later on as a texture.
How might I approach this?
Thanks to Chuck in the comments, I've found this solution works for me. There are likely improvements to be made. The approach can be broken into a few steps.
Step one is to capture the device context and D3D device used for rendering, and to fill in the texture description used when creating the texture that SaveDDSTextureToFile (ScreenGrab) will capture. The description below assumes each vertex colour is stored as four 32-bit floats (R, G, B, A).
void DisplayChunk::SaveVertexColours(std::shared_ptr<DX::DeviceResources> DevResources)
{
// Setup D3DDeviceContext and D3DDevice
auto devicecontext = DevResources->GetD3DDeviceContext();
auto device = DevResources->GetD3DDevice();
// Create Texture2D and Texture2D Description
ID3D11Texture2D* terrain_texture;
D3D11_TEXTURE2D_DESC texture_desc;
// Set up a texture description
texture_desc.Width = TERRAINRESOLUTION;
texture_desc.Height = TERRAINRESOLUTION;
texture_desc.MipLevels = texture_desc.ArraySize = 1;
texture_desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
texture_desc.SampleDesc.Count = 1;
texture_desc.SampleDesc.Quality = 0;
texture_desc.Usage = D3D11_USAGE_DYNAMIC;
texture_desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texture_desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
texture_desc.MiscFlags = 0;
Step two is to create a vector of floats and populate it with the RGBA values. I used a vector for convenience: each iteration pushes the separate float values for R, G, B, and A, so at each vertex position you push back the required sixteen bytes (four floats of four bytes each).
// Create vertex colour vector
std::vector<float> colour_vector;
for (int i = 0; i < TERRAINRESOLUTION; i++) {
for (int j = 0; j < TERRAINRESOLUTION; j++) {
colour_vector.push_back((float)m_terrainGeometry[i][j].color.x);
colour_vector.push_back((float)m_terrainGeometry[i][j].color.y);
colour_vector.push_back((float)m_terrainGeometry[i][j].color.z);
colour_vector.push_back((float)m_terrainGeometry[i][j].color.w);
}
}
Step three is to fill a buffer with the vertex RGBA values. The buffer size must equal the total number of bytes required: four bytes per float, four floats per vertex, and TERRAINRESOLUTION * TERRAINRESOLUTION vertices. Example for a 10 x 10 terrain: 4 (bytes) x 4 (floats) x 10 (width) x 10 (height) = 1600 bytes to store the RGBA of each vertex.
// Initialise buffer parameters
const int components = 4;
const int length = components * TERRAINRESOLUTION * TERRAINRESOLUTION;
// Fill buffer with vertex colours
float* buffer = new float[length]; // new[] counts elements, not bytes
for (int i = 0; i < length; i++)
buffer[i] = colour_vector[i];
The final step is to create the texture data from the buffer contents. pSysMem takes a pointer to the buffer, used as the initialisation data, and SysMemPitch must be set to the byte size of one row. With CreateTexture2D, a new texture is created from the stored values, and SaveDDSTextureToFile saves the texture resource to an external .dds file. Don't forget to delete the buffer after use.
// Set the texture data using the buffer contents
D3D11_SUBRESOURCE_DATA texture_data;
texture_data.pSysMem = (void*)buffer;
texture_data.SysMemPitch = TERRAINRESOLUTION * components * sizeof(float);
// Create the texture using the terrain colour data
device->CreateTexture2D(&texture_desc, &texture_data, &terrain_texture);
// Save the texture to a .dds file
HRESULT hr = SaveDDSTextureToFile(devicecontext, terrain_texture, L"terrain_output.dds");
// Release the texture and delete the buffer
terrain_texture->Release();
delete[] buffer;
}
Some resources I used whilst implementing:
(DDS Guide)
(ScreenGrab Source)
(ScreenGrab Example)
(Creating textures in DirectX)
(Addressing scheme example (navigating buffer/array correctly))
I've been trying to load a Wavefront obj model with ASSIMP. However, I can't get the mtl material colors (Kd RGB) to work. I know how to load them, but I don't know how to get the corresponding color for each vertex.
usemtl material_11
f 7//1 8//1 9//2
f 10//1 11//1 12//2
For example, the Wavefront obj snippet above means that those faces use material_11.
Q: So how can I get the material corresponding to each vertex?
Error
Wavefront obj materials aren't in the right vertices:
Original Model (Rendered with ASSIMP model Viewer):
Model renderered with my code:
Code:
Code that I use for loading mtl materials color:
std::vector<color4<float>> colors = std::vector<color4<float>>();
...
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
const aiMesh* model = scene->mMeshes[i];
const aiMaterial *mtl = scene->mMaterials[model->mMaterialIndex];
color4<float> color = color4<float>(1.0f, 1.0f, 1.0f, 1.0f);
aiColor4D diffuse;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_DIFFUSE, &diffuse))
color = color4<float>(diffuse.r, diffuse.g, diffuse.b, diffuse.a);
colors.push_back(color);
...
}
Code for creating the vertices:
vertex* vertices_arr = new vertex[positions.size()];
for (unsigned int i = 0; i < positions.size(); i++)
{
vertices_arr[i].SetPosition(positions.at(i));
vertices_arr[i].SetTextureCoordinate(texcoords.at(i));
}
// Code for setting vertices colors (I'm just setting it in ASSIMP vertices order, since I don't know how to set it in the correct order).
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
const unsigned int vertices_size = scene->mMeshes[i]->mNumVertices;
for (unsigned int k = 0; k < vertices_size; k++)
{
vertices_arr[k].SetColor(colors.at(i));
}
}
EDIT:
It looks like the model's vertex positions aren't being loaded correctly either, even when I disable face culling and change the background color.
Ignoring issues related to the number of draw calls and the extra storage, bandwidth and processing for vertex colours: it looks like vertex* vertices_arr = new vertex[positions.size()]; is one big array you're creating to hold the entire model (which has many meshes, each with one material). Assuming your first loop is correct, positions contains all positions for all meshes of your model. The second loop then duplicates each mesh colour for every vertex within the mesh. However, k always restarts at zero, when it needs to begin after the last vertex of the previous mesh. Instead, try:
int colIdx = 0;
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
const unsigned int vertices_size = scene->mMeshes[i]->mNumVertices;
for (unsigned int k = 0; k < vertices_size; k++)
{
vertices_arr[colIdx++].SetColor(colors.at(i));
}
}
assert(colIdx == positions.size()); //double check
As you say, if the geometry isn't drawing properly, maybe positions doesn't contain all vertex data. Perhaps a similar issue to the above code? Another issue could be in joining the indices for each mesh. The indices will all need to be updated with an offset to the new vertex locations within the vertices_arr array. Though now I'm just throwing out guesses.
I'm having trouble figuring out how to ensure particles aligned in a square will always be placed in the middle of the screen, regardless of the size of the square. The square is created with:
for(int i=0; i<(int)sqrt(d_MAXPARTICLES); i++) {
for(int j=0; j<(int)sqrt(d_MAXPARTICLES); j++) {
Particle particle;
glm::vec2 d2Pos = glm::vec2(j*0.06, i*0.06) + glm::vec2(-17.0f,-17.0f);
particle.pos = glm::vec3(d2Pos.x,d2Pos.y,-70);
particle.life = 1000.0f;
particle.cameradistance = -1.0f;
particle.r = d_R;
particle.g = d_G;
particle.b = d_B;
particle.a = d_A;
particle.size = d_SIZE;
d_particles_container.push_back(particle);
}
}
the most important part is the glm::vec2(-17.0f, -17.0f) which correctly positions the square in the center of the screen. This looks like:
The problem is that my program supports any number of particles, so with a different particle count the square is off center. How can I change glm::vec2(-17.0f, -17.0f) to account for a different number of particles?
Do not make the position dependent on the i and j indices if you want a fixed position.
glm::vec2 d2Pos = glm::vec2(centerOfScreenX,centerOfScreenY); //much better
But how to compute centerOfSCreen? It depends if you are using a 2D or a 3D camera.
If you use a fixed 2D camera, then center is (Width/2,Height/2).
If you use a moving 3d camera, you need to launch a ray from the center of the screen and get any point on the ray (so you just use X,Y and then set Z as you wish)
Edit:
Now that the question is clearer here is the answer:
int maxParticles = (int)sqrt(d_MAXPARTICLES);
float factorx = (i - (float)maxParticles / 2.0f) / (float)maxParticles;
float factory = (j - (float)maxParticles / 2.0f) / (float)maxParticles;
glm::vec2 particleLocaleDelta = glm::vec2(extentX * factorx, extentY * factory);
glm::vec2 d2Pos = glm::vec2(centerOfScreenX, centerOfScreenY);
d2Pos += particleLocaleDelta;
where extentX and extentY are the dimensions of the "big square", and factorx/factory are the current scale from i and j. The code is not optimized and untested, but it should work (assuming a 2D camera whose world units correspond to pixel units).