I am currently trying to load the Sponza model for my PBR renderer. It is in the glTF format. I have been able to load smaller models quite successfully, but when I try to load the Sponza scene, some meshes aren't properly scaled. This image demonstrates my issue (I am not actually doing the PBR calculations here, that's just the albedo of the model). The wall is there, but it's only a 1x1 quad with the wall texture, even though it's supposed to be a lot bigger and stretch across the entire model. The same goes for every wall and every floor in the model. The model is not broken, as Blender and the default Windows model viewer can load it correctly. I am even applying the mNode->mTransformation, but it still doesn't work. My model loading code looks like this:
void Model::LoadModel(const fs::path& filePath)
{
Assimp::Importer importer;
std::string pathString = filePath.string();
m_Scene = importer.ReadFile(pathString, aiProcess_Triangulate | aiProcess_GenNormals | aiProcess_CalcTangentSpace);
if (!m_Scene || m_Scene->mFlags & AI_SCENE_FLAGS_INCOMPLETE || !m_Scene->mRootNode)
{
// Error handling
}
ProcessNode(m_Scene->mRootNode, m_Scene, glm::mat4(1.0f));
}
void Model::ProcessNode(aiNode* node, const aiScene* m_Scene, glm::mat4 parentTransformation)
{
glm::mat4 transformation = AiMatrix4x4ToGlm(&node->mTransformation);
glm::mat4 globalTransformation = transformation * parentTransformation;
for (int i = 0; i < node->mNumMeshes; i++)
{
aiMesh* assimpMesh = m_Scene->mMeshes[node->mMeshes[i]];
// This will just get the vertex data, indices, tex coords, etc.
Mesh neroMesh = ProcessMesh(assimpMesh, m_Scene);
neroMesh.SetTransformationMatrix(globalTransformation);
m_Meshes.push_back(neroMesh);
}
for (int i = 0; i < node->mNumChildren; i++)
{
ProcessNode(node->mChildren[i], m_Scene, globalTransformation);
}
}
glm::mat4 Model::AiMatrix4x4ToGlm(const aiMatrix4x4* from)
{
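// Assimp's aiMatrix4x4 is row-major while glm::mat4 is column-major,
// so the element-wise copy below also performs the required transpose.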
glm::mat4 to;
to[0][0] = (GLfloat)from->a1; to[0][1] = (GLfloat)from->b1; to[0][2] = (GLfloat)from->c1; to[0][3] = (GLfloat)from->d1;
to[1][0] = (GLfloat)from->a2; to[1][1] = (GLfloat)from->b2; to[1][2] = (GLfloat)from->c2; to[1][3] = (GLfloat)from->d2;
to[2][0] = (GLfloat)from->a3; to[2][1] = (GLfloat)from->b3; to[2][2] = (GLfloat)from->c3; to[2][3] = (GLfloat)from->d3;
to[3][0] = (GLfloat)from->a4; to[3][1] = (GLfloat)from->b4; to[3][2] = (GLfloat)from->c4; to[3][3] = (GLfloat)from->d4;
return to;
}
I don't think the ProcessMesh function is necessary for this, but if it is I can post it as well.
Does anyone see any issues? I am really getting desperate over this...
Turns out I am the dumbest human being on earth. I implemented Parallax Mapping in my code, but disabled it immediately after as it was not working correctly on every model. BUT I still had this line in my shader code:
if (texCoords.x > 1.0 || texCoords.y > 1.0 || texCoords.x < 0.0 || texCoords.y < 0.0)
discard;
Which ended up removing some parts of my model.
If I had to rate my stupidity on a scale from 1 to 10, I'd give it a 100000000000.
I'm using a translator, so please bear with me if anything is unclear. I'm currently trying to render a 3D model in DX12 using the FBX SDK.
There is no problem with the model used in the current example code, but the moment I use a 3D model from sites such as Mixamo and Turbo,
the iTangentCnt or iBinormalCnt value is zero, so the program stops working.
Is this because there are no tangents and binormals in the model?
Other models I tried have no values either, so I'm wondering whether models simply don't include them. How did others solve this problem?
void CFBXLoader::GetTangent(FbxMesh* _pMesh, tContainer* _pContainer, int _iIdx, int _iVtxOrder)
{
int iTangentCnt = _pMesh->GetElementTangentCount();
if (1 != iTangentCnt)
assert(NULL);
FbxGeometryElementTangent* pTangent = _pMesh->GetElementTangent();
UINT iTangentIdx = 0;
if (pTangent->GetMappingMode() == FbxGeometryElement::eByPolygonVertex)
{
if (pTangent->GetReferenceMode() == FbxGeometryElement::eDirect)
iTangentIdx = _iVtxOrder;
else
iTangentIdx = pTangent->GetIndexArray().GetAt(_iVtxOrder);
}
else if (pTangent->GetMappingMode() == FbxGeometryElement::eByControlPoint)
{
if (pTangent->GetReferenceMode() == FbxGeometryElement::eDirect)
iTangentIdx = _iIdx;
else
iTangentIdx = pTangent->GetIndexArray().GetAt(_iIdx);
}
FbxVector4 vTangent = pTangent->GetDirectArray().GetAt(iTangentIdx);
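// Note: the Y and Z components are swapped below, presumably to convert
// between the file's Z-up and the engine's Y-up axis conventions.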
_pContainer->vecTangent[_iIdx].x = (float)vTangent.mData[0];
_pContainer->vecTangent[_iIdx].y = (float)vTangent.mData[2];
_pContainer->vecTangent[_iIdx].z = (float)vTangent.mData[1];
}
void CFBXLoader::GetBinormal(FbxMesh* _pMesh, tContainer* _pContainer, int _iIdx, int _iVtxOrder)
{
int iBinormalCnt = _pMesh->GetElementBinormalCount();
if (1 != iBinormalCnt)
assert(NULL);
FbxGeometryElementBinormal* pBinormal = _pMesh->GetElementBinormal();
UINT iBinormalIdx = 0;
if (pBinormal->GetMappingMode() == FbxGeometryElement::eByPolygonVertex)
{
if (pBinormal->GetReferenceMode() == FbxGeometryElement::eDirect)
iBinormalIdx = _iVtxOrder;
else
iBinormalIdx = pBinormal->GetIndexArray().GetAt(_iVtxOrder);
}
else if (pBinormal->GetMappingMode() == FbxGeometryElement::eByControlPoint)
{
if (pBinormal->GetReferenceMode() == FbxGeometryElement::eDirect)
iBinormalIdx = _iIdx;
else
iBinormalIdx = pBinormal->GetIndexArray().GetAt(_iIdx);
}
FbxVector4 vBinormal = pBinormal->GetDirectArray().GetAt(iBinormalIdx);
_pContainer->vecBinormal[_iIdx].x = (float)vBinormal.mData[0];
_pContainer->vecBinormal[_iIdx].y = (float)vBinormal.mData[2];
_pContainer->vecBinormal[_iIdx].z = (float)vBinormal.mData[1];
}
If your run-time scenario requires tangents/binormals to operate, then you need to handle the case where they are not already defined in the FBX. If iBinormalCnt is 0, then you can generate the tangents/binormals in your own program using the surface normals and texture coordinates.
Lengyel, Eric. "7.5 Tangent Space", Foundations of Game Engine Development: Volume 2 - Rendering, Terathon Software LLC (2019) link
Mittring, Martin. "Triangle Mesh Tangent Space Calculation", ShaderX 4: Advanced Rendering Techniques, 2006
See DirectXMesh's ComputeTangentFrame function.
For an example of a complete model exporter using Autodesk FBX, see the DirectX SDK Sample Content Exporter.
Note another option for rendering with tangent space without exporting the tangent/binormals is to compute them in the pixel shader at render time. I use this technique in the DirectX Tool Kit.
Schüler, Christian. "Normal Mapping without Precomputed Tangents", ShaderX 5, Chapter 2.6, pp. 131–140, and this blog post
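For reference, here is a minimal sketch of the per-triangle tangent accumulation those references describe, assuming a simple mesh layout of separate position/UV/index arrays (none of this is FBX SDK API):
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

void ComputeTangents(const std::vector<Vec3>& pos,
                     const std::vector<Vec2>& uv,
                     const std::vector<unsigned>& indices,
                     std::vector<Vec3>& tangents)
{
    tangents.assign(pos.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        unsigned i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        // Edge vectors in object space and in UV space.
        Vec3 e1{pos[i1].x - pos[i0].x, pos[i1].y - pos[i0].y, pos[i1].z - pos[i0].z};
        Vec3 e2{pos[i2].x - pos[i0].x, pos[i2].y - pos[i0].y, pos[i2].z - pos[i0].z};
        float du1 = uv[i1].x - uv[i0].x, dv1 = uv[i1].y - uv[i0].y;
        float du2 = uv[i2].x - uv[i0].x, dv2 = uv[i2].y - uv[i0].y;
        float det = du1 * dv2 - du2 * dv1;
        if (std::fabs(det) < 1e-8f)
            continue; // degenerate UV mapping, skip this triangle
        float r = 1.0f / det;
        // Tangent = object-space direction of increasing u.
        Vec3 t{r * (dv2 * e1.x - dv1 * e2.x),
               r * (dv2 * e1.y - dv1 * e2.y),
               r * (dv2 * e1.z - dv1 * e2.z)};
        const unsigned tri[3] = {i0, i1, i2};
        for (unsigned idx : tri)
        {
            tangents[idx].x += t.x;
            tangents[idx].y += t.y;
            tangents[idx].z += t.z;
        }
    }
    // Average by normalizing; a full implementation would also
    // orthonormalize against the vertex normal n and derive the
    // binormal as cross(n, t) * handedness.
    for (Vec3& t : tangents)
    {
        float len = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
        if (len > 0.0f) { t.x /= len; t.y /= len; t.z /= len; }
    }
}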
If tangents and bitangents are not present in your fbx file (you still need normals and one set of texture coordinates for this to work), you can use the GenerateTangentsData method of the FbxMesh object to build them.
bool result = _pMesh->GenerateTangentsData(uvSetIndex, overwrite, ignoretangentflip);
You will want overwrite and ignoretangentflip set to false most of the time.
uvSetIndex will be 0 most of the time as well (unless you have multiple UV sets on your model).
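Putting that together with the loader above, a minimal sketch might look like this (EnsureTangentData is a hypothetical helper name, not an FBX SDK function):
// Call before GetTangent/GetBinormal; requires the mesh to have normals
// and at least one UV set, otherwise generation will fail.
void EnsureTangentData(FbxMesh* _pMesh)
{
    if (_pMesh->GetElementTangentCount() == 0 ||
        _pMesh->GetElementBinormalCount() == 0)
    {
        // uvSetIndex = 0, overwrite = false, ignoretangentflip = false
        bool result = _pMesh->GenerateTangentsData(0, false, false);
        assert(result);
    }
}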
I have recently been working on a project that involves the PhysX library. I need to create a PxRigidDynamic actor from a triangle mesh. I saw the example that the documentation provided, but I have some issues with this method.
The issues that I had were mainly:
The object wasn't affected by gravity and would just sit there and vibrate (although I have a feeling I did something else wrong).
From what I have read, to create a rigid body from a triangle mesh, the object needs to be kinematic. This then throws an error when I try to change the object's velocity.
Here is how I am currently creating the triangle mesh:
PhysX::PxTriangleMeshDesc meshDesc;
meshDesc.points.count = numVertices;
meshDesc.points.stride = sizeof(float) * 3;
meshDesc.points.data = vertices;
meshDesc.triangles.count = numIndices;
meshDesc.triangles.stride = 3 * sizeof(uint32_t);
meshDesc.triangles.data = indices;
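// Note: PxTriangleMeshDesc::triangles.count is the number of triangles,
// not the number of indices, so make sure numIndices is a triangle count
// here (i.e. index count / 3).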
PhysX::PxDefaultMemoryOutputStream writeBuffer;
PhysX::PxTriangleMeshCookingResult::Enum result;
bool status = cooking->cookTriangleMesh(meshDesc, writeBuffer, &result);
if (!status)
return false;
PhysX::PxDefaultMemoryInputData readBuffer(writeBuffer.getData(),
writeBuffer.getSize());
PhysX::PxTriangleMesh* triangleMesh =
physics->createTriangleMesh(readBuffer);
dynamicObject = physics->createRigidDynamic(PhysX::PxTransform(PhysX::PxVec3(0.0f)));
if (!dynamicObject)
return false;
dynamicObject->setMass(10.0f);
dynamicObject->setRigidBodyFlag(
PhysX::PxRigidBodyFlag::eKINEMATIC, true);
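// Note: PhysX only supports triangle-mesh geometry on static or kinematic
// actors, which is why the kinematic flag is required here; a simulated
// (non-kinematic) dynamic actor needs convex or primitive shapes instead.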
material = physics->createMaterial(0.5f, 0.5f, 0.6f);
if (!material)
return false;
shape = PhysX::PxRigidActorExt::createExclusiveShape(
*dynamicObject,
PhysX::PxTriangleMeshGeometry(triangleMesh,
PhysX::PxMeshScale(PhysX::PxVec3(scale.x, scale.y, scale.z))),
*material
);
if (!shape)
return false;
How can I solve these issues?
If this isn't possible, I would like to know how to convert a triangle mesh into a convex mesh.
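Regarding that last point, here is a minimal sketch of PhysX convex cooking, reusing the cooking, physics, dynamicObject, material and scale variables (and the PhysX:: namespace alias) from the question. A convex shape, unlike a triangle mesh, can be attached to a fully simulated dynamic actor, so gravity and velocity changes work:
PhysX::PxConvexMeshDesc convexDesc;
convexDesc.points.count = numVertices;
convexDesc.points.stride = sizeof(float) * 3;
convexDesc.points.data = vertices;
// Let the cooker compute the convex hull of the raw point cloud.
convexDesc.flags = PhysX::PxConvexFlag::eCOMPUTE_CONVEX;

PhysX::PxDefaultMemoryOutputStream writeBuffer;
if (!cooking->cookConvexMesh(convexDesc, writeBuffer))
    return false;

PhysX::PxDefaultMemoryInputData readBuffer(writeBuffer.getData(), writeBuffer.getSize());
PhysX::PxConvexMesh* convexMesh = physics->createConvexMesh(readBuffer);

shape = PhysX::PxRigidActorExt::createExclusiveShape(
    *dynamicObject,
    PhysX::PxConvexMeshGeometry(convexMesh,
        PhysX::PxMeshScale(PhysX::PxVec3(scale.x, scale.y, scale.z))),
    *material);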
I am trying to get the texture file name from an osg::Geometry. I get the texture coordinates like this:
osg::Geometry* geom = dynamic_cast<osg::Geometry*> (drawable);
const osg::Geometry::ArrayList& texCoordArrayList = dynamic_cast<const osg::Geometry::ArrayList&>(geom->getTexCoordArrayList());
auto texCoordArrayListSize = texCoordArrayList.size();
auto sset = geom->getOrCreateStateSet();
processStateSet(sset);
for (size_t k = 0; k < texCoordArrayListSize; k++)
{
const osg::Vec2Array* texCoordArray = dynamic_cast<const osg::Vec2Array*>(geom->getTexCoordArray(k));
//doing sth with vertexarray, normalarray and texCoordArray
}
But I am not able to get the texture file name in the processStateSet() function. I took the processStateSet code from the OSG examples (specifically from the osganalysis example). Even though there is a texture file, sometimes it works and gets the name, but sometimes it doesn't. Here is my processStateSet function:
void processStateSet(osg::StateSet* stateset)
{
if (!stateset) return;
for (unsigned int ti = 0; ti < stateset->getNumTextureAttributeLists(); ++ti)
{
osg::StateAttribute* sa = stateset->getTextureAttribute(ti, osg::StateAttribute::TEXTURE);
osg::Texture* texture = dynamic_cast<osg::Texture*>(sa);
if (texture)
{
LOG("texture! ");
//TODO: something with this.
for (unsigned int i = 0; i < texture->getNumImages(); ++i)
{
auto img (texture->getImage(i));
auto texturefname (img->getFileName());
LOG("image ! image no: " + IntegerToStr(i) + " file: " + texturefname);
}
}
}
}
EDIT:
I just realized that if the model I load is ".3ds", texturefname exists, but if the model is ".flt", there is no texture name.
Is it about loading different formats? I know that they both have textures. What is the difference? I'm confused.
Some 3D models don't have texture names. Your choices are to deal with it, or use model files that do. It also depends on the format. Some formats can't have texture names. Some Blender export scripts can't write texture names even though the format supports it. And so on.
3D model formats are not interchangeable - every one is different.
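That said, you can make the image loop in processStateSet above a bit more defensive, so at least "no image attached" and "image with an empty file name" are distinguishable (a sketch, using the same LOG and IntegerToStr helpers as the question):
for (unsigned int i = 0; i < texture->getNumImages(); ++i)
{
    const osg::Image* img = texture->getImage(i);
    if (!img)
        continue; // texture attribute without an attached image
    const std::string& texturefname = img->getFileName();
    if (texturefname.empty())
        LOG("image no: " + IntegerToStr(i) + " has no file name (loader/format dependent)");
    else
        LOG("image no: " + IntegerToStr(i) + " file: " + texturefname);
}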
I recently implemented vertex skinning in my own Vulkan engine. Pretty much all models render properly; however, I find that with Mixamo models I get skinning artifacts. The main difference I found between "regular" glTF models and Mixamo models is that the Mixamo models share inverse bind matrices between multiple meshes, but I highly doubt that this is what causes the issue.
Here you can see that for some reason the vertices are pulled towards one specific point, which seems to be at (0, 0, 0). I know for sure that this is not caused by the vertex and index loading, as the model renders properly without vertex skinning.
Calculation of joint matrices
void Object::updateJointsByNode(Node *node) {
if (node->mesh && node->skinIndex > -1) {
auto inverseTransform = glm::inverse(node->getLocalMatrix());
auto skin = this->skinLookup[node->skinIndex];
auto numberOfJoints = skin->jointsIndices.size();
std::vector<glm::mat4> jointMatrices(numberOfJoints);
for (size_t i = 0; i < numberOfJoints; i++) {
jointMatrices[i] =
this->getNodeByIndex(skin->jointsIndices[i])->getLocalMatrix() * skin->inverseBindMatrices[i];
jointMatrices[i] = inverseTransform * jointMatrices[i];
}
this->inverseBindMatrices = jointMatrices;
}
for (auto &child : node->children) {
this->updateJointsByNode(child);
}
}
Calculation of vertex displacement in GLSL Vertex Shader
mat4 skinMat =
inWeight0.x * model.jointMatrix[int(inJoint0.x)] +
inWeight0.y * model.jointMatrix[int(inJoint0.y)] +
inWeight0.z * model.jointMatrix[int(inJoint0.z)] +
inWeight0.w * model.jointMatrix[int(inJoint0.w)];
localPosition = model.model * model.local * skinMat * vec4(inPosition, 1.0);
So basically the mistake I made was relatively simple: I was reading the vertex joint indices as the wrong type. When a glTF model uses a byte stride of 4 for the vertex joints, the implicit component type is uint8_t (unsigned byte) and not the uint16_t (unsigned short) I was using.
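In other words, the fix is to branch on the accessor's component type when reading the JOINTS_0 attribute instead of hard-coding unsigned short. A minimal sketch, where ReadJointIndices and its parameters are assumed names standing in for your loader's data (5121 and 5123 are the glTF constants for unsigned byte and unsigned short):
#include <cstddef>
#include <cstdint>

// glTF componentType constants from the specification.
constexpr int GLTF_UNSIGNED_BYTE  = 5121; // 4 joints * 1 byte  -> byte stride 4
constexpr int GLTF_UNSIGNED_SHORT = 5123; // 4 joints * 2 bytes -> byte stride 8

// Widen each vertex's 4 joint indices to uint32_t regardless of stored type.
void ReadJointIndices(const unsigned char* src, int componentType,
                      std::size_t vertexCount, std::uint32_t (*outJoints)[4])
{
    for (std::size_t v = 0; v < vertexCount; ++v)
    {
        for (int c = 0; c < 4; ++c)
        {
            if (componentType == GLTF_UNSIGNED_BYTE)
                outJoints[v][c] = src[v * 4 + c];
            else if (componentType == GLTF_UNSIGNED_SHORT)
                outJoints[v][c] = reinterpret_cast<const std::uint16_t*>(src)[v * 4 + c];
        }
    }
}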
I've been trying to load a Wavefront obj model with ASSIMP. However, I can't get the mtl material colors (Kd rgb) to work. I know how to load them, but I don't know how to get the corresponding color for each vertex.
usemtl material_11
f 7//1 8//1 9//2
f 10//1 11//1 12//2
For example, the Wavefront obj snippet above means that those faces use material_11.
Q: So how can I get the material corresponding to each vertex?
Error
The Wavefront obj materials aren't applied to the right vertices:
Original Model (Rendered with ASSIMP model Viewer):
Model rendered with my code:
Code:
Code that I use for loading the mtl material colors:
std::vector<color4<float>> colors = std::vector<color4<float>>();
...
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
const aiMesh* model = scene->mMeshes[i];
const aiMaterial *mtl = scene->mMaterials[model->mMaterialIndex];
color4<float> color = color4<float>(1.0f, 1.0f, 1.0f, 1.0f);
aiColor4D diffuse;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_DIFFUSE, &diffuse))
color = color4<float>(diffuse.r, diffuse.g, diffuse.b, diffuse.a);
colors.push_back(color);
...
}
Code for creating the vertices:
vertex* vertices_arr = new vertex[positions.size()];
for (unsigned int i = 0; i < positions.size(); i++)
{
vertices_arr[i].SetPosition(positions.at(i));
vertices_arr[i].SetTextureCoordinate(texcoords.at(i));
}
// Code for setting vertices colors (I'm just setting it in ASSIMP vertices order, since I don't know how to set it in the correct order).
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
const unsigned int vertices_size = scene->mMeshes[i]->mNumVertices;
for (unsigned int k = 0; k < vertices_size; k++)
{
vertices_arr[k].SetColor(colors.at(i));
}
}
EDIT:
Looks like the model vertex positions aren't being loaded correctly either, even when I disable face culling and change the background color.
Ignoring issues related to the number of draw calls and the extra storage, bandwidth and processing for vertex colours:
It looks like vertex* vertices_arr = new vertex[positions.size()]; is one big array you're creating to hold the entire model (which has many meshes, each with one material). Assuming your first loop is correct and positions contains all positions for all meshes of your model, the second loop then duplicates each mesh's colour for every vertex within that mesh. However, vertices_arr[k] always starts at zero, when it needs to begin after the last vertex of the previous mesh. Instead, try:
int colIdx = 0;
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
const unsigned int vertices_size = scene->mMeshes[i]->mNumVertices;
for (unsigned int k = 0; k < vertices_size; k++)
{
vertices_arr[colIdx++].SetColor(colors.at(i));
}
}
assert(colIdx == positions.size()); //double check
As you say, if the geometry isn't drawing properly, maybe positions doesn't contain all the vertex data - perhaps a similar issue to the code above. Another possible issue is in joining the indices for each mesh: the indices will all need to be updated with an offset to the new vertex locations within the vertices_arr array. Though now I'm just throwing out guesses.
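For the index fix-up, a sketch using Assimp's face data (indices here is assumed to be the combined index buffer you are building for the whole model):
std::vector<uint32_t> indices;
uint32_t baseVertex = 0;
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
    const aiMesh* mesh = scene->mMeshes[i];
    for (unsigned int f = 0; f < mesh->mNumFaces; f++)
    {
        const aiFace& face = mesh->mFaces[f];
        for (unsigned int j = 0; j < face.mNumIndices; j++)
            indices.push_back(baseVertex + face.mIndices[j]); // shift into the merged array
    }
    // The next mesh's vertices start after this mesh's in vertices_arr.
    baseVertex += mesh->mNumVertices;
}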