Exporting 3D models and their color information using Assimp (OpenGL)

We're working on a custom 3D engine (OpenGL) in which people can create, import and export custom 3D models, and we are using Assimp for our importing/exporting. At this point importing works great, but when it comes to exporting, we are unable to save out any materials other than the default. While Assimp's website and others have loads of information on importing, there is little to no documentation on exporting. We managed to work out the majority of the export process, but there doesn't seem to be any way of setting an aiMaterial's color values.
Assimp's documentation explains how to GET the color information from an existing material, e.g.:
aiColor3D color(0.f, 0.f, 0.f);
mat->Get(AI_MATKEY_COLOR_DIFFUSE, color);
http://assimp.sourceforge.net/lib_html/materials.html
but doesn't include anything on SETTING color information on a material. (FYI, all of our models are flat colors; no textures.) If anyone has any experience with exporting materials/colors, any help would be greatly appreciated. Here is what we have now:
//Create an Assimp node element to be a container object to hold all graphic information
scene.mRootNode = new aiNode();
scene.mMaterials = new aiMaterial*[ 1 ];
scene.mMaterials[ 0 ] = nullptr;
scene.mNumMaterials = 1;
mAllChildren.clear();
//Get ALL children on the scene (including down the hierarchy)
FindAllChildren(CScriptingUtils::GetDoc()->GetScene());
std::vector<std::weak_ptr<CNode>> children = mAllChildren;
int size = (int)children.size();
scene.mMaterials[ 0 ] = new aiMaterial();
scene.mRootNode->mMeshes = new unsigned int[ size ];
scene.mRootNode->mNumMeshes = size;
scene.mMeshes = new aiMesh*[ size ];
scene.mNumMeshes = size;
//Iterate through all children, retrieve their graphical information and push it into the Assimp structure
for(int i = 0; i < size; i++)
{
    std::shared_ptr<CNode> childNode = children[i].lock();
    scene.mRootNode->mMeshes[i] = i;
    scene.mMeshes[i] = new aiMesh();
    scene.mMeshes[i]->mMaterialIndex = 0;
    aiMaterial* mat = scene.mMaterials[0];
And we need to do something like:
    mat.color = childNode.color;

Try this:
mat->AddProperty(&color, 1, AI_MATKEY_COLOR_DIFFUSE);
(AddProperty takes a pointer to the value and a count, followed by the material key.)
This worked for me. I'm also trying to export textures:
newMat->AddProperty(&matName, AI_MATKEY_NAME);
newMat->AddProperty(&texturePath, AI_MATKEY_TEXTURE_DIFFUSE(0));
where 'matName' and 'texturePath' are of type 'aiString'. What additional parameters (besides the path of the texture) are needed for the texture to display correctly? Right now only the color shows up, not the texture.
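One more thing worth noting about the export loop in the question: every mesh gets mMaterialIndex = 0, so all children end up sharing a single exported material. If each child has its own flat color, you need one aiMaterial per unique color and a per-mesh index. A minimal, language-agnostic sketch of that de-duplication (plain Python, no Assimp calls; `child_colors` is a hypothetical list of RGB tuples):

```python
def build_material_table(child_colors):
    """Map each child's flat color to a material slot, de-duplicating.

    Returns (unique_colors, per_child_index) where per_child_index[i]
    is the value to store in scene.mMeshes[i]->mMaterialIndex.
    """
    unique_colors = []     # one entry per aiMaterial to create
    index_of = {}          # color -> material slot already assigned
    per_child_index = []   # material index for each child, in order
    for color in child_colors:
        if color not in index_of:
            index_of[color] = len(unique_colors)
            unique_colors.append(color)
        per_child_index.append(index_of[color])
    return unique_colors, per_child_index
```

Each entry of `unique_colors` would become one aiMaterial (its diffuse color set via AddProperty as above), and `per_child_index[i]` is what goes into `scene.mMeshes[i]->mMaterialIndex`.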

Related

How can you get a vertex's attribute through Open Maya

I want an Open Maya getter and setter for locking a vertex's pnt attribute.
I am currently using Maya's standard cmds, but it is too slow.
This is my getter:
mesh = cmds.ls(sl=1)[0]
vertices = cmds.ls(cmds.polyListComponentConversion(mesh, toVertex=True), flatten=True)
cmds.getAttr("{}.pntx".format(vertices[0]), lock=True)
This is my setter:
mesh = cmds.ls(sl=1)[0]
vertices = cmds.ls(cmds.polyListComponentConversion(mesh, toVertex=True), flatten=True)
cmds.setAttr("{}.pntx".format(vertices[0]), lock=False)
This is what I have so far, in Open Maya:
import maya.api.OpenMaya as om
sel = om.MSelectionList()
sel.add(mesh)  # 'mesh' from cmds.ls(sl=1)[0] above
dag = sel.getDagPath(0)
fn_mesh = om.MFnMesh(dag)
I think I need to pass the vertex object into an om.MPlug() so that I can compare the pntx attribute against the MPlug's isLocked function, but I'm not sure how to achieve this.
I have a suspicion that I need to get it through the om.MFnMesh(), as getting the MFnMesh vertices only returns ints, not MObjects or anything that can plug into an MPlug.
My suspicion was correct; I did need to go through the MFnMesh.
It contained an MPlug array for pnts. From there I was able to access the data I needed.
import maya.cmds as mc
import maya.api.OpenMaya as om

meshes = mc.ls(type="mesh", long=True)
bad_mesh = []
for mesh in meshes:
    selection = om.MSelectionList()
    selection.add(mesh)
    dag_path = selection.getDagPath(0)
    fn_mesh = om.MFnMesh(dag_path)
    plug = fn_mesh.findPlug("pnts", True)
    locked = False
    for child_num in range(plug.numElements()):
        child_plug = plug.elementByLogicalIndex(child_num)
        for attr_num in range(child_plug.numChildren()):
            if child_plug.child(attr_num).isLocked:
                locked = True
                break
        if locked:
            break
    if locked:
        bad_mesh.append(mesh)

OSG: Why is there a texture coordinate array but not the texture itself?

I am trying to get the texture file name from an osg::Geometry. I get the texture coordinates like this:
osg::Geometry* geom = dynamic_cast<osg::Geometry*> (drawable);
const osg::Geometry::ArrayList& texCoordArrayList = dynamic_cast<const osg::Geometry::ArrayList&>(geom->getTexCoordArrayList());
auto texCoordArrayListSize = texCoordArrayList.size();
auto sset = geom->getOrCreateStateSet();
processStateSet(sset);
for (size_t k = 0; k < texCoordArrayListSize; k++)
{
    const osg::Vec2Array* texCoordArray = dynamic_cast<const osg::Vec2Array*>(geom->getTexCoordArray(k));
    // doing sth with vertexarray, normalarray and texCoordArray
}
But I am not able to get the texture file name in the processStateSet() function. I took the processStateSet code from the OSG examples (specifically from the osganalysis example). Even though there is a texture file, sometimes it works and gets the name, and sometimes it does not. Here is my processStateSet function:
void processStateSet(osg::StateSet* stateset)
{
    if (!stateset) return;
    for (unsigned int ti = 0; ti < stateset->getNumTextureAttributeLists(); ++ti)
    {
        osg::StateAttribute* sa = stateset->getTextureAttribute(ti, osg::StateAttribute::TEXTURE);
        osg::Texture* texture = dynamic_cast<osg::Texture*>(sa);
        if (texture)
        {
            LOG("texture! ");
            //TODO: something with this.
            for (unsigned int i = 0; i < texture->getNumImages(); ++i)
            {
                auto img = texture->getImage(i);
                auto texturefname = img->getFileName();
                LOG("image! image no: " + IntegerToStr(i) + " file: " + texturefname);
            }
        }
    }
}
EDIT:
I just realized: if the model I load is ".3ds", texturefname exists, but if the model is ".flt" there is no texture name.
Is it about loading different file types? I know that they both have textures. What is the difference? I'm confused.
Some 3D models don't have texture names. Your choices are to deal with it, or to use model files that do. It also depends on the format: some formats can't store texture names, some Blender export scripts don't write texture names even though the format supports them, and so on.
3D model formats are not interchangeable - every one is different.

How does an FbxMesh determine which FbxSurfaceMaterial to use? (FBX SDK 2019)

According to the official FBX SDK documentation, an FbxMesh contains an "IndexArray" for its materials, but it doesn't clearly say whether that is an index into the materials attached to the parent node, or an index into the materials of the entire scene (which can be accessed through FbxScene).
So which case is it?
I thought it should be an index into the materials of the entire scene, but according to the example in the official documentation, it is an index into the materials of the parent node. If THAT is the case, however:
Since one mesh can be attached to multiple nodes, and different nodes can have different materials attached, does that mean one mesh can have different materials when referenced by different nodes? Is this intentional?
What I believed:
FbxScene* scene;
FbxMesh* mesh;
FbxLayerElementMaterial* layerElementMaterial = mesh->GetElementMaterial();
if (layerElementMaterial->GetMappingMode() == FbxLayerElement::eAllSame)
{
    int index = layerElementMaterial->GetIndexArray()[0];
    FbxSurfaceMaterial* material = scene->GetMaterial(index);
}
What the official documentation suggested:
FbxNode* node;
FbxMesh* mesh;
FbxLayerElementMaterial* layerElementMaterial = mesh->GetElementMaterial();
if (layerElementMaterial->GetMappingMode() == FbxLayerElement::eAllSame)
{
    int index = layerElementMaterial->GetIndexArray()[0];
    FbxSurfaceMaterial* material = node->GetMaterial(index);
}
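To make the difference concrete: because the index array is resolved against the owning node, the same mesh instance can indeed come out with different materials depending on which node it hangs under, and as far as I can tell that is intentional (it is what makes material-level instancing possible). A tiny mock of the idea (plain Python, no FBX SDK; the class names are stand-ins, not SDK types):

```python
class Node:
    """Stand-in for FbxNode: holds the materials attached to THIS node."""
    def __init__(self, mesh, materials):
        self.mesh = mesh            # shared mesh instance
        self.materials = materials  # node-local material list

def resolve_material(node, index):
    """Resolve a mesh-layer material index against the owning node,
    the way the official example does (node->GetMaterial(index))."""
    return node.materials[index]

# One shared mesh attached to two nodes with different material lists:
shared_mesh = object()
node_a = Node(shared_mesh, ["red"])
node_b = Node(shared_mesh, ["blue"])
```

Here `resolve_material(node_a, 0)` and `resolve_material(node_b, 0)` give different materials for the very same mesh, which is the behavior the question asks about.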

Direct2d. Texture atlas creation

My goal is to create a texture atlas in my DirectX application. What I have is a vector of ID2D1PathGeometries which need to be put on a texture atlas. So I create an ID2D1Bitmap1, but I have no clue what my next step is. In other words, how exactly do I lay an ID2D1PathGeometry onto an ID2D1Bitmap1 at the spot I need?
P.S. It's worth mentioning that I'm kind of a newbie in DirectX, and when I try to look for an answer on MSDN I just keep getting lost in everything Direct2D provides.
Thank you.
P.P.S. Code requested:
There is not much to show, as I mentioned already.
std::vector<Microsoft::WRL::ComPtr<ID2D1PathGeometry>> atlasGeometries; // so I have my geometries
//// then I fill the vector
{
....
}
////Creating Bitmap for font sheet
Microsoft::WRL::ComPtr<ID2D1Bitmap1> bitmap;
D2D1_SIZE_U dimensions;
dimensions.height = 1024;
dimensions.width = 1024;
D2D1_BITMAP_PROPERTIES1 d2dbp;
D2D1_PIXEL_FORMAT d2dpf;
FLOAT dpiX;
FLOAT dpiY;
d2dpf.format = DXGI_FORMAT_A8_UNORM;
d2dpf.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
this->dxDevMt.GetD2DFactory()->GetDesktopDpi(&dpiX, &dpiY);
d2dbp.pixelFormat = d2dpf;
d2dbp.dpiX = dpiX;
d2dbp.dpiY = dpiY;
d2dbp.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET;
d2dbp.colorContext = nullptr;
newCtx->CreateBitmap(dimensions, nullptr, 0, d2dbp, bitmap.GetAddressOf());
But what to do next is a mystery to me. I've figured out that I should use a render target for this kind of thing, but I couldn't work out exactly how.
The problem was solved by using the bitmap as a render target.
The idea is to:
create a new D2D device,
create a new DeviceContext,
set the bitmap as the render target,
and render everything that needs to be rendered.
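Besides the render-target setup, you still have to decide WHERE on the bitmap each geometry goes, i.e. the translation you set before drawing it. A minimal row-based "shelf" packer sketch, language-agnostic (plain Python; `sizes` are hypothetical width/height pairs in pixels):

```python
def shelf_pack(sizes, atlas_width):
    """Assign top-left positions for rectangles in an atlas, row by row.

    Returns (positions, used_height): one (x, y) per input rectangle,
    plus the total atlas height consumed.
    """
    positions = []
    x = y = row_height = 0
    for w, h in sizes:
        if x + w > atlas_width:   # no room left: start a new row (shelf)
            x = 0
            y += row_height
            row_height = 0
        positions.append((x, y))
        x += w
        row_height = max(row_height, h)
    return positions, y + row_height
```

Before drawing geometry i into the bitmap target you would then set a translation transform by positions[i] on the device context.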

Bullet Physics - Creating ShapeHull from Mesh

I am attempting to load mesh data that I use for drawing with OpenGL into the Bullet engine.
The problem is that I don't believe the mesh data is actually being read through the pointers. Thus the physics world does not match up, and the "character controller" falls through the floor.
The rendered world is fine, so I know the OpenGL data is good.
Here is the function of my controller class that I am using to transfer the data into Bullet.
Is there something I am missing here?
Thank you - here's the code I am using:
void PhysicsController::AddStaticBasicMesh(PhysicsObject* NewObject, bool ReducePolygonCount = true)
{
    btTriangleMesh* OriginalTriangleMesh = new btTriangleMesh();
    btIndexedMesh* NewMesh = new btIndexedMesh();
    NewMesh->m_triangleIndexBase = (unsigned char*)NewObject->Indices;
    NewMesh->m_triangleIndexStride = 3 * sizeof(unsigned int);
    NewMesh->m_vertexBase = (unsigned char*)NewObject->Vertices;
    NewMesh->m_vertexStride = 3 * sizeof(float);
    NewMesh->m_numVertices = NewObject->NumberOfVertices;
    NewMesh->m_numTriangles = NewObject->NumberOfTriangles;
    OriginalTriangleMesh->addIndexedMesh(*NewMesh);
    btConvexShape* NewStaticMesh = new btConvexTriangleMeshShape(OriginalTriangleMesh);
    btConvexHullShape* ReducedPolygonStaticMesh;
    if (ReducePolygonCount == true) {
        btShapeHull* HullOfOriginalShape = new btShapeHull(NewStaticMesh);
        btScalar CurrentMargin = NewStaticMesh->getMargin();
        HullOfOriginalShape->buildHull(CurrentMargin);
        ReducedPolygonStaticMesh = new btConvexHullShape();
        for (int i = 0; i < HullOfOriginalShape->numVertices(); i++) {
            ReducedPolygonStaticMesh->addPoint(HullOfOriginalShape->getVertexPointer()[i], false);
        }
        ReducedPolygonStaticMesh->recalcLocalAabb();
        /*
        Find out what this line does.
        ReducedPolygonStaticMesh->initializePolyhedralFeatures();
        */
        StaticShapes.push_back(ReducedPolygonStaticMesh);
        btDefaultMotionState* StaticMotionState = new btDefaultMotionState(*NewObject->PositionAndOrientation);
        btRigidBody::btRigidBodyConstructionInfo StaticMeshRigidBodyInfo(0, StaticMotionState, ReducedPolygonStaticMesh, btVector3(0.0f, 0.0f, 0.0f));
        btRigidBody* StaticRigidMesh = new btRigidBody(StaticMeshRigidBodyInfo);
        NewObject->Body = StaticRigidMesh;
        StaticRigidMesh->setCollisionFlags(StaticRigidMesh->getCollisionFlags() | btCollisionObject::CF_STATIC_OBJECT);
        MainPhysicsWorld->addRigidBody(StaticRigidMesh, btBroadphaseProxy::StaticFilter, btBroadphaseProxy::CharacterFilter | btBroadphaseProxy::DefaultFilter);
        AllRigidBodies.push_back(NewObject);
        StaticRigidMesh->setUserPointer(AllRigidBodies[AllRigidBodies.size() - 1]);
        delete HullOfOriginalShape;
        delete NewStaticMesh;
    } else {
        StaticShapes.push_back(NewStaticMesh);
        btDefaultMotionState* StaticMotionState = new btDefaultMotionState(*NewObject->PositionAndOrientation);
        btRigidBody::btRigidBodyConstructionInfo StaticMeshRigidBodyInfo(0, StaticMotionState, NewStaticMesh, btVector3(0.0f, 0.0f, 0.0f));
        btRigidBody* StaticRigidMesh = new btRigidBody(StaticMeshRigidBodyInfo);
        NewObject->Body = StaticRigidMesh;
        StaticRigidMesh->setCollisionFlags(StaticRigidMesh->getCollisionFlags() | btCollisionObject::CF_STATIC_OBJECT);
        MainPhysicsWorld->addRigidBody(StaticRigidMesh, btBroadphaseProxy::StaticFilter, btBroadphaseProxy::CharacterFilter | btBroadphaseProxy::DefaultFilter);
        AllRigidBodies.push_back(NewObject);
        StaticRigidMesh->setUserPointer(AllRigidBodies[AllRigidBodies.size() - 1]);
    }
}
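One thing worth double-checking when data "is not being read from the pointers" is the strides: m_triangleIndexStride = 3 * sizeof(unsigned int) is only right if Indices really are tightly packed 32-bit ints, and m_vertexStride = 3 * sizeof(float) is only right if Vertices hold positions only; an interleaved position/normal/UV buffer has a larger stride. This sketch mimics how Bullet walks the two buffers, with strides in elements rather than bytes (plain Python, no Bullet; the data below is hypothetical):

```python
def triangle_vertices(vertex_base, index_base, tri,
                      index_stride=3, vertex_stride=3):
    """Fetch the three corner positions of triangle `tri`, the way Bullet
    walks m_triangleIndexBase/m_vertexBase. Strides here are in ELEMENTS;
    Bullet's m_*Stride fields are in BYTES."""
    corners = []
    for c in range(3):
        vi = index_base[tri * index_stride + c]   # vertex index of corner c
        base = vi * vertex_stride                 # start of that vertex's data
        corners.append(tuple(vertex_base[base:base + 3]))
    return corners
```

If the stride you pass Bullet doesn't match the actual buffer layout, this lookup lands on garbage and the collision shape silently comes out wrong, which would match the "falling through the floor" symptom.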
Maybe not the answer you're looking for, but you could probably use btConvexHullShape, as advised in the Bullet documentation:
http://bulletphysics.org/Bullet/BulletFull/classbtConvexTriangleMeshShape.html
"Nevertheless, most users should use the much better performing btConvexHullShape instead."
And add vertices one by one using addPoint, which is also mentioned in the documentation:
http://bulletphysics.org/Bullet/BulletFull/classbtConvexHullShape.html
"It is easier to not pass any points in the constructor, and just add one point at a time, using addPoint."
I know it is bad to answer "just use something else" instead of answering the original question. Also, this way you'll duplicate your vertices.
However, if you're only beginning to learn some new tech, you might want to make everything work first; then it may be easier to return step by step to your original design.
Another good idea would be to use a debug drawer so you can actually see what your mesh looks like. I created one some time ago for OpenGL ES 2.0; maybe it will be helpful:
https://github.com/kmuzykov/custom-opengl-es-game-engine/blob/master/Engine/Physics/KMPhysicsDebugDrawer.cpp