I want an OpenMaya getter and setter for locking a vertex's pnt attribute.
I am currently using Maya's standard cmds, but it is too slow.
This is my getter:
mesh = cmds.ls(sl=1)[0]
vertices = cmds.ls(cmds.polyListComponentConversion(mesh, toVertex=True), flatten=True)
cmds.getAttr("{}.pntx".format(vertices[0]), lock=True)
This is my setter:
mesh = cmds.ls(sl=1)[0]
vertices = cmds.ls(cmds.polyListComponentConversion(mesh, toVertex=True), flatten=True)
cmds.setAttr("{}.pntx".format(vertices[0]), lock=False)
This is what I have so far, in OpenMaya:
import maya.api.OpenMaya as om

meshes = cmds.ls(sl=True)  # selected meshes
sel = om.MSelectionList()
sel.add(meshes[0])
dag = sel.getDagPath(0)
fn_mesh = om.MFnMesh(dag)
I think I need to get the vertex into an om.MPlug() so that I can check the pntx attribute's lock state via the MPlug's isLocked, but I'm not sure how to achieve this.
I have a suspicion that I need to get it through the om.MFnMesh(), as getting the MFnMesh vertices only returns ints, not MObjects or anything that can be plugged into an MPlug.
My suspicion was correct; I did need to go through the MFnMesh.
It exposes pnts as an array MPlug. From there I was able to access the data I needed.
import maya.api.OpenMaya as om
import maya.cmds as mc

meshes = mc.ls(type="mesh", long=True)
bad_mesh = []
for mesh in meshes:
    selection = om.MSelectionList()
    selection.add(mesh)
    dag_path = selection.getDagPath(0)
    fn_mesh = om.MFnMesh(dag_path)
    # "pnts" is the per-vertex tweak array; each element's children are pntx/pnty/pntz
    plug = fn_mesh.findPlug("pnts", True)
    for child_num in range(plug.numElements()):
        child_plug = plug.elementByLogicalIndex(child_num)
        if any(child_plug.child(attr_num).isLocked
               for attr_num in range(child_plug.numChildren())):
            bad_mesh.append(mesh)
            break
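The setter half of the original question can be handled with the same traversal: in maya.api.OpenMaya, MPlug.isLocked is (to my understanding) a read/write property, so you can assign to it directly. A minimal sketch under that assumption; set_pnts_locked is just an illustrative name, not an existing API call:

import maya.api.OpenMaya as om

def set_pnts_locked(mesh, locked):
    # Lock or unlock every pntx/pnty/pntz child plug on the mesh's pnts array
    selection = om.MSelectionList()
    selection.add(mesh)
    fn_mesh = om.MFnMesh(selection.getDagPath(0))
    plug = fn_mesh.findPlug("pnts", True)
    for child_num in range(plug.numElements()):
        child_plug = plug.elementByLogicalIndex(child_num)
        for attr_num in range(child_plug.numChildren()):
            # isLocked is readable and writable on MPlug in API 2.0
            child_plug.child(attr_num).isLocked = locked

Calling set_pnts_locked(mesh, False) on each entry in bad_mesh would then clear the locks that the scan above found.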
I have a very simple and maybe stupid question; sorry, I am new to this kind of stuff.
I had a file of points, published them to a topic as a point cloud, and subscribed to the topic from another node to modify the point cloud and publish it again. Everything is fine so far. The problem is that the point cloud (the object) is not at the origin when I visualize it in RViz; it is somewhere at the edge of the platform. How can I move the point cloud to the origin in an easy way?
So far I have tried some solutions with the tf2 package. I tried to transform the frame of the object to the map frame, where I want to have the object in the end, but it doesn't seem to work; I am missing something. Is this the right and easiest approach, or is there a better way? If it is the right approach, what am I missing?
I put this code in my callback function:
geometry_msgs::msg::TransformStamped transform;
transform.header.stamp = this->get_clock()->now();
transform.header.frame_id = "object_frame";
transform.child_frame_id = "map";
// Set the position of the object relative to the Rviz frame
transform.transform.translation.x = 0.0;
transform.transform.translation.y = 0.0;
transform.transform.translation.z = 0.0;
// Set the orientation of the object relative to the Rviz frame
transform.transform.rotation.x = 0.0;
transform.transform.rotation.y = 0.0;
transform.transform.rotation.z = 0.0;
transform.transform.rotation.w = 1.0;
tf_broadcaster->sendTransform(transform);
And I added the following to my publisher function:
void Preprocessor::publish_pointcloud_supervoxel()
{
    // Convert the PointCloud to a PointCloud2 message
    auto pcl_msg_supervoxel = std::make_shared<sensor_msgs::msg::PointCloud2>();
    //sensor_msgs::msg::PointCloud2 pcl_msg_supervoxel;
    pcl::toROSMsg(*colored_supervoxel_cloud, *pcl_msg_supervoxel);
    //pcl_msg_supervoxel->width = adjacent_supervoxel_centers.size();
    pcl_msg_supervoxel->header.frame_id = "map";
    pcl_msg_supervoxel->header.stamp = this->get_clock()->now();
    // Publish the message
    supervoxel_publisher->publish(*pcl_msg_supervoxel);
}
Not knowing your RViz setup: have you checked whether Global Options > Fixed Frame in RViz is set to the correct frame?
You can also try a static transform and see if that is reflected in RViz. This would go in the launch file.
static_transform_node = Node(
    package="tf2_ros",
    executable="static_transform_publisher",
    arguments=["x", "y", "z", "yaw", "pitch", "roll", "parent_frame_id", "child_frame_id"],
    output="screen",
)
Note: the quotes in the arguments are required
ref: https://wiki.ros.org/tf2_ros#static_transform_publisher
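For context, a complete minimal launch file around that node might look like the sketch below. The parent and child frames are filled in from the question (map as parent, object_frame as child), which is an assumption about the intended transform direction; the positional arguments follow the classic x y z yaw pitch roll parent child form from the linked wiki page:

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # Publish a fixed identity transform: map -> object_frame
    static_transform_node = Node(
        package="tf2_ros",
        executable="static_transform_publisher",
        arguments=["0", "0", "0", "0", "0", "0", "map", "object_frame"],
        output="screen",
    )
    return LaunchDescription([static_transform_node])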
So I want to render two independent meshes in Vulkan. I'm dabbling in textures; the first mesh uses 4 of them while the second uses 5. I'm doing indexed draws.
Each mesh has its own uniform buffer and sampler array packed into a separate descriptor set for simplicity, each set with one binding for the UBO and another binding for the samplers. The following code is run for each mesh, where descriptorSet is the descriptor set associated with a single mesh and filepaths is the vector of image paths that particular mesh uses.
std::vector<VkWriteDescriptorSet> descriptorWrites;
descriptorWrites.resize(2);
VkDescriptorBufferInfo bufferInfo = {};
bufferInfo.buffer = buffers[i];
bufferInfo.offset = 0;
bufferInfo.range = sizeof(UniformBufferObject);
descriptorWrites[0].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
descriptorWrites[0].dstSet = descriptorSet;
descriptorWrites[0].dstBinding = 0;
descriptorWrites[0].dstArrayElement = 0;
descriptorWrites[0].descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC;
descriptorWrites[0].descriptorCount = 1;
descriptorWrites[0].pBufferInfo = &bufferInfo;
std::vector<VkDescriptorImageInfo> imageInfos;
imageInfos.resize(filepaths.size());
for (size_t j = 0; j < filepaths.size(); j++) {
    imageInfos[j].imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    imageInfos[j].imageView = imageViews[j];
    imageInfos[j].sampler = samplers[j];
}
descriptorWrites[1].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
descriptorWrites[1].dstSet = descriptorSet;
descriptorWrites[1].dstBinding = 1;
descriptorWrites[1].dstArrayElement = 0;
descriptorWrites[1].descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
descriptorWrites[1].descriptorCount = imageInfos.size();
descriptorWrites[1].pImageInfo = imageInfos.data();
vkUpdateDescriptorSets(devicesHandler->device, descriptorWrites.size(), descriptorWrites.data(), 0, nullptr);
So, in order to tell Vulkan how these descriptor sets are laid out, I of course need two descriptor set layouts, i.e. one per mesh; they differ in the sampler binding because of the different size of filepaths:
// <Stuff for binding 0 for UBO here>
// ...
VkDescriptorSetLayoutBinding layoutBinding = {};
layoutBinding.binding = 1;
layoutBinding.descriptorCount = filepaths.size();
layoutBinding.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
layoutBinding.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
Now, when I create the pipeline I need to provide the pipeline layout. I'm doing it as follows, where layouts is a vector containing the descriptor set layouts of both meshes:
VkPipelineLayoutCreateInfo pipelineLayoutInfo = {};
pipelineLayoutInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
pipelineLayoutInfo.setLayoutCount = layouts.size();
pipelineLayoutInfo.pSetLayouts = layouts.data();
Finally, before rendering, I bind the appropriate descriptor set.
Naively I would have thought that this way of defining the pipeline layout (simply taking all the involved layouts and passing them in pSetLayouts) is the way to go, but it's not working. The error I get is:
descriptorSet #0 being bound is not compatible with overlapping descriptorSetLayout at index 0 of pipelineLayout 0x6e due to: DescriptorSetLayout 87 has 5 descriptors, but DescriptorSetLayout 88, which comes from pipelineLayout, has 6 descriptors. The Vulkan spec states: Each element of pDescriptorSets must have been allocated with a VkDescriptorSetLayout that matches (is the same as, or identically defined as) the VkDescriptorSetLayout at set n in layout, where n is the sum of firstSet and the index into pDescriptorSets.
I also noticed that if I reduce the number of textures used from 5 to 4 in the second mesh, so they match the 4 from the first mesh, then it works. So I'm wondering: do I need to create a pipeline for every possible configuration of the layouts? That is, one pipeline whose layout has the 4-sampler binding and another with the 5-sampler binding, and bind the corresponding one when I'm going to draw one mesh or the other? Is that stupid? Am I missing something?
Worth noting is that if I render each mesh alone everything runs smoothly. The problem arises when I put both of them in the scene.
Also, I know buffers should be allocated consecutively, taking alignment into account, and that what I'm doing here is bad practice - but I'm just not dealing with that yet.
Passing multiple set layouts to the pipeline means that you want the pipeline to be able to access all the bindings in both sets simultaneously, e.g. the shaders have access to two UBOs at (set=0, binding=0) and (set=1, binding=0), four textures at (set=0, binding=1), and five textures at (set=1, binding=1).
Then when you bind the set for the second mesh as the only set, you get the incompatibility because it has a different layout (5 textures) than the pipeline expects for set 0 (4 textures).
So yes, when you have different descriptor set layouts, you need different pipelines. If you use the pipeline cache, much of the compilation may actually be reused between the two pipelines.
If you're trying to use the same pipeline for both meshes, then presumably the code in your shader that accesses the fifth texture is conditional, based on a uniform or something? The alternative is to bind a dummy texture when drawing the 4-texture mesh; since it won't be accessed, it doesn't matter what its contents are, it can be 1x1, etc. Then you can use the same 5-texture set layout and same pipeline for both meshes.
Hi, everyone,
I've run into a problem when manipulating a rigged mesh model in Panda3D. I loaded a mesh model which has an armature modifier consisting of two adjoining bones (one for the palm, one for a collection of four fingers: index, middle, ring and little), which looks like this: [original unchanged hand model]. Then I transform the latter bone (joint) to fold the four fingers inward, using the actor's controlJoint method. Code here:
self.handActor = Actor( r'/d/3DModels/TestHand.egg' )
self.handJoint1 = self.handActor.controlJoint( None, 'modelRoot', 'Bone1' )
self.handJoint2 = self.handActor.controlJoint( None, 'modelRoot', 'Bone2' )
self.handJoint2.setP( 90 )
Then I accessed the vertex info of the current, transformed mesh, with code like this:
geomNodeCollection = self.handActor.findAllMatches( '**/+GeomNode' )
geomNodePath = geomNodeCollection[ 0 ]
geomNode = geomNodePath.node()
geom = geomNode.getGeom( 0 )
vData = geom.getVertexData()
reader_vertex = GeomVertexReader( vData, 'vertex' )
reader_normal = GeomVertexReader( vData, 'normal' )
vertexList = list()
normalList = list()
for i in xrange( 2000 ) :
    vertex = reader_vertex.getData3f()
    normal = reader_normal.getData3f()
    vertexList.append( vertex )
    normalList.append( normal )
Then I marked each of these positions with a smiley sphere, expecting to see a cloud of smileys fitting snugly around the deformed hand. However, I got a point cloud of the original hand shape, which is flattened, like this: [deformed hand model and the obtained vertices shown as a point cloud].
Any idea how to obtain vertex positions that exactly match the deformed hand mesh? Thanks!
I think you need to call animateVertices on the GeomVertexData, such as:
vData = geom.getVertexData().animateVertices(True, Thread.getCurrentThread())
Panda will automatically cache the animated GeomVertexData.
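Putting that together with the reading code from the question, a rough sketch (reusing the question's geom; GeomVertexReader.isAtEnd() also avoids the hard-coded 2000-vertex count):

from panda3d.core import GeomVertexReader, Thread

# animateVertices applies the current joint transforms on the CPU and
# returns the deformed copy of the vertex data (cached by Panda)
vData = geom.getVertexData().animateVertices(True, Thread.getCurrentThread())
reader_vertex = GeomVertexReader(vData, 'vertex')
reader_normal = GeomVertexReader(vData, 'normal')
vertexList = []
normalList = []
while not reader_vertex.isAtEnd():
    vertexList.append(reader_vertex.getData3f())
    normalList.append(reader_normal.getData3f())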
We're working on a custom 3D engine (OpenGL) in which people can create, import and export custom 3D models, and we are using Assimp for our importing/exporting. At this point importing works great, but when it comes to exporting we are unable to save out any materials other than the default. While Assimp's website and others have loads of information on importing, there is little to no documentation on exporting. We managed to work out the majority of the export process, but there doesn't seem to be any way of setting an aiMaterial's color values.
Assimp's documentation explains how to GET the color information from existing materials, i.e.:
aiColor3D color(0.f, 0.f, 0.f);
mat->Get(AI_MATKEY_COLOR_DIFFUSE, color);
http://assimp.sourceforge.net/lib_html/materials.html
but doesn't include anything on SETTING color information on a material. (FYI, all of our models are flat colors; no textures.) If anyone has any experience exporting materials/colors, any help would be greatly appreciated. Here is what we have now:
//Create an Assimp node element to be a container object to hold all graphic information
scene.mRootNode = new aiNode();
scene.mMaterials = new aiMaterial*[ 1 ];
scene.mMaterials[ 0 ] = nullptr;
scene.mNumMaterials = 1;
mAllChildren.clear();
//Get ALL children on the scene (including down the hierarchy)
FindAllChildren(CScriptingUtils::GetDoc()->GetScene());
std::vector<std::weak_ptr<CNode>> children = mAllChildren;
int size = (int)children.size();
scene.mMaterials[ 0 ] = new aiMaterial();
scene.mRootNode->mMeshes = new unsigned int[ size ];
scene.mRootNode->mNumMeshes = size;
scene.mMeshes = new aiMesh*[ size ];
scene.mNumMeshes = size;
//Iterate through all children, retrieve their graphical information and push it into the Assimp structure
for(int i = 0; i < size; i++)
{
    std::shared_ptr<CNode> childNode = children[i].lock();
    scene.mRootNode->mMeshes[ i ] = i;
    scene.mMeshes[ i ] = nullptr;
    scene.mMeshes[ i ] = new aiMesh();
    scene.mMeshes[ i ]->mMaterialIndex = 0;
    aiMaterial* mat = scene.mMaterials[0];
And we need to do something like:
mat.color = childNode.color;
Try this (note that AddProperty takes a pointer to the value):
mat->AddProperty<aiColor3D>(&color, 1, AI_MATKEY_COLOR_DIFFUSE);
It worked for me. I am also trying to export textures:
newMat->AddProperty(&matName, AI_MATKEY_NAME);
newMat->AddProperty(&texturePath, AI_MATKEY_TEXTURE_DIFFUSE(0));
where 'matName' and 'texturePath' are of type 'aiString'. What additional parameters (besides the path of the texture) are needed for the texture to display correctly (because right now the textures are not displayed, only the color)?
I would like to get the VID (vertex ID) after I have added a single vertex to an existing graph. I currently get a vertex set after adding the new vertex and loop through to the end of the vertex set (assuming the last one is always the last added vertex, even in the event of an earlier one being deleted?). I still need to test whether deleting a vertex from the middle of the set changes the VIDs. But I am sure there must be a better (read: more efficient) way of doing this. The code below is what I currently use.
Any help appreciated as I am new to iGraph.
// add into graph
igraph_integer_t t = 1;
if(igraph_add_vertices(user_graph, t, 0) != 0)
{
    ::MessageBoxW(NULL, L"Failed to add vertex to iGraph, vertex not added.", L"Network Model", MB_ICONSTOP);
    return false;
}
/* get all vertices */
igraph_vs_t vertex_set;
igraph_vit_t vit;
igraph_integer_t vid = 0;
igraph_vs_all(&vertex_set);
igraph_vit_create(user_graph, vertex_set, &vit);
// must be a better way - perhaps iterate starting from the end?
while (!IGRAPH_VIT_END(vit))
{
    vid = IGRAPH_VIT_GET(vit);
    IGRAPH_VIT_NEXT(vit);
}
// add vid to vertex ca
ca->graphid = (int)vid;
// Add new vertex to local store
vm->CreateVertex(ca);
// cleanup
igraph_vit_destroy(&vit);
igraph_vs_destroy(&vertex_set);
Vertex IDs (and also edge IDs) in igraph are integers from zero up to the number of vertices/edges minus one. Therefore, if you add a new vertex or edge, its ID will always be equal to the number of vertices/edges before the addition. Also, if you delete some edges, the IDs of existing edges will be re-arranged to make the edge ID range continuous again. Same applies for the deletion of vertices, and note that deleting some vertices will also re-arrange the edge IDs unless the deleted vertices were isolated.
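In other words, there is no need to iterate at all: the ID of a just-added vertex is simply igraph_vcount(user_graph) taken before the igraph_add_vertices() call (or the new count minus one, after it). A small illustration of the same rule, written with python-igraph for brevity since the ID semantics are identical to the C library's:

import igraph

g = igraph.Graph(5)           # vertices 0..4
new_id = g.vcount()           # the ID the next vertex will receive: 5
g.add_vertices(1)
assert new_id == g.vcount() - 1

# Deleting a vertex in the middle shifts the IDs above it down
g.delete_vertices([2])        # former IDs 3, 4, 5 become 2, 3, 4
print(g.vcount())             # 5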