I'm trying to clip a vtkUnstructuredGrid using vtkClipDataSet. The problem is that after I clip, the resulting vtkUnstructuredGrid doesn't have the point/cell data arrays.
This is my code:
vtkSmartPointer<vtkUnstructuredGrid> model = reader->GetOutput();
// this shows that model has one point data array called "Displacements" (a vector array with 3 components)
model->Print(std::cout);
// Plane to cut it
vtkSmartPointer<vtkPlane> plane = vtkSmartPointer<vtkPlane>::New();
plane->SetOrigin(0.0,0.0,0.0); plane->SetNormal(1,0,0);
// Clip data
vtkSmartPointer<vtkClipDataSet> clipDataSet = vtkSmartPointer<vtkClipDataSet>::New();
clipDataSet->SetClipFunction(plane);
clipDataSet->SetInputConnection(model->GetProducerPort());
clipDataSet->InsideOutOn();
clipDataSet->GenerateClippedOutputOn();
//PROBLEM HERE. The print shows that there aren't any arrays on the output data
clipDataSet->GetOutput()->Print(std::cout);
I need the output grid to have the arrays, because I would like to display the values on the resulting grid.
For example, if the data are scalars, I would like to display isovalues on the clipped mesh. If the data are vectors, I would like to deform (warp) the mesh in the direction of the data vectors.
Here I have an example on ParaView of what I would like to do. The solid is the original mesh and the wireframe mesh is the deformed one.
I'm using VTK 5.10 under C++ (Windows 8.1 64 bits, if that helps).
Thank you!
PS: I tried asking this on the VTKusers list, but I got no answer.
OK, I found the error after user lib's comment: I was missing the call to Update() after setting the input connection.
Thank you all.
// Clip data
vtkSmartPointer<vtkClipDataSet> clipDataSet = vtkSmartPointer<vtkClipDataSet>::New();
clipDataSet->SetClipFunction(plane);
clipDataSet->SetInputConnection(model->GetProducerPort());
clipDataSet->InsideOutOn();
clipDataSet->GenerateClippedOutputOn();
clipDataSet->Update(); // THIS is the solution
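With the Update() in place the arrays are there, and for the vector case I can then warp the clipped grid roughly like this (a sketch; "Displacements" is the array name in my data, and the vtkWarpVector/vtkDataSetMapper headers are required):
// Warp the clipped grid by the "Displacements" point vectors.
vtkSmartPointer<vtkWarpVector> warp = vtkSmartPointer<vtkWarpVector>::New();
warp->SetInputConnection(clipDataSet->GetOutputPort());
// Pick the array to use as vectors (making it the active vectors also works).
warp->SetInputArrayToProcess(0, 0, 0, vtkDataObject::FIELD_ASSOCIATION_POINTS, "Displacements");
warp->SetScaleFactor(1.0);
warp->Update();
vtkSmartPointer<vtkDataSetMapper> mapper = vtkSmartPointer<vtkDataSetMapper>::New();
mapper->SetInputConnection(warp->GetOutputPort());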
I am looking for an idiomatic and efficient solution for this problem:
Let's say I have a 3D tensor that represents an image with 100*100 pixels on 3 color channels,
Eigen::Tensor<int, 3> input(3,100,100);
The output I would like to get could be stored in
Eigen::Tensor<int, 4> output(3,3,100,100);
I would like to project the 3D input into the 4D output so that each color channel of the original tensor gets its own individual 3D tensor in the output, and each copy of a channel contains the same values, that is
output(0,0,42,42) = output(0,1,42,42) = output(0,2,42,42)
output(0,0,12,12) = output(0,1,12,12) = output(0,2,12,12)
Illustrated on a picture:
Originally I wanted to solve it with this method:
Chip the individual color channels.
Broadcast the individual color channels into the size I need.
Reshape the broadcasted result into the desired format (this is just a 3D tensor at this point).
Concatenate the individual 3D tensors into a big 4D one.
I have two problems with this approach.
Firstly, I just cannot get the reshaping right: it always gives back a reshaped tensor with the dimensionality I want, but the coefficients get shuffled. I started to experiment with the layout of the tensors, but that did not seem to help.
Secondly, this seems very tedious; I feel like there should be a more convenient way to achieve this, but I could not find any clue about it in the documentation.
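For reference, the most condensed thing I could come up with while writing this up is a reshape into a size-1 dummy dimension followed by a broadcast; a minimal sketch (assuming the default ColMajor layout, and I am not sure this is the idiomatic way):
#include <unsupported/Eigen/CXX11/Tensor>
#include <iostream>

int main() {
  Eigen::Tensor<int, 3> input(3, 100, 100);
  input.setRandom();

  // Insert a size-1 dimension after the channel axis, then broadcast it to 3.
  // With the default ColMajor layout the reshape keeps the linear order of the
  // coefficients, so input(c, x, y) ends up at reshaped(c, 0, x, y).
  Eigen::array<Eigen::Index, 4> shape{{3, 1, 100, 100}};
  Eigen::array<Eigen::Index, 4> bcast{{1, 3, 1, 1}};
  Eigen::Tensor<int, 4> output = input.reshape(shape).broadcast(bcast);

  // Every copy of a channel should now hold the same values (prints 1).
  std::cout << (output(0, 0, 42, 42) == output(0, 2, 42, 42)) << std::endl;
  return 0;
}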
I'm absolutely new to the ROS/Gazebo world; this is probably a simple question, but I cannot find a good answer.
I have a simulated depth camera (Kinect) in a Gazebo scene. After some processing, I get a point of interest in the RGB image in pixel coordinates, and I want to retrieve its 3D coordinates in the world frame.
I can't understand how to do that.
I have tried compensating for the distortions given by the CameraInfo msg. I have also tried using a PointCloud with the pcl library, retrieving the point as cloud.at(x,y).
In both cases, the coordinates are not correct (I have placed a small sphere at the coords returned by the program, to check whether they are correct or not).
Any help would be much appreciated. Thank you very much.
EDIT:
Starting from the PointCloud, I try to find the coords of the points doing something like:
point = cloud.at(xInPixel, yInPixel);
point.x = point.x + cameraPos.x;
point.y = point.y + cameraPos.y;
point.z = point.z - cameraPos.z;
but the x, y, z coords I get in point do not seem to be correct.
The camera has a pitch angle of pi/2, so it points at the ground.
I am clearly missing something.
I assume you've seen the gazebo examples for the kinect (brief, full). You can get, as topics, the raw image, raw depth, and calculated pointcloud (by setting them in the config):
<imageTopicName>/camera/color/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
Unless you need to do your own things with the image_raw for the rgb and depth frames (e.g. ML over the rgb frame & finding the corresponding depth point via the camera_infos), the pointcloud topic should be sufficient - it's the same as the pcl pointcloud in C++, if you include the right headers.
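For example, a minimal subscriber sketch using pcl_ros (the node name and the indices are placeholders; the topic is the one configured above):
#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>

typedef pcl::PointCloud<pcl::PointXYZ> PointCloud;

void cloudCallback(const PointCloud::ConstPtr& cloud) {
  // The depth camera cloud is organized (width x height), so you can index it
  // with the pixel coordinates of your point of interest.
  const pcl::PointXYZ& p = cloud->at(10, 20);   // (column, row)
  ROS_INFO("camera-frame point: %f %f %f", p.x, p.y, p.z);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "cloud_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe<PointCloud>("/camera/depth/points", 1, cloudCallback);
  ros::spin();
  return 0;
}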
Edit (in response):
There's a magical thing in ros called tf/tf2. Your pointcloud, if you look at the msg.header.frame_id, says something like "camera", indicating it's in the camera frame. tf, like the messaging system in ros, happens in the background, but it looks/listens for transformations from one frame of reference to another frame. You can then transform/query the data in another frame. For example, if the camera is mounted at a rotation to your robot, you can specify a static transformation in your launch file. It seems like you're trying to do the transformation yourself, but you can make tf do it for you; this allows you to easily figure out where points are in the world/map frame, vs in the robot/base_link frame, or in the actuator/camera/etc frame.
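A rough sketch of letting tf do that transform (the frame ids "camera_depth_optical_frame" and "world", and the node name, are assumptions; use the frame ids your setup actually publishes):
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "point_to_world");
  ros::NodeHandle nh;

  tf2_ros::Buffer tfBuffer;
  tf2_ros::TransformListener tfListener(tfBuffer);

  // Point of interest expressed in the camera frame, e.g. taken from
  // cloud.at(x, y) of the organized point cloud.
  geometry_msgs::PointStamped pt_camera;
  pt_camera.header.frame_id = "camera_depth_optical_frame";
  pt_camera.header.stamp = ros::Time(0);   // use the latest available transform
  pt_camera.point.x = 0.1;
  pt_camera.point.y = 0.0;
  pt_camera.point.z = 1.0;

  try {
    // Let tf do the camera -> world transformation instead of adding the
    // camera position by hand.
    geometry_msgs::PointStamped pt_world =
        tfBuffer.transform(pt_camera, "world", ros::Duration(1.0));
    ROS_INFO("world frame: %f %f %f",
             pt_world.point.x, pt_world.point.y, pt_world.point.z);
  } catch (const tf2::TransformException& ex) {
    ROS_WARN("%s", ex.what());
  }
  return 0;
}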
I would also look at these ros wiki questions which demo a few different ways to do this, depending on what you want: ex1, ex2, ex3
I have been trying a different pipeline, without success so far.
The idea is to use the classic pipeline (as in the Explorer example), but additionally to use the last ColorImage for the texture.
So the idea (after clicking SAVE MESH):
Save current Image as BMP
Get the current transformation [m_pVolume->GetCurrentWorldToCameraTransform(&m_worldToCameraTransform);]; let's call it M.
Transform all mesh vertices v into the last camera-space coordinate system (M * v).
Now the current m_pMapper refers to the latest frame, which we want to use [m_pMapper->MapCameraPointToColorSpace(camPoint, &colorPoint);].
In theory I should now have a texture coordinate for every point of the fusion mesh (roughly as sketched below). I want to use them to export an OBJ file (with a texture, not only per-vertex color).
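A per-vertex sketch of that step (not my exact code; v is one Vector3 vertex of the fusion mesh, colorWidth/colorHeight are the color frame dimensions, and I am assuming the Fusion Matrix4 is row-major with the translation in M41..M43):
// Bring the fusion-world vertex into the last camera space with M,
// then let the coordinate mapper project it into color space.
const Matrix4& M = m_worldToCameraTransform;
CameraSpacePoint camPoint;
camPoint.X = v.x * M.M11 + v.y * M.M21 + v.z * M.M31 + M.M41;
camPoint.Y = v.x * M.M12 + v.y * M.M22 + v.z * M.M32 + M.M42;
camPoint.Z = v.x * M.M13 + v.y * M.M23 + v.z * M.M33 + M.M43;

ColorSpacePoint colorPoint;
m_pMapper->MapCameraPointToColorSpace(camPoint, &colorPoint);

// Normalize to OBJ "vt" coordinates; V is flipped because image rows grow
// downwards while OBJ texture coordinates grow upwards.
float texU = colorPoint.X / static_cast<float>(colorWidth);
float texV = 1.0f - colorPoint.Y / static_cast<float>(colorHeight);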
What am I doing wrong?
The 3D transformation seems to be correct: when I visualize the resulting OBJ file in MeshLab I can see that the transformation is right, and the world coordinate system is equal to the latest recorded camera position.
Only the texture is not set correctly.
I would be very happy if anyone could help me. I have been trying for a long time :/
Thank you very much :)
I have a lot of lines and planes that lie around, for example, the point (0.5, 0.5, 0.5). I also have a region where they matter: a cube. The lines and planes may intersect this region or lie outside of it. Can I hide all elements, and parts of elements, that are not inside my region? Does VTK have a simple way to do this, or do I need to do it myself? I would like to write, for example, SetBounds(bounds), and have everything that isn't inside the cube disappear.
Try using vtkClipDataSet with the clip-function set to vtkBox. Finally, render the output from the vtkClipDataSet filter.
vtkNew<vtkBox> box;
box->SetBounds(.....); // set the bounds of interest.
vtkNew<vtkClipDataSet> clipper;
clipper->SetInputConnection(....); // set to your data producer
clipper->SetClipFunction(box.GetPointer());
clipper->InsideOutOn(); // keep the part inside the box (vtkBox is negative inside)
// since clipper will produce an unstructured grid, apply the following to
// extract a polydata from it.
vtkNew<vtkGeometryFilter> geomFilter;
geomFilter->SetInputConnection(clipper->GetOutputPort());
// now, this can be connected to the mapper.
vtkNew<vtkPolyDataMapper> mapper;
mapper->SetInputConnection(geomFilter->GetOutputPort());
I recently made good progress in getting vertex buffer objects to work.
So I moved on to element arrays, and I figured that with those implemented I could load vertex and face data from an obj.
I'm not too good at reading files in C++, so I wrote a Python script to parse the obj and write two separate text files, one with the vertex array and one with the face indices, and I pasted them directly into my code. That makes it about 6000 lines, but it works (it compiles without errors).
And here's what it looks like:
I think they're wrong, but I'm not sure. The order of the vertices and faces isn't changed, just extracted from the obj, because I don't have normals or textures working for buffer objects yet (I kind of do, if you look at the cube, but not really).
Here's the render code:
void Mesh_handle::DrawTri(){
glBindBuffer(GL_ARRAY_BUFFER,vertexbufferid);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,elementbufferid);
int index1=glGetAttribLocation(bound_program,"inputvertex");
int index2=glGetAttribLocation(bound_program,"inputcolor");
int index3=glGetAttribLocation(bound_program,"inputtexcoord");
glEnableVertexAttribArray(index1);
glVertexAttribPointer(index1,3,GL_FLOAT,GL_FALSE,9*sizeof(float),0);
glEnableVertexAttribArray(index2);
glVertexAttribPointer(index2,4,GL_FLOAT,GL_FALSE,9*sizeof(float),(void*)(3*sizeof(float)));
glEnableVertexAttribArray(index3);
glVertexAttribPointer(index3,2,GL_FLOAT,GL_FALSE,9*sizeof(float),(void*)(7*sizeof(float)));
glDrawArrays(GL_TRIANGLE_STRIP,0,elementcount);
//glDrawElements(GL_TRIANGLE_STRIP,elementcount,GL_UNSIGNED_INT,0);
}
My python parser which just writes the info into a file: source
The object is Ezreal from League of Legends
I'm not sure if I'm reading the faces wrong or if they're not even what I thought they were. Am I supposed to use GL_TRIANGLE_STRIP or something else? Any hints are welcome, or ask if you need more info.
Indices in .obj files are 1-based, so you have to subtract 1 from all indices in order to use them with OpenGL.
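For example, when filling the element array (faceIndices here is a placeholder for whatever container your parser produced):
// Convert the 1-based .obj indices into the 0-based indices OpenGL expects.
std::vector<unsigned int> elements(faceIndices.size());
for (size_t i = 0; i < faceIndices.size(); ++i)
    elements[i] = faceIndices[i] - 1;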
First, as Andreas stated, .obj files use 1-based indices, so you need to convert them to 0-based indices.
Second:
glDrawArrays(GL_TRIANGLE_STRIP,0,elementcount);
//glDrawElements(GL_TRIANGLE_STRIP,elementcount,GL_UNSIGNED_INT,0);
Unless you did some special work to turn the face list you were given in your .obj file into a triangle strip, you don't have triangle strips. You should be rendering GL_TRIANGLES, not strips.
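With the element buffer bound, the draw call would then look something like this (elementcount being the number of indices, as in your code):
// Draw indexed, independent triangles from the bound GL_ELEMENT_ARRAY_BUFFER.
glDrawElements(GL_TRIANGLES, elementcount, GL_UNSIGNED_INT, 0);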
From the image, your vertices are for sure messed up. It looks like you specified a stride of 9*sizeof(float) in your glVertexAttribPointer calls, but from what I can tell from your code your array is tightly packed.
glEnableVertexAttribArray(index1);
glVertexAttribPointer(index1,3,GL_FLOAT,GL_FALSE,0,0);
Also remove the stride from the color/texture coordinate attributes.