Vtk check if polydata objects intersect - c++

I'm trying to bring 2 vtkPolyData objects closer to each other without them intersecting. I would like to "test" if one is inside the other with a simple boolean function. My first thought was to use vtkBooleanOperationPolyDataFilter with the datasets as inputs and calculate the intersection and then check if the resulting PolyData object was a NULL object. This however, does not give the desired result.
The code I'm currently using looks like this:
bool Main::Intersect(double *trans)
{
    vtkSmartPointer<vtkPolyData> data1 = vtkSmartPointer<vtkPolyData>::New();
    vtkSmartPointer<vtkPolyData> data2 = vtkSmartPointer<vtkPolyData>::New();
    data1->ShallowCopy(this->ICPSource1);
    data2->ShallowCopy(this->ICPSource2);

    // This piece is just to reposition the data to the position I want to check
    for (unsigned int k = 0; k < 3; k++)
    {
        trans[k] /= 2;
    }
    translate(data2, trans);
    for (unsigned int k = 0; k < 3; k++)
    {
        trans[k] *= -1;
    }
    translate(data1, trans);

    // This is my use of the actual vtkBooleanOperationPolyDataFilter class
    vtkSmartPointer<vtkBooleanOperationPolyDataFilter> booloperator =
        vtkSmartPointer<vtkBooleanOperationPolyDataFilter>::New();
    booloperator->SetOperationToIntersection();
    booloperator->AddInputData(data1);
    booloperator->AddInputData(data2);
    booloperator->Update();

    if (booloperator->GetOutput() == NULL)
        return 0;
    else
        return 1;
}
Any help regarding this issue is highly appreciated. Also, I don't know if the "vtkBooleanOperationPolyDataFilter" class is really the best one to use, it's just something I found and thought might work.
Thanks in advance,
Xentro
EDIT: I said this doesn't give the desired result, but it does improve things. It has some kind of influence on my movement criterion (which was the point), but in the end result the datasets still intersect sometimes.

You can call PolyDataObject->GetBounds() for both your objects and compare their values. Of course, this only works if your objects first intersect near their bounding boxes, but for the intersection of simple geometries it should provide a lightweight solution. See here for an example.
Regarding the vtkBooleanOperationPolyDataFilter I can only say that I tried to use it before, too, and it did not work at all how I wanted it to. In my searches I found many other people complaining about it.
EDIT: Did you try the vtkIntersectionPolyDataFilter? The class reference can be found here.
vtkIntersectionPolyDataFilter computes the intersection between two vtkPolyData objects. The first output is a set of lines that marks the intersection of the input vtkPolyData objects. The second and third outputs are the first and second input vtkPolyData, respectively. Optionally, the two output vtkPolyData can be split along the intersection lines.
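Based on that description, the intersection test could be sketched like this (untested sketch, assumes the VTK 6+ pipeline API). Note that GetOutput() is never NULL after Update(), which is why the NULL check in the question can't work; checking whether the first output contains any intersection-line cells is a more reliable signal:

```cpp
#include <vtkSmartPointer.h>
#include <vtkPolyData.h>
#include <vtkIntersectionPolyDataFilter.h>

// Sketch: returns true if the two surfaces intersect. The first output of
// vtkIntersectionPolyDataFilter is the set of intersection lines, so an
// empty first output (no cells) means the surfaces do not intersect.
bool SurfacesIntersect(vtkPolyData *data1, vtkPolyData *data2)
{
    vtkSmartPointer<vtkIntersectionPolyDataFilter> intersector =
        vtkSmartPointer<vtkIntersectionPolyDataFilter>::New();
    intersector->SetInputData(0, data1);
    intersector->SetInputData(1, data2);
    intersector->Update();
    return intersector->GetOutput()->GetNumberOfCells() > 0;
}
```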

Related

CGAL - how to use CGAL::Polygon_mesh_processing::connected_components to convert one CGAL::Surface_mesh into many?

I am creating a mesh utility library, and one of the functionalities that I would like to include is the ability to break apart disjoint partitions of a mesh. To that end, I am trying to write a method that takes in a CGAL::Surface_mesh and returns a std::vector<CGAL::Surface_mesh>, where each element is a connected component of the input mesh.
I see that CGAL has the CGAL::Polygon_mesh_processing::connected_components() function, but that just seems to assign a label to each face indicating which component it is part of. How can I use the result of that operation to construct a new CGAL::Surface_mesh from every group of faces with the same label?
One way of doing this is to use the result of connected_components() as the input parameter for a Face_filtered_graph.
You can do something like this, I believe:
// Assumes the usual typedefs from the CGAL examples:
//   Mesh           = CGAL::Surface_mesh<...>
//   face_descriptor / faces_size_type from boost::graph_traits<Mesh>
//   FCCmap         = Mesh::Property_map<face_descriptor, faces_size_type>
//   Filtered_graph = CGAL::Face_filtered_graph<Mesh>
//   namespace PMP  = CGAL::Polygon_mesh_processing
FCCmap fccmap =
    mesh.add_property_map<face_descriptor, faces_size_type>("f:CC").first;
faces_size_type num = PMP::connected_components(mesh, fccmap);

std::vector<Mesh> meshes(num);
for (faces_size_type i = 0; i < num; ++i)
{
    // Restrict the graph to the faces of component i and copy them out.
    Filtered_graph ffg(mesh, i, fccmap);
    CGAL::copy_face_graph(ffg, meshes[i]);
}

How to slice or access a row of a Tensor?

I have a dataset stored in a 3D Tensor. I'd like to have a separate tensor for each sample, for profiling purposes. Unfortunately, I only know the brute-force method of accessing such a container:
auto tensor_dataset_map = dataset.tensor<float, 3>();
for (int sample = 0; sample < maxSamples; sample++)
    for (int time = 0; time < periodSize; time++)
        for (int feature = 0; feature < amountOfFeatures; feature++)
            cout << tensor_dataset_map(sample, time, feature);
I would love to avoid this. However, if I try, with common sense, to get all elements of the first sample (= 0):
tensor_dataset_map(0)
it is the same as
tensor_dataset_map(0,0,0)
which is of shape (1), but I need tensors of shape (1, periodSize, amountOfFeatures).
Is there an easy way to do this, or do I really have to go this unoptimized way?
I found an answer in the source code. Each Tensor has a Slice() function ("Slice this tensor along the 1st dimension"), where one passes the begin index and the (exclusive) end index of the slice along that dimension.
In other words, for iterating in my case one needs to:
cout << dataset.Slice(0, 1).tensor<float, 3>() << endl;
cout << dataset.Slice(1, 2).tensor<float, 3>() << endl;
cout << dataset.Slice(2, 3).tensor<float, 3>() << endl;
cout << dataset.Slice(3, 4).tensor<float, 3>() << endl;
...
But given the lack of other documentation, I suspect this might get deprecated.

ITK get pixels list from itk::LabelObject

I have a problem accessing the list of pixels of an itk::LabelObject.
This LabelObject is obtained with an itk::OrientedBoundingBoxLabelObject (https://github.com/blowekamp/itkOBBLabelMap). The original 3D image is a CBCT DICOM, inside which I'm looking for the position and orientation of a small rectangular marker.
Here is the code which leads to the itk::LabelObject:
typedef short LabelPixelType;
typedef itk::LabelMap<LabelObjectType> LabelMapType;
typedef itk::OrientedBoundingBoxLabelMapFilter<LabelMapType> OBBLabelMapFilter;
typename OBBLabelMapFilter::Pointer toOBBLabelMap = OBBLabelMapFilter::New();
typename ToLabelMapFilterType::Pointer toLabelMap = ToLabelMapFilterType::New();
toOBBLabelMap->SetInput(toLabelMap->GetOutput());
toOBBLabelMap->Update();
LabelObjectType* labelObject = toOBBLabelMap->GetOutput()->GetNthLabelObject(idx);
OBBSize = labelObject->GetOrientedBoundingBoxSize();
I guess that accessing the pixel coordinates is possible, as they must be accessed somehow in order to calculate the bounding boxes, but I haven't managed to do it so far. I then tried to convert the itk::LabelMap (or the LabelObject directly) to a binary image, where I could get to the pixels more easily, and to convert and display this markerBinaryImage with VTK, with no more results (I get a black image).
typedef itk::LabelMapToBinaryImageFilter<LabelMapType, ImageType> LabelMapToBinaryImageFilterType;
LabelMapToBinaryImageFilterType::Pointer labelImageConverter = LabelMapToBinaryImageFilterType::New();
labelImageConverter->SetInput(toLabelMap->GetOutput());
labelImageConverter->Update();
ImageType::Pointer markerBinaryImage = labelImageConverter->GetOutput();
Does anyone have an idea about how to get to this pixels list?
You may do it like this:
for (unsigned int i = 0; i < filter->GetOutput()->GetNumberOfLabelObjects(); ++i)
{
    // Obtain the ith label object
    FilterType::OutputImageType::LabelObjectType* labelObject =
        filter->GetOutput()->GetNthLabelObject(i);
    // Then, you may obtain the pixels of each label object like this:
    for (unsigned int pixelId = 0; pixelId < labelObject->Size(); pixelId++)
    {
        std::cout << labelObject->GetIndex(pixelId);
    }
}
This info was obtained from the Insight Journal article "Label object representation and manipulation with ITK". There, it says that you may obtain the bounding boxes directly using the Region attribute. I did not find a way to obtain a region in itk::LabelObject; however, the inheritance diagram of itk::LabelObject is helpful here.
If your label object is of type itk::ShapeLabelObject, you can use the GetBoundingBox() method to get the bounding box. It has many other methods worth looking at.
I tried then to convert the itk::LabelMap (...) with no more results (I get a black image).
A piece of advice here: don't try this complicated stuff to verify other complicated stuff. You may be failing somewhere else in the chain. Instead, read the pixels as I said before and check out the data. Good luck!

How to randomly choose sample points that maximize space occupation?

I would like to generate sample points that can randomly fill/cover a space (like in the attached image). I think there is a method called "quasi-random" that can generate such sample points. However, it's a little bit beyond my knowledge. Can someone make suggestions or help me find a library that can do this? Or suggest how to start writing such a program?
In the image, 256 sample points are applied to the given space, placed at random positions to cover the whole space.
Update:
I just tried some code from the Halton Quasi-random Sequence library and compared it with the pseudo-random result posted by a friend below. Halton's method looks better, in my opinion. I would like to share some results below:
The code which I wrote is
#include "halton.hpp"
#include "opencv2/opencv.hpp"
int main()
{
int m_dim_num = 2;
int m_n = 50;
int m_seed[2], m_leap[2], m_base[2];
double m_r[100];
for (int i = 0; i < m_dim_num; i++)
{
m_seed[i] = 0;
m_leap[i] = 1;
m_base[i] = 2+i;
}
cv::Mat out(100, 100, CV_8UC1);
i4_to_halton_sequence( m_dim_num, m_n, 0, m_seed, m_leap, m_base, m_r);
int displaced = 100;
for (int i = 0; i < 100; i=i+2)
{
cv::circle(out, cv::Point2d((m_r[i])*displaced, (m_r[i+1])*displaced), 1, cv::Scalar(0, 255, 0), 1, 8, 0);
}
cv::imshow("test", out);
cv::waitKey(0);
return 0;
}
As I am a little familiar with OpenCV, I wrote this code to plot on an OpenCV matrix (Mat). i4_to_halton_sequence() is the function from the library mentioned above.
The result is not perfect, but it might be usable for my work somehow. Does anyone have another idea?
I am going to give an answer that will seem half-assed. However, this topic has been studied extensively in the literature, so I will just refer you to some summaries from Wikipedia and other places online.
What you want is also called low-discrepancy sequence (or quasi-random, as you pointed out). You can read more about it here: http://en.wikipedia.org/wiki/Low-discrepancy_sequence. It's useful for a number of things, which includes numerical integration and, more recently, simulating retinal ganglion mosaic.
There are many ways to generate low-discrepancy sequences (or pseudo quasi random sequences :p). Some of these are in ACM Collected Algorithms (http://www.netlib.org/toms/index.html).
The most common of these, I think, is the Sobol sequence (algorithm 659 from the ACM collection). You can get some details on it here: http://en.wikipedia.org/wiki/Sobol_sequence
For the most part, unless you are really into it, that stuff looks pretty scary. For quick result, I would use GNU's GSL (GNU Scientific Library): http://www.gnu.org/software/gsl/
This library includes code to generate quasi-random sequences (http://www.gnu.org/software/gsl/manual/html_node/Quasi_002dRandom-Sequences.html) including Sobol sequence (http://www.gnu.org/software/gsl/manual/html_node/Quasi_002drandom-number-generator-examples.html).
If you're still stuck, I can paste some code here, but you're better off digging into GSL.
Well here's another way to do quasi-random that covers the entire space.
Since you have 256 points to use, you can start by plotting those points as a 16x16 grid.
Then apply some function that gives a random offset to each point (say, −2 to +2 on each point's x and y coordinates).
You could create equidistant points (all points have same distance to their neighbors) and then, in a second step, move each point randomly a bit so that they appear 'random'.
The second idea I have is:
1. Start with one area.
2. Create a random point P rand about the 'middle' of your area.
3. Divide the area into 4 areas by that point. P is the upper right corner of the lower left subarea, the upper left corner of the lower right area and so on.
4. Repeat steps 2..4 for all 4 sub areas. Of course, not forever, but until you're satisfied.
This algorithm ensures that each 'hole' (i.e. each new subarea) gets filled with a point.
Update: Your initial area should be twice as large as your area, because of step (2). This ensures having points at the edges and corners as well.
This is called a "low discrepancy sequence". The linked Wikipage explains how you can generate them.
But I suspect you already knew this, as your image is very similar to the (2, 3) Halton sequence example from Wikipedia.
You just need the library rand() function:
#include <stdlib.h>
#include <time.h>

unsigned int N = 256; // number of points
int RANGE_X = 100;    // x range to put sample points in
int RANGE_Y = 100;

void PutSamplePoint(int x, int y)
{
    // some of your code putting a sample point on the field
}

int main()
{
    srand((unsigned)time(0)); // initialize random generator - uses current time as seed
    for (unsigned int i = 0; i < N; i++)
    {
        int x = rand() % RANGE_X; // returns random value in range [0, RANGE_X)
        int y = rand() % RANGE_Y;
        PutSamplePoint(x, y);
    }
    return 0;
}

Scanning through a tilemap and checking properties for each tile

How does one iterate through a tilemap and check each tile?
Is there a correct way to do this? Is there a built-in function in cocos2d to check a tile?
Or could it be done by, e.g., taking the tile size set when creating the map, making a nested for loop, taking (x, y) for the middle of the first tile, and iterating by adding the tile size to x in the inner loop and to y in the outer loop?
I am wondering if there is a built-in, more performance-aware approach.
Thanks
I think you might be able to do it using a for loop and CGPoints. For example's sake, I'm going to get each tile's color and store the matching tiles in an array, I guess:
CGPoint myPt;
NSMutableArray *tilesofGray = [NSMutableArray array]; // the original never allocated this
for (int x = 0; x < tilemapLength; x++)
{
    for (int y = 0; y < tilemapHeight; y++)
    {
        myPt.x = x;
        myPt.y = y;
        if ([[[tilemap layerNamed:@"background"] tileAt:myPt] getColor] == Grey)
        {
            [tilesofGray addObject:[[tilemap layerNamed:@"background"] tileAt:myPt]];
        }
    }
}
Is this for a game, for something like collision detection, or simply for rendering based on tile type?
Your question here is really ambiguous. Please be specific about what you want. The 3rd sentence in particular would make more sense if you explained what you need.
But I'll try to answer based on the title alone...
How big is the tileset? If it's not very big, brute force may be perfectly fine.
If performance is a concern/issue, or if the tileset is large and not all tiles are ever drawn within the screen at any given time, you need to do scene management of some sort.
Scene management:
I think there is a technical term/phrase for this, but basically, based on some (x, y) point on the tileset (i.e. the matrix), you can determine (by a function) which tiles you will need to iterate through. It should be fun to figure out, as it's presumably a 2D array.