I have a dataset stored in a 3D Tensor. I'd like to have a separate tensor for each sample, for profiling purposes. Unfortunately, the only way I know to access such a container is brute force:
auto tensor_dataset_map = dataset.tensor<float, 3>();
for (int sample = 0; sample < maxSamples; sample++)
    for (int time = 0; time < periodSize; time++)
        for (int feature = 0; feature < amountOfFeatures; feature++)
            cout << tensor_dataset_map(sample, time, feature);
I would love to avoid this. However, if I naively try to get all elements for the first sample (= 0):
tensor_dataset_map(0)
it is the same as
tensor_dataset_map(0,0,0)
which is of shape (1), whereas I need tensors of shape (1, periodSize, amountOfFeatures).
Is there an easy way to do this, or do I really have to go this unoptimized way?
I found an answer in the source code. Each Tensor has a function Slice(), which slices the tensor along the first dimension. It takes two parameters: the start index of the slice and the (exclusive) end index.
In other words, to iterate over the samples in my case one needs to:
cout << dataset.Slice(0, 1).tensor<float, 3>() << endl;
cout << dataset.Slice(1, 2).tensor<float, 3>() << endl;
cout << dataset.Slice(2, 3).tensor<float, 3>() << endl;
cout << dataset.Slice(3, 4).tensor<float, 3>() << endl;
...
But given the lack of other documentation, I suspect this might get deprecated.
Sorry for the bad title, but I honestly can't think of a better one (open to suggestions).
I have a big grid (1000*1000*1000).
for (int k = 0; k < dims.nz; k++)
{
    for (int i = 0; i < dims.nx; i++)
    {
        for (int j = 0; j < dims.ny; j++)
        {
            if (inputLabel->evalReg(i, j, k) == 0)
            {
                sum = sum + anotherField->evalReg(i, j, k);
            }
        }
    }
}
I go through all grid points to find which ones have the value 0 in my label field, and I sum up the corresponding values of another field.
After this I want to set all the points that I detected above to a certain value.
Would it be faster to basically run the same loop again (this time setting values instead of reading them)? Or should I write all the positions I detect into separate vectors (which would have to grow at every step of the loop in which something is detected) and then simply loop like this:
for (int p = 0; p < size_vec_1; p++)
{
    anotherField->set(vec_1[p], vec_2[p], vec_3[p], value); // some chosen value
}
The point is that I do not know in advance how much of the grid will be affected by my routine, since the data varies. It might be half of the grid or something completely different. Can I make a general estimate of the speed of the two methods, or does it depend solely on the distribution of my values?
The point is that I do not know how much of the grid will be affected by my routine due to different data.
Here's a trick which may work: sample inputLabel randomly to estimate how many entries are 0. If only a few, go the "put indices into a vector" way; if a lot, go the "scan the array again" way.
It needs fine-tuning for a specific computer: what the threshold between the two cases should be, how many samples to take (not too many, or the approximation itself takes too much time, but not too few, or the approximation is poor), etc.
Bonus trick: take cache-line-aligned and cache-line-sized samples. The approximation will take about the same amount of time (because the scan is memory bound), but it will be more accurate.
I'm trying to bring 2 vtkPolyData objects closer to each other without them intersecting. I would like to "test" whether one is inside the other with a simple boolean function. My first thought was to use vtkBooleanOperationPolyDataFilter with the datasets as inputs, calculate the intersection, and then check whether the resulting PolyData object was NULL. This, however, does not give the desired result.
The code I'm currently using looks like this:
bool Main::Intersect(double *trans)
{
    vtkSmartPointer<vtkPolyData> data1 = vtkSmartPointer<vtkPolyData>::New();
    vtkSmartPointer<vtkPolyData> data2 = vtkSmartPointer<vtkPolyData>::New();
    data1->ShallowCopy(this->ICPSource1);
    data2->ShallowCopy(this->ICPSource2);

    // This piece just repositions the data to the position I want to check
    for (unsigned int k = 0; k < 3; k++)
    {
        trans[k] /= 2;
    }
    translate(data2, trans);
    for (unsigned int k = 0; k < 3; k++)
    {
        trans[k] *= -1;
    }
    translate(data1, trans);

    // This is my actual use of the vtkBooleanOperationPolyDataFilter class
    vtkSmartPointer<vtkBooleanOperationPolyDataFilter> booloperator =
        vtkSmartPointer<vtkBooleanOperationPolyDataFilter>::New();
    booloperator->SetOperationToIntersection();
    booloperator->AddInputData(data1);
    booloperator->AddInputData(data2);
    booloperator->Update();

    return booloperator->GetOutput() != NULL;
}
Any help regarding this issue is highly appreciated. Also, I don't know whether vtkBooleanOperationPolyDataFilter is really the best class to use; it's just something I found and thought might work.
Thanks in advance,
Xentro
EDIT: I said this doesn't give the desired result, but it does improve my result. It has some influence on my movement criterion (which was the point), but in the end the datasets still sometimes intersect.
You can call PolyDataObject->GetBounds() on both your objects and compare the values. Of course, this only works if your objects first intersect at their bounding boxes, but for intersections of simple geometries it provides a lightweight solution. See here for an example.
Regarding the vtkBooleanOperationPolyDataFilter I can only say that I tried to use it before, too, and it did not work at all how I wanted it to. In my searches I found many other people complaining about it.
EDIT: Did you try the vtkIntersectionPolyDataFilter? The class reference can be found here.
vtkIntersectionPolyDataFilter computes the intersection between two vtkPolyData objects. The first output is a set of lines that marks the intersection of the input vtkPolyData objects. The second and third outputs are the first and second input vtkPolyData, respectively. Optionally, the two output vtkPolyData can be split along the intersection lines.
In my game, an ObjectManager class manages all of the game's objects and holds them in a list.
I use two nested for statements to check each object's collisions:
for (int i = 0; i < objectnum; ++i)
{
    for (int j = 0; j < objectnum; ++j)
    {
        AABB_CollisionCheck();
    }
}
But when there are many objects in the game, the FPS drops (80 objects == 40 frames).
Presumably this is because my collision check method is inefficient:
with n objects, my method performs n^2 checks.
Can you give me some tips for optimizing this collision check?
I want to reduce the looping needed to check each object collision.
Also, what difference does using a callback function make for collision checking?
Does a callback have any speed advantage?
P.S.
Thanks a lot for reading my question, and please excuse my English.
As is often the case, knowing an extra bit of vocabulary does wonders here for finding a wealth of information. What you are looking for is called the broad phase of collision detection, which is any method that prevents you from having to look at each pair of objects and thus hopefully avoid the n^2 complexity.
One popular method is spatial hashing, where you subdivide your space into a grid of cells and assign each object to the cells that contain it. Then you only check objects in each cell against other objects from that one and the neighboring cells.
Another method is called Sweep and Prune, which uses the fact that objects usually don't move much from one frame to the next.
You can find more information in this question: Broad-phase collision detection methods?
can you give me some tips about this Collision Check Optimizing?
Well, firstly you could optimize your second loop like this:
for (int i = 0; i < objectnum; i++)
{
for (int j = i+1; j < objectnum; j++)
{
AABB_collisioncheck();
}
}
This checks each pair of objects only once: if A is checked against B, you won't also check B against A.
I have a programming issue regarding the extraction of a subimage (submatrix) from a bigger image (matrix). I have two points (the upper and lower bounds of the subimage I want to extract) and I want to extract the subimage from the bigger one based on these points. But I can't find out how to do this in C/C++.
I know it's very easy to do in MATLAB. Suppose these two points are (x_max, y_max) and (x_min, y_min). To extract the subimage I just need to write:
(MATLAB CODE) -> small_image = big_image(x_min:x_max, y_min:y_max);
But in C I can't use an interval of indices with : as I do in MATLAB. Has anybody here faced this problem before?
If you are doing image processing in C/C++, you should probably use OpenCV.
The cv::Mat class can do this using a Region Of Interest (ROI).
In straight c++, you'd use a loop.
// small_im must be allocated to (x_max - x_min + 1) rows and
// (y_max - y_min + 1) columns: the MATLAB range is inclusive at both ends.
for (int i = 0; i <= x_max - x_min; i++)
{
    for (int j = 0; j <= y_max - y_min; j++)
    {
        small_im[i][j] = big_im[x_min + i][y_min + j];
    }
}
I am trying to write a bag-of-features image recognition system. One step in the algorithm is to take a large number of small image patches (say 7x7 or 11x11 pixels) and try to cluster them into groups that look similar. I get my patches from an image, turn them into gray-scale floating-point image patches, and then try to get cvKMeans2 to cluster them for me. I think I am having problems formatting the input data such that KMeans2 returns coherent results. I have used KMeans for 2D and 3D clustering before, but 49D clustering seems to be a different beast.
I keep getting garbage values in the returned clusters vector, so this is obviously a garbage-in/garbage-out type of problem. Additionally, the algorithm runs far faster than I think it should for such a huge data set.
In the code below, the straight memcpy is only my latest attempt at getting the input data into the correct format; I spent a while using the built-in OpenCV functions, but that is difficult when your base type is CV_32FC(49).
Can OpenCV 1.1's KMeans algorithm support this sort of high dimensional analysis?
Does someone know the correct method of copying from images to the K-Means input matrix?
Can someone point me to a free, Non-GPL KMeans algorithm I can use instead?
This isn't the best code as I am just trying to get things to work right now:
std::vector<int> DoKMeans(std::vector<IplImage *>& chunks)
{
    // the size of one image patch, CELL_SIZE = 7
    int chunk_size = CELL_SIZE * CELL_SIZE * sizeof(float);

    // create the input data; CV_32FC(49) is a 7x7 float object (I think)
    CvMat* data = cvCreateMat(chunks.size(), 1, CV_32FC(49));

    // Create a temporary buffer to hold our data,
    // which we'll copy into the matrix for KMeans
    int rdsize = chunks.size() * CELL_SIZE * CELL_SIZE;
    float* rawdata = new float[rdsize];

    // Go through each image chunk and copy the
    // pixel values into the raw data array.
    vector<IplImage*>::iterator iter;
    int k = 0;
    for (iter = chunks.begin(); iter != chunks.end(); ++iter)
    {
        for (int i = 0; i < CELL_SIZE; i++)
        {
            for (int j = 0; j < CELL_SIZE; j++)
            {
                CvScalar val = cvGet2D(*iter, i, j);
                rawdata[k] = (float)val.val[0];
                k++;
            }
        }
    }

    // Copy the data into the CvMat for KMeans.
    // I have tried various methods, but this is just the latest.
    memcpy(data->data.ptr, rawdata, rdsize * sizeof(float));

    // Create the output array
    CvMat* results = cvCreateMat(chunks.size(), 1, CV_32SC1);

    // Do KMeans
    int r = cvKMeans2(data, 128, results,
                      cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 1000, 0.1));

    // Copy the grouping information to our output vector
    vector<int> retVal;
    for (int y = 0; y < chunks.size(); y++)
    {
        CvScalar cvs = cvGet1D(results, y);
        int g = (int)cvs.val[0];
        retVal.push_back(g);
    }

    return retVal;
}
Thanks in advance!
Though I'm not familiar with "bag of features", have you considered using feature points like corner detectors and SIFT?
You might like to check out http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/ for another open source clustering package.
Using memcpy like this seems suspect, because when you do:
int rdsize = chunks.size()*CELL_SIZE*CELL_SIZE;
If CELL_SIZE and chunks.size() are very large, rdsize becomes very large; if the product exceeds the largest value an int can hold, it overflows and you have a problem.
Are you wanting to change "chunks" in this function?
I'm guessing that you don't as this is a K-means problem.
So try passing by reference to const here. (And generally speaking this is what you will want to be doing)
so instead of:
std::vector<int> DoKMeans(std::vector<IplImage *>& chunks)
it would be:
std::vector<int> DoKMeans(const std::vector<IplImage *>& chunks)
Also, in this case it is better to use static_cast than the old C-style casts (for example, static_cast<float>(variable) as opposed to (float)variable).
Also you may want to delete "rawdata":
float * rawdata = new float[rdsize];
can be deleted with:
delete[] rawdata;
otherwise you may be leaking memory here.