OpenCV function transpose() giving wrong results, memory leak? - C++

I have a matrix of integer data type with dimensions 100 x 7000, and I want to transpose it. I used the transpose() function from the OpenCV library, but it gives incorrect results: most of the values become very large floating-point numbers that are not present in the original matrix. Here is my code:
cv::Mat data; //data matrix with integer values, dimension is 100 x 7000
cv::Mat data_tp = cv::Mat(data.cols, data.rows, CV_32F);
cv::transpose(data, data_tp);
I think this might be a memory leak or some sort of memory mismanagement, because this is just a part of a larger codebase. Any tips regarding memory management, or has anyone else faced this issue?

cv::Mat data; //data matrix with integer values, dimension is 100 x 7000
// here are 2 problems:
// - you never need to pre-allocate the result.
// - you try to transpose an int Mat into a float one.
cv::Mat data_tp = cv::Mat(data.cols, data.rows, CV_32F);
cv::transpose(data, data_tp);
// instead, just use:
cv::Mat data_tp = data.t();
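Note that cv::transpose does not convert types; the destination is recreated with the source type, so pre-allocating a CV_32F matrix has no effect. If a floating-point transpose is actually what you want, convert first and then transpose. A minimal sketch (treating the int data as CV_32S is an assumption):
cv::Mat data;                   // integer data, e.g. CV_32S, 100 x 7000
cv::Mat data_f;
data.convertTo(data_f, CV_32F); // explicit int -> float conversion
cv::Mat data_tp = data_f.t();   // 7000 x 100, CV_32F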

Related

C++ OpenCV use vector<Point> as index of a matrix

I have a matrix img (480*640 pixels, 64-bit float) to which I apply a complex mask. After this, I need to multiply the matrix by a value, but to save time I want to do this multiplication only on the non-zero elements. Right now the multiplication is too slow, because I have to repeat the operation 2000 times on 2000 different matrices, but always with the same mask. So I found the indices (on the x/y axes) of the non-zero pixels, which I keep in a vector of Point. But I don't manage to use this vector to do the multiplication only on the pixels it indexes.
Here is an example (with a simple mask) to understand my problem :
Mat img_temp(480, 640, CV_64FC1);
Mat img = img_temp.clone();
Mat mask = Mat::ones(img.size(), CV_8UC1);
double value = 3.56;
// Apply mask
img_temp.copyTo(img, mask);
// Finding non zero elements
vector<Point> nonZero;
findNonZero(img, nonZero);
// Previous multiplication (long because on all pixels)
Mat result = img.clone()*value;
// What I wish to do : multiplication only on non-zero pixels (not functional)
Mat result = Mat::zeros(img.size(), CV_64FC1);
result.at<int>(nonZero) = img.at(nonZero).clone() * value
What is tricky is that my pixels are not on a range (for example pixels 3, 4 and 50, 51 on a line).
Thank you in advance.
I would suggest using Mat::convertTo.
Basically, for the parameter alpha, which is the scaling factor, use the value you want to multiply by (3.56 in your case). Make sure that the Mat is of type CV_32F or CV_64F.
This will be faster than finding all non-zero pixels, saving their coordinates in a Vector and iterating (it was faster for me in Java).
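A minimal sketch of that approach, using the img and scaling value from the question: since zero pixels stay zero after scaling, no explicit index handling is needed.
cv::Mat result;
img.convertTo(result, CV_64FC1, 3.56); // result = img * 3.56, element-wise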
Hope it helps!
Constructing a vector of points will also increase computation time. I think you should consider iterating over all pixels and multiplying only if the pixel is not equal to zero.
Iterating will be faster if you access the matrix as raw data.
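A minimal sketch of that idea, assuming a continuous CV_64FC1 matrix and in-place multiplication:
CV_Assert(img.type() == CV_64FC1 && img.isContinuous());
double* p = img.ptr<double>();
const size_t total = img.total();
for (size_t i = 0; i < total; ++i)
    if (p[i] != 0.0)
        p[i] *= value; // touch only the non-zero pixels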
If you do
Mat result = img*value;
instead of
Mat result = img.clone()*value;
the speed will be almost 10 times as fast.
I have also tested your suggestion with the vector, but this is even slower than your first solution.
Below is the code I used to test your first suggestion:
cv::Mat multMask(cv::Mat &img, std::vector<cv::Point> mask, double fact)
{
    if (img.type() != CV_64FC1) throw "invalid format";

    cv::Mat res = cv::Mat::zeros(img.size(), img.type());
    int iLen = (int)mask.size();
    for (int i = 0; i < iLen; i++)
    {
        cv::Point &p = mask[i];
        ((double*)(res.data + res.step.p[0] * p.y))[p.x] =
            ((double*)(img.data + img.step.p[0] * p.y))[p.x] * fact;
    }
    return res;
}

PCA output eigenvectors and eigenvalues

I have created a PCA solution in Matlab, which works. I am now in the middle of converting it to C++, using OpenCV's cv::PCA class. I found a link showing that you can extract the mean, eigenvalues and eigenvectors using:
// Perform a PCA:
cv::PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, num_components);
// And copy the PCA results:
cv::Mat mean = pca.mean.clone();
cv::Mat eigenvalues = pca.eigenvalues.clone();
cv::Mat eigenvectors = pca.eigenvectors.clone();
This compiles and runs. But when I want to use the values and look into their size and allocation, I get the size of the input data, which seems odd, i.e.
cv::Size temp= eigenvectors.size();
//temp has the same size as data.size();
In my mind the eigenvectors should be defined by num_components, and in my 3D space should only be 3x3 in size, i.e. 9 elements. Can anybody explain the reasoning behind the data sizes of pca.x.clone()?
Also, what is the correct way of doing matrix operations in OpenCV and C++? Based on the documentation, it seems you can use operators with cv::Mat. Using the above extraction of the PCA information, can you do:
cv::Mat Z = data - mean; //according to documentation http://stackoverflow.com/questions/10936099/matrix-multiplication-in-opencv
cv::Mat res = Z*eigenvectors;
It crashed at runtime in my tests, probably because of the issue with the misinterpretation of the size.
Question
The basic question is how to use the pca opencv function correctly?
Edit
Another way that I have tested is to use the following code:
cv::PCA pca_analysis(mat, cv::Mat(), CV_PCA_DATA_AS_ROW, num_components);
cv::Point3d cntr = cv::Point3d(static_cast<double>(pca_analysis.mean.at<double>(0, 0)),
                               static_cast<double>(pca_analysis.mean.at<double>(0, 1)),
                               static_cast<double>(pca_analysis.mean.at<double>(0, 2)));
// Store the eigenvalues and eigenvectors
vector<cv::Point3d> eigen_vecs(num_components);
vector<double> eigen_val(num_components);
for (int i = 0; i < num_components; ++i)
{
    eigen_vecs[i] = cv::Point3d(pca_analysis.eigenvectors.at<double>(0, i * num_components),
                                pca_analysis.eigenvectors.at<double>(0, i * num_components + 1),
                                pca_analysis.eigenvectors.at<double>(0, i * num_components + 2));
    eigen_val[i] = pca_analysis.eigenvalues.at<double>(0, i);
}
When I compare the results from the above code with my Matlab implementation, the cntr seems plausible (within a small percentage difference). But the eigenvalues and eigenvectors are just zero when I look at them in the debugger. This goes back to my original question of how to extract and understand PCA's output.
Can anybody clarify what I am missing?
I came across your question when searching for how to access the eigenvalues and eigenvectors.
I personally needed to see the full information and would expect that myself, so it's possible that the developers considered my use-case more likely than yours.
In any case, as they are sorted, you can take the first num_components eigenvalues/eigenvectors as being the information you are after.
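A minimal sketch of slicing out the leading components, assuming the data is stored one sample per row and the PCA was computed in double precision:
cv::PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, num_components);
// eigenvectors are stored one per row, sorted by decreasing eigenvalue
cv::Mat top_vecs = pca.eigenvectors.rowRange(0, num_components);
cv::Mat top_vals = pca.eigenvalues.rowRange(0, num_components);
// project the mean-centred data onto the leading components
cv::Mat Z = data - cv::repeat(pca.mean, data.rows, 1);
cv::Mat projected = Z * top_vecs.t(); // equivalent to pca.project(data)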
Also, from your perspective, i.e. if it worked as you expect it to, think about the converse: if you wanted to see the full info, how would you? The developers would have to implement a new parameter...

When I use the OpenCV function cvNorm(image, NULL, CV_L2), it returns an abnormal result. Why?

This is the OpenCV code:
int main()
{
    IplImage* image = cvLoadImage("C:\\boat.png", CV_LOAD_IMAGE_GRAYSCALE);
    cout << "1-norm is : " << cvNorm(image, NULL, CV_L1) << endl;
    cout << "2-norm is : " << cvNorm(image, NULL, CV_L2) << endl; // the result is 6000+, it's too big and abnormal!
    return 0;
}
The L2 norm result is very big, 6000+; however, the Matlab answer is about 229, as below.
This is the matlab code:
>> norm(image)
ans =
229.7975
Why?
On the contrary, a 6000+ norm for a grayscale image looks normal. Grayscale image values range from 0 to 255, so depending on the size of the image, even for images as small as 64x64 you may get L2 = sqrt(sum(image.^2)) (not actual code) in the thousands, tens of thousands or more.
More interesting is why norm(image) in Matlab is so low. norm does not accept uint8 vectors, so somewhere between loading the image and calculating its norm there is a data conversion that may also have the side effect of changing the absolute values in image, and subsequently its norm. Note also that for a matrix argument Matlab's norm(X) returns the induced 2-norm (the largest singular value), not the element-wise L2 norm that cvNorm computes, so the two calls do not measure the same quantity in the first place.
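As a back-of-the-envelope check of the scale (a sketch using the modern C++ API, assuming an 8-bit grayscale image):
// For a 64x64 image with every pixel equal to 128:
//   L2 = sqrt(64 * 64 * 128^2) = 64 * 128 = 8192
cv::Mat flat(64, 64, CV_8UC1, cv::Scalar(128));
double l2 = cv::norm(flat, cv::NORM_L2); // 8192, so thousands are expected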

How to use opencv flann::Index?

I have some problems with OpenCV's flann::Index.
I'm creating the index:
Mat samples = Mat::zeros(vfv_net_quie.size(), 24, CV_32F);
for (int i = 0; i < vfv_net_quie.size(); i++)
{
    for (int j = 0; j < 24; j++)
    {
        samples.at<float>(i, j) = (float)vfv_net_quie[i].vfv[j];
    }
}
cv::flann::Index flann_index(
    samples,
    cv::flann::KDTreeIndexParams(4),
    cvflann::FLANN_DIST_EUCLIDEAN
);
flann_index.save("c:\\index.fln");
After that I'm trying to load it and find the nearest neighbours:
cv::flann::Index flann_index(
    Mat(),
    cv::flann::SavedIndexParams("c:\\index.fln"),
    cvflann::FLANN_DIST_EUCLIDEAN
);
cv::Mat resps(vfv_reg_quie.size(), K, CV_32F);
cv::Mat nresps(vfv_reg_quie.size(), K, CV_32S);
cv::Mat dists(vfv_reg_quie.size(), K, CV_32F);
flann_index.knnSearch(sample,nresps,dists,K,cv::flann::SearchParams(64));
And I get an access violation in miniflann.cpp at the line:
((IndexType*)index)->knnSearch(_query, _indices, _dists, knn,
(const ::cvflann::SearchParams&)get_params(params));
Please help
You should not load the flann file into a Mat(), as that Mat is where the index data is expected to live. A temporary object is destroyed right after the constructor is called; that's why the index isn't pointing anywhere useful when you call knnSearch().
I tried the following:
cv::Mat indexMat;
cv::flann::Index flann_index(
    indexMat,
    cv::flann::SavedIndexParams("c:\\index.fln"),
    cvflann::FLANN_DIST_EUCLIDEAN
);
resulting in:
Reading FLANN index error: the saved data size (100, 64) or type (5) is different from the passed one (0, 0), 0
which means that the matrix has to be initialized with the correct dimensions (which seems very stupid to me, as I don't necessarily know how many elements are stored in my index).
cv::Mat indexMat(samples.size(), CV_32FC1);
cv::flann::Index flann_index(
    indexMat,
    cv::flann::SavedIndexParams("c:\\index.fln"),
    cvflann::FLANN_DIST_EUCLIDEAN
);
does the trick.
The accepted answer is somewhat unclear and misleading about why the input matrix in the cv::flann::Index constructor must have the same dimensions as the matrix used to generate the saved index. I'll elaborate on #Sau's comment with an example.
Suppose a KDTreeIndex was generated using a cv::Mat sample as input, and then saved. When you load it, you must provide the same sample matrix to regenerate it, something like (using the templated GenericIndex interface):
cv::Mat sample(sample_num, sample_size, ... /* other params */);
cv::flann::SavedIndexParams index_params("c:\\index.fln");
cv::flann::GenericIndex<cvflann::L2<float>> flann_index(sample, index_params);
L2 is the usual Euclidean distance (other types can be found in opencv2/flann/dist.h).
Now the index can be used as shown below to find the K nearest neighbours of a query point:
std::vector<float> query(sample_size);
std::vector<int> indices(K);
std::vector<float> distances(K);
flann_index.knnSearch(query, indices, distances, K, cv::flann::SearchParams(64));
The vector indices will contain the locations of the nearest neighbours in the matrix sample, which was used in the first place to generate the index. That's why you need to load the saved index with the very matrix used to generate it; otherwise the returned vector will contain indices pointing to meaningless "nearest neighbours".
In addition you get a distances vector containing how far the found neighbours are from your query point, which you can later use to perform, for example, inverse distance weighting.
Please also note that sample_size has to match between the sample matrix and the query point.

OpenCV image array, 4D matrix

I am trying to store an IPL_DEPTH_8U, 3-channel image into an array so that I can keep 100 images in memory.
To initialise my 4D array I used the following code (rows,cols,channel,stored):
int size[] = { 324, 576, 3, 100 };
CvMatND* cvImageBucket = cvCreateMatND(4, size, CV_8U); // 4 dimensions
I then created a matrix and converted the image into the matrix
CvMat *matImage = cvCreateMat(Image->height,Image->width,CV_8UC3 );
cvConvert(Image, matImage );
How would I access the CvMatND to copy the CvMat into it at the position of stored?
e.g. cvImageBucket(:,:,:,0) = matImage; // copied first image into array
You've tagged this as both C and C++. If you want to work in C++, you could use the (in my opinion) simpler cv::Mat structure to store each of the images, and then use these to populate a vector with all the images.
For example:
std::vector<cv::Mat> imageVector;
cv::Mat newImage;
newImage = getImage(); // where getImage() returns the next image,
                       // or an empty cv::Mat() if there are no more images
while (!newImage.empty())
{
    // Add image to vector
    imageVector.push_back(newImage);
    // get next image
    newImage = getImage();
}
I'm guessing something similar to:
// for the i-th matImage:
memcpy((char*)cvImageBucket->data.ptr + i * size[0] * size[1] * size[2],
       (char*)matImage->data.ptr,
       size[0] * size[1] * size[2]);
Although I agree with #Chris that it is best to use vector<Mat> rather than a 4D matrix, this answer is just meant as a reference for those who really need to use 4D matrices in OpenCV (even though it is a largely unsupported, undocumented and unexplored feature, with very little about it available online, yet it works just fine!).
So, suppose you filled a vector<Mat> vec with 2D or 3D data, which can be CV_8U, CV_32F, etc.
One way to create a 4D matrix is
vector<int> dims = { (int)vec.size(), vec[0].rows, vec[0].cols };
// wraps the existing pixel data without copying; only valid if the
// matrices in vec happen to occupy one continuous block of memory
Mat m(dims, vec[0].type(), vec[0].data);
However, this method fails when the vector's data is not continuous, which is typically the case for big matrices. If you do this with discontinuous data, you will get a segmentation fault or bad-access error when you try to use the matrix (i.e. copying, cloning, etc.). To overcome this issue, you can copy the matrices of the vector one by one into the 4D matrix as follows:
Mat m2(dims, vec[0].type());
for (size_t i = 0; i < vec.size(); i++)
{
    // create a 2D header over the i-th plane of m2 and copy into it
    Mat plane(vec[0].rows, vec[0].cols, vec[0].type(), m2.ptr(i));
    vec[i].copyTo(plane);
}
Notice that both methods require the matrices to have the same resolution; otherwise, you may get undesired results or errors.
Also, notice that you can always use for loops, but it is generally not a good idea to use them when you can vectorize.