Store KeyPoints and descriptors in Mat structure/list - c++

I would like to store all my precalculated keypoints/descriptors of several images in a Mat list/structure or something, so that I can later match them against the descriptors of other images.
Do you have an idea?
Apparently, there is a way to use
List<Mat>
but I don't know how.

You store the descriptors of one image in one Mat variable, so basically you have one Mat per image. If you have 100 images, you end up with 100 descriptor Mats, and you can keep all of them together in a single container. You can do it as follows:
Step-1: Declare a vector of Mat type.
vector<Mat> allDescriptors;
Step-2: Then find the descriptors for each image and store them in Mat format
Mat newImageDescriptor;
Step-3: finally, push the descriptor calculated above into the vector.
allDescriptors.push_back(newImageDescriptor);
Repeat step-2 & 3 for all of your images
Now you can access them as follows:
You can access the data in a vector just as you do with arrays, so allDescriptors[0] will give you the first descriptor in Mat format. Using a for loop, you can access all of your descriptors.
for (size_t i = 0; i < allDescriptors.size(); i++)
{
    Mat accessedDescriptor;
    allDescriptors[i].copyTo(accessedDescriptor);
}

If your elements are stored in a contiguous array, you can assign them all to a list at once with:
#include <list>
std::list<Mat> l;
l.assign( ptr, ptr + sz); // where ptr is a pointer to array of Mat s
// and sz is the size of array
Create precalculated elements:
Mat mat1;
Mat mat2;
And the list of elements of this type:
#include <list>
std::list<Mat> l;
Add elements to list:
l.push_back( mat1);
l.push_back( mat2);
note: there are other modifiers you can use to insert elements; see the std::list documentation. There are also other containers whose usage you might consider. Selecting an appropriate container is very important: take into account which operations will be crucial to you, i.e. which will be called most often.

This is regarding your other question about copying one vector<Mat> to another.
Let's say you have one vector vector<Mat> des1 and you want to copy it to vector<Mat> des2. You should first size des2 and then copy element by element:
des2.resize(des1.size());
for (size_t i = 0; i < des1.size(); i++)
{
    des1[i].copyTo(des2[i]);
}
Remember that vector<Mat> is essentially an array of Mat, and copyTo copies a single matrix; that is why you copy the vector by applying copyTo to each element in turn.

Related

How to access elements of a std::vector<cv::Mat> and put them in separate matrices cv::Mat

I have a const std::vector<cv::Mat> containing 3 matrices (3 images), and in order to use each image further in my program I need to save them in separate matrices cv::Mat.
I know that I need to iterate over vector elements, since this vector is a list of matrices but somehow I can't manage it. At the end, I also need to push 3 matrices back to a vector.
I would appreciate if someone could help me out with this. I am still learning it.
std::vector<cv::Mat> imagesRGB;
cv::Mat imgR, imgG, imgB;
for(size_t i=0; i<imagesRGB.size(); i++)
{
imagesRGB[i].copyTo(imgR);
}
In your code, note that imagesRGB is uninitialized and its size is 0, so the body of the for loop is never executed. Additionally, the copyTo method copies matrix data into another matrix (like a paste function); it is not used to store a cv::Mat into a std::vector.
Your description is not clear enough; however, here's an example of what I think you might need. If you want to split an RGB (3-channel) image into three individual Mats, you can do it using the cv::split function. If you want to merge 3 individual channels back into an RGB Mat, you can do that via the cv::merge function. Let's see an example with a test image:
//Read input image:
cv::Mat inputImage = cv::imread( "D://opencvImages//lena512.png" );
//Split the BGR image into its channels:
cv::Mat splitImage[3];
cv::split(inputImage, splitImage);
//show images:
cv::imshow("B", splitImage[0]);
cv::imshow("G", splitImage[1]);
cv::imshow("R", splitImage[2]);
Note that I'm already storing the channels in an array. However, if you want to store the individual mats into a std::vector, you can use the push_back method:
//store the three channels into a vector:
std::vector<cv::Mat> matVector;
for( int i = 0; i < 3; i++ ){
//get current channel:
cv::Mat currentChannel = splitImage[i];
//store into vector
matVector.push_back(currentChannel);
}
//merge the three channels back into an image:
cv::Mat mergedImage;
cv::merge(matVector, mergedImage);
//show final image:
cv::imshow("Merged Image", mergedImage);
cv::waitKey(0);
The result shows the three individual channels and the merged image, which reconstructs the original input.

How to convert a cv::Mat to a 2d std::vector without copying data

In OpenCV I have cv::Mat object.
It is one channel with the format CV_8UC1 = unsigned char.
It is continuous, which means the data are stored in one place and rows by rows.
I know the numbers of rows and columns of the cv::Mat object.
I have 2D std::vector of size
vector< vector<double> > vec1(img.rows, vector<double>(img.cols));
How could I assign the data of cv::Mat object to the std::vector without copying.
For copying it is easy to create a for-loop or maybe use vector::assign with an iterator.
The storage of the data is the same between cv::Mat and 2D-std::vector. Both have an internal pointer to the data (vector::data and Mat::ptr) but vector::data can't be set to the value of Mat::ptr.
Currently, I have code that copies the data:
cv::Mat img = cv::imread("test.tif");
cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);
vector< vector<double> > vec1(img.rows, vector<double>(img.cols));
for(int i=0; i < img.rows; ++i)
    for(int j=0; j < img.cols; ++j)
        vec1.at(i).at(j) = img.at<uchar>(i, j); // CV_8UC1, so read as uchar
Thanks.
The short answer is that you shouldn't try to share data between std::vector and cv::Mat: both take ownership of the data, and cv::Mat can additionally share it with other cv::Mat objects, which becomes unmanageable pretty fast.
But cv::Mat can share data with another cv::Mat and you can extract one or more rows (or columns) and have a partial shared data with another cv::Mat like this:
cv::Mat img;
//.. read the img
//first row
cv::Mat row0 = img(cv::Range(0,1), cv::Range(0,img.cols));
//the 4th column is shared
cv::Mat column3 = img(cv::Range(0,img.rows), cv::Range(3,4));
//you can set new data into column3/row0 and you'll see it's shared:
//row0.setTo(0);
//column3.setTo(0);
Now you can create a std::vector<cv::Mat> where each Mat will be a row in the original image (and it will share the data with the original image), or use it in whatever way you need.

How to use opencv flann::Index?

I have some problems with opencv flann::Index -
I'm creating index
Mat samples = Mat::zeros(vfv_net_quie.size(), 24, CV_32F);
for (int i = 0; i < vfv_net_quie.size(); i++)
{
    for (int j = 0; j < 24; j++)
    {
        samples.at<float>(i, j) = (float)vfv_net_quie[i].vfv[j];
    }
}
cv::flann::Index flann_index(
samples,
cv::flann::KDTreeIndexParams(4),
cvflann::FLANN_DIST_EUCLIDEAN
);
flann_index.save("c:\\index.fln");
After that I'm trying to load it and find the nearest neighbours:
cv::flann::Index flann_index(Mat(),
cv::flann::SavedIndexParams("c:\\index.fln"),
cvflann::FLANN_DIST_EUCLIDEAN
);
cv::Mat resps(vfv_reg_quie.size(), K, CV_32F);
cv::Mat nresps(vfv_reg_quie.size(), K, CV_32S);
cv::Mat dists(vfv_reg_quie.size(), K, CV_32F);
flann_index.knnSearch(sample,nresps,dists,K,cv::flann::SearchParams(64));
And have access violation in miniflann.cpp in line
((IndexType*)index)->knnSearch(_query, _indices, _dists, knn,
(const ::cvflann::SearchParams&)get_params(params));
Please help
You should not load the flann file into a Mat(), as that Mat is where the index data is stored: a temporary object is destroyed right after the constructor is called. That's why the index isn't pointing anywhere useful when you call knnSearch().
I tried following:
cv::Mat indexMat;
cv::flann::Index flann_index(
indexMat,
cv::flann::SavedIndexParams("c:\\index.fln"),
cvflann::FLANN_DIST_EUCLIDEAN
);
resulting in:
Reading FLANN index error: the saved data size (100, 64) or type (5) is different from the passed one (0, 0), 0
which means that the matrix has to be initialized with the correct dimensions (seems very stupid to me, as I don't necessarily know how many elements are stored in my index).
cv::Mat indexMat(samples.size(), CV_32FC1);
cv::flann::Index flann_index(
indexMat,
cv::flann::SavedIndexParams("c:\\index.fln"),
cvflann::FLANN_DIST_EUCLIDEAN
);
does the trick.
The accepted answer is somewhat unclear and misleading about why the input matrix in the cv::flann::Index constructor must have the same dimensions as the matrix used to generate the saved index. I'll elaborate on @Sau's comment with an example.
KDTreeIndex was generated using as input a cv::Mat sample, and then saved. When you load it, you must provide the same sample matrix to generate it, something like (using the templated GenericIndex interface):
cv::Mat sample(sample_num, sample_size, ... /* other params */);
cv::flann::SavedIndexParams index_params("c:\\index.fln");
cv::flann::GenericIndex<cvflann::L2<float>> flann_index(sample, index_params);
L2 is the usual Euclidean distance (other types can be found in opencv2/flann/dist.h).
Now the index can be used as shown to find the K nearest neighbours of a query point:
std::vector<float> query(sample_size);
std::vector<int> indices(K);
std::vector<float> distances(K);
flann_index.knnSearch(query, indices, distances, K, cv::flann::SearchParams(64));
The matrix indices will contain the locations of the nearest neighbours in the matrix sample, which was used at first to generate the index. That's why you need to load the saved index with the very matrix used to generate the index, otherwise the returned vector will contain indices pointing to meaningless "nearest neighbours".
In addition, you get a distances matrix containing how far the found neighbours are from your query point, which you can later use, for example, to perform inverse distance weighting.
Please also note that sample_size has to match across sample matrix and query point.

How to change cv::Mat image dimensions dynamically?

I would like to declare a cv::Mat object and, somewhere else in my code, change its dimensions (nrows and ncols). I couldn't find any method for this in the OpenCV documentation; it always suggests including the dimensions in the constructor.
An easy and clean way is to use the create() method. You can call it as many times as you want, and it will reallocate the image buffer when its parameters do not match the existing buffer:
Mat frame;
for (int i = 0; i < n; i++)
{
    ...
    // if width[i], height[i] or type[i] differ from those at iteration i-1,
    // or the frame is empty (first iteration),
    // create() allocates new memory
    frame.create(height[i], width[i], type[i]);
    ...
    // do some processing
}
Docs are available at https://docs.opencv.org/3.4/d3/d63/classcv_1_1Mat.html#a55ced2c8d844d683ea9a725c60037ad0
If you mean to resize the image, check resize()!
Create a new Mat dst with the dimensions and data type you want, then:
cv::resize(src, dst, dst.size(), 0, 0, cv::INTER_CUBIC);
There are other interpolation methods besides cv::INTER_CUBIC, check the docs.
Do you just want to define it with a Size variable you compute like this?
// dynamically compute size...
Size dynSize(0, 0);
dynSize.width = magicWidth();
dynSize.height = magicHeight();
int dynType = CV_8UC1;
// determine the type you want...
Mat dynMat(dynSize, dynType);
If you know the maximum dimensions and only need to use a subrange of rows/cols from the total Mat use the functions cv::Mat::rowRange and/or cv::Mat::colRange
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-rowrange

OpenCV image array, 4D matrix

I am trying to store an IPL_DEPTH_8U, 3-channel image into an array so that I can store 100 images in memory.
To initialise my 4D array I used the following code (rows,cols,channel,stored):
int size[] = { 324, 576, 3, 100 };
CvMatND* cvImageBucket = cvCreateMatND(4, size, CV_8U);
I then created a matrix and converted the image into the matrix
CvMat *matImage = cvCreateMat(Image->height,Image->width,CV_8UC3 );
cvConvert(Image, matImage );
How would I access the CvMatND to copy the CvMat into it at the position of stored?
e.g. cvImageBucket(:,:,:,0) = matImage; // copy first image into the array
You've tagged this as both C and C++. If you want to work in C++, you could use the (in my opinion) simpler cv::Mat structure to store each of the images, and then use these to populate a vector with all the images.
For example:
std::vector<cv::Mat> imageVector;
cv::Mat newImage;
newImage = getImage(); // where getImage() returns the next image,
// or an empty cv::Mat() if there are no more images
while (!newImage.empty())
{
// Add image to vector
imageVector.push_back(newImage);
// get next image
newImage = getImage();
}
I'm guessing something similar to:
// for the i-th matImage:
memcpy((char*)cvImageBucket->data.ptr + i * size[0] * size[1] * size[2],
       (char*)matImage->data.ptr,
       size[0] * size[1] * size[2]);
Although I agree with @Chris that it is best to use vector<Mat> rather than a 4D matrix, this answer is meant as a reference for those who really need 4D matrices in OpenCV (even though they are poorly supported, barely documented, and little explored, with very little information available online).
So, suppose you filled a vector<Mat> vec with 2D or 3D data which can be CV_8U, CV_32F etc.
One way to create a 4D matrix is
vector<int> dims = {(int)vec.size(), vec[0].rows, vec[0].cols};
Mat m(dims, vec[0].type(), vec[0].data); // shares the data, no copy
However, this method fails when the vector's data is not continuous in memory, which is typically the case for big matrices. If you do this with discontinuous data, you will get a segmentation fault or bad-access error when you try to use the matrix (copying, cloning, etc.). To overcome this issue, you can copy the matrices of the vector one by one into the new matrix as follows:
Mat m2(dims, vec[0].type());
for (int i = 0; i < (int)vec.size(); i++)
{
    // make a 2D header over the i-th plane of m2 and copy into it
    Mat plane(vec[i].rows, vec[i].cols, vec[i].type(), m2.ptr(i));
    vec[i].copyTo(plane);
}
Notice that both methods require the matrices to have the same resolution; otherwise you may get undesired results or errors.
Also notice that you can always use for loops, but it is generally not a good idea to use them when you can vectorize.