Convert std::list to cv::Mat in C++ using OpenCV

I'm trying to solve an equation system using SVD: cv::SVD::solveZ(A, x);, but A needs to be a matrix. OpenCV doesn't offer any conversion of a std::list to cv::Mat. So my question is whether there is a smart way to convert it without having to convert the std::list to a std::vector first.
The matrix A is a 3xN matrix. My list contains N cv::Point3d elements.
My code looks something like this:
std::list<cv::Point3d> points; // length: N
cv::Mat A = cv::Mat(points).reshape(1); // that's how I do it with a std::vector<cv::Point3d>
cv::Mat x;
cv::SVD::solveZ(A, x); // homogeneous linear equation system Ax = 0
If anybody has an idea about it, then please tell me.

cv::Mat can handle only continuously stored data, so there is no suitable conversion from std::list. But you can implement it yourself, as follows:
std::list<cv::Point3d> points;
cv::Mat matPoints(static_cast<int>(points.size()), 1, CV_64FC3); // one 3-channel row per point
int i = 0;
for (auto &p : points) {
    matPoints.at<cv::Vec3d>(i++) = p; // copy each point into the matrix
}
matPoints = matPoints.reshape(1); // now an Nx3 single-channel matrix
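For completeness, a minimal sketch (untested, reusing the variable names above) of how the converted matrix would then feed the original call. After reshape(1), matPoints is Nx3, so each row contributes one equation of Ax = 0:
cv::Mat x;
cv::SVD::solveZ(matPoints, x); // x is the 3x1 null-space vector, defined up to scale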

Related

How to create a 4D matrix from a vector of matrices in OpenCV c++

Suppose that I collect images/matrices of the same size, depth and channel count into a vector. So, these images are r*c*d each and I have m of them in my vector, as follows.
vector<string> imgs; // m image paths, all with r*c*d resolution
vector<Mat> vec;
for (auto img : imgs) {
    vec.push_back(cv::imread(img, cv::IMREAD_COLOR)); // or grayscale, doesn't really matter
}
Now, I want to create a 4D matrix. For example, in Python np.array(vec) would have given me that (assuming vec is a list). I would like to do the same in OpenCV C++, but I couldn't find a solution for this.
I don't want to create a 4D matrix with Mat m(dims, size, type); and then iterate through all pixels and copy the values, as that is very inefficient. I would like a technique that just treats the vector<Mat> as a 4D Mat so that it is super fast. Note that I can have 100 full-resolution images.
I am using OpenCV 4.2 and C++ on Mac.
Thanks in advance.
After many hours today, I coincidentally found an answer to my question. I will leave the answer here to have a reference for those who battle with OpenCV's documentation to find the correct answer.
vector<int> dims = {(int)vec.size(), vec[0].rows, vec[0].cols}; // dimensions
cv::Mat m(3, &dims[0], vec[0].type(), &vec[0]);
This creates the 4D matrix from the vec vector, where the type is one of CV_8UC1, CV_8UC3 or CV_8UC4 depending on the number of channels. The good thing is it doesn't copy the vector.
Although this is not part of the question, to access a pixel in the 4D matrix, you can do the following:
int x = v1, i = v2, j = v3, c = v4; // v1-4 are some values within their ranges
cout << (int)m.at<Mat>(x).at<Vec3b>(i, j)[c] << " "
     << (int)vec[x].at<Vec3b>(i, j)[c] << endl;
Both print the c-th channel of the (i,j)-th pixel of the x-th image.

Efficient image to matrix conversion

I am a beginner in C++ (I have mainly worked with Python) and I do not yet know how to do things properly. I want to process some color images as signals over time and, in order to do that, I want them to be in a double matrix.
A grayscale image would be a 1D vector, from the top left corner to the bottom right; the color image would be a 2D vector, the second dimension being the 3 colors. That is, I want to flatten the image to a long vector, which would contain size-3 vectors with the RGB information.
I open the image using dlib like so:
#include <dlib/gui_widgets.h>
#include <dlib/image_io.h>
#include <dlib/image_transforms.h>
using namespace dlib;
array2d<rgb_pixel> img;
load_image(img, image_name);
This gives me a dlib array2d containing pixel structs. Now, I want to change that into a flattened image. I figured that, since the image dimensions might change, I would use a
std::vector<std::vector<double>>
as my matrix.
The naive way to convert it would be the following:
#include <vector>
#include <dlib/gui_widgets.h>
#include <dlib/image_io.h>
#include <dlib/image_transforms.h>
std::vector<std::vector<double>> image_to_frame(dlib::array2d<dlib::rgb_pixel> const &image)
{
    const long total_num_of_px = image.nc() * image.nr();
    std::vector<std::vector<double>> frame(total_num_of_px);
    for (long i = 0; i < image.nr(); i++)
    {
        for (long j = 0; j < image.nc(); j++)
        {
            frame[i * image.nc() + j] = std::vector<double>(3);
            frame[i * image.nc() + j][0] = (double)image[i][j].red;
            frame[i * image.nc() + j][1] = (double)image[i][j].green;
            frame[i * image.nc() + j][2] = (double)image[i][j].blue;
        }
    }
    return frame;
}
But this takes 8 seconds for a 1280x720 image, which seems a bit long to me. Is there a better way to do this? A more efficient way of converting the array2d to a vector matrix?
Or is there a more efficient data structure than the vector matrix? Or should I not be using dlib, and open the image in another way that is easier to convert?
In Python I can open the image directly as a numpy array and then do a reshape, which is very fast. Is there some equivalent to this in C++ that I am not aware of?
From the API it looks like the image inside dlib is stored exactly the way it is in OpenCV (dlib::toMat converts it by reusing the same memory). That means you can take a pointer to the first element of the array2d, then reinterpret_cast it to a pointer to struct { uchar r, g, b; } (or whatever you like); its length will be nc*nr. From there you can copy the whole buffer using memcpy.
But I don't really get why you would need that, because the lines are stored continuously, so you should not expect any cache misses.
UPDATE: also, c'mon, your program wastes half of its time converting uchars to doubles. You shouldn't store RGB as double; the values are unsigned chars by default.
UPDATE2:
struct rgb
{
    unsigned char r, g, b;
};
// img is the dlib::array2d<rgb_pixel> loaded in the question
rgb* data = reinterpret_cast<rgb*>(&img[0][0]);
std::vector<rgb> vect;
std::copy(data, data + img.nc() * img.nr(), std::back_inserter(vect));
After that, you have a flattened vector of the image, stored in one contiguous piece of memory. If you don't need a copy, you can simply use the data pointer.
Also, if you want index-like access, you can use uchar[3] instead of the rgb struct.
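To illustrate the dlib::toMat remark above, here is a minimal sketch (my own, untested; it assumes dlib's OpenCV bridge header <dlib/opencv.h> is available) that produces a flattened, zero-copy view of the image:
#include <dlib/opencv.h>
#include <opencv2/core.hpp>

cv::Mat flatten_view(dlib::array2d<dlib::rgb_pixel>& img)
{
    cv::Mat view = dlib::toMat(img);                        // nr x nc, CV_8UC3, shares memory with img
    return view.reshape(3, static_cast<int>(view.total())); // (nr*nc) x 1, 3 channels, still no copy
}
Each row of the result is one pixel; if doubles are really needed, a single convertTo(dst, CV_64F) on that view should still be far cheaper than the per-pixel loop above.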

Converting std::string back to cv::Mat which was generated with std::stringstream << cv::Mat

I'm saving calibration data for stereo vision in a special data format, instead of OpenCV's YAML data structure, because it allows me more flexibility.
Because of that, I'm using a little hack to convert the cv::Mat into a std::string:
cv::Mat mat;
// some code
std::stringstream sstrMat;
sstrMat << mat;
saveFoo(sstrMat.str()); // something like this to save the matrix
As output, sstrMat.str() gives me all the data I need:
[2316.74172937253, 0, 418.0432610206069;
0, 2316.74172937253, 253.5597342849773;
0, 0, 1]
My problem is the reverse operation: converting this std::string back to a cv::Mat.
I have tried code like this:
cv::Mat mat;
std::stringstream sstrStr;
sstrStr << getFoo(); // something like this to get the saved matrix
sstrStr >> mat; // idea 1: generates a compile error
mat << sstrStr; // idea 2: also generates a compile error
All my attempts failed, so I would like to ask whether you know an OpenCV method to convert that string back, or whether I have to write my own.
Did you implement the operator<<(std::ostream&, const Mat&) yourself? If so, you obviously have to do the reverse operation yourself too, if you want it.
From your output I guess the type of the matrix was CV_64F with 3 channels. Be sure to remember the size of your matrix, and check the documentation.
You can create your matrix with these specifications, and fill it with values while reading your stream. There are multiple examples of stream reading on the internet, but in your case that's quite easy. Ignore the characters you don't need ([ ] , ;) with std::istream::read into a dummy buffer, and use the operator>>(std::istream&, double) to get your values back.
What's cool about it is that, through cv::Mat_, you can iterate over the matrix like you would over a standard library container. So if you're using C++11 it could look like this (not tested):
int rows = x, cols = y;              // matrix size you saved alongside the data
cv::Mat_<cv::Vec3d> mat(rows, cols); // double matrix with 3 channels
for (auto& elem : mat)
{
    cv::Vec3d new_elem; // a 3D vector with double values
    // read your 3 doubles into new_elem
    // ...
    elem = new_elem; // assign the new value to the matrix element
}
Again, I did not use OpenCV extensively so refer to the documentation to check everything is correct.
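Putting those two pieces together, here is a minimal sketch of such a reader (my own, untested; it assumes a plain single-channel CV_64F matrix of known size, like the 3x3 camera matrix shown above):
#include <opencv2/core.hpp>
#include <cctype>
#include <sstream>
#include <string>

cv::Mat matFromString(const std::string& s, int rows, int cols)
{
    cv::Mat mat(rows, cols, CV_64F);
    std::istringstream in(s);
    for (int r = 0; r < rows; ++r)
    {
        for (int c = 0; c < cols; ++c)
        {
            // skip formatting characters ('[', ']', ',', ';', whitespace) until a number starts
            int ch;
            while ((ch = in.peek()) != EOF && ch != '-' && ch != '+' && ch != '.' && !std::isdigit(ch))
                in.get();
            in >> mat.at<double>(r, c); // operator>>(std::istream&, double)
        }
    }
    return mat;
}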

OpenCV vector<Point3f> to 3-column Mat

I have 2 vectors (p1 and p2) of Point3f variables which represent two 3D point clouds. In order to match these two point clouds I want to use SVD to find a transformation between them. The problem is that SVD requires a matrix (p1 * p2 transposed). My question is: how do I convert a vector of size Y to a Yx3 matrix?
I tried cv::Mat p1Matrix(p1), but this gives me a row vector with two dimensions. I also found fitLine, but I think this only works for 2D.
Thank you in advance.
How about something like:
cv::Mat p1copy(3, p1.size(), CV_32FC1);
for (size_t i = 0, end = p1.size(); i < end; ++i) {
    p1copy.at<float>(0, i) = p1[i].x;
    p1copy.at<float>(1, i) = p1[i].y;
    p1copy.at<float>(2, i) = p1[i].z;
}
If this gives you the desired result, you can make the code faster by using a pointer instead of the rather slow at<>() function.
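For reference, a rough sketch of the pointer-based variant hinted at here (my own, untested); it fills each row of the 3xN matrix through cv::Mat::ptr instead of calling at<>() per element:
cv::Mat p1copy(3, static_cast<int>(p1.size()), CV_32FC1);
float* rx = p1copy.ptr<float>(0); // row of x coordinates
float* ry = p1copy.ptr<float>(1); // row of y coordinates
float* rz = p1copy.ptr<float>(2); // row of z coordinates
for (size_t i = 0; i < p1.size(); ++i) {
    rx[i] = p1[i].x;
    ry[i] = p1[i].y;
    rz[i] = p1[i].z;
}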
I use the reshape function to convert a vector of points to a Mat.
vector<Point3f> P1, P2;
Point3f c1, c2; // centroids of the two sets
...             // data association for the two sets
Mat A = Mat(P1).reshape(1).t(); // 3xN
Mat B = Mat(P2).reshape(1).t(); // 3xN
Mat AA, BB, CA, CB;
repeat(Mat(c1), 1, P1.size(), CA); // 3xN matrix filled with centroid c1
repeat(Mat(c2), 1, P2.size(), CB); // 3xN matrix filled with centroid c2
AA = A - CA; // centered first set
BB = B - CB; // centered second set
Mat H = AA * BB.t(); // 3x3 covariance matrix
SVD svd(H);
Mat R_;
transpose(svd.u * svd.vt, R_); // rotation R = V * U^T
if (determinant(R_) < 0)
    R_.at<float>(0,2) *= -1, R_.at<float>(1,2) *= -1, R_.at<float>(2,2) *= -1; // fix a reflection
Mat t = Mat(c2) - R_ * Mat(c1); // translation

OpenCV image array, 4D matrix

I am trying to store an IPL_DEPTH_8U, 3-channel image into an array so that I can store 100 images in memory.
To initialise my 4D array I used the following code (rows, cols, channels, stored):
int size[] = { 324, 576, 3, 100 };
CvMatND* cvImageBucket = cvCreateMatND(4, size, CV_8U);
I then created a matrix and converted the image into the matrix:
CvMat* matImage = cvCreateMat(Image->height, Image->width, CV_8UC3);
cvConvert(Image, matImage);
How would I access the CvMatND to copy the CvMat into it at the position of stored?
e.g. cvImageBucket(:,:,:,0) = matImage; // copied first image into array
You've tagged this as both C and C++. If you want to work in C++, you could use the (in my opinion) simpler cv::Mat structure to store each of the images, and then use these to populate a vector with all the images.
For example:
std::vector<cv::Mat> imageVector;
cv::Mat newImage;
newImage = getImage(); // where getImage() returns the next image,
                       // or an empty cv::Mat() if there are no more images
while (!newImage.empty())
{
    // Add image to vector
    imageVector.push_back(newImage);
    // get next image
    newImage = getImage();
}
I'm guessing something similar to:
// for the i-th matImage
memcpy(cvImageBucket->data.ptr + i * size[0] * size[1] * size[2],
       matImage->data.ptr,
       size[0] * size[1] * size[2]);
Although I agree with @Chris that it is better to use vector<Mat> rather than a 4D matrix, this answer is just meant as a reference for those who really need 4D matrices in OpenCV (even though they are largely unsupported, undocumented and unexplored, with very little about them available online, yet they seem to work just fine!).
So, suppose you filled a vector<Mat> vec with 2D or 3D data which can be CV_8U, CV_32F etc.
One way to create a 4D matrix is
vector<int> dims = {(int)vec.size(), vec[0].rows, vec[0].cols};
Mat m(dims, vec[0].type(), &vec[0]);
However, this method fails when the vector is not continuous, which is typically the case for big matrices. If you do this with a discontinuous vector, you will get a segmentation fault or a bad-access error when you try to use the matrix (i.e. copying, cloning, etc.). To overcome this issue, you can copy the matrices of the vector one by one into the 4D matrix as follows:
Mat m2(dims, vec[0].type());
for (size_t i = 0; i < vec.size(); i++) {
    vec[i].copyTo(m2.at<Mat>(i));
}
Notice that both methods require the matrices to be the same resolution. Otherwise, you may get undesired results or errors.
Also, notice that you can always use for loops but it is generally not a good idea to use them when you can vectorize.