Not quite understanding why this code works:
cv::Mat img = cv::imread("pic.jpg", -1);
cv::Mat padded;
int m = cv::getOptimalDFTSize(img.rows); // This will be 256 (getOptimalDFTSize returns int)
int n = cv::getOptimalDFTSize(img.cols); // This will be 256
cv::copyMakeBorder(img, padded, 0, m - img.rows, 0, n - img.cols,
cv::BORDER_CONSTANT, cv::Scalar::all(0)); // With my inputs, this effectively just copies img into padded
cv::Mat planes[] = { cv::Mat_<float>(padded), cv::Mat::zeros(padded.size(), CV_32F) };
cv::Mat dft_img;
cv::merge(planes, 2, dft_img);
cv::dft(dft_img, dft_img);
cv::split(dft_img, planes);
But this version throws a memory access exception:
cv::Mat img = cv::imread("pic.jpg", -1); // I know this image is 256x256
cv::Mat dft_img = cv::Mat::zeros(256,256,CV_32F); // Hard coding for simplicity atm
cv::dft(img,dft_img);
I'm having trouble understanding the documentation for dft() (https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#dft), and for other functions and classes for that matter.
I think it has something to do with dft_img not being a multichannel array in the second segment, but I'm lost on how to initialize such an array short of copying the first segment of code.
Secondly, when trying to access either planes[0] or planes[1] and modify their values with:
planes[0].at<double>(indexi,indexj) = 0;
I get another exception in memory, and I also see a new page saying mat.inl.hpp was not found. I'm using Visual Studio with OpenCV 3.4.3; I'm a novice with C++ but intermediate with signal processing. Any help is appreciated.
You did not specify what exception you got, but the important point is that the input to the dft function must be floating point, either 32-bit or 64-bit. Your second snippet passes img straight from imread, which for a JPEG is an 8-bit unsigned Mat, so dft rejects it. Another point: try not to use raw arrays if you are not comfortable with C++. I would even suggest that, if using C++ is not mandatory, you prefer Python for OpenCV. Here is a working example dft code:
// read your image
cv::Mat img = cv::imread("a2.jpg", CV_LOAD_IMAGE_GRAYSCALE); // I know this image is 256x256
// convert it to floating point
// normalization is optional (it depends on the library and on what you do next)
cv::Mat floatImage;
img.convertTo(floatImage, CV_32FC1, 1.0/255.0);
// create a placeholder Mat variable to hold output of dft
std::vector<cv::Mat> dftOutputs;
dftOutputs.push_back(floatImage);
dftOutputs.push_back(cv::Mat::zeros(floatImage.size(), CV_32F));
cv::Mat dftOutput;
cv::merge(dftOutputs, dftOutput);
// perform dft
cv::dft(dftOutput, dftOutput);
// separate real and complex outputs back
cv::split(dftOutput, dftOutputs);
I changed the code from the tutorial a little to make it easier to understand. If you would like to obtain the magnitude image and such, you can follow the tutorial after the split call. As for your second exception: after the split, the planes are CV_32F, so you must access them with planes[0].at<float>(i, j), not at<double>; the template type of at<> has to match the Mat's element type exactly, and a mismatch triggers the debug assertion in mat.inl.hpp, which is why that file comes up.
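For example, a quick sketch of those follow-up steps (the window name is mine; this just applies the tutorial's magnitude/log/normalize recipe to the dftOutputs above):
cv::Mat magImage;
cv::magnitude(dftOutputs[0], dftOutputs[1], magImage); // sqrt(Re^2 + Im^2)
magImage += cv::Scalar::all(1);                        // avoid log(0)
cv::log(magImage, magImage);                           // compress the dynamic range
cv::normalize(magImage, magImage, 0, 1, cv::NORM_MINMAX); // scale to [0,1] for display
cv::imshow("magnitude", magImage);
cv::waitKey(0);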
Related
I'm trying to convert an OpenCV 3-channel Mat to a 3D Eigen Tensor.
So far, I can convert a 1-channel grayscale Mat with:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_GRAYSCALE);
Eigen::MatrixXd myMatrix;
cv::cv2eigen(mat, myMatrix);
My attempt to convert a BGR mat to a Tensor has been:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR);
Eigen::MatrixXd temp;
cv::cv2eigen(mat, temp);
Eigen::Tensor<double, 3> myTensor = Eigen::TensorMap<Eigen::Tensor<double, 3>>(temp.data(), 3, mat.rows, mat.cols);
However, I'm getting the following error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.1.0) /tmp/opencv-20190505-12101-14vk1fh/opencv-4.1.0/modules/core/src/matrix_wrap.cpp:1195:
error: (-215:Assertion failed) !fixedType() || ((Mat*)obj)->type() == mtype in function 'create'
in the line: cv::cv2eigen(mat, temp);
Any help is appreciated!
The answer might be disappointing for you.
After going through 12 pages, my conclusion is that you have to split the three channels into individual single-channel Mats and convert each one to an Eigen matrix, or create your own Eigen type and your own OpenCV conversion function.
In OpenCV it is tested like this, and it only allows a single-channel greyscale image:
https://github.com/daviddoria/Examples/blob/master/c%2B%2B/OpenCV/ConvertToEigen/ConvertToEigen.cxx
And in OpenCV it is implemented like this, which doesn't give you much room for a custom element type (i.e. cv::Scalar mapped to an Eigen type):
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
And according to this post,
https://stackoverflow.com/questions/32277887/using-eigen-array-of-arrays-for-rgb-images
"I think Eigen was not meant to be used in this way (with vectors as "scalar" types)."
they also had difficulty dealing with RGB images in Eigen.
Note that OpenCV's cv::Scalar and Eigen's notion of a scalar mean different things.
It is possible only if you use your own data type as the matrix element.
So you can either store the 3-channel info in 3 Eigen matrices, which lets you use the default Eigen and OpenCV routines:
Mat src = imread("img.png",CV_LOAD_IMAGE_COLOR); //load image
Mat bgr[3]; //destination array
split(src,bgr);//split source
//Note: OpenCV uses BGR color order
imshow("blue.png",bgr[0]); //blue channel
imshow("green.png",bgr[1]); //green channel
imshow("red.png",bgr[2]); //red channel
Eigen::MatrixXd bm,gm,rm;
cv::cv2eigen(bgr[0], bm);
cv::cv2eigen(bgr[1], gm);
cv::cv2eigen(bgr[2], rm);
Or you can define your own type and write your own version of the OpenCV cv2eigen function.
To build a custom Eigen type, follow these (it won't be pretty):
https://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
https://eigen.tuxfamily.org/dox/TopicNewExpressionType.html
Then rewrite your own cv2eigen_custom function, similar to this:
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
So good luck.
Edit
Since you need a tensor, forget about the cv conversion functions and fill it yourself:
Mat image = imread(argv[1], IMREAD_COLOR);
Tensor<float, 3> t_3d(image.rows, image.cols, 3);
// t_3d(i, j, k): i is the row, j is the column, k is the channel
for (int i = 0; i < image.rows; i++)
    for (int j = 0; j < image.cols; j++)
    {
        // cv reference: Mat::at<data_type>(row, col); Vec3b holds B, G, R
        t_3d(i, j, 0) = (float)image.at<cv::Vec3b>(i, j)[0];
        t_3d(i, j, 1) = (float)image.at<cv::Vec3b>(i, j)[1];
        t_3d(i, j, 2) = (float)image.at<cv::Vec3b>(i, j)[2];
    }
Watch out for the i, j order, as I'm not sure about it; I only wrote the code from the reference and didn't compile it.
Also watch out for casts between the image type and the tensor type; sometimes you might not get what you wanted.
This code should in principle solve your problem.
Edit number 2
Following this example from the Eigen documentation:
int storage[128]; // 2 x 4 x 2 x 8 = 128
TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);
applied to your case this becomes:
cv::Mat frame = cv::imread("myimg.ppm", cv::IMREAD_COLOR);
frame.convertTo(frame, CV_32FC3); // the mapped data must match the Tensor's float scalar type
TensorMap<Tensor<float, 3>> t_3d((float*)frame.data, frame.rows, frame.cols, 3);
The problem is that I'm not sure whether this will work or not. Even if it works, you still have to figure out how the underlying data is organized so that you get the shape right. Good luck.
Updated answer: OpenCV now has conversion functions for Eigen::Tensor, which solve your problem. I needed this same functionality, so I contributed it back to the project for everyone to use. See the documentation here:
https://docs.opencv.org/3.4/d0/daf/group__core__eigen.html
Note: if you want RGB order, you will still need to reorder the channels in OpenCV before converting to the Eigen::Tensor.
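For reference, a minimal usage sketch (my own, not copied from the docs; depending on your OpenCV version you may need to include Eigen's unsupported Tensor header before opencv2/core/eigen.hpp for the Tensor overloads to be enabled):
#include <unsupported/Eigen/CXX11/Tensor>
#include <opencv2/core.hpp>
#include <opencv2/core/eigen.hpp>
#include <opencv2/imgcodecs.hpp>

cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR); // still BGR order
Eigen::Tensor<float, 3, Eigen::RowMajor> tensor;               // shape H x W x C after conversion
cv::cv2eigen(mat, tensor); // if your version requires matching types, convert mat to CV_32FC3 first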
The problem is to Fourier transform (cv::dft) a signal to get its Fourier descriptors, so the Mat should hold complex numbers :(
But my problem is: how can I make a Mat with complex numbers?
Please help me find an example or anything else that shows me how to store a complex number (Re + Im) in a Mat.
Is there a way to use merge?
merge()
I found an answer saying:
I think you can use the merge() function here, see the documentation.
It says: "Composes a multi-channel array from several single-channel arrays."
Reference: How to store complex numbers in OpenCV matrix?
Look at the nice dft sample in the OpenCV repo, and also at the dft tutorial.
So, if you have a Mat real and a Mat imag (both of type CV_32FC1):
Mat planes[] = {real,imag};
Mat complexImg;
merge(planes, 2, complexImg); // complexImg is of type CV_32FC2 now
dft(complexImg, complexImg);
split(complexImg, planes);
// real=planes[0], imag=planes[1];
I am using OpenCV in my C++ image processing project.
I have a two-dimensional array I[800][600] filled with values between 0 and 255, and I want to put this array into a gray-level IplImage so I can view it and process it using OpenCV functions.
Any help will be appreciated.
Thanks in advance.
It's easy with the OpenCV C++ interface; all you need to do is initialize a matrix from your array, as in the line below:
cv::Mat img = cv::Mat(800, 600, CV_8UC1, I); // I[800][600]: 800 rows, 600 cols
Now you can do whatever you want; OpenCV treats img as an 8-bit grayscale image.
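One caveat: that constructor wraps your existing buffer rather than copying it. A small sketch (assuming I is an unsigned char array, as the 0-255 values suggest):
unsigned char I[800][600];
cv::Mat gray(800, 600, CV_8UC1, I); // header over I's memory, no copy; only valid while I is alive
cv::Mat owned = gray.clone();       // deep copy that owns its own pixels
cv::imshow("gray", owned);
cv::waitKey(0);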
CvSize image_size;
image_size.height = 800;
image_size.width = 600;
int channels = 1;
IplImage *image = cvCreateImageHeader(image_size, IPL_DEPTH_8U, channels);
cvSetData(image, I, image->widthStep);
This is untested, but the most important thing likely to need fixing is the second parameter to cvSetData(). It needs to be a pointer to unsigned character data, so if your 2D array holds anything other than unsigned chars you'll have to convert it first (possibly with a loop, although you should avoid loops in OpenCV as much as possible).
see this post for a highly relevant question
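For instance, a rough sketch under the assumption that I really is a static array of unsigned chars (such an array is contiguous in memory, so a single cast suffices):
unsigned char I[800][600]; // 800 rows, 600 cols, values 0..255
IplImage *hdr = cvCreateImageHeader(cvSize(600, 800), IPL_DEPTH_8U, 1); // cvSize(width, height)
cvSetData(hdr, (unsigned char*)I, 600); // widthStep = 600 bytes per row for 8-bit, 1 channel
cvShowImage("gray", hdr);
cvWaitKey(0);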
I am trying to store an IPL_DEPTH_8U, 3-channel image into an array so that I can store 100 images in memory.
To initialise my 4D array I used the following code (rows, cols, channels, stored):
int size[] = { 324, 576, 3, 100 };
CvMatND* cvImageBucket = cvCreateMatND(4, size, CV_8U); // 4 dims to match size[]
I then created a matrix and converted the image into it:
CvMat *matImage = cvCreateMat(Image->height,Image->width,CV_8UC3 );
cvConvert(Image, matImage );
How would I access the CvMatND to copy the CvMat into it at the position given by stored?
e.g. cvImageBucket(:,:,:,0) = matImage; // copied first image into array
You've tagged this as both C and C++. If you want to work in C++, you could use the (in my opinion) simpler cv::Mat structure to store each of the images, and then use these to populate a vector with all the images.
For example:
std::vector<cv::Mat> imageVector;
cv::Mat newImage;
newImage = getImage(); // where getImage() returns the next image,
// or an empty cv::Mat() if there are no more images
while (!newImage.empty())
{
// Add image to vector
imageVector.push_back(newImage);
// get next image
newImage = getImage();
}
I'm guessing something similar to the following (untested; CV_8U elements are one byte, so the per-image block size is just the product of the first three dims):
for (int i = 0; i < 100; i++) // matImage here being the i-th image
    memcpy((char*)cvImageBucket->data.ptr + i*size[0]*size[1]*size[2],
           (char*)matImage->data.ptr,
           size[0]*size[1]*size[2]);
Although I agree with @Chris that it is best to use vector<Mat> rather than a 4D matrix, this answer is just a reference for those who really need 4D matrices in OpenCV (even though this is a barely supported, undocumented and little-explored corner, with very little about it available online, yet claimed to work just fine!).
So, suppose you filled a vector<Mat> vec with 2D or 3D data, which can be CV_8U, CV_32F, etc.
One way to create a single higher-dimensional matrix from it is:
vector<int> dims = {(int)vec.size(), vec[0].rows, vec[0].cols};
Mat m(dims, vec[0].type(), vec[0].data); // wraps the existing pixels, no copy
However, this method only works when the vector's data happens to be continuous in memory, which is typically not the case for big matrices. If you do this with discontinuous data, you will get a segmentation fault or bad-access error when you try to use the matrix (copying, cloning, etc.). To overcome this issue, you can copy the matrices of the vector one by one into the big matrix as follows:
Mat m2(dims, vec[0].type());
for (size_t i = 0; i < vec.size(); i++)
{
    // build a 2D header over the i-th plane of m2 (it shares m2's memory)
    Mat plane(vec[i].rows, vec[i].cols, vec[i].type(), m2.ptr((int)i));
    vec[i].copyTo(plane);
}
Notice that both methods require all the matrices to have the same resolution and type; otherwise, you may get undesired results or errors.
Also, notice that you can always use for loops, but it is generally not a good idea to use them when you can vectorize.
As an aside: apologies if I'm flooding SO with OpenCV questions :p
I'm currently trying to port my old C code over to the new C++ interface, and I've got to the point where I'm rebuilding my Eigenfaces face recogniser class.
Mat img = imread("1.jpg");
Mat img2 = imread("2.jpg");
FaceDetector* detect = new HaarDetector("haarcascade_frontalface_alt2.xml");
// convert to grey scale
Mat g_img, g_img2;
cvtColor(img, g_img, CV_BGR2GRAY);
cvtColor(img2, g_img2, CV_BGR2GRAY);
// find the faces in the images
Rect r = detect->getFace(g_img);
Mat img_roi = g_img(r);
r = detect->getFace(g_img2);
Mat img2_roi = g_img2(r);
// create the data matrix for PCA
Mat data;
data.create(2,1, img2_roi.type());
data.row(0) = img_roi;
data.row(1) = img2_roi;
// perform PCA
Mat averageFace;
PCA pca(data, averageFace, CV_PCA_DATA_AS_ROW, 2);
//namedWindow("avg",1); imshow("avg", averageFace); - causes segfault
//namedWindow("avg",1); imshow("avg", Mat(pca.mean)); - doesn't work
I'm trying to create the PCA space, and then see if it's working by displaying the computed average image. Are there any other steps to this?
Perhaps I need to project the images onto the PCA subspace first of all?
Your error is probably here:
Mat data;
data.create(2,1, img2_roi.type());
data.row(0) = img_roi;
data.row(1) = img2_roi;
PCA expects a matrix with the data vectors as rows. However, you never scale the images to the same size, so they don't have the same number of pixels (and hence the same dimension). Also, in data.create(2, 1, ...) the 1 needs to be the dimension of your vectors, i.e. the number of pixels; and data.row(0) = img_roi does not actually copy any pixels, because assigning a Mat to a temporary row header just reassigns the header. You need to resize both crops to a common size, flatten them, and copy the pixels into your data matrix.
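For example, a minimal sketch of that fix (faceSize is an assumed common size; resizing both crops to it makes the dimensions equal):
Size faceSize(100, 100);
Mat v1, v2;
resize(img_roi, v1, faceSize);
resize(img2_roi, v2, faceSize);
// one flattened image per row: 2 samples x (100*100) dimensions
Mat data(2, faceSize.area(), CV_32F);
v1.reshape(1, 1).convertTo(data.row(0), CV_32F);
v2.reshape(1, 1).convertTo(data.row(1), CV_32F);
PCA pca(data, Mat(), CV_PCA_DATA_AS_ROW, 2);
// the mean comes back as a 1 x N row; reshape it to image form to display it
Mat avg = pca.mean.reshape(1, faceSize.height);
normalize(avg, avg, 0, 255, NORM_MINMAX, CV_8U);
namedWindow("avg", 1); imshow("avg", avg); waitKey(0);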