I am Selva. I am trying to apply a pow value of 2.0 (a gamma transform) to an image in my project.
I am able to apply pow using the following method:
cv::Mat src = cv::imread("123.png", 0);
cv::Mat dest(src.size(), CV_8UC1);
for (int i = 0; i < src.rows; i++)
    for (int j = 0; j < src.cols; j++)
        dest.at<uchar>(i, j) = (uchar)(255 * std::pow(src.at<uchar>(i, j) / 255.0, val)); // val = 2.0
But this per-pixel loop increases the execution time.
So I am trying to implement the pow (gamma) transformation with:
cv::Mat src = cv::imread("123.png", 0);
cv::Mat dest(src.size(), CV_8UC1);
src.convertTo(src, CV_32FC1);
cv::pow(src, 2.0, dest);
But I get a completely white image. I don't know what to change in my code to get the right output. Help me to solve this, thanks.
The problem is that you have converted the image from unsigned char (CV_8UC1) to 32-bit floating point, but only the source, not the destination type, so something is getting lost in between. Convert the destination of cv::pow to CV_32FC1 as well and try again. Also check the scaling factor that the convertTo method needs; it's in the OpenCV documentation: http://docs.opencv.org/doc/user_guide/ug_mat.html
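A minimal sketch of that fix, assuming the goal is gamma = 2.0 on an 8-bit greyscale image: normalize to [0, 1] during the conversion so cv::pow stays in range, then scale back to [0, 255]:
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("123.png", cv::IMREAD_GRAYSCALE);
    cv::Mat srcF, destF, dest;
    src.convertTo(srcF, CV_32FC1, 1.0 / 255.0); // scale into [0, 1]
    cv::pow(srcF, 2.0, destF);                  // output stays CV_32FC1
    destF.convertTo(dest, CV_8UC1, 255.0);      // scale back to [0, 255]
    cv::imwrite("dest.png", dest);
    return 0;
}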
Related
I'm trying to convert an OpenCV 3-channel Mat to a 3D Eigen Tensor.
So far, I can convert 1-channel grayscale Mat by:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_GRAYSCALE);
Eigen::MatrixXd myMatrix;
cv::cv2eigen(mat, myMatrix);
My attempt to convert a BGR Mat to a Tensor has been:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR);
Eigen::MatrixXd temp;
cv::cv2eigen(mat, temp);
Eigen::Tensor<double, 3> myTensor = Eigen::TensorMap<Eigen::Tensor<double, 3>>(temp.data(), 3, mat.rows, mat.cols);
However, I'm getting the following error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.1.0) /tmp/opencv-20190505-12101-14vk1fh/opencv-4.1.0/modules/core/src/matrix_wrap.cpp:1195:
error: (-215:Assertion failed) !fixedType() || ((Mat*)obj)->type() == mtype in function 'create'
in the line: cv::cv2eigen(mat, temp);
Any help is appreciated!
The answer might be disappointing for you.
After going through 12 pages of material, my conclusion is that you have to split the RGB image into individual single-channel Mats and then convert each to an Eigen matrix, or create your own Eigen type and your own OpenCV conversion function.
In OpenCV the conversion is tested like this; it only allows a single-channel greyscale image:
https://github.com/daviddoria/Examples/blob/master/c%2B%2B/OpenCV/ConvertToEigen/ConvertToEigen.cxx
And in OpenCV it is implemented like this, which doesn't give you much room for a custom type (e.g. cv::Scalar to an Eigen std::vector):
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
And according to this post,
https://stackoverflow.com/questions/32277887/using-eigen-array-of-arrays-for-rgb-images
I think Eigen was not meant to be used in this way (with vectors as
"scalar" types).
they also had difficulty dealing with RGB images in Eigen.
Take note that an OpenCV Scalar and an Eigen scalar have different meanings.
It is possible to do so if and only if you use your own data type, i.e. your own matrix type.
So you can either store the three channels' information in three Eigen matrices and use the default Eigen and OpenCV routines:
Mat src = imread("img.png",CV_LOAD_IMAGE_COLOR); //load image
Mat bgr[3]; //destination array
split(src,bgr);//split source
//Note: OpenCV uses BGR color order
imshow("blue.png",bgr[0]); //blue channel
imshow("green.png",bgr[1]); //green channel
imshow("red.png",bgr[2]); //red channel
Eigen::MatrixXd bm,gm,rm;
cv::cv2eigen(bgr[0], bm);
cv::cv2eigen(bgr[1], gm);
cv::cv2eigen(bgr[2], rm);
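If a single Eigen::Tensor is still wanted, here is a sketch that packs the three matrices from above into one tensor (my own addition; it requires Eigen's unsupported Tensor module and keeps the BGR channel order):
Eigen::Tensor<double, 3> t(src.rows, src.cols, 3);
for (int i = 0; i < src.rows; i++)
    for (int j = 0; j < src.cols; j++) {
        t(i, j, 0) = bm(i, j); // blue
        t(i, j, 1) = gm(i, j); // green
        t(i, j, 2) = rm(i, j); // red
    }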
Or you can define your own type and write your own version of the OpenCV cv2eigen function.
For a custom Eigen type, follow these (and it won't be pretty):
https://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
https://eigen.tuxfamily.org/dox/TopicNewExpressionType.html
Then rewrite your own cv2eigen_custom function, similar to this:
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
So good luck.
Edit
Since you need a tensor, forget about the cv conversion functions and fill the tensor yourself:
cv::Mat image = cv::imread(argv[1], cv::IMREAD_COLOR);
Eigen::Tensor<float, 3> t_3d(image.rows, image.cols, 3);
// t_3d(i, j, k) where i is the row, j is the column and k is the channel.
for (int i = 0; i < image.rows; i++)
    for (int j = 0; j < image.cols; j++)
    {
        t_3d(i, j, 0) = (float)image.at<cv::Vec3b>(i, j)[0];
        t_3d(i, j, 1) = (float)image.at<cv::Vec3b>(i, j)[1];
        t_3d(i, j, 2) = (float)image.at<cv::Vec3b>(i, j)[2];
        // cv ref: Mat.at<data_type>(row_num, col_num)
    }
Watch out for i and j, as I am not sure about the order; I only wrote the code from the reference and didn't compile it. Also watch out for the image type to tensor type cast; sometimes you might not get what you wanted. Still, this code should in principle solve your problem.
Edit number 2
Following this example:
int storage[128]; // 2 x 4 x 2 x 8 = 128
TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);
Applied to your case this becomes:
cv::Mat frame = cv::imread("myimg.ppm");
frame.convertTo(frame, CV_32FC3); // the mapped Tensor<float, 3> needs float data
Eigen::TensorMap<Eigen::Tensor<float, 3>> t_3d((float*)frame.data, frame.rows, frame.cols, 3);
The problem is I'm not sure whether this will work or not. Even if it works, you still have to figure out how the data inside is organized so that you can get the shape correctly. Good luck.
Updated answer - OpenCV now has conversion functions for Eigen::Tensor which will solve your problem. I needed this same functionality too, so I made a contribution back to the project for everyone to use. See the documentation here:
https://docs.opencv.org/3.4/d0/daf/group__core__eigen.html
Note: if you want RGB order, you will still need to reorder the channels in OpenCV before converting to Eigen::Tensor.
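A usage sketch, assuming an OpenCV build where the Eigen tensor support is enabled (Eigen 3.3+ with C++11; the Eigen headers have to come before the OpenCV glue header):
#include <opencv2/opencv.hpp>
#include <unsupported/Eigen/CXX11/Tensor> // include Eigen before opencv2/core/eigen.hpp
#include <opencv2/core/eigen.hpp>         // provides the cv2eigen / eigen2cv overloads

int main() {
    cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR);
    cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB); // reorder to RGB first if needed
    Eigen::Tensor<float, 3, Eigen::RowMajor> tensor;
    cv::cv2eigen(mat, tensor); // tensor has shape H x W x C
    return 0;
}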
How can I convert a vector<Point2d> to a Mat?
Mat newImg = Mat(ImagePoints);
imwrite("E:/softwares/1.8.0.71/bin/newImg.png", newImg);
This is not working since imwrite() will only accept 1, 3, or 4 channels, and the image points form a 2-channel Mat.
I am using OpenCV version 3.
Here is the answer. Don't worry about the type casting (integers are used where doubles are expected); this is just to give the gist of the solution:
std::vector<cv::Point2d> points;
for (int i = 0; i < 10; i++)
{
    points.push_back(cv::Point2d(i, i));
}
cv::Mat_<cv::Point2d> matrix(points);
std::cout << matrix.at<cv::Point2d>(1);
But if you want to save this Mat, use XML via cv::FileStorage; imwrite won't write it.
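A minimal sketch of the XML route with cv::FileStorage, assuming the matrix from above:
cv::FileStorage fs("points.xml", cv::FileStorage::WRITE);
fs << "points" << matrix; // serializes the 2-channel (CV_64FC2) Mat
fs.release();
// And to read it back:
cv::FileStorage in("points.xml", cv::FileStorage::READ);
cv::Mat loaded;
in["points"] >> loaded;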
I have created a PCA solution in Matlab, which works. I am now in the middle of converting it to C++, where I use OpenCV's cv::PCA class. I found in a link that you can extract the mean, eigenvalues and eigenvectors using:
// Perform a PCA:
cv::PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, num_components);
// And copy the PCA results:
cv::Mat mean = pca.mean.clone();
cv::Mat eigenvalues = pca.eigenvalues.clone();
cv::Mat eigenvectors = pca.eigenvectors.clone();
This compiles and runs. But when I want to use the values and look into the sizes and allocation, I get the size of the input data, which seems odd, i.e.:
cv::Size temp= eigenvectors.size();
//temp has the same size as data.size();
In my mind the eigenvectors should be defined by num_components, and in my 3D space they should only be 3x3 in size, i.e. 9 elements. Can anybody explain the reasoning behind the data sizes of the pca.x.clone() results?
Also, what is the correct way of doing matrix operations in OpenCV and C++? Based on the documentation it seems like you can use operators with cv::Mat. Using the above extraction of the PCA information, can you do:
cv::Mat Z = data - mean; //according to documentation http://stackoverflow.com/questions/10936099/matrix-multiplication-in-opencv
cv::Mat res = Z*eigenvectors;
This has crashed at runtime in my tests, probably because of the issue with the misinterpretation of the sizes.
Question
The basic question is: how do I use the OpenCV PCA function correctly?
Edit
Another way that I have tested is to use the following code:
cv::PCA pca_analysis(mat, cv::Mat(), CV_PCA_DATA_AS_ROW, num_components);
cv::Point3d cntr = cv::Point3d(static_cast<double>(pca_analysis.mean.at<double>(0, 0)),
                               static_cast<double>(pca_analysis.mean.at<double>(0, 1)),
                               static_cast<double>(pca_analysis.mean.at<double>(0, 2)));
// Store the eigenvalues and eigenvectors
std::vector<cv::Point3d> eigen_vecs(num_components);
std::vector<double> eigen_val(num_components);
for (int i = 0; i < num_components; ++i)
{
    eigen_vecs[i] = cv::Point3d(pca_analysis.eigenvectors.at<double>(0, i * num_components),
                                pca_analysis.eigenvectors.at<double>(0, i * num_components + 1),
                                pca_analysis.eigenvectors.at<double>(0, i * num_components + 2));
    eigen_val[i] = pca_analysis.eigenvalues.at<double>(0, i);
}
When I compare the results from the above code with my Matlab implementation, the cntr seems plausible (within a small percentage difference). But the eigenvectors and eigenvalues are just zero when I look at them in the debugger. This goes back to my original question of how to extract and understand the PCA output.
Can anybody clarify what I am missing?
I came across your question when searching for how to access the eigenvalues and eigenvectors.
I personally needed to see the full information and would expect that myself, so it's possible that the developers considered my use-case more likely than yours.
In any case, as they are sorted, you can take the first num_components eigenvalues/eigenvectors as being the information you are after.
Also, from your perspective - i.e. if it worked as you expect it to - think about the converse: if you then wanted to see the full info, how would you? The developers would have to implement a new parameter...
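A minimal sketch of that, under the assumption that data is a CV_64F Mat with one 3-D point per row (this also sidesteps the crash above: the eigenvectors come back as rows, so the projection needs a transpose):
cv::PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW); // full decomposition
// Eigenvectors are stored one per row, sorted by decreasing eigenvalue,
// so the leading num_components rows hold the information you are after.
cv::Mat top_vecs = pca.eigenvectors.rowRange(0, num_components); // num_components x 3
cv::Mat top_vals = pca.eigenvalues.rowRange(0, num_components);  // num_components x 1
// Projection: subtract the mean from every row, then multiply by the
// transpose, because the eigenvectors are rows rather than columns.
cv::Mat Z = data - cv::repeat(pca.mean, data.rows, 1);
cv::Mat res = Z * top_vecs.t();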
I want to ask if it is possible to access a single-channel matrix using img.at<T>(y, x) instead of using img.ptr<T>(y, x)[0].
In the example below, I create a simple program to copy an image to another:
cv::Mat inpImg = cv::imread("test.png");
cv::Mat img;
inpImg.convertTo(img, CV_8UC1); // single channel image
cv::Mat outImg(img.rows, img.cols, CV_8UC1);
for (int a = 0; a < img.cols; a++)
    for (int b = 0; b < img.rows; b++)
        outImg.at<uchar>(b, a) = img.at<uchar>(b, a); // This is wrong
cv::imshow("Test", outImg);
The result shown was wrong, but if I change it to
outImg.ptr<uchar>(b, a)[0] = img.ptr<uchar>(b, a)[0];
The result was correct.
I'm quite puzzled, since using img.at<T>(y, x) should also be okay. I also tried with CV_32FC1 and float; the result is similar.
Although I know you already found it, the real reason - buried nicely in the documentation - is that cv::Mat::convertTo ignores the number of channels implied by the output type, so when you do this:
inpImg.convertTo(img, CV_8UC1);
And, assuming your input image has three channels, you actually end up with a CV_8UC3 format, which explains why your initial workaround was successful - effectively, you only took a single channel by doing this:
outImg.ptr<uchar>(b, a)[0] // takes the first channel of a CV_8UC3
This only worked by accident, as the pixel should have been accessed like this:
outImg.ptr<Vec3b>(b, a)[0] // accesses the whole BGR pixel of a CV_8UC3
As the data is still packed uchar in both cases, the effective reinterpretation happened to work.
As you noted, you can either convert to greyscale on loading:
cv::imread("test.png", CV_LOAD_IMAGE_GRAYSCALE)
Or, you can convert explicitly:
cv::cvtColor(inpImg, inpImg, CV_BGR2GRAY);
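Putting that together, a sketch of the corrected copy loop under the assumption of a colour input that should become greyscale:
cv::Mat inpImg = cv::imread("test.png");
cv::Mat img;
cv::cvtColor(inpImg, img, CV_BGR2GRAY); // really drops to one channel
cv::Mat outImg(img.rows, img.cols, CV_8UC1);
for (int b = 0; b < img.rows; b++)
    for (int a = 0; a < img.cols; a++)
        outImg.at<uchar>(b, a) = img.at<uchar>(b, a); // now correct for CV_8UC1
cv::imshow("Test", outImg);
cv::waitKey(0);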
I read from this thread - Get most accurate image using OpenCV - that I can use the variance to measure which of the input images is the sharpest. I can't seem to find a tutorial for this, and I am very new to OpenCV. Right now, my code scans images from a folder and stores them in a std::vector:
for (int ct = 0; ct < images.size(); ct++) {
    // should I put the cvAvgSdv function here?
    waitKey(0);
}
Thank you for any help!
Update: I called this function:
cvAvgSdv(images[ct], &scalar_mean, &std_dev);
and it gave me an error:
No suitable conversion function from cv::Mat to const CvArr * exists.
Can I use the function without converting the Mat to an IplImage? If not, what's the easiest way to convert the Mat?
Yes, it is.
With the C API you would calculate it like this:
CvScalar mean, std_dev;
cvAvgSdv(img, &mean, &std_dev, NULL);
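Since you have cv::Mat images, the C++ equivalent cv::meanStdDev avoids the CvArr conversion entirely. A sketch of your loop, assuming images holds greyscale Mats and using the variance (stddev squared) as the sharpness score:
double best_score = -1.0;
int best_index = -1;
for (int ct = 0; ct < (int)images.size(); ct++) {
    cv::Scalar mean, std_dev;
    cv::meanStdDev(images[ct], mean, std_dev);
    double variance = std_dev[0] * std_dev[0]; // variance of the first channel
    if (variance > best_score) {
        best_score = variance; // remember the highest-variance image
        best_index = ct;
    }
}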