Hi, I found this code to compare two images:
cv::Mat img1 = ...
cv::Mat img2 = ...
cv::Mat result = ...
int threshold = (int)(img1.rows * img1.cols * 0.7);
cv::compare(img1 , img2 , result , cv::CMP_EQ );
int similarPixels = countNonZero(result);
if ( similarPixels > threshold ) {
cout << "similar" << endl;
}
But since I'm new to OpenCV, I don't know what the values for cv::Mat img1 = ... should be.
Please help me out; I tried the code with the image path as the value, but it gives an error.
void compare(const MatND& src1, const MatND& src2, MatND& dst, int cmpop)
The compare function takes 4 parameters, and these are:
•src1 – the first source array
•src2 – the second source array; it must have the same size and type as src1 (an overload accepts a scalar value to compare each array element with, in place of src2)
•dst – the destination array; it will have the same size as src1 and type CV_8UC1
•cmpop – the flag specifying the comparison to perform (for example CMP_EQ)
As I understand it, an image has some fiducial points in an (X, Y) coordinate system, such as:
1 45 123
2 56 164
3 64 147
These points are stored in arrays, and the arrays are then passed to the compare function in order to find matches.
If you want to work with image comparison, my advice is to look at the FERET database, which is free to use for research.
You can use imread to load an image into cv::Mat img1, e.g.
cv::Mat img1 = imread("c:\\test.bmp"); // loads a color image (3 channels, stored in BGR order by default)
I'm using OpenCV (v 2.4.9.1, Ubuntu 16.04) to do a resize and crop on an image; the original image is a JPEG file with dimensions 640x480.
cv::Mat _aspect_preserving_resize(const cv::Mat& image, int target_width)
{
cv::Mat output;
int min_dim = ( image.cols >= image.rows ) ? image.rows : image.cols;
float scale = ( ( float ) target_width ) / min_dim;
cv::resize( image, output, cv::Size(int(image.cols*scale), int(image.rows*scale)));
return output;
}
cv::Mat _center_crop(cv::Mat& image, cv::Size& input_size)
{
cv::Rect myROI(int(image.cols/2-input_size.width/2), int(image.rows/2-input_size.height/2), input_size.width, input_size.height);
cv::Mat croppedImage = image(myROI);
return croppedImage;
}
int min_input_size = int(input_size.height * 1.14);
cv::Mat image = cv::imread("power-dril/47105738371_72f83eeb37_z.jpg");
cv::Mat output = _aspect_preserving_resize(image, min_input_size);
cv::Mat result = _center_crop(output, input_size);
After this I display the images, and it looks perfect, just as I would expect.
The problem comes when I stream this image: I notice that the size (in elements) of the cropped image is only a third of what I would expect. It looks as if there is only one channel in the resulting crop. It should have had 224*224*3 = 150528 elements, but I'm getting only 50176 when I do
std::cout << cropped_image.total() << " " << cropped_image.type() << endl;
>>> 50176 16
Any idea what's wrong here? The type of the resulting cv::Mat looks okay, and visually it looks fine too, so how come there is only one channel?
Thanks in advance.
Basic Structures — OpenCV 2.4.13.7 documentation says:
Mat::total
Returns the total number of array elements.
C++: size_t Mat::total() const
The method returns the number of array elements (a number of pixels if
the array represents an image).
Therefore, the return value is the number of pixels, 224*224 = 50176, and your expected value is wrong.
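To make the distinction concrete, a small sketch (assuming a 224x224 BGR crop) shows where the two numbers come from:

cv::Mat crop(224, 224, CV_8UC3);
std::cout << crop.total() << std::endl;                    // 50176  (pixels)
std::cout << crop.channels() << std::endl;                 // 3      (channels per pixel)
std::cout << crop.total() * crop.elemSize() << std::endl;  // 150528 (bytes = pixels * 3)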
My terminology was wrong, as pointed out by @MikeCAT, and it seems that the issue should be solved in the serialization logic. I went with a solution along the lines of this one:
Convert Mat to Array/Vector in OpenCV
My original code didn't take channels() into account.
if (curr_img.isContinuous()) {
    // Continuous data: copy all bytes (rows * cols * channels) in one go.
    int totalsz = curr_img.dataend - curr_img.datastart;
    array.assign(curr_img.datastart, curr_img.datastart + totalsz);
} else {
    // Non-continuous data: copy row by row, rowsz bytes per row.
    int rowsz = CV_ELEM_SIZE(curr_img.type()) * curr_img.cols;
    for (int i = 0; i < curr_img.rows; ++i) {
        array.insert(array.end(), curr_img.ptr<uint8_t>(i), curr_img.ptr<uint8_t>(i) + rowsz);
    }
}
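For a round-trip check, a hypothetical usage sketch (the vector is named array here, matching the snippet above, and curr_img is the source Mat):

std::vector<uint8_t> array;   // filled by the copy logic above: rows * cols * channels bytes
// ... run the continuous / row-by-row copy shown above ...
cv::Mat rebuilt(curr_img.rows, curr_img.cols, curr_img.type(), array.data());
cv::Mat owned = rebuilt.clone();   // rebuilt only wraps the vector's memory; clone() to own it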
I'm trying to implement color conversion from RGB to LMS and back from LMS to RGB, using reshape for the matrix multiplication, following the answer to this question: Fastest way to apply color matrix to RGB image using OpenCV 3.0?
My ori Mat object comes from an image with 3 channels (RGB), and I need to multiply it by a 1-channel matrix (lms). It seems like I have an issue with the matrix type. I've read the reshape docs and questions related to this issue, like Issues multiplying Mat matrices, and I believe I have followed the instructions.
Here's my code [UPDATED: convert into a flat image]:
void test(const Mat &forreshape, Mat &output, Mat &pic, int rows, int cols)
{
Mat lms(3, 3, CV_32FC3);
Mat rgb(3, 3, CV_32FC3);
Mat intolms(rows, cols, CV_32F);
lms = (Mat_<float>(3, 3) << 1.4671, 0.1843, 0.0030,
3.8671, 27.1554, 3.4557,
4.1194, 45.5161 , 17.884 );
/* switch the order of the matrix according to the BGR order of color on OpenCV */
Mat transpose = (3, 3, CV_32F, lms).t(); // this will do transpose from matrix lms
pic = forreshape.reshape(1, rows*cols);
Mat flatFloatImage;
pic.convertTo(flatFloatImage, CV_32F);
rgb = flatFloatImage*transpose;
output = rgb.reshape(3, cols);
}
I defined my Mat object, and I converted it into float using convertTo:
Mat ori = imread("ori.png", CV_LOAD_IMAGE_COLOR);
int rows = ori.rows;
int cols = ori.cols;
Mat forreshape;
ori.convertTo(forreshape, CV_32F);
Mat pic(rows, cols, CV_32FC3);
Mat output(rows, cols, CV_32FC3);
The error is:
OpenCV Error: Assertion failed (type == B.type() && (type == CV_32FC1 || type == CV_64FC1 || type == CV_32FC2 || type == CV_64FC2)) ,
so it's the type issue.
I tried to change all the types to either 32FC3 or 32FC1, but it doesn't seem to work. Any suggestions?
I believe what you need is to convert your input to a flat image and then multiply them:
float lms [] = {1.4671, 0.1843, 0.0030,
3.8671, 27.1554, 3.4557,
4.1194, 45.5161 , 17.884};
Mat lmsMat(3, 3, CV_32F, lms );
Mat flatImage = ori.reshape(1, ori.rows * ori.cols);
Mat flatFloatImage;
flatImage.convertTo(flatFloatImage, CV_32F);
Mat mixedImage = flatFloatImage * lmsMat;
Mat output = mixedImage.reshape(3, ori.rows);
I might have messed up the lms matrix there, but I guess you can take it from here.
Also see 3D matrix multiplication in opencv for RGB color mixing
EDIT:
The problem with the distortion is that you get an overflow after the float to 8U conversion. This would do the trick:
rgb = flatFloatImage*transpose;
rgb.convertTo(pic, CV_32S);
output = pic.reshape(3, rows);
Also, I'm not sure, but a quick Google search gives me a different matrix for LMS; see here. Also note that OpenCV stores colors in B-G-R order instead of RGB, so change your mix matrices accordingly.
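As a side note, not part of the original answer: OpenCV's cv::transform applies a small matrix to every pixel of a multi-channel image, which avoids the reshape round trip entirely. A minimal sketch, assuming ori is the BGR image from the question and using the question's (unverified) coefficients:

cv::Mat lms = (cv::Mat_<float>(3, 3) << 1.4671, 0.1843, 0.0030,
                                        3.8671, 27.1554, 3.4557,
                                        4.1194, 45.5161, 17.884);
cv::Mat floatImage, mixed, back;
ori.convertTo(floatImage, CV_32FC3);          // work in float to avoid clipping
cv::transform(floatImage, mixed, lms);        // each output pixel = lms * input pixel
// remember the channels are B, G, R here, so reorder the matrix if it assumes R, G, B
mixed.convertTo(back, CV_8UC3);               // convertTo saturates when going back to 8-bit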
I am looking to normalize the pixel values of an image to the range [0..1] using C++/OpenCV. However, when I do the normalization using either image *= 1./255 or the normalize function, the pixel values are rounded down to zero. I have tried setting the image to type CV_32FC3.
Below is the code I have:
Mat image;
image = imread(imageLoc, CV_LOAD_IMAGE_COLOR | CV_LOAD_IMAGE_ANYDEPTH);
Mat tempImage;
// (didn't work) tempImage *= 1./255;
image.convertTo(tempImage, CV_32F, 3);
normalize(image, tempImage, 0, 1, CV_MINMAX);
int r = 100;
int c = 150;
uchar* ptr = (uchar*)(tempImage.data + r * tempImage.step);
Vec3f tempVals;
tempVals.val[0] = ptr[3*c+1];
tempVals.val[1] = ptr[3*c+2];
tempVals.val[2] = ptr[3*c+3];
cout<<" temp image - "<< tempVals << endl;
uchar* ptr2 = (uchar*)(image.data + r * image.step);
Vec3f imVals;
imVals.val[0] = ptr2[3*c+1];
imVals.val[1] = ptr2[3*c+2];
imVals.val[2] = ptr2[3*c+3];
cout<<" image - "<< imVals << endl;
This produces the following output in the console:
temp image - [0, 0, 0]
image - [90, 78, 60]
You can make convertTo() do the normalization for you:
image.convertTo(tempImage, CV_32FC3, 1.f/255);
You are passing 3 to convertTo(), presumably as channel-count, but that's not the correct signature.
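Not part of the original answer, but as a sketch of the corrected flow: once tempImage really is CV_32FC3, it is also simpler to read a pixel with at<cv::Vec3f> instead of the uchar pointer arithmetic in the question (imageLoc, r and c are the names from the question):

cv::Mat image = cv::imread(imageLoc, CV_LOAD_IMAGE_COLOR | CV_LOAD_IMAGE_ANYDEPTH);
cv::Mat tempImage;
image.convertTo(tempImage, CV_32FC3, 1.f/255);        // scale 0..255 down to 0..1

int r = 100;
int c = 150;
cv::Vec3f tempVals = tempImage.at<cv::Vec3f>(r, c);   // float pixel, values in [0,1]
std::cout << " temp image - " << tempVals << std::endl;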
I used the normalize function and it worked (Java):
Core.normalize(src,dst,0.0,1.0,Core.NORM_MINMAX,CvType.CV_32FC1);
You should use a 32F depth for your destination image. I believe the reason is that, since you need decimal values, you should use a non-integer OpenCV data type. According to this table, the float types correspond to the 32F depth. I chose the number of channels to be 1 and it worked: CV_32FC1.
Remember also that you're unlikely to spot any visual difference in the image.
Finally, since you probably have thousands of pixels in your image, your console might appear to be printing only zeros. Due to the large amount of data, try using CTRL+F to see what's going on. Hope this helps.
I am relatively new to C++ and coding in general, and I have run into a problem when attempting to convert an image to a floating-point image. I am attempting to do this to eliminate round-off errors when calculating the mean and standard deviation of pixel intensity for images, as they start to affect the data quite substantially. My code is below.
Mat img = imread("Cells2.tif");
cv::namedWindow("stuff", CV_WINDOW_NORMAL);
cv::imshow("stuff",img);
CvMat cvmat = img;
Mat dst = cvCreateImage(cvGetSize(&cvmat),IPL_DEPTH_32F,1);
cvConvertScale(&cvmat,&dst);
cvScale(&dst,&dst,1.0/255);
cvNamedWindow("Test",CV_WINDOW_NORMAL);
cvShowImage("Test",&dst);
And I am running into this error
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in an unknown function, file ......\modules\core\src\array.cpp, line 1238
I've looked everywhere and everyone was saying to convert img to CvMat which I attempted above.
When I did that, as the above code shows, I get
OpenCV Error: Bad argument (Unknown array type) in unknown function, file ......\modules\core\src\matrix.cpp line 697
Thanks for your help in advance.
Just use the C++ OpenCV interface instead of the C interface, and use the convertTo function to convert between data types.
Mat img = imread("Cells2.tif");
cv::imshow("source",img);
Mat dst; // destination image
// check if we have RGB or grayscale image
if (img.channels() == 3) {
    // convert 3-channel (BGR) 8-bit uchar image to 32-bit float
    img.convertTo(dst, CV_32FC3);
}
else if (img.channels() == 1) {
    // convert 1-channel (grayscale) 8-bit uchar image to 32-bit float
    img.convertTo(dst, CV_32FC1);
}
// display output, note that to display dst image correctly
// we have to divide each element of dst by 255 to keep
// the pixel values in the range [0,1].
cv::imshow("output",dst/255);
waitKey();
Second part of the question: to calculate the mean of all elements in dst:
cv::Scalar avg_pixel;
double avg;
// note that Scalar is a vector.
// If your image is RGB, Scalar will contain 3 values,
// representing color values for each channel.
avg_pixel = cv::mean(dst);
if (dst.channels() == 3) {
//if 3 channels
avg = (avg_pixel[0] + avg_pixel[1] + avg_pixel[2]) / 3;
}
if(dst.channels() == 1) {
avg = avg_pixel[0];
}
cout << "average element of m: " << avg << endl;
Here is my code for calculating the average in C++ OpenCV.
int NumPixels = img.total();
double avg;
double c = 0;
for (int y = 0; y < img.cols; y++)
    for (int x = 0; x < img.rows; x++)
        c += img.at<uchar>(x, y);
avg = c / NumPixels;
cout << "Avg Value\n" << 255 * avg;
For MATLAB I just load the image and take Q = mean(img(:)); which returns 1776.23
And for the return of 1612.36 I used cv::Scalar z = mean(dst);
When working with 1-channel (e.g. CV_8UC1) Mat objects in OpenCV, this creates a Mat of all ones: cv::Mat img = cv::Mat::ones(x,y,CV_8UC1).
However, when I use 3-channel images (e.g. CV_8UC3), things get a little more complicated. Doing cv::Mat img = cv::Mat::ones(x,y,CV_8UC3) puts ones into channel 0, but channels 1 and 2 contain zeros. So, how do I use cv::Mat::ones() for multi-channel images?
Here's some code that might help you to see what I mean:
void testOnes() {
int x=2; int y=2; //arbitrary
// 1 channel
cv::Mat img_C1 = cv::Mat::ones(x,y,CV_8UC1);
uchar px1 = img_C1.at<uchar>(0,0); //not sure of correct data type for px in 1-channel img
printf("px of 1-channel img: %d \n", (int)px1); //prints 1
// 3 channels
cv::Mat img_C3 = cv::Mat::ones(x,y,CV_8UC3); //note 8UC3 instead of 8UC1
cv::Vec3b px3 = img_C3.at<cv::Vec3b>(0,0);
printf("px of 3-channel img: %d %d %d \n", (int)px3[0], (int)px3[1], (int)px3[2]); //prints 1 0 0
}
So, I would have expected to see this printout: px of 3-channel img: 1 1 1, but instead I see this: px of 3-channel img: 1 0 0.
P.S. I did a lot of searching before posting this. I wasn't able to resolve this by searching SO for "[opencv] Mat::ones" or "[opencv] +mat +ones".
I don't use OpenCV, but I believe I know what's going on here. You define a data type, but you are requesting the value '1' for it. The Mat class appears not to pay attention to the fact that you have a multi-channel data type, so it simply treats '1' as a scalar whose first channel is 1 and whose remaining channels are 0.
So instead of using the ones function, just use the scalar constructor:
cv::Mat img_C3( x, y, CV_8UC3, CV_RGB(1,1,1) );
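A quick check, reusing the x, y and printf pattern from the question, confirms that all three channels now hold 1:

cv::Mat img_C3(x, y, CV_8UC3, cv::Scalar(1, 1, 1));   // same idea as CV_RGB(1,1,1)
cv::Vec3b px3 = img_C3.at<cv::Vec3b>(0, 0);
printf("px of 3-channel img: %d %d %d \n", (int)px3[0], (int)px3[1], (int)px3[2]);   // prints 1 1 1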
You can also initialize like this:
Mat img;
/// Lots of stuff here ...
// Need to initialize again for some reason:
img = Mat(Size(width, height), CV_8UC3, CV_RGB(255,255,255));