OpenCV error while building it with C++ - c++

I am writing code for change detection in C++ using OpenCV, but it throws a runtime error when I change one of the input images.
void MainWindow::on_pushButton_2_clicked()
{
    cv::Mat input1 = cv::imread("C:\\Users\\trainee2017233\\Desktop\\pre-post\\sulamani_ms1p1_pre_gref.tif");
    cv::Mat input2 = cv::imread("C:\\Users\\trainee2017233\\Desktop\\post-post\\sulamani_ms1p1_pre_gref.tif");

    cv::Mat diff;
    cv::absdiff(input1, input2, diff);

    cv::Mat diff1Channel;
    // WARNING: this will weight channels differently! - instead you might want some different metric here. e.g. (R+B+G)/3 or MAX(R,G,B)
    cv::cvtColor(diff, diff1Channel, CV_BGR2GRAY);

    float threshold = 30; // pixel may differ only up to "threshold" to count as being "similar"
    cv::Mat mask = diff1Channel < threshold;
    cv::imshow("similar in both images", mask);

    // use similar regions in new image: Use black as background
    cv::Mat similarRegions(input1.size(), input1.type(), cv::Scalar::all(0));
    // copy masked area
    input1.copyTo(similarRegions, mask);

    cv::imshow("input1", input1);
    cv::imshow("input2", input2);
    cv::imshow("similar regions", similarRegions);
    cv::imwrite("../outputData/Similar_result.png", similarRegions);
    cv::waitKey(0);
}
When I pass the same image for both inputs there is no error, but as soon as I use two different images I get this error:
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op, file D:\opencv\sources\modules\core\src\arithm.cpp, line 659

Here input1 and input2 must have the same size (and the same number of channels) for cv::absdiff. If they differ, resize one to match the other before taking the difference:
...
cv::resize(input2, input2, input1.size());
cv::Mat diff;
...
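A slightly fuller sketch of that fix, using the asker's paths (the empty-image check is an extra precaution, not part of the original answer):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat input1 = cv::imread("C:\\Users\\trainee2017233\\Desktop\\pre-post\\sulamani_ms1p1_pre_gref.tif");
    cv::Mat input2 = cv::imread("C:\\Users\\trainee2017233\\Desktop\\post-post\\sulamani_ms1p1_pre_gref.tif");

    if (input1.empty() || input2.empty())
        return -1; // one of the files could not be read

    if (input1.size() != input2.size())
        cv::resize(input2, input2, input1.size()); // make "array op array" valid

    cv::Mat diff;
    cv::absdiff(input1, input2, diff);

    cv::imshow("diff", diff);
    cv::waitKey(0);
    return 0;
}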

Related

Performing Image filtering with OpenCV & C++, error: "Sizes of input arguments do not match"

Here's how I load my image and define my buttons:
img = imread("lena.jpg");
createButton("Show histogram", showHistCallback, NULL, QT_PUSH_BUTTON, 0);
createButton("Equalize histogram", equalizeCallback, NULL, QT_PUSH_BUTTON, 0);
createButton("Cartoonize", cartoonCallback, NULL, QT_PUSH_BUTTON, 0);
imshow("Input", img);
waitKey(0);
return 0;
I can load and show my image properly. The Show histogram and Equalize histogram functions also work properly, but when I try to call Cartoonize, I get this error:
[ WARN:0] global /home/hiro/Documents/OpenCV/opencv-4.3.0-source/modules/core/src/matrix_expressions.cpp (1334)
assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: https://github.com/opencv/opencv/issues/16739
terminate called after throwing an instance of 'cv::Exception'
what():OpenCV(4.3.0) /home/hiro/Documents/OpenCV/opencv-4.3.0-source/modules/core/src/arithm.cpp:669:
error: (-209:Sizes of input arguments do not match)
The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'arithm_op'
So I'm guessing the error comes from the cartoonCallback function, most likely a channel mismatch. I have made sure that the multiplication is between images with the same number of channels, and I converted everything back to 3 channels, yet I can't figure out where the error comes from. Here's the code:
void cartoonCallback(int state, void* userdata){
    Mat imgMedian;
    medianBlur(img, imgMedian, 7);

    Mat imgCanny;
    Canny(imgMedian, imgCanny, 50, 150); //Detect edges with canny

    Mat kernel = getStructuringElement(MORPH_RECT, Size(2,2));
    dilate(imgCanny, imgCanny, kernel); //Dilate image
    imgCanny = imgCanny/255;
    imgCanny = 1 - imgCanny;

    Mat imgCannyf; //use float values to allow multiply between 0 and 1
    imgCanny.convertTo(imgCannyf, CV_32FC3);
    blur(imgCannyf, imgCannyf, Size(5,5));

    Mat imgBF;
    bilateralFilter(img, imgBF, 9, 150.0, 150.0); //apply bilateral filter
    Mat result = imgBF/25; //truncate color
    result = result*25;

    Mat imgCanny3c; //Create 3 channels for edges
    Mat cannyChannels[] = {imgCannyf, imgCannyf, imgCannyf};
    merge(cannyChannels, 3, imgCanny3c);

    Mat resultFloat;
    result.convertTo(imgCanny3c, CV_32FC3); //convert result to float
    multiply(resultFloat, imgCanny3c, resultFloat);
    resultFloat.convertTo(result, CV_8UC3); //convert back to 8 bit
    imshow("Cartoonize", result);
}
Any suggestions?
The problem is within this snippet:
cv::Mat resultFloat; // You prepare an output mat... with no dimensions nor type
result.convertTo(imgCanny3c, CV_32FC3); //convert result to float..ok
cv::multiply(resultFloat, imgCanny3c, resultFloat); //resultFloat is empty and has no dimensions!
As you can see, you pass resultFloat to cv::multiply(operand1, operand2, output), but resultFloat is empty, with no dimensions or type, and is then multiplied with imgCanny3c. That is the cause of the error.
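A corrected version of that snippet converts result into resultFloat instead of overwriting imgCanny3c; a minimal sketch using the variable names already in the question:

cv::Mat resultFloat;
result.convertTo(resultFloat, CV_32FC3);            // convert the truncated-colour image to float
cv::multiply(resultFloat, imgCanny3c, resultFloat); // both operands now share size and type (CV_32FC3)
resultFloat.convertTo(result, CV_8UC3);             // back to 8 bit for display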

OpenCV - absdiff with a mask

I am trying to calculate the absolute difference of two images using a mask, so that only a region of the images is considered in the difference. But cv::absdiff does not take a mask argument. I saw this question, but it did not work for me. I am trying to multiply the result by the mask so that only the specified region remains.
code:
Mat region = //a grayscale image containing a region of 255 and the rest is zero
Mat img1, img2 = //two images of the same size as the region image and of type CV_8UC1
Mat mask = region / 255; //to make a binary mask
Mat difference = Mat::zeros(region.rows, region.cols, CV_8UC1);
cv::absdiff(img1, img2, difference);
difference = difference * mask;
if (!difference.empty()) imshow("difference", difference);
When I try this, I get an error.
error:
Error: Assertion failed (a_size.width == len) in cv::gemm
which happens here:
inline
Mat& Mat::operator = (const MatExpr& e)
{
    e.op->assign(e, *this);
    return *this;
}
difference * mask means that you are performing matrix multiplication; for that, the number of columns of difference must equal the number of rows of mask (hence the a_size.width == len assertion in cv::gemm). If you want an element-wise multiplication, you should call difference.mul(mask).
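For completeness, a small sketch of the element-wise version using the question's variable names (the copyTo alternative is a suggestion beyond the original answer):

cv::absdiff(img1, img2, difference);
difference = difference.mul(mask);   // per-pixel multiplication; same size and type required

// alternative that skips building 'mask' entirely:
// cv::Mat masked;
// difference.copyTo(masked, region); // 'region' acts directly as the mask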

OpenCV: DFT function fails on an image read with IMREAD_COLOR

When the image is read with IMREAD_COLOR, the dft function throws an error. It works just fine when the image is read with IMREAD_GRAYSCALE, but I want to read the image with IMREAD_COLOR.
main function
const char* filename = "face.jpg";
Mat I = imread(filename, IMREAD_COLOR);
if(I.empty()) return 0;
Mat padded;
I.convertTo(padded, CV_32F);
Mat fft;
Mat planes[2];
dft(padded, fft, DFT_SCALE|DFT_COMPLEX_OUTPUT);
Mat fftBlur = fft.clone();
fftBlur *= 0.5;
split(fftBlur, planes);
Mat ph, mag;
mag.zeros(planes[0].rows, planes[0].cols, CV_32F);
ph.zeros(planes[0].rows, planes[0].cols, CV_32F);
cartToPolar(planes[0], planes[1], mag, ph);
merge(planes, 2, fftBlur);
//inverse
Mat invfft;
dft(fftBlur, invfft, DFT_INVERSE|DFT_REAL_OUTPUT);
Mat result;
invfft.convertTo(result, CV_8U);
Mat image;
cvtColor(result, image, COLOR_GRAY2RGB);
imshow("Output", result);
imshow("Image", image);
waitKey();
The message you receive is an assertion: it tells you that the DFT function only takes single-precision floating-point images with one or two channels (CV_32FC1, CV_32FC2; the letter C at the end of the flag means channels) or double-precision floating-point images with one or two channels (CV_64FC1, CV_64FC2).
The two-channel case is actually how OpenCV stores a complex image.
If you want, you can split your image into a std::vector<cv::Mat> where each element represents one channel (using cv::split), apply the DFT to each channel, do whatever processing you need, and then recreate a multi-channel image with cv::merge.
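A minimal sketch of that per-channel approach, assuming a 3-channel BGR image already loaded into I as in the question:

// split the colour image, run the DFT per channel, then merge back
std::vector<cv::Mat> channels;
cv::split(I, channels);                    // three CV_8UC1 planes

std::vector<cv::Mat> restored(channels.size());
for (size_t i = 0; i < channels.size(); ++i)
{
    cv::Mat plane, spectrum;
    channels[i].convertTo(plane, CV_32F);  // dft needs CV_32F or CV_64F, 1 or 2 channels
    cv::dft(plane, spectrum, cv::DFT_SCALE | cv::DFT_COMPLEX_OUTPUT);

    // ... frequency-domain processing on 'spectrum' goes here ...

    cv::Mat inverse;
    cv::dft(spectrum, inverse, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT);
    inverse.convertTo(restored[i], CV_8U);
}

cv::Mat result;
cv::merge(restored, result);               // back to a 3-channel image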
From Learning OpenCV (about dft function):
The input array must be of floating-point type and may be single- or double-channel. In the single-channel case, the entries are assumed to be real numbers, and the output will be packed in a special space-saving format called complex conjugate symmetrical.
The same question is discussed here in terms of MATLAB image processing.
You can check out cv::split function if you want to separate channels of your initial image.

How to apply bitwise_and on cv::Mat?

I am trying to apply a cartoon filter to a UIImage with the help of OpenCV. My code is as follows:
+ (UIImage *)createCartoonizedImageFromImage:(UIImage *)inputImage {
    int num_down = 2; //number of downsampling steps

    cv::Mat image_rgb = [self cvMatFromUIImage:inputImage];
    cv::Mat image_color;
    cv::cvtColor(image_rgb, image_color, cv::COLOR_RGBA2RGB);

    //downsample image using Gaussian pyramid
    for(int i = 0; i < num_down; i++)
    {
        cv::pyrDown(image_color, image_color);
    }

    // apply bilateral filter
    cv::Mat image_bilateral = image_color.clone();
    cv::bilateralFilter(image_color, image_bilateral, 9, 9, 7);

    // upsample image to original size
    for(int i = 0; i < num_down; i++)
    {
        cv::pyrUp(image_color, image_color);
    }

    // convert to grayscale
    cv::Mat image_gray;
    cv::cvtColor(image_rgb, image_gray, cv::COLOR_RGB2GRAY);

    // apply median blur
    cv::Mat image_blur;
    cv::medianBlur(image_gray, image_blur, 7);

    // detect and enhance edges
    cv::Mat image_edge;
    cv::adaptiveThreshold(image_blur, image_edge, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 9, 2);

    // convert back to color, bit-AND with color image
    cv::cvtColor(image_edge, image_edge, cv::COLOR_GRAY2RGB);
    cv::Mat image_cartoon;
    cv::bitwise_and(image_bilateral, image_edge, image_cartoon);

    UIImage *cartoonImage = [self UIImageFromCVMat:image_cartoon];
    return cartoonImage;
}
on the line
cv::bitwise_and(image_bilateral, image_edge, image_cartoon);
the above code gives me the following error:
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array') in binary_op, file /Users/kyle/code/opensource/opencv/modules/core/src/arithm.cpp, line 225
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/kyle/code/opensource/opencv/modules/core/src/arithm.cpp:225: error: (-209) The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array' in function binary_op
My Question
I know that the problem is the mismatched sizes of the input arrays, but how can I correct them and make them the same size without affecting the end result?
As clearly stated in the OpenCV error, "Sizes of input arguments do not match", i.e. image_bilateral.size() != image_edge.size(). A simple debug print will do the trick, so next time try using your debugger! Here is your modified code:
int num_down = 2; //number of downsampling steps

cv::Mat image_rgb = imread(FileName1, 1);
cv::Mat image_color;
cv::cvtColor(image_rgb, image_color, cv::COLOR_RGBA2RGB);

//downsample image using Gaussian pyramid
for(int i = 0; i < num_down; i++)
{
    cv::pyrDown(image_color, image_color);
}

// apply bilateral filter
cv::Mat image_bilateral = image_color.clone();
cv::bilateralFilter(image_color, image_bilateral, 9, 9, 7);

// upsample image to original size
for(int i = 0; i < num_down; i++)
{
    cv::pyrUp(image_color, image_color);
    cv::pyrUp(image_bilateral, image_bilateral); //Bug fix <-- the bilateral-filtered image was missing this upsampling
}

// convert to grayscale
cv::Mat image_gray;
//cv::cvtColor(image_rgb, image_gray, cv::COLOR_RGB2GRAY); //Bug <-- used the RGBA image instead of the RGB one
cv::cvtColor(image_color, image_gray, cv::COLOR_RGB2GRAY);

// apply median blur
cv::Mat image_blur;
cv::medianBlur(image_gray, image_blur, 7);

// detect and enhance edges
cv::Mat image_edge;
cv::adaptiveThreshold(image_blur, image_edge, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 9, 2);

// convert back to color, bit-AND with color image
cv::cvtColor(image_edge, image_edge, cv::COLOR_GRAY2RGB);
cv::Mat image_cartoon;
//cv::bitwise_and(image_bilateral, image_edge, image_cartoon); //Bug <-- here image_bilateral was 1/4 the size of image_edge
cv::bitwise_and(image_bilateral, image_edge, image_cartoon);

imshow("Cartoon", image_cartoon);

Mask color image in OpenCV C++

I have a black/white image and a colour image of the same size. I want to combine them into one image that is black wherever the black/white image is black, and takes the colour of the colour image wherever the black/white image is white.
This is the code in C++:
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;

int main(){
    Mat img1 = imread("frame1.jpg");      //coloured image
    Mat img2 = imread("framePr.jpg", 0);  //grayscale image

    imshow("Original", img1);

    //perform AND
    Mat r;
    bitwise_and(img1, img2, r);

    imshow("Result", r);
    waitKey(0);
    return 0;
}
This is the error message:
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array') in binary_op, file /home/voja/src/opencv-2.4.10/modules/core/src/arithm.cpp, line 1021
terminate called after throwing an instance of 'cv::Exception'
what(): /home/voja/src/opencv-2.4.10/modules/core/src/arithm.cpp:1021: error: (-209) The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array' in function binary_op
Aborted (core dumped)
Firstly, a black/white (binary) image is different from a grayscale image. Both are Mats of type CV_8U, but each pixel in a grayscale image can take any value between 0 and 255, whereas a binary image is expected to contain only two values: zero and one non-zero value.
Secondly, bitwise_and cannot be applied to Mats of different types. The grayscale image is a single-channel image of type CV_8UC1 (8 bits per pixel), while the colour image loaded by imread is a 3-channel BGR image of type CV_8UC3 (24 bits per pixel).
It appears what you are trying to do could be done with a mask.
//threshold grayscale to binary image
cv::threshold(img2, img2, 100, 255, cv::THRESH_BINARY);
//copy the color image with binary image as mask
img1.copyTo(r, img2);
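Put together as a self-contained sketch (file names are the asker's; the threshold value 100 follows the answer; the destination Mat r is declared explicitly here):

#include <opencv2/opencv.hpp>

int main(){
    cv::Mat img1 = cv::imread("frame1.jpg");     // colour image
    cv::Mat img2 = cv::imread("framePr.jpg", 0); // grayscale image

    // threshold the grayscale image to a binary mask (0 or 255)
    cv::threshold(img2, img2, 100, 255, cv::THRESH_BINARY);

    // copy the colour image using the binary image as mask;
    // pixels where the mask is 0 stay black
    cv::Mat r;
    img1.copyTo(r, img2);

    cv::imshow("Result", r);
    cv::waitKey(0);
    return 0;
}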
Actually, it is fairly simple using img2 as the mask in a copyTo:
//Create a black image with the same size and type as the input colour image
cv::Mat r = cv::Mat::zeros(img1.size(), img1.type());
img1.copyTo(r, img2); //Only copies pixels which are != 0 in the mask
As Kiran said, you get the error because bitwise_and cannot operate on images of different types.
As Kiran also noted, the initial allocation and zeroing is not mandatory (doing it up front has no impact on performance, though). From the documentation:
When the operation mask is specified, if the Mat::create call shown above reallocates the matrix, the newly allocated matrix is initialized with all zeros before copying the data.
So the whole operation can be done with a simple:
img1.copyTo(r, img2); //Only copies pixels which are !=0 in the mask