How to apply bitwise_and on cv::Mat? - C++

I am trying to apply a cartoon filter to a UIImage with the help of OpenCV. My code is as follows:
+ (UIImage *)createCartoonizedImageFromImage:(UIImage *)inputImage {
    int num_down = 2; // number of downsampling steps
    cv::Mat image_rgb = [self cvMatFromUIImage:inputImage];
    cv::Mat image_color;
    cv::cvtColor(image_rgb, image_color, cv::COLOR_RGBA2RGB);
    // downsample image using Gaussian pyramid
    for (int i = 0; i < num_down; i++)
    {
        cv::pyrDown(image_color, image_color);
    }
    // apply bilateral filter
    cv::Mat image_bilateral = image_color.clone();
    cv::bilateralFilter(image_color, image_bilateral, 9, 9, 7);
    // upsample image to original size
    for (int i = 0; i < num_down; i++)
    {
        cv::pyrUp(image_color, image_color);
    }
    // convert to grayscale
    cv::Mat image_gray;
    cv::cvtColor(image_rgb, image_gray, cv::COLOR_RGB2GRAY);
    // apply median blur
    cv::Mat image_blur;
    cv::medianBlur(image_gray, image_blur, 7);
    // detect and enhance edges
    cv::Mat image_edge;
    cv::adaptiveThreshold(image_blur, image_edge, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 9, 2);
    // convert back to color, bit-AND with color image
    cv::cvtColor(image_edge, image_edge, cv::COLOR_GRAY2RGB);
    cv::Mat image_cartoon;
    cv::bitwise_and(image_bilateral, image_edge, image_cartoon);
    UIImage *cartoonImage = [self UIImageFromCVMat:image_cartoon];
    return cartoonImage;
}
On the line
cv::bitwise_and(image_bilateral, image_edge, image_cartoon);
the above code gives me the following error:
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array') in binary_op, file /Users/kyle/code/opensource/opencv/modules/core/src/arithm.cpp, line 225
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/kyle/code/opensource/opencv/modules/core/src/arithm.cpp:225: error: (-209) The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array' in function binary_op
My Question
I know that the problem is the mismatched sizes of the input arrays, but how can I correct them and make them the same size without affecting the end result?

As clearly stated in the OpenCV error, "Sizes of input arguments do not match", i.e. image_bilateral.size() != image_edge.size(). A simple debug print will do the trick! So next time try to use your debugger. Here is your modified code:
int num_down = 2; // number of downsampling steps
cv::Mat image_rgb = imread(FileName1, 1);
cv::Mat image_color;
cv::cvtColor(image_rgb, image_color, cv::COLOR_RGBA2RGB); // note: only needed when the input Mat is RGBA (e.g. from cvMatFromUIImage); imread with flag 1 already yields a 3-channel image
// downsample image using Gaussian pyramid
for (int i = 0; i < num_down; i++)
{
    cv::pyrDown(image_color, image_color);
}
// apply bilateral filter
cv::Mat image_bilateral = image_color.clone();
cv::bilateralFilter(image_color, image_bilateral, 9, 9, 7);
// upsample image to original size
for (int i = 0; i < num_down; i++)
{
    cv::pyrUp(image_color, image_color);
    cv::pyrUp(image_bilateral, image_bilateral); // Bug <-- here the bilateralFilter image was missing a matching upsample
}
// convert to grayscale
cv::Mat image_gray;
//cv::cvtColor(image_rgb, image_gray, cv::COLOR_RGB2GRAY); // Bug <-- used the RGBA image instead of the RGB one
cv::cvtColor(image_color, image_gray, cv::COLOR_RGB2GRAY);
// apply median blur
cv::Mat image_blur;
cv::medianBlur(image_gray, image_blur, 7);
// detect and enhance edges
cv::Mat image_edge;
cv::adaptiveThreshold(image_blur, image_edge, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 9, 2);
// convert back to color, bit-AND with color image
cv::cvtColor(image_edge, image_edge, cv::COLOR_GRAY2RGB);
cv::Mat image_cartoon;
//cv::bitwise_and(image_bilateral, image_edge, image_cartoon); // Bug <-- here image_bilateral was 1/4 the size of image_edge
cv::bitwise_and(image_bilateral, image_edge, image_cartoon); // works now that both Mats have the same size
imshow("Cartoon", image_cartoon);

Related

C++ and OpenCV 4.5.3 - (-215: Assertion failed)

Problem: Watershed algorithm
I started an app project for image processing, using OpenCV 4.5.3 and Swift (with C++). I've been fighting with the watershed algorithm for a really long time... and I have no clue what I did wrong. Just don't know...
Error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.5.3) /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/segmentation.cpp:161: error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'watershed'
In the definition of OpenCV's watershed we can find:
@param image Input 8-bit 3-channel image.
@param markers Input/output 32-bit single-channel image (map) of markers. It should have the same size as image.
Code
+ (UIImage *)watershed:(UIImage *)src {
    cv::Mat img, mask;
    UIImageToMat(src, img);
    // Change the background from white to black, since that will help later to
    // extract better results during the use of the distance transform
    cv::inRange(img, cv::Scalar(255,255,255), cv::Scalar(255,255,255), mask);
    img.setTo(cv::Scalar(0,0,0), mask);
    // Create a kernel that we will use to sharpen our image:
    // an approximation of the second derivative, a quite strong kernel
    cv::Mat kernel = (cv::Mat_<float>(3,3) <<
        1,  1, 1,
        1, -8, 1,
        1,  1, 1);
    // Do the Laplacian filtering as is.
    // We need to convert everything to something deeper than CV_8U, because the
    // kernel has some negative values and we can expect in general to have a
    // Laplacian image with negative values, BUT an 8-bit unsigned int can only
    // hold values from 0 to 255, so any negative numbers would be truncated.
    cv::Mat lapl;
    cv::filter2D(img, lapl, CV_32F, kernel);
    cv::Mat sharp;
    img.convertTo(sharp, CV_32F);
    cv::Mat result = sharp - lapl;
    // convert back to 8 bits
    result.convertTo(result, CV_8UC3);
    lapl.convertTo(lapl, CV_8UC3);
    cv::Mat bw;
    cv::cvtColor(result, bw, cv::COLOR_BGR2GRAY);
    cv::threshold(bw, bw, 40, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    // Perform the distance transform algorithm
    cv::Mat dist;
    cv::distanceTransform(bw, dist, cv::DIST_L2, cv::DIST_MASK_3);
    // Normalize the distance image to the range {0.0, 1.0}
    // so we can visualize and threshold it
    cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
    // Threshold to obtain the peaks;
    // these will be the markers for the foreground objects
    cv::threshold(dist, dist, 0.4, 1.0, cv::THRESH_BINARY);
    // Dilate the dist image a bit
    cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8U);
    dilate(dist, dist, kernel1);
    // Create the CV_8U version of the distance image;
    // it is needed for findContours()
    cv::Mat dist_8u;
    dist.convertTo(dist_8u, CV_8U);
    // Find total markers
    std::vector<std::vector<cv::Point> > contours;
    findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    // Create the marker image for the watershed algorithm
    cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
    // Draw the foreground markers
    for (size_t i = 0; i < contours.size(); i++)
    {
        drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i)+1), -1);
    }
    // Draw the background marker
    circle(markers, cv::Point(5,5), 3, cv::Scalar(255), -1);
    cv::Mat markers8u;
    markers.convertTo(markers8u, CV_8U, 10);
    // Perform the watershed algorithm
    watershed(result, markers);
    return MatToUIImage(result);
}
You can clearly see that the variables have the proper types, as in the description of the function:
result.convertTo(result, CV_8UC3);
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
But convertTo cannot add channels, nor can it reduce an image to one with fewer channels; it only changes the depth, so the 4-channel RGBA Mat produced by UIImageToMat stays 4-channel all the way to watershed. The key in this case is to use:
cvtColor(src, src, COLOR_BGRA2BGR); // change 4 to 3 channels
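For example, a minimal sketch of where that fix goes in the question's function (the cvtColor right after UIImageToMat is the assumed missing step; everything else stays as posted):
cv::Mat img, mask;
UIImageToMat(src, img);                     // UIImage typically arrives as 4-channel RGBA
cv::cvtColor(img, img, cv::COLOR_BGRA2BGR); // drop the alpha channel: 4 -> 3 channels
// ... rest of the pipeline unchanged; result is now CV_8UC3 and markers is
// CV_32SC1, so watershed(result, markers) passes the assertion.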

Matrix assignement value error in opencv C++ with mat.at<uchar>(i,j)

I am learning image processing with OpenCV in C++. To implement a basic down-sampling algorithm I need to work at the pixel level, to remove rows and columns. However, when I assign values with mat.at<>(i,j), other values get assigned - things like 1e-38.
Here is the code :
Mat src, dst;
src = imread("diw3.jpg", CV_32F); // src is a 479x359 grayscale image
// dst will contain src low-pass-filtered; I checked by displaying it, it works fine
Mat kernel;
kernel = Mat::ones(3, 3, CV_32F) / (float)(9);
filter2D(src, dst, -1, kernel, Point(-1, -1), 0, BORDER_DEFAULT);
// Now I try to remove half the rows/columns; the result is stored in downsampled
Mat downsampled = Mat::zeros(240, 180, CV_32F);
for (int i = 0; i < downsampled.rows; i++) {
    for (int j = 0; j < downsampled.cols; j++) {
        downsampled.at<uchar>(i,j) = dst.at<uchar>(2*i, 2*j);
    }
}
Since I read here (OpenCV outputting odd pixel values) that for cout I needed to cast, I wrote downsampled.at<uchar>(i,j) = (int)dst.at<uchar>(2*i,2*j), but that does not work either.
The second argument to cv::imread is a cv::ImreadModes flag, so the line:
src = imread("diw3.jpg", CV_32F);
is not correct; it should probably be:
cv::Mat src_8u = imread("diw3.jpg", cv::IMREAD_GRAYSCALE);
src_8u.convertTo(src, CV_32FC1);
which will read the image as 8-bit grayscale image, and will convert it to floating point values.
The loop should look something like this:
Mat downsampled = Mat::zeros(240, 180, CV_32FC1);
for (int i = 0; i < downsampled.rows; i++) {
    for (int j = 0; j < downsampled.cols; j++) {
        downsampled.at<float>(i,j) = dst.at<float>(2*i, 2*j);
    }
}
Note that the argument to cv::Mat::zeros is CV_32FC1 (1 channel with 32-bit floating-point values), so the Mat::at<float> method should be used.
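As a general rule, the template argument of cv::Mat::at must match the Mat's element type: a mismatch compiles fine but silently reinterprets the underlying bytes, which is exactly where stray values like 1e-38 come from. A tiny illustrative sketch (my own example, not from the question):
cv::Mat m = cv::Mat::zeros(2, 2, CV_32FC1);
m.at<float>(0, 0) = 0.5f; // correct: float matches CV_32F
// m.at<uchar>(0, 0) would compile, but it reads a single byte of the
// float's bit pattern and produces garbage values.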

Performing image filtering with OpenCV & C++, error: "Sizes of input arguments do not match"

Here's how I load my image and define my buttons:
img = imread("lena.jpg");
createButton("Show histogram", showHistCallback, NULL, QT_PUSH_BUTTON, 0);
createButton("Equalize histogram", equalizeCallback, NULL, QT_PUSH_BUTTON, 0);
createButton("Cartoonize", cartoonCallback, NULL, QT_PUSH_BUTTON, 0);
imshow("Input", img);
waitKey(0);
return 0;
I can load and show my image properly. The Show histogram and Equalize histogram functions also work properly. But when I try to call Cartoonize, I get this error:
[ WARN:0] global /home/hiro/Documents/OpenCV/opencv-4.3.0-source/modules/core/src/matrix_expressions.cpp (1334)
assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: https://github.com/opencv/opencv/issues/16739
terminate called after throwing an instance of 'cv::Exception'
what():OpenCV(4.3.0) /home/hiro/Documents/OpenCV/opencv-4.3.0-source/modules/core/src/arithm.cpp:669:
error: (-209:Sizes of input arguments do not match)
The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'arithm_op'
So I'm guessing my error comes from the cartoonCallback function - a channel error. I have made sure that my multiplication is between images with the same number of channels, and I converted everything back to 3 channels, yet I can't seem to figure out where the error comes from. Here's the code:
void cartoonCallback(int state, void* userdata) {
    Mat imgMedian;
    medianBlur(img, imgMedian, 7);
    Mat imgCanny;
    Canny(imgMedian, imgCanny, 50, 150); // detect edges with Canny
    Mat kernel = getStructuringElement(MORPH_RECT, Size(2,2));
    dilate(imgCanny, imgCanny, kernel); // dilate image
    imgCanny = imgCanny / 255;
    imgCanny = 1 - imgCanny;
    Mat imgCannyf; // use float values to allow multiplication between 0 and 1
    imgCanny.convertTo(imgCannyf, CV_32FC3);
    blur(imgCannyf, imgCannyf, Size(5,5));
    Mat imgBF;
    bilateralFilter(img, imgBF, 9, 150.0, 150.0); // apply bilateral filter
    Mat result = imgBF / 25; // truncate colors
    result = result * 25;
    Mat imgCanny3c; // create 3 channels for edges
    Mat cannyChannels[] = {imgCannyf, imgCannyf, imgCannyf};
    merge(cannyChannels, 3, imgCanny3c);
    Mat resultFloat;
    result.convertTo(imgCanny3c, CV_32FC3); // convert result to float
    multiply(resultFloat, imgCanny3c, resultFloat);
    resultFloat.convertTo(result, CV_8UC3); // convert back to 8 bit
    imshow("Cartoonize", result);
}
Any suggestions?
The problem is within this snippet:
cv::Mat resultFloat; // you prepare an output Mat... with no dimensions or type
result.convertTo(imgCanny3c, CV_32FC3); // converts result to float... but overwrites imgCanny3c
cv::multiply(resultFloat, imgCanny3c, resultFloat); // resultFloat is empty and has no dimensions!
As you can see, you pass resultFloat to cv::multiply(operand1, operand2, output), but resultFloat is empty, with neither dimensions nor type, and you then attempt to multiply it with imgCanny3c. This is the cause of the error.
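A minimal corrected version of those three lines, assuming the intent was to convert result itself into resultFloat instead of overwriting imgCanny3c:
cv::Mat resultFloat;
result.convertTo(resultFloat, CV_32FC3);            // give resultFloat valid size, type and data
cv::multiply(resultFloat, imgCanny3c, resultFloat); // both operands are now CV_32FC3 of equal size
resultFloat.convertTo(result, CV_8UC3);             // back to 8-bit for imshow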

Training an SVM using Hu moments

I'm learning about SVMs, so I'm making a sample program that trains an SVM to detect whether a symbol is in an image or not. All the images are black and white (the symbols are black and the background white). I have 12 training images: 6 positives (with the symbol) and 6 negatives (without it). I'm using Hu moments to get the descriptors of every image, and then I construct the training matrix with those descriptors. I also have a labels matrix, which contains a label for each image: 1 if it's positive and 0 if it's negative. But I'm getting an error (something like a segmentation fault) at the line where I train the SVM. Here is my code:
using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    // arrays where the labels and the features will be stored
    float labels[12];
    float trainingData[12][7];
    Moments moment;
    double hu[7];
    //=============== extracting the descriptors for each positive image =========
    for (int i = 0; i <= 5; i++) {
        // the images are called t0.png ... t5.png and are in the folder train
        std::string path("train/t");
        path += std::to_string(i);
        path += ".png";
        Mat input = imread(path, 0); // read the images
        bitwise_not(input, input); // invert black and white
        Mat BinaryInput;
        threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); // apply threshold
        moment = moments(BinaryInput, true); // calculate the moments of the current image
        HuMoments(moment, hu); // calculate the Hu moments (this will be our descriptor)
        // setting row i of the training data to the Hu moments
        for (int j = 0; j <= 6; j++) {
            trainingData[i][j] = (float)hu[j];
        }
        labels[i] = 1; // label = 1 because it is a positive image
    }
    //=============== extracting the descriptors for each negative image =========
    for (int i = 0; i <= 5; i++) {
        // the images are called tn0.png ... tn5.png and are in the folder train
        std::string path("train/tn");
        path += std::to_string(i);
        path += ".png";
        Mat input = imread(path, 0); // read the images
        bitwise_not(input, input); // invert black and white
        Mat BinaryInput;
        threshold(input, BinaryInput, 100, 255, cv::THRESH_BINARY); // apply threshold
        moment = moments(BinaryInput, true); // calculate the moments of the current image
        HuMoments(moment, hu); // calculate the Hu moments (this will be our descriptor)
        for (int j = 0; j <= 6; j++) {
            trainingData[i + 6][j] = (float)hu[j];
        }
        labels[i + 6] = 0; // label = 0 because it is a negative image
    }
    //=========================== training the SVM ================
    // we convert the labels and trainingData arrays to Mat objects
    Mat labelsMat(12, 1, CV_32FC1, labels);
    Mat trainingDataMat(12, 7, CV_32FC1, trainingData);
    // create the SVM
    Ptr<ml::SVM> svm = ml::SVM::create();
    // set the parameters of the SVM
    svm->setType(ml::SVM::C_SVC);
    svm->setKernel(ml::SVM::LINEAR);
    CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
    svm->setTermCriteria(criteria);
    // train the SVM  !!!!! HERE OCCURS THE ERROR !!!!!
    svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat);
    // testing the SVM...
    Mat test = imread("train/t1.png", 0); // this should be a positive test
    bitwise_not(test, test);
    Mat testBin;
    threshold(test, testBin, 100, 255, cv::THRESH_BINARY);
    Moments momentP = moments(testBin, true); // calculate the moments of the test image
    double huP[7];
    HuMoments(momentP, huP);
    Mat testMat(1, 7, CV_32FC1, huP); // setting the Hu moments in the test matrix
    double resp = svm->predict(testMat); // prediction of the SVM
    printf("%f", resp); // response
    getchar();
}
I know that the program runs fine until that line, because I printed labelsMat and trainingDataMat and the values inside them are OK. Even in the console I can see that the program runs fine until that exact line executes. The console then shows this message:
OpenCV error: Bad argument (in the case of classification problem the responses must be categorical; either specify varType when creating TrainData, or pass integer responses)
I don't really know what this means. Any idea what could be causing the problem? If you need any other details, please tell me.
EDIT
For future readers: the problem was in the way I defined the labels array as an array of float and labelsMat as a Mat of type CV_32FC1. The array that contains the labels needs to hold integers, so I changed:
float labels[12];
to
int labels[12];
and also changed
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
and that solved the error. Thank you!
Try changing:
Mat labelsMat(12, 1, CV_32FC1, labels);
to
Mat labelsMat(12, 1, CV_32SC1, labels);
From: http://answers.opencv.org/question/63715/svm-java-opencv-3/
If that doesn't work, hopefully one of these posts will help you:
Opencv 3.0 SVM train classification issues
OpenCV SVM Training Data
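For reference, here is a sketch of the corrected training setup, with integer labels and the non-deprecated C++ term-criteria type (CvTermCriteria/cvTermCriteria belong to the legacy C API); the surrounding code follows the question:
int labels[12]; // integer responses for classification, filled with 1 / 0 as in the question
Mat labelsMat(12, 1, CV_32SC1, labels); // CV_32SC1, not CV_32FC1
Mat trainingDataMat(12, 7, CV_32FC1, trainingData);
Ptr<ml::SVM> svm = ml::SVM::create();
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat);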

OpenCV running kmeans algorithm on an image

I am trying to run kmeans on a 3-channel color image, but every time I try to run the function it crashes with the following error:
OpenCV Error: Assertion failed (data.dims <= 2 && type == CV_32F && K > 0) in unknown function, file ..\..\..\OpenCV-2.3.0\modules\core\src\matrix.cpp, line 2271
I've included the code below with some comments to help specify what is being passed in. Any help is greatly appreciated.
// Load in an image
// Depth: 8, Channels: 3
IplImage* iplImage = cvLoadImage("C:/TestImages/rainbox_box.jpg");
// Create a matrix for the image
cv::Mat mImage = cv::Mat(iplImage);
// Create a single-channel image for the labels we need
IplImage* iplLabels = cvCreateImage(cvGetSize(iplImage), iplImage->depth, 1);
// Convert the image to grayscale
cvCvtColor(iplImage, iplLabels, CV_RGB2GRAY);
// Create the matrix for the labels
cv::Mat mLabels = cv::Mat(iplLabels);
// Create the labels
int rows = mLabels.total();
int cols = 1;
cv::Mat list(rows, cols, mLabels.type());
uchar* src;
uchar* dest = list.ptr(0);
for (int i = 0; i < mLabels.size().height; i++)
{
    src = mLabels.ptr(i);
    memcpy(dest, src, mLabels.step);
    dest += mLabels.step;
}
list.convertTo(list, CV_32F);
// Run the algorithm
cv::Mat labellist(list.size(), CV_8UC1);
cv::Mat centers(6, 1, mImage.type());
cv::TermCriteria termcrit(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0);
kmeans(mImage, 6, labellist, termcrit, 3, cv::KMEANS_PP_CENTERS, centers);
The error says it all: Assertion failed (data.dims <= 2 && type == CV_32F && K > 0)
These are very simple rules to understand; the function will work only if:
mImage.depth() is CV_32F,
mImage.dims is <= 2,
and K > 0. In this case, you define K as 6.
From what you stated in the question, it seems that:
IplImage* iplImage = cvLoadImage("C:/TestImages/rainbox_box.jpg");
is loading the image as IPL_DEPTH_8U by default, not IPL_DEPTH_32F. This means that mImage is also 8-bit, which is why your code is not working.
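A common way to satisfy all three rules, sketched here under the assumption that the goal is to cluster the pixel colors of mImage into 6 groups, is to flatten the image into an N x 3 CV_32F sample matrix before calling kmeans:
// Flatten the H x W, 3-channel image into (H*W) rows of 3 float features.
cv::Mat samples = mImage.reshape(1, (int)mImage.total()); // CV_8UC3 -> (H*W) x 3, CV_8UC1
samples.convertTo(samples, CV_32F);                       // kmeans requires CV_32F data
cv::Mat labels, centers;
cv::kmeans(samples, 6, labels,
           cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0),
           3, cv::KMEANS_PP_CENTERS, centers);
// labels is (H*W) x 1 CV_32S, centers is 6 x 3 CV_32F.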