When using warpPerspective, I get the following error:
OpenCV Error: Bad number of channels (Source image must have 1, 3 or 4 channels) in cvConvertImage, file /build/opencv-ys8xiq/opencv-2.4.9.1+dfsg/modules/highgui/src/utils.cpp, line 622
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-ys8xiq/opencv-2.4.9.1+dfsg/modules/highgui/src/utils.cpp:622: error: (-15) Source image must have 1, 3 or 4 channels in function cvConvertImage
But the source image being used has 1 channel and the desired size.
This code is basically meant to get a bird's-eye view of an image.
cv::Mat warped;
std::vector<cv::Point2f> src ;
src.push_back(cv::Point2f(640, 470));
src.push_back(cv::Point2f(0, 470));
src.push_back(cv::Point2f(150, 250));
src.push_back(cv::Point2f(490, 250));
std::vector<cv::Point2f> dst ;
dst.push_back(cv::Point2f(640, 480));
dst.push_back(cv::Point2f(0, 480));
dst.push_back(cv::Point2f(0, 0));
dst.push_back(cv::Point2f(640, 0));
cv::Mat M = cv::getPerspectiveTransform(src,dst);
cv::warpPerspective(src, warped, M, image.size());
This was discussed in this answer: https://stackoverflow.com/a/17863381
Short answer:
Use cv::perspectiveTransform (or matrix multiplication) for points, and cv::warpPerspective for images.
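In the code above, the point vector src is passed to cv::warpPerspective, which expects an image. A minimal sketch of the fix, assuming image is the 1-channel source cv::Mat that the question's image.size() refers to:
cv::Mat M = cv::getPerspectiveTransform(src, dst); // src/dst are the point vectors
cv::Mat warped;
cv::warpPerspective(image, warped, M, image.size()); // warp the image, not the points
// If you also need to map the points themselves, use cv::perspectiveTransform:
std::vector<cv::Point2f> mapped;
cv::perspectiveTransform(src, mapped, M);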
I hope it will help.
Here's how I load my image and define my buttons:
img = imread("lena.jpg");
createButton("Show histogram", showHistCallback, NULL, QT_PUSH_BUTTON, 0);
createButton("Equalize histogram", equalizeCallback, NULL, QT_PUSH_BUTTON, 0);
createButton("Cartoonize", cartoonCallback, NULL, QT_PUSH_BUTTON, 0);
imshow("Input", img);
waitKey(0);
return 0;
I can load and show my image properly. The Show histogram and Equalize histogram functions also work properly. But when I try to call Cartoonize, I get this error:
[ WARN:0] global /home/hiro/Documents/OpenCV/opencv-4.3.0-source/modules/core/src/matrix_expressions.cpp (1334)
assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: https://github.com/opencv/opencv/issues/16739
terminate called after throwing an instance of 'cv::Exception'
what():OpenCV(4.3.0) /home/hiro/Documents/OpenCV/opencv-4.3.0-source/modules/core/src/arithm.cpp:669:
error: (-209:Sizes of input arguments do not match)
The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'arithm_op'
So I'm guessing the error comes from the cartoonCallback function, probably a channel mismatch. I have made sure that my multiplication is between images with the same number of channels, and I converted everything back to 3 channels, yet I can't figure out where the error comes from. Here's the code:
void cartoonCallback(int state, void* userdata){
Mat imgMedian;
medianBlur(img, imgMedian, 7);
Mat imgCanny;
Canny(imgMedian, imgCanny, 50, 150); //Detect edges with canny
Mat kernel = getStructuringElement (MORPH_RECT, Size(2,2));
dilate(imgCanny, imgCanny, kernel); //Dilate image
imgCanny = imgCanny/255;
imgCanny = 1 - imgCanny;
Mat imgCannyf; //use float values to allow multiply between 0 and 1
imgCanny.convertTo(imgCannyf, CV_32FC3);
blur(imgCannyf, imgCannyf, Size(5,5));
Mat imgBF;
bilateralFilter(img, imgBF, 9, 150.0, 150.0); //apply bilateral filter
Mat result = imgBF/25; //truncate color
result = result*25;
Mat imgCanny3c; //Create 3 channels for edges
Mat cannyChannels[] = {imgCannyf, imgCannyf, imgCannyf};
merge(cannyChannels, 3, imgCanny3c);
Mat resultFloat;
result.convertTo(imgCanny3c, CV_32FC3); //convert result to float
multiply(resultFloat, imgCanny3c, resultFloat);
resultFloat.convertTo(result, CV_8UC3); //convert back to 8 bit
imshow("Cartoonize", result);
}
Any suggestions?
The problem is within this snippet:
cv::Mat resultFloat; // You prepare an output mat... with no dimensions nor type
result.convertTo(imgCanny3c, CV_32FC3); //convert result to float..ok
cv::multiply(resultFloat, imgCanny3c, resultFloat); //resultFloat is empty and has no dimensions!
As you can see, you pass resultFloat as the first operand to cv::multiply(operand1, operand2, output), but resultFloat is empty: it has no dimensions and no type. Meanwhile, convertTo writes into imgCanny3c, overwriting the merged edge channels. Multiplying the empty resultFloat with imgCanny3c is what triggers the size-mismatch error.
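A minimal sketch of the fix, keeping the variable names from the question and assuming the intent was to convert result into resultFloat before multiplying by the merged edge mask:
Mat resultFloat;
result.convertTo(resultFloat, CV_32FC3);        // convert the truncated colour result to float
multiply(resultFloat, imgCanny3c, resultFloat); // imgCanny3c is the 3-channel [0,1] edge mask from merge()
resultFloat.convertTo(result, CV_8UC3);         // convert back to 8-bit for display
imshow("Cartoonize", result);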
I'm trying to fill a triangle in a mask using the fillConvexPoly function.
But I get the following error.
OpenCV Error: Assertion failed (points.checkVector(2, CV_32S) >= 0) in fillConvexPoly, file /home/iris/Downloads/opencv-3.1.0/modules/imgproc/src/drawing.cpp, line 2256
terminate called after throwing an instance of 'cv::Exception'
what(): /home/iris/Downloads/opencv-3.1.0/modules/imgproc/src/drawing.cpp:2256: error: (-215) points.checkVector(2, CV_32S) >= 0 in function fillConvexPoly
I call the function like so:
cv::Mat mask = cv::Mat::zeros(r2.size(), CV_32FC3);
cv::fillConvexPoly(mask, trOutCroppedInt, cv::Scalar(1.0, 1.0, 1.0), 16, 0);
where trOutCroppedInt is defined like so:
std::vector<cv::Point> trOutCroppedInt
and I push 3 points into the vector:
[83, 46; 0, 48; 39, 0]
How should I correct this error?
When the assertion points.checkVector(2, CV_32S) >= 0 fails
This error may occur when the point type is not CV_32S, i.e. when the points are not integer cv::Point values; a vector<Point2f>, for example, will trigger it. We can use fillConvexPoly according to the following steps:
1. Read an image with:
cv::Mat src=cv::imread("what/ever/directory");
2. Determine the points
You must determine the integer points of your polygon; in this example, four corner points of a quadrilateral. The code for these points is:
vector<cv::Point> point;
point.push_back(Point(163,146)); //point1
point.push_back(Point(100,148)); //point2
point.push_back(Point(100,110)); //point3
point.push_back(Point(139,110)); //point4
3. Use the cv::fillConvexPoly function
To draw the polygon defined by these points on the image src, the code would be as follows:
cv::fillConvexPoly(src,     //Image to be drawn on
                   point,   //Vector of polygon vertices
                   Scalar(255, 0, 0), //Colour, BGR order
                   CV_AA,   //Line type: 4, 8 or CV_AA (antialiased)
                   0);      //Number of fractional bits in the vertex coordinates
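Back in the original question, this means trOutCroppedInt must contain integer cv::Point values. If your points start out as floating-point cv::Point2f, a sketch of the conversion could look like this (trOutCroppedFloat is a hypothetical name for the float vector):
std::vector<cv::Point> trOutCroppedInt;
for (size_t i = 0; i < trOutCroppedFloat.size(); i++)
    trOutCroppedInt.push_back(cv::Point(cvRound(trOutCroppedFloat[i].x), cvRound(trOutCroppedFloat[i].y)));
cv::fillConvexPoly(mask, trOutCroppedInt, cv::Scalar(1.0, 1.0, 1.0), 16, 0);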
I have a black/white image and a colour image of the same size. I want to combine them to get one image which is black where the black/white image is black, and has the same colour as the coloured image where the black/white image is white.
This is the code in C++:
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
int main(){
Mat img1 = imread("frame1.jpg"); //coloured image
Mat img2 = imread("framePr.jpg", 0); //grayscale image
imshow("Oreginal", img1);
//preform AND
Mat r;
bitwise_and(img1, img2, r);
imshow("Result", r);
waitKey(0);
return 0;
}
This is the error message:
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array') in binary_op, file /home/voja/src/opencv-2.4.10/modules/core/src/arithm.cpp, line 1021
terminate called after throwing an instance of 'cv::Exception'
what(): /home/voja/src/opencv-2.4.10/modules/core/src/arithm.cpp:1021: error: (-209) The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array' in function binary_op
Aborted (core dumped)
Firstly, a black/white (binary) image is different from a grayscale image. Both are Mats of type CV_8U. But each pixel in a grayscale image can take any value between 0 and 255, whereas a binary image is expected to have only two values - zero and one non-zero value.
Secondly, bitwise_and cannot be applied to Mats with a different number of channels. The grayscale image is a single-channel image of type CV_8UC1 (8 bits per pixel), while the colour image loaded by imread is a 3-channel image of type CV_8UC3 (24 bits per pixel).
It appears what you are trying to do could be done with a mask.
//threshold grayscale to binary image
cv::threshold(img2 , img2 , 100, 255, cv::THRESH_BINARY);
//copy the colour image with the binary image as mask
Mat r;
img1.copyTo(r, img2);
Actually, it is fairly simple using img2 as a mask in copyTo:
//Create a black image with the same size and type as the input colour image
cv::Mat r = cv::Mat::zeros(img1.size(), img1.type());
img1.copyTo(r, img2); //Only copies pixels which are != 0 in the mask
As said by Kiran, you get the error because bitwise_and cannot operate on images of different types.
As noted by Kiran, the initial allocation and zeroing is not mandatory (doing it beforehand has no impact on performance, though). From the documentation:
When the operation mask is specified, if the Mat::create call shown
above reallocates the matrix, the newly allocated matrix is
initialized with all zeros before copying the data.
So the whole operation can be done with a simple:
img1.copyTo(r, img2); //Only copies pixels which are !=0 in the mask
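Putting this together with the question's code, a minimal sketch of a corrected main() might look like the following (file names as in the question):
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat img1 = imread("frame1.jpg");     //colour image (CV_8UC3)
    Mat img2 = imread("framePr.jpg", 0); //grayscale image (CV_8UC1)

    //make sure the grayscale image is strictly binary (0 or 255)
    threshold(img2, img2, 100, 255, THRESH_BINARY);

    //copy the colour pixels only where the mask is non-zero; the rest stays black
    Mat r;
    img1.copyTo(r, img2);

    imshow("Result", r);
    waitKey(0);
    return 0;
}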
My question is very similar to this one... I'm trying to extract a sub-matrix from a grayscale image, which is a polygon defined by 5 points, and convert it to a Mat.
This does not work:
std::vector<Point> vert(5);
vert.push_back(pt1);
vert.push_back(pt2);
vert.push_back(pt3);
vert.push_back(pt4);
vert.push_back(pt5);
Mat matROI = Mat(vert);
It shows me the following error message:
OpenCV Error: Bad number of channels (Source image must have 1, 3 or 4 channels) in cvConvertImage, file /home/user/opencv-2.4.6.1/modules/highgui/src/utils.cpp, line 611
terminate called after throwing an instance of 'cv::Exception'
what(): /home/user/opencv-2.4.6.1/modules/highgui/src/utils.cpp:611: error: (-15) Source image must have 1, 3 or 4 channels in function cvConvertImage
I'm using OpenCV 2.4.6.1 and C++.
Thank you
Edit:
I will rephrase my question: my objective is to obtain the right side of the image.
I thought I'd treat the image as a polygon because I have the coordinates of the vertices, and then transform the vector that holds the vertices into a matrix (cv::Mat).
Is my approach correct, or is there a simpler way to get this sub-matrix?
Your code has two problems:
First:
std::vector<Point> vert(5);
creates a vector initially containing 5 points, so after you use push_back() 5 times you have a vector of 10 points, the first 5 of which are (0, 0).
Second:
Mat matROI = Mat(vert);
creates a 10x1 Mat (from a vector of 10 points) with TWO channels. Check that with:
cout << "matROI.channels()=" << matROI.channels() << endl;
If you have code like:
imshow("Window", matROI);
it will pass matROI through to cvConvertImage() which has the following code (and this causes the error you are seeing):
if( src_cn != 1 && src_cn != 3 && src_cn != 4 )
CV_ERROR( CV_BadNumChannels, "Source image must have 1, 3 or 4 channels" );
Since matROI is a list of points, it doesn't make sense to pass it to imshow().
Instead, try this:
Mat img = Mat::zeros(image.rows, image.cols, CV_8UC1);
polylines(img, vert, true, Scalar(255)); // or perhaps 0
imshow("Window", img);
int c = waitKey(0);
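If the actual goal is to extract the polygonal region itself, one option (a sketch, not part of the original answer) is to rasterize the polygon into a mask and copy through it, assuming image is the grayscale source and pt1..pt5 are integer points:
std::vector<cv::Point> vert; //no preset size, so it holds exactly the 5 pushed points
vert.push_back(pt1);
vert.push_back(pt2);
vert.push_back(pt3);
vert.push_back(pt4);
vert.push_back(pt5);

cv::Mat mask = cv::Mat::zeros(image.size(), CV_8UC1);
cv::fillConvexPoly(mask, &vert[0], (int)vert.size(), cv::Scalar(255)); //white polygon on black background

cv::Mat matROI;
image.copyTo(matROI, mask); //keep only the pixels inside the polygon
cv::imshow("Window", matROI);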
I'm trying to get contours from my frame and this is what I have done:
cv::Mat frame,helpframe,yframe,frame32f;
VideoCapture cap(1);
.......
while (waitKey()) {
cv::Mat result = cv::Mat::zeros(height,width,CV_32FC3);
cap >> frame; // get a new frame from camera
frame.convertTo(frame32f,CV_32FC3);
for (int w = 0; w < 10; w++) {
result += frame32f;
}
//Average the frame values.
result *= (1.0/10);
result /= 255.0f;
cvtColor(result, yframe, CV_RGB2YCrCb); //convert to YCbCr; the code works fine when I use frame instead of result
extractChannel(yframe,helpframe,0); //extracting the Y channel
cv::minMaxLoc(helpframe,&minValue,&maxValue,&min,&max);
std::vector<std::vector<cv::Point>> contours;
cv::findContours(helpframe, contours,CV_RETR_LIST /*CV_RETR_EXTERNAL*/, CV_CHAIN_APPROX_SIMPLE);
....................................................
The program crashes at findContours, and when I debug I get this error message:
OpenCV Error: Unsupported format or combination of formats ([Start]FindContours support only 8uC1 and 32sC1 images) in unknown function, file ......\src\opencv\modules\imgproc\src\contours.cpp, line 196
@Niko thanks for the help. I think I have to convert helpframe to another type.
When I use result, helpframe.type() returns 5, and with frame I get 0. I don't know what that means, but I'll try to find a way to convert helpframe.
After converting helpframe with
helpframe.convertTo(helpframe2, CV_8U)
I get nothing: helpframe2 is all 0. When I try the same with frame instead of result, the conversion works. Any idea how I should convert helpframe, given that I use result instead of frame?
You need to reduce the image to a binary image before you can identify contours. This can be done by, e.g., applying some kind of edge detection algorithm or by simple thresholding:
// Binarize the image
cv::threshold(helpframe, helpframe, 50, 255, CV_THRESH_BINARY);
// Convert from 32F to 8U
cv::Mat helpframe2;
helpframe.convertTo(helpframe2, CV_8U);
cv::findContours(helpframe2, ...);
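One extra detail worth noting (an observation about the question's code, not part of the original answer): result is divided by 255.0f, so helpframe holds float values in [0, 1]. A plain convertTo(..., CV_8U) then rounds almost every pixel down to 0, which matches what was observed, and a threshold of 50 likewise assumes a [0, 255] range. A sketch of the conversion with an explicit scale factor:
//scale the [0, 1] float channel back to [0, 255] while converting to 8-bit
cv::Mat helpframe2;
helpframe.convertTo(helpframe2, CV_8U, 255.0);

//binarize, then find the contours
cv::threshold(helpframe2, helpframe2, 50, 255, cv::THRESH_BINARY);
std::vector<std::vector<cv::Point>> contours;
cv::findContours(helpframe2, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);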