cv::findContours causes a program crash - C++

I'm trying to get contours from my frame and this is what I have done:
cv::Mat frame, helpframe, yframe, frame32f;
cv::VideoCapture cap(1);
.......
while (cv::waitKey()) {
    cv::Mat result = cv::Mat::zeros(height, width, CV_32FC3);
    cap >> frame; // get a new frame from the camera
    frame.convertTo(frame32f, CV_32FC3);
    for (int w = 0; w < 10; w++) {
        result += frame32f;
    }
    // Average the frame values.
    result *= (1.0 / 10);
    result /= 255.0f;
    cvtColor(result, yframe, CV_RGB2YCrCb); // converting to YCbCr; the code works fine when I use frame instead of result
    extractChannel(yframe, helpframe, 0); // extracting the Y channel
    cv::minMaxLoc(helpframe, &minValue, &maxValue, &min, &max);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(helpframe, contours, CV_RETR_LIST /*CV_RETR_EXTERNAL*/, CV_CHAIN_APPROX_SIMPLE);
....................................................
The program crashes at findContours, and when I debug I get this error message:
OpenCV Error: Unsupported format or combination of formats ([Start]FindContours support only 8uC1 and 32sC1 images) in unknown function, file ......\src\opencv\modules\imgproc\src\contours.cpp, line 196
@Niko thanks for the help; I think I have to convert helpframe to another type.
When I use result I get
helpframe.type() => 5
and with frame I get 0.
I don't know what that means, but I'll try to find a way to convert helpframe.
After converting helpframe with:
helpframe.convertTo(helpframe2, CV_8U)
I get nothing; helpframe2 is all zeros?? And when I try the same with frame instead of result, the conversion works??
Any idea how I should change the helpframe type, given that I use result instead of frame?

You need to reduce the image to a binary image before you can identify contours; findContours supports only CV_8UC1 and CV_32SC1 inputs. A Mat::type() of 0 means CV_8UC1 and 5 means CV_32FC1, which is why it works with frame but crashes with result. Binarization can be done by, e.g., applying some kind of edge detection algorithm or by simple thresholding:
// helpframe is CV_32F scaled to [0, 1] (result was divided by 255), so
// convert to 8-bit with a scale factor, otherwise every value truncates to 0:
cv::Mat helpframe2;
helpframe.convertTo(helpframe2, CV_8U, 255.0);
// Binarize the image
cv::threshold(helpframe2, helpframe2, 50, 255, CV_THRESH_BINARY);
cv::findContours(helpframe2, ...);

Related

Opencv - How to get number of vertical lines present in image (count of lines)

First, I integrated the OpenCV framework into Xcode. All the OpenCV code is in Objective-C, and I am using it from Swift via a bridging header. I am new to the OpenCV framework and am trying to get the count of vertical lines in an image.
Here is my code:
First I convert the image to grayscale:
+ (UIImage *)convertToGrayscale:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);
    cv::Mat gray;
    cv::cvtColor(mat, gray, CV_RGB2GRAY);
    UIImage *grayscale = MatToUIImage(gray);
    return grayscale;
}
Then I detect edges so I can find the gray lines:
+ (UIImage *)detectEdgesInRGBImage:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);
    // Prepare the image for findContours
    cv::threshold(mat, mat, 128, 255, CV_THRESH_BINARY);
    // Find the contours. Use the contourOutput Mat so the original image doesn't get overwritten
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat contourOutput = mat.clone();
    cv::findContours(contourOutput, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    NSLog(@"Count => %lu", contours.size());
    // For blur
    /* cv::GaussianBlur(mat, gray, cv::Size(11, 11), 0); */
    UIImage *grayscale = MatToUIImage(mat);
    return grayscale;
}
Both of these functions are written in Objective-C.
Here I call both functions from Swift:
override func viewDidLoad() {
    super.viewDidLoad()
    let img = UIImage(named: "imagenamed")
    let img1 = Wrapper.convert(toGrayscale: img)
    self.capturedImageView.image = Wrapper.detectEdges(inRGBImage: img1)
}
I have been working on this for a few days and found some useful documents (reference links):
OpenCV - how to count objects in photo?
How to count number of lines (Hough Transform) in OpenCV
OpenCV documentation
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?#findcontours
Basically, I understand that we first need to convert the image to black and white, and then using cvtColor, threshold, and findContours we can find the colors or lines.
I am attaching the image whose vertical lines I want to count.
Original Image
Output Image that I am getting
I got a line count => 10
I am not able to get an accurate count here.
Please guide me on this. Thank you!
Since you want to detect the number of vertical lines, there is a very simple approach I can suggest. You already got a clear output, and I used that output in my code. Here are the steps, before the code:
Preprocess the input image to get the lines clearly
Scan each row until you reach a pixel whose value is higher than 100 (the threshold value I chose)
Then increase the line counter for that row
Continue along the row until you reach a pixel whose value is lower than 100
Repeat from step 3 until the row is finished, and do this for each row
At the end, take the most repeated element in the array in which you stored the line count for each row; this number will be the number of vertical lines
Note: If the steps are difficult to understand, think of it this way:
"I am checking the first row. I find a pixel whose value is higher than 100; this is the start of a line edge, so I increase the counter for this row. I keep searching along the row until I get a pixel smaller than 100, and then search again for a pixel bigger than 100. When the row is finished, I append the line count for this row to a big array. I do this for the whole image. At the end, since some lines look like two lines at the top and some noise can occur, you should take the most repeated element in the big array as the number of lines."
Here is the code part in C++:
#include <vector>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat img = cv::imread("/ur/img/dir/img.jpg", cv::IMREAD_GRAYSCALE);
    std::vector<int> numberOfVerticalLinesForEachRow;
    // Crop away the right edge and keep only the top 200 rows
    cv::Rect r(0, 0, img.cols - 10, 200);
    img = img(r);

    for (int i = 0; i < img.rows; i++)
    {
        int numberOfLines = 0;
        // Reset at the start of each row (declaring it once outside the loop
        // would let the state leak from one row into the next)
        bool blackCheck = true;
        for (int j = 0; j < img.cols; j++)
        {
            if ((int)img.at<uchar>(cv::Point(j, i)) > 100 && blackCheck)
            {
                numberOfLines++;
                blackCheck = false;
            }
            if ((int)img.at<uchar>(cv::Point(j, i)) < 100)
                blackCheck = true;
        }
        numberOfVerticalLinesForEachRow.push_back(numberOfLines);
    }

    // In this part you need a simple algorithm to check the most repeated
    // element (a sketch of such a mode computation follows below)
    for (int k : numberOfVerticalLinesForEachRow)
        std::cout << k << std::endl;

    cv::namedWindow("WinWin", 0);
    cv::imshow("WinWin", img);
    cv::waitKey(0);
}
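The comment above leaves the "most repeated element" step open; here is a minimal sketch of that mode computation (it assumes numberOfVerticalLinesForEachRow has been filled as in the code above, and additionally needs #include <map>):
// Build a histogram of the per-row line counts, then take the mode:
std::map<int, int> histogram;
for (int k : numberOfVerticalLinesForEachRow)
    histogram[k]++;

int mode = 0, bestVotes = 0;
for (const auto& bin : histogram)
{
    if (bin.second > bestVotes)
    {
        bestVotes = bin.second;
        mode = bin.first;
    }
}
std::cout << "Number of vertical lines: " << mode << std::endl;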
Here's another possible approach. It relies mainly on the cv::thinning function from the extended image processing module to reduce the lines to a width of 1 pixel. We can then crop a ROI from this image and count the number of transitions from 255 (white) to 0 (black). These are the steps:
Threshold the image using Otsu's method
Apply some morphology to clean up the binary image
Get the skeleton of the image
Crop a ROI from the center of the image
Count the number of jumps from 255 to 0
This is the code; be sure to include the extended image processing module (ximgproc) and also link it before compiling:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // The extended image processing module
// Read Image:
std::string imagePath = "D://opencvImages//";
cv::Mat inputImage = cv::imread( imagePath+"IN2Xh.png" );
// Convert BGR to Grayscale:
cv::cvtColor( inputImage, inputImage, cv::COLOR_BGR2GRAY );
// Get binary image via Otsu:
cv::threshold( inputImage, inputImage, 0, 255, cv::THRESH_OTSU );
The above snippet produces the following image:
Note that there's a little bit of noise due to the thresholding; let's try to remove those isolated blobs of white pixels by applying some morphology, maybe an opening, which is an erosion followed by a dilation. The structuring elements and iterations, though, are not the same, and these were found by experimentation. I wanted to remove the majority of the isolated blobs without modifying the original image too much:
// Apply Morphology. Erosion + Dilation:
// Set rectangular structuring element of size 3 x 3:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
// Set the iterations:
int morphoIterations = 1;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphoIterations);
// Set rectangular structuring element of size 5 x 5:
SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(5, 5) );
// Set the iterations:
morphoIterations = 2;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphoIterations);
This combination of structuring elements and iterations yield the following filtered image:
It's looking alright. Now comes the main idea of the algorithm. If we compute the skeleton of this image, we "normalize" all the lines to a width of 1 pixel, which is very handy, because we can then reduce the image to a single row and count the number of jumps. Since the lines are "normalized", we get rid of possible overlaps between lines. Now, skeletonized images sometimes produce artifacts near the borders of the image. These artifacts resemble thickened anchors at the first and last rows of the image. To prevent them we can extend the borders prior to computing the skeleton:
// Extend borders to avoid skeleton artifacts, extend 5 pixels in all directions:
cv::copyMakeBorder( inputImage, inputImage, 5, 5, 5, 5, cv::BORDER_CONSTANT, 0 );
// Get the skeleton:
cv::Mat imageSkelton;
cv::ximgproc::thinning( inputImage, imageSkelton );
This is the skeleton obtained:
Nice. Before we count jumps, though, we must observe that the lines are skewed. If we reduced this image directly to one row, some overlapping could indeed happen between two lines that are too skewed. To prevent this, I crop a middle section of the skeleton image and count transitions there. Let's crop the image:
// Crop middle ROI:
cv::Rect linesRoi;
linesRoi.x = 0;
linesRoi.y = 0.5 * imageSkelton.rows;
linesRoi.width = imageSkelton.cols;
linesRoi.height = 1;
cv::Mat imageROI = imageSkelton( linesRoi );
This would be the new ROI, which is just the middle row of the skeleton image:
Let me prepare a BGR copy of this just to draw some results:
// BGR version of the Grayscale ROI:
cv::Mat colorROI;
cv::cvtColor( imageROI, colorROI, cv::COLOR_GRAY2BGR );
Ok, let's loop through the image and count the transitions from 255 to 0. That happens when the value of the current pixel is 0 and the value obtained one iteration earlier was 255. There's more than one way to loop through a cv::Mat in C++. I prefer to use a cv::MatIterator_ and pointer arithmetic:
// Set the loop variables
// (the ROI is single-channel CV_8UC1, so iterate over uchar rather than cv::Vec3b):
cv::MatIterator_<uchar> it, end;
uchar pastPixel = 0;
int jumpsCounter = 0;
int i = 0;

// Loop through the image ROI and count 255-0 jumps:
for (it = imageROI.begin<uchar>(), end = imageROI.end<uchar>(); it != end; ++it) {
    // Get current pixel:
    uchar currentPixel = *it;
    // Compare it with the past pixel:
    if ((currentPixel == 0) && (pastPixel == 255)) {
        // We have a jump:
        jumpsCounter++;
        // Draw the point on the BGR version of the image:
        cv::line(colorROI, cv::Point(i, 0), cv::Point(i, 0), cv::Scalar(0, 0, 255), 1);
    }
    // The current pixel is now the past pixel:
    pastPixel = currentPixel;
    i++;
}

// Show image and print the number of jumps found:
cv::namedWindow("Jumps Found", CV_WINDOW_NORMAL);
cv::imshow("Jumps Found", colorROI);
cv::waitKey(0);
std::cout << "Jumps Found: " << jumpsCounter << std::endl;
The points where the jumps were found are drawn in red, and the number of total jumps printed is:
Jumps Found: 9

Performing image filtering with OpenCV & C++, error: "Sizes of input arguments do not match"

Here's how I load my image and define my buttons:
img = imread("lena.jpg");
createButton("Show histogram", showHistCallback, NULL, QT_PUSH_BUTTON, 0);
createButton("Equalize histogram", equalizeCallback, NULL, QT_PUSH_BUTTON, 0);
createButton("Cartoonize", cartoonCallback, NULL, QT_PUSH_BUTTON, 0);
imshow("Input", img);
waitKey(0);
return 0;
I can load and show my image properly. The Show histogram and Equalize histogram functions also work properly. But when I try to call Cartoonize, I get this error:
[ WARN:0] global /home/hiro/Documents/OpenCV/opencv-4.3.0-source/modules/core/src/matrix_expressions.cpp (1334)
assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: https://github.com/opencv/opencv/issues/16739
terminate called after throwing an instance of 'cv::Exception'
what():OpenCV(4.3.0) /home/hiro/Documents/OpenCV/opencv-4.3.0-source/modules/core/src/arithm.cpp:669:
error: (-209:Sizes of input arguments do not match)
The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'arithm_op'
So I'm guessing the error comes from the cartoonCallback function, a channel error. I have made sure that my multiplication is between images with the same number of channels, and I converted everything back to 3 channels, yet I can't seem to figure out where the error comes from. Here's the code:
void cartoonCallback(int state, void* userdata){
    Mat imgMedian;
    medianBlur(img, imgMedian, 7);
    Mat imgCanny;
    Canny(imgMedian, imgCanny, 50, 150); // Detect edges with Canny
    Mat kernel = getStructuringElement(MORPH_RECT, Size(2,2));
    dilate(imgCanny, imgCanny, kernel); // Dilate image
    imgCanny = imgCanny / 255;
    imgCanny = 1 - imgCanny;
    Mat imgCannyf; // use float values to allow multiply between 0 and 1
    imgCanny.convertTo(imgCannyf, CV_32FC3);
    blur(imgCannyf, imgCannyf, Size(5,5));
    Mat imgBF;
    bilateralFilter(img, imgBF, 9, 150.0, 150.0); // apply bilateral filter
    Mat result = imgBF / 25; // truncate color
    result = result * 25;
    Mat imgCanny3c; // Create 3 channels for edges
    Mat cannyChannels[] = {imgCannyf, imgCannyf, imgCannyf};
    merge(cannyChannels, 3, imgCanny3c);
    Mat resultFloat;
    result.convertTo(imgCanny3c, CV_32FC3); // convert result to float
    multiply(resultFloat, imgCanny3c, resultFloat);
    resultFloat.convertTo(result, CV_8UC3); // convert back to 8 bit
    imshow("Cartoonize", result);
}
Any suggestions?
The problem is within this snippet:
cv::Mat resultFloat; // You prepare an output Mat... with no dimensions nor type
result.convertTo(imgCanny3c, CV_32FC3); // convert result to float... ok
cv::multiply(resultFloat, imgCanny3c, resultFloat); // resultFloat is empty and has no dimensions!
As you can see, you pass resultFloat to cv::multiply(operand1, operand2, output), but resultFloat is empty, without dimensions or type, and you then attempt to multiply it with imgCanny3c. This seems to be the cause of the error.
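Based on that diagnosis, a minimal fix would be to convert result into resultFloat itself, instead of overwriting imgCanny3c (which also discards the merged edge channels):
cv::Mat resultFloat;
result.convertTo(resultFloat, CV_32FC3); // convert result to float, into resultFloat
cv::multiply(resultFloat, imgCanny3c, resultFloat); // both operands now match in size and type
resultFloat.convertTo(result, CV_8UC3); // convert back to 8 bit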

Select part of a cv::Mat based on non-zero pixels of another Mat?

I am trying to update part of a Mat based on another Mat. For example, I want to select a part of img that is not zero in mask and add a constant value to it. When I try this:
Mat mask = imread("some grayscale image with a white area in a black background", IMREAD_GRAYSCALE);
Mat img = Mat::zeros(mask.rows, mask.cols, CV_8UC1);
Mat bnry, locations;
threshold(mask, bnry, 100, 255, THRESH_BINARY);
findNonZero(bnry, locations);
img(locations) += 5;
I get this error:
Error: Assertion failed ((int)ranges.size() == d)
img and mask have the same size.
How can I select an area of an image based on another image (mask)?
Many OpenCV functions support a mask argument by default. In other words, you don't need to find the non-zero locations and do the sum operation based on them; you just need to use the cv::add function, which supports a mask as an argument:
cv::add(img, 10, img, mask); // 10 is an arbitrary constant value
And about your code:
img(locations) += 5;
As far as I know, OpenCV has no such overloaded operator+ to use.
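Putting it together with the question's own variables, a minimal sketch (the threshold step is kept so that bnry serves as the mask; the mask file name is hypothetical):
cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE); // hypothetical file name
cv::Mat img = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
cv::Mat bnry;
cv::threshold(mask, bnry, 100, 255, cv::THRESH_BINARY);
cv::add(img, 5, img, bnry); // adds 5 only where bnry is non-zero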

How to display PGM image using OpenCV

I'm trying to load and display a .PGM image using OpenCV(2.4.0) for C++.
void open(char* location, int flag, int windowFlag)
{
    Mat image = imread(location, flag);
    namedWindow("Image window", windowFlag);
    imshow("Image window", image);
    waitKey(0);
}
I'm calling open like this:
open("./img_00245_c1.pgm", IMREAD_UNCHANGED, CV_WINDOW_AUTOSIZE);
The problem is that the image shown in the window is darker than when I open the file with IrfanView.
Also, if I try to write this image to another file like this:
Mat imgWrite;
imgWrite = image;
imwrite("newImage.pgm", imgWrite);
I get a different file content than the original, and IrfanView displays it the same way my function displays it with imshow.
Is there a different flag in imread for .PGM files such that I can get the original file to be displayed and saved?
EDIT: Image pgm file
EDIT 2: I noticed that IrfanView normalizes the image to a maximum pixel value of 255.
In order to see the image clearly using OpenCV, I should also normalize the image when loading it into a Mat. Is this possible directly with OpenCV functions, without iterating through the pixels and modifying their values?
The problem is not in the way the data is loaded, but in the way it is displayed.
Your image is CV_16UC1, and both imshow and imwrite normalize the values from the original range [0, 65535] to the range [0, 255] to fit the range of the type CV_8U. That is why the image looks dark: a maximum value of 4096 out of 65535 maps to roughly 16 out of 255. Since your PGM image has a max_value of 4096:
P2
1176 640   // width height
4096       // max_value
it should be normalized from the range [0, 4096] instead of [0, 65535].
You can do this with:
Mat img = imread("path_to_image", IMREAD_UNCHANGED);
img.convertTo(img, CV_8U, 255.0 / 4096.0);
imshow("Image", img);
waitKey();
Please note that the actual range of values in your image is not the full [0, 4096]:
double minv, maxv;
minMaxLoc(img, &minv, &maxv);
// minv = 198
// maxv = 2414
So a straightforward min-max normalization to [0, 255], like:
normalize(img, img, 0, 255, NORM_MINMAX);
img.convertTo(img, CV_8U);
won't work, as it will produce an image brighter than it should be.
This means that to properly show your image you need to know the max_value (here 4096). If it changes every time, you can retrieve it by parsing the .pgm file.
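A minimal sketch of such header parsing (it assumes the header contains no '#' comment lines, which real PGM files may have):
#include <fstream>
#include <string>

// Read the max_value field from a PGM header ("P2" plain or "P5" binary).
int readPgmMaxValue(const std::string& path)
{
    std::ifstream file(path);
    std::string magic;                        // "P2" or "P5"
    int width = 0, height = 0, maxValue = 0;  // header fields, in order
    file >> magic >> width >> height >> maxValue;
    return file ? maxValue : -1;              // -1 on a malformed header
}
The scale factor for convertTo then becomes 255.0 / readPgmMaxValue("path_to_image").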
Again, it's just a problem of visualization; the data is correct.

OpenCV keep background transparent during warpAffine

I create a Bird-View-Image with the warpPerspective() function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four-channel image with an alpha channel, according to a question already asked on Stack Overflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Did I forget something? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency part)
Find the contour that encloses the warped image
Use this contour to create a mask (white values inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours(imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // to avoid artifacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
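As a rough C++ sketch of that warp-twice idea (the names input, M, and outputSize are assumptions, and a single-channel input is assumed so the threshold result can be used directly as a mask):
// Warp twice with different constant border values:
cv::Mat warp0, warp255;
cv::warpAffine(input, warp0, M, outputSize, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(input, warp255, M, outputSize, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(255));
// Where the two results differ, the pixel came from the constant border;
// where they are (nearly) equal, it came from the warped image:
cv::Mat diff, insideMask;
cv::absdiff(warp255, warp0, diff);
cv::threshold(diff, insideMask, 1, 255, cv::THRESH_BINARY_INV); // 255 where the warps agree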
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
    cv::Mat input = cv::imread("../inputData/Lenna.png");
    cv::Mat transparentInput, transparentWarped;
    cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
    //transparentInput = input.clone();

    // create sample transformation mat
    cv::Mat M = cv::Mat::eye(2, 3, CV_64FC1);
    // as a sample, just scale down and translate a little:
    M.at<double>(0,0) = 0.3;
    M.at<double>(0,2) = 100;
    M.at<double>(1,1) = 0.3;
    M.at<double>(1,2) = 100;

    // warp to same size with transparent border:
    cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);

    // NOW: merge image with background, here I use the original image as background:
    cv::Mat background = input;
    // create output buffer with same size as input
    cv::Mat outputImage = input.clone();
    for (int j = 0; j < transparentWarped.rows; ++j)
    {
        for (int i = 0; i < transparentWarped.cols; ++i)
        {
            cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
            cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
            float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
            outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f - transparency) * pixBackground[0];
            outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f - transparency) * pixBackground[1];
            outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f - transparency) * pixBackground[2];
        }
    }

    cv::imshow("warped", outputImage);
    cv::imshow("input", input);
    cv::imwrite("../outputData/TransparentWarped.png", outputImage);
    cv::waitKey(0);
    return 0;
}
I use this as input:
and get this output:
which looks like the alpha channel isn't set to ZERO by warpAffine but to something like 205...
But in general, this is the way I would do it (unoptimized).
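A note on that alpha observation: with cv::BORDER_TRANSPARENT, warpAffine leaves the "outlier" destination pixels unmodified rather than writing any particular value, so a freshly allocated destination holds arbitrary data there. A hedged workaround sketch is to pre-fill the destination with fully transparent pixels before warping:
// Pre-fill the destination with fully transparent black, so the pixels the
// warp never touches keep alpha = 0:
cv::Mat transparentWarped(transparentInput.size(), CV_8UC4, cv::Scalar(0, 0, 0, 0));
cv::warpAffine(transparentInput, transparentWarped, M, transparentWarped.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);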