How to get clean edges without broken lines? - c++

I'm working on a binary image to get its edges. I used the Canny function from OpenCV, but the result is less than desirable.
Click for the Images
int edgeThresh = 1;
int lowThreshold = 100;
int const max_lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
blur(binaryImage, detected_edges, Size(3, 3));
Canny(binaryImage, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size);
dst = Scalar::all(0);
src.copyTo(dst, detected_edges);
imwrite(defaultPath + "edge_" + filename, dst);
I did a dirty workaround that works, but it adds to the processing time:
Canny(detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size);
blur(detected_edges, detected_edges, Size(3, 3));
Canny(detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size);
I am new to opencv and image processing so most likely I am missing something.
Please enlighten me. Thanks!

FIRSTLY: for an image like this, where there are just white and black pixels, use the findContours function; it will be faster and more precise. If you need to draw the contours you just found (to draw your result), use the drawContours function (draw on a new Mat of the same size as the one you found the contours on).
Documentation about this HERE.
SECONDLY: the problem may be that your kernel size or threshold values are incorrect. The second threshold is recommended to be 3 times bigger than the first one. I think the problem may be with your kernel size (the last argument of the Canny function). Try not using that overload at all: use the one without the kernel-size argument, or lower your kernel size.
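A minimal sketch of the findContours/drawContours route described above (this is not the original poster's code; binaryImage is the single-channel input Mat from the question, and the function name is made up for the sketch):

#include <opencv2/opencv.hpp>
#include <vector>

// Trace the outlines of the white regions in a binary image and draw
// them, 1 px thick, onto a fresh black Mat of the same size.
cv::Mat drawBinaryEdges(const cv::Mat& binaryImage)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binaryImage, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    cv::Mat edges = cv::Mat::zeros(binaryImage.size(), CV_8UC1);
    cv::drawContours(edges, contours, -1, cv::Scalar(255), 1); // -1 draws all contours
    return edges;
}

Because findContours follows the region boundaries directly, there is no gradient-and-hysteresis stage that could leave gaps, which is why it tends to give cleaner results than Canny on an already-binary image.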

Related

Trying to fill holes in occupancy grid using OpenCV

I have this project where we are trying to make an autonomous vehicle using a lidar and a stereo camera. To do this we're making two maps with Cartographer and merging them together. However, the data from the stereo camera is not very accurate, and we therefore have to manipulate the map made by Cartographer. To make the camera map we are detecting lines, reading the distance, and turning this into a laser scan, which is then sent to Cartographer. Ideally we would be able to convert the map into just the lines. This is what the camera map looks like: Camera map
What I would like to do first is fill in the holes in the map to make it easier to find lines and such later. This is where I am struggling. I have written code to convert from nav_msgs::OccupancyGrid to cv::Mat and back, in addition to merging the maps. I have looked over this code and I don't think this is where the problem is. I have tried different suggestions online but have not gotten close to a solution. This is my code:
cv::Mat fill_cam_mat(cv::Mat mat) {
    int thresh = 50;

    // Edge map of the occupancy grid
    cv::Mat canny_output;
    cv::Canny(mat, canny_output, thresh, thresh * 2);
    //std::vector<cv::Vec4i> hierarchy;

    // Flood-fill the background from the corner, invert it, and OR it
    // with the edge map so that enclosed holes come out filled
    cv::Mat mat_floodfill = canny_output.clone();
    cv::floodFill(mat_floodfill, cv::Point(0, 0), cv::Scalar(255));

    cv::Mat mat_floodfill_inv;
    cv::bitwise_not(mat_floodfill, mat_floodfill_inv);

    cv::Mat mat_out = (canny_output | mat_floodfill_inv);
    return mat_out;
}
And my result is as follows when merged with the lidar map:
Final map
I have also tried:
cv::Mat fill_cam_mat(cv::Mat mat) {
    int mat_height = mat.rows;
    int mat_width = mat.cols;
    int thresh = 50;

    cv::Mat canny_output;
    cv::Canny(mat, canny_output, thresh, thresh * 2);

    cv::Mat non_zero;
    cv::findNonZero(canny_output, non_zero);

    std::vector<std::vector<cv::Point>> hull(non_zero.total());
    for (unsigned int i = 0, n = non_zero.total(); i < n; ++i) {
        cv::convexHull(non_zero, hull[i], false);
    }

    cv::Mat fill_contours_result(mat_height, mat_width, CV_8UC3, cv::Scalar(0));
    cv::fillPoly(fill_contours_result, hull, 255);
    return fill_contours_result;
}
Which gives the same result. I have also tried using cv::findContours to specify the hull, but that worked even worse.
I am new to OpenCV and I don't understand what is wrong with my output. I would really appreciate any help on the code, or if anybody has any better suggestions on how to solve the problem. Is it even necessary to fill the holes in order to get useful information from the map?
Thank you in advance!
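One simple thing worth trying before the flood-fill machinery is a morphological closing on the grid itself. A minimal sketch, assuming mat is the single-channel 8-bit map from the code above (the kernel size and iteration count are illustrative and will need tuning):

#include <opencv2/opencv.hpp>

// Close small holes in a binary occupancy grid with a morphological
// closing (dilate then erode); bigger kernels or more iterations
// close bigger gaps, at the cost of fattening the walls.
cv::Mat close_holes(const cv::Mat& mat)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::Mat closed;
    cv::morphologyEx(mat, closed, cv::MORPH_CLOSE, kernel, cv::Point(-1, -1), 2);
    return closed;
}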

opencv connect thin lines

I have a black and white image with lines. Some of these lines, however, are not perfectly connected where they should be (though they are close). I have attached an example.
I want to make it so that the lines are close to 1 px thick. I have been playing with a few ideas, but not having much success. I have tried dilate, erode, and dilate again, like so:
int dsize = 5;
cv::Mat element = getStructuringElement(cv::MORPH_CROSS,
                                        cv::Size(2 * dsize + 1, 2 * dsize + 1),
                                        cv::Point(dsize, dsize));
cv::dilate(src, src, element);
Is there a better way, as opposed to just dilating and eroding, to do specifically what I am after?
There are at least a couple of solutions we can try out, but I'm gonna need more info about your problem. For example, are you trying to close the (in)complete contour of a detected object? How much "contour degradation" are you willing to take to approximate a fully closed contour?
Here's a first and very basic solution, assuming you need a contour 1 pixel wide. It involves dilating the image N times and then applying a thinning/skeletonizing transformation. (The function is part of the Extended Image Processing module of OpenCV.)
Let's see the code:
#include <opencv2/ximgproc.hpp>

//Read input image:
std::string imagePath = "C://opencvImages//lineImg.png";
cv::Mat imageInput = cv::imread(imagePath);

//Convert it to grayscale:
cv::Mat grayImg;
cv::cvtColor(imageInput, grayImg, cv::COLOR_BGR2GRAY);

//Get binary image via Otsu:
cv::threshold(grayImg, grayImg, 0, 255, cv::THRESH_OTSU);

//Dilate the binary image with 5 iterations:
cv::Mat morphKernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
int morphIterations = 5;
cv::morphologyEx(grayImg, grayImg, cv::MORPH_DILATE, morphKernel, cv::Point(-1, -1), morphIterations);
This is the Dilated image:
//Get the skeleton:
cv::Mat skel;
int algorithmType = 1;
cv::ximgproc::thinning( grayImg, skel, algorithmType );
This is the Skeleton Image. The line has been "thinned" back to a width of 1 pixel:
I don't know if this is good enough for your application, but, as I said, depending on what you are doing we can try a couple of alternative solutions.
Is it you who draws the lines onto the Mat? It seems like the problem should be taken care of before this stage.
You should draw the lines in a bigger cv::Mat and then resize, to make your lines thicker.
If you want complete lines, don't draw each point on the map separately; draw a line between consecutive points, so that Bresenham's algorithm gives you an unbroken line, as in the sketch below.
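A minimal sketch of that idea, assuming you have an ordered list of map points (the function name, the point list, and the map size are all illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Connect consecutive points with cv::line (which rasterizes via
// Bresenham) instead of plotting isolated pixels, so the path is
// unbroken by construction.
cv::Mat drawConnected(const std::vector<cv::Point>& pts, cv::Size mapSize)
{
    cv::Mat map = cv::Mat::zeros(mapSize, CV_8UC1);
    for (size_t i = 1; i < pts.size(); ++i)
        cv::line(map, pts[i - 1], pts[i], cv::Scalar(255), 1);
    return map;
}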

cv::imshow in opencv is only displaying parts of a composite image, but displaying the parts separately works. Why?

Objective and problem
I'm trying to process a video file on the fly using OpenCV 3.4.1 by grabbing each frame, converting to grayscale, then doing Canny edge detection on it. In order to display the images (on the fly as well), I created a composite Mat, three times as wide as the original frame, with 3 additional Mat headers. The 3 extra headers represent the images I would like to display in the composite, and are positioned over the 1st, 2nd and 3rd horizontal segments of the composite.
After image processing however, the display of the composite image is not as expected: the first segment (where the original frame should be) is completely black, while the other segments (of processed images) are displayed fine. If, on the other hand, I display the ROIs one by one in separate windows, all the images look fine.
These are the things I tried to overcome this issue:
use .copyTo to actually copy the data into the appropriate image segments. The result was the same.
I put the Canny image to the compOrigPart ROI, and it did display in the first segment, so it is not a problem with the definition of the ROIs.
Define the composite as a three-channel image; in the loop convert it to grayscale, put the processed images into it, convert it back to BGR, and put the original in.
This time around the whole composite was black, nothing showed.
As per gameon67's suggestion, I tried to create a namedWindow as well, but that doesn't help either.
Code:
int main() {
    cv::VideoCapture vid("./Vid.avi");
    if (!vid.isOpened()) return -1;

    int frameWidth = vid.get(cv::CAP_PROP_FRAME_WIDTH);
    int frameHeight = vid.get(cv::CAP_PROP_FRAME_HEIGHT);
    int frameFormat = vid.get(cv::CAP_PROP_FORMAT);
    cv::Scalar fontColor(250, 250, 250);
    cv::Point textPos(20, 20);

    cv::Mat frame;
    cv::Mat compositeFrame(frameHeight, frameWidth * 3, frameFormat);
    cv::Mat compOrigPart(compositeFrame, cv::Range(0, frameHeight), cv::Range(0, frameWidth));
    cv::Mat compBwPart(compositeFrame, cv::Range(0, frameHeight), cv::Range(frameWidth, frameWidth * 2));
    cv::Mat compEdgePart(compositeFrame, cv::Range(0, frameHeight), cv::Range(frameWidth * 2, frameWidth * 3));

    while (vid.read(frame)) {
        if (frame.empty()) break;

        cv::cvtColor(frame, compBwPart, cv::COLOR_BGR2GRAY);
        cv::Canny(compBwPart, compEdgePart, 100, 150);
        compOrigPart = frame;

        cv::putText(compOrigPart, "Original", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::putText(compBwPart, "GrayScale", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::putText(compEdgePart, "Canny edge detection", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);

        cv::imshow("Composite of Original, BW and Canny frames", compositeFrame);
        cv::imshow("Original", compOrigPart);
        cv::imshow("BW", compBwPart);
        cv::imshow("Canny", compEdgePart);

        cv::waitKey(33);
    }
}
Questions
Why can't I display the entirety of the composite image in a single window, while displaying them separately is OK?
What is the difference between these displays? The data is obviously there, as evidenced by the separate windows.
Why is only the original frame misbehaving?
Your compBwPart and compEdgePart are grayscale images, so their Mat type is CV_8UC1 (single channel), and therefore your compositeFrame is in grayscale too. If you want to combine these two images with a color image, you have to convert it to BGR first and then fill the compOrigPart.
while (vid.read(frame)) {
    if (frame.empty()) break;

    cv::cvtColor(frame, compBwPart, cv::COLOR_BGR2GRAY);
    cv::Canny(compBwPart, compEdgePart, 100, 150);

    cv::cvtColor(compositeFrame, compositeFrame, cv::COLOR_GRAY2BGR);
    frame.copyTo(compositeFrame(cv::Rect(0, 0, frameWidth, frameHeight)));

    cv::putText(compOrigPart, "Original", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor); //the rest of your code
This is a combination of several issues.
The first problem is that you set the type of compositeFrame to the value returned by vid.get(cv::CAP_PROP_FORMAT). Unfortunately that property doesn't seem entirely reliable -- I've just had it return 0 (meaning CV_8UC1) after opening a color video, and then getting 3 channel (CV_8UC3) frames. Since you want to have the compositeFrame the same type as the input frame, this won't work.
To work around it, instead of using those properties, I'd lazily initialize compositeFrame and the 3 ROIs after receiving the first frame (based on its dimensions and type).
The next set of problems lies in those two statements:
cv::cvtColor(frame, compBwPart, cv::COLOR_BGR2GRAY);
cv::Canny(compBwPart, compEdgePart, 100, 150);
In this case the assumption is that frame is BGR (since you're trying to convert it), meaning compositeFrame and its ROIs are also BGR. Unfortunately, in both cases you're writing a grayscale image into the ROI. This will cause a reallocation, and the target Mat will cease to be a ROI.
To correct this, use temporary Mats for the grayscale data, and use cvtColor to turn it back to BGR to write into the ROIs.
A similar problem lies in the following statement:
compOrigPart = frame;
That's a shallow copy, meaning it will just make compOrigPart another reference to frame (and therefore it will cease to be a ROI of compositeFrame).
What you need is a deep copy, using copyTo (note that the data types still need to match, but that was fixed earlier).
Finally, even though you try to be flexible regarding the type of the input video (judging by the vid.get(cv::CAP_PROP_FORMAT)), the rest of the code really assumes that the input is 3 channel, and will break if it isn't.
At the least, there should be some assertion to cover this expectation.
Putting this all together:
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture vid("./Vid.avi");
    if (!vid.isOpened()) return -1;

    cv::Scalar fontColor(250, 250, 250);
    cv::Point textPos(20, 20);

    cv::Mat frame, frame_gray, edges_gray;
    cv::Mat compositeFrame;
    cv::Mat compOrigPart, compBwPart, compEdgePart; // ROIs

    while (vid.read(frame)) {
        if (frame.empty()) break;

        if (compositeFrame.empty()) {
            // The rest of code assumes video to be BGR (i.e. 3 channel)
            CV_Assert(frame.type() == CV_8UC3);

            // Lazy initialize once we have the first frame
            compositeFrame = cv::Mat(frame.rows, frame.cols * 3, frame.type());
            compOrigPart = compositeFrame(cv::Range::all(), cv::Range(0, frame.cols));
            compBwPart = compositeFrame(cv::Range::all(), cv::Range(frame.cols, frame.cols * 2));
            compEdgePart = compositeFrame(cv::Range::all(), cv::Range(frame.cols * 2, frame.cols * 3));
        }

        cv::cvtColor(frame, frame_gray, cv::COLOR_BGR2GRAY);
        cv::Canny(frame_gray, edges_gray, 100, 150);

        // Deep copy data to the ROI
        frame.copyTo(compOrigPart);

        // The ROI is BGR, so we need to convert back
        cv::cvtColor(frame_gray, compBwPart, cv::COLOR_GRAY2BGR);
        cv::cvtColor(edges_gray, compEdgePart, cv::COLOR_GRAY2BGR);

        cv::putText(compOrigPart, "Original", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::putText(compBwPart, "GrayScale", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
        cv::putText(compEdgePart, "Canny edge detection", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);

        cv::imshow("Composite of Original, BW and Canny frames", compositeFrame);
        cv::imshow("Original", compOrigPart);
        cv::imshow("BW", compBwPart);
        cv::imshow("Canny", compEdgePart);

        cv::waitKey(33);
    }
}
Screenshot of the composite window (using some random test video off the web):

OpenCV - Filling empty spaces in object using c++

How can I fill the empty space of an object in OpenCV?
Let me clarify my question.
I have an image below
Now I want to fill all the gaps in the image, like this:
In Matlab I have done it with a convex hull, but I don't know how to do it in C++.
Thanks.
Try morphological operations. If you go this way, note that you may vary either the kernel size (increase it to decrease the number of iterations), or the iterations (more iterations will eliminate empty space even if the kernel is small), or both.
cv::Mat img = cv::imread("cwyX5.jpeg");
cv::imshow("image", img);
cv::Size kernelSize(5, 5);
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, kernelSize);
cv::Mat result;
int iterations = 3;
cv::morphologyEx(img, result, cv::MORPH_OPEN, kernel, cv::Point(-1,-1), iterations);
cv::imshow("result", result);
cv::waitKey();
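And since the question mentions doing it with a convex hull in Matlab, here is a hedged C++ sketch of that route as well, assuming a single-channel binary image with a white object (the function and variable names are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Fill an object by computing the convex hull of all its white pixels
// and painting that hull as a solid polygon.
cv::Mat fillWithConvexHull(const cv::Mat& binary)
{
    std::vector<cv::Point> points;
    cv::findNonZero(binary, points); // coordinates of every white pixel

    std::vector<cv::Point> hull;
    cv::convexHull(points, hull);

    cv::Mat filled = cv::Mat::zeros(binary.size(), CV_8UC1);
    cv::fillConvexPoly(filled, hull, cv::Scalar(255));
    return filled;
}

Note that a convex hull also flattens genuine concavities of the object, which the morphological approach above preserves.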

Opencv2.4.9 SimpleBlobDetector mask does not work

I've perused this site for an explanation but to no avail...hopefully someone knows the answer.
I'm using SimpleBlobDetector to track some blobs. I would like to specify a mask via the detect method, but for some reason the mask doesn't seem to work: my keypoints show up for the whole image. Here are some snippets of my code:
Mat currFrame;
Mat mask;
Mat roi;
cv::Ptr<cv::FeatureDetector> blob_detector = new cv::SimpleBlobDetector(params); // custom set of params I've left out for legibility
blob_detector->create("SimpleBlob");
vector<cv::KeyPoint> myblob;

while (true)
{
    captured >> currFrame; // get a new frame from camera; >> is grab and retrieve in one go (grab does not allow the frame to be modified)
    // do nothing if frame is empty
    if (currFrame.empty())
    {
        break;
    }

    /******************** make mask ***********************/
    mask = Mat::zeros(currFrame.size(), CV_8U);
    roi = Mat(mask, Rect(400, 400, 400, 400));
    roi = 255;

    /******************** image cleanup with some filters */
    GaussianBlur(currFrame, currFrame, Size(5, 5), 1.5, 1.5);
    cv::medianBlur(currFrame, currFrame, 3);

    blob_detector->detect(fgMaskMOG, myblob, mask); // fgMaskMOG is currFrame after some filtering and background subtraction
    cv::drawKeypoints(fgMaskMOG, myblob, fgMaskMOG, Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    imshow("mogForeground", fgMaskMOG);
    imshow("original", currFrame);
    imshow("mask", mask);

    if (waitKey(1) != -1)
        break;
}
The thing is, I confirmed that my mask is correctly made by using SurfFeatureDetector, as described here (OpenCV: howto use mask parameter for feature point detection (SURF)). If anyone can see what's wrong with my mask, I'd really appreciate the help. Sorry about the messy code!
I had the same issue and couldn't find the solution, so I solved it by checking the mask myself:
blob_detector->detect(img, keypoints);

std::vector<cv::KeyPoint> keypoints_in_range;
for (cv::KeyPoint &kp : keypoints)
    if (mask.at<uchar>(kp.pt) > 0) // use at<uchar>: the mask is CV_8U, and at<char> is signed, so 255 would read as -1
        keypoints_in_range.push_back(kp);
I found this code in OpenCV 2.4.8:
void SimpleBlobDetector::detectImpl(const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints, const cv::Mat&) const
{
    //TODO: support mask
    keypoints.clear();
    Mat grayscaleImage;
which means that this option is not supported yet.
The solution of filtering the keypoints is not ideal, because it is time-consuming (you have to detect blobs in the whole image).
A better workaround is to cut out the ROI before detection and shift each KeyPoint after detection:
int x = 500;
int y = 200;
int width = 700;
int height = 700;

Mat roi = frame(Rect(x, y, width, height));
blob_detector.detect(roi, keypoints);

// shift the keypoints back into full-frame coordinates
for (KeyPoint &kp : keypoints)
{
    kp.pt.x += x;
    kp.pt.y += y;
}

drawKeypoints(frame, keypoints, frame, Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);