I developed some code to extract the centroids of a binary image containing several small blobs (like blurred dots). The code is C++, and I have been using the findContours routine from OpenCV as follows:
vector<vector<cv::Point> > contours;
cv::findContours(src, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
cv::Moments M1;
vector<cv::Point2f> dots(contours.size());
for (size_t i = 0; i < contours.size(); i++)
{
    // binaryImage=true: every non-zero pixel counts as 1, so m00 is the
    // blob area and (m10/m00, m01/m00) is the centroid.
    M1 = cv::moments(contours[i], true);
    dots[i] = cv::Point2f(float(M1.m10 / M1.m00), float(M1.m01 / M1.m00));
}
The problem is that findContours cannot be synthesized onto an FPGA, so I must follow a different approach. I thought about something like an erosion that stops when each blob is down to 1 pixel, but I am having a hard time coming up with an algorithm that avoids findContours. Any ideas?
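For reference, here is a minimal software sketch of one direction that maps better to hardware than contour tracing: raster-scan connected-component labeling with union-find, accumulating per-blob coordinate sums so the centroids fall out without findContours. A row-by-row scan with a small equivalence table is the kind of structure streaming FPGA labelers typically use. All names and the two-pass structure are illustrative assumptions, not a drop-in HLS design:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <map>
#include <vector>

// Path-halving find for the label equivalence table.
static int findRoot(std::vector<int>& parent, int i) {
    while (parent[i] != i) { parent[i] = parent[parent[i]]; i = parent[i]; }
    return i;
}

// bin: CV_8U image, 0 = background, non-zero = blob pixel.
std::vector<cv::Point2f> blobCentroids(const cv::Mat& bin)
{
    cv::Mat labels(bin.size(), CV_32S, cv::Scalar(0));
    std::vector<int> parent(1, 0); // label 0 is the background
    // Pass 1: assign provisional labels, recording equivalences on merges.
    for (int y = 0; y < bin.rows; ++y)
        for (int x = 0; x < bin.cols; ++x) {
            if (!bin.at<uchar>(y, x)) continue;
            int left = (x > 0) ? labels.at<int>(y, x - 1) : 0;
            int up   = (y > 0) ? labels.at<int>(y - 1, x) : 0;
            if (!left && !up) {            // new blob starts here
                parent.push_back((int)parent.size());
                labels.at<int>(y, x) = (int)parent.size() - 1;
            } else if (left && up) {       // two runs meet: merge roots
                int rl = findRoot(parent, left), ru = findRoot(parent, up);
                parent[std::max(rl, ru)] = std::min(rl, ru);
                labels.at<int>(y, x) = std::min(rl, ru);
            } else {
                labels.at<int>(y, x) = left ? left : up;
            }
        }
    // Pass 2: accumulate (sum_x, sum_y, count) per resolved root label.
    std::map<int, cv::Vec3d> acc;
    for (int y = 0; y < bin.rows; ++y)
        for (int x = 0; x < bin.cols; ++x) {
            int l = labels.at<int>(y, x);
            if (!l) continue;
            cv::Vec3d& a = acc[findRoot(parent, l)];
            a[0] += x; a[1] += y; a[2] += 1;
        }
    std::vector<cv::Point2f> dots;
    for (const auto& kv : acc)
        dots.emplace_back(float(kv.second[0] / kv.second[2]),
                          float(kv.second[1] / kv.second[2]));
    return dots;
}

On the software side, cv::connectedComponentsWithStats gives the same centroids in one call, but the hand-rolled scan above is the version that suggests a hardware structure.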
I have this project where we are trying to make an autonomous vehicle using a lidar and a stereo camera. To do this we're making two maps with cartographer and merging them together. However, the data from the stereo camera is not very accurate, and we therefore have to manipulate the map made by cartographer. To make the camera map we detect lines, read the distance, and turn this into a laser scan, which is then sent to cartographer. Ideally we would be able to convert the map into just the lines. This is what the camera map looks like: Camera map
What I would like to do first is fill the holes in the map, to make it easier to find lines and such later. This is where I am struggling. I have written code to convert from nav_msgs::OccupancyGrid to cv::Mat and back, in addition to merging the maps. I have looked over this code and I don't think the problem is there. I have tried different suggestions online but have not gotten close to a solution. This is my code:
cv::Mat fill_cam_mat(cv::Mat mat) {
    int thresh = 50;
    cv::Mat canny_output;
    cv::Canny(mat, canny_output, thresh, thresh * 2);
    // Flood-fill the background from the corner, invert it, and OR it with
    // the edges, so everything enclosed by an edge ends up filled.
    cv::Mat mat_floodfill = canny_output.clone();
    cv::floodFill(mat_floodfill, cv::Point(0,0), cv::Scalar(255));
    cv::Mat mat_floodfill_inv;
    cv::bitwise_not(mat_floodfill, mat_floodfill_inv);
    cv::Mat mat_out = (canny_output | mat_floodfill_inv);
    return mat_out;
}
And my result is as follows when merged with the lidar map:
Final map
I have also tried:
cv::Mat fill_cam_mat(cv::Mat mat) {
    int mat_height = mat.rows;
    int mat_width = mat.cols;
    int thresh = 50;
    cv::Mat canny_output;
    cv::Canny(mat, canny_output, thresh, thresh * 2);
    cv::Mat non_zero;
    cv::findNonZero(canny_output, non_zero);
    // One convex hull of all edge points; computing it once is enough
    // (a per-point loop would recompute the same hull every time).
    std::vector<std::vector<cv::Point>> hull(1);
    cv::convexHull(non_zero, hull[0], false);
    // Single-channel output, so the 255 fill value applies to the whole mask.
    cv::Mat fill_contours_result(mat_height, mat_width, CV_8UC1, cv::Scalar(0));
    cv::fillPoly(fill_contours_result, hull, 255);
    return fill_contours_result;
}
Which gives the same result. I have also tried using cv::findContours to specify the hull, but that worked even worse.
I am new to OpenCV and I don't understand what is wrong with my output. I would really appreciate any help with the code, or any better suggestions on how to solve the problem. Is it even necessary to fill the holes in order to get useful information from the map?
Thank you in advance!
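For comparison, a minimal hole-filling sketch based on a morphological close (dilate then erode), which sidesteps the way flood fill can leak through gaps in the Canny edges. The function name and the 5x5 kernel are my assumptions, to be tuned per map resolution:

#include <opencv2/opencv.hpp>

// Assumed alternative, not from the post: close small holes directly on
// the occupancy image instead of flood-filling Canny edges.
cv::Mat fill_holes_close(const cv::Mat& mat) {
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat closed;
    cv::morphologyEx(mat, closed, cv::MORPH_CLOSE, kernel);
    return closed;
}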
I am building a real-time shape and color classification system that needs very high accuracy. It seems like my preprocessing phase is not good enough, so the result is not as accurate as I expected. Here is what I'm doing:
Take a frame from the camera and crop it to get the ROI.
Convert the ROI image from RGB to HSV space.
Use a median filter to reduce noise in the HSV image.
Threshold the image.
Use dilate and erode to remove small holes and small objects in the image (see the sketch after the preprocessing code below).
Use findContours and approxPolyDP to detect square objects.
This is my preprocessing phase:
image_cv = cv::cvarrToMat(image_camera);            // wrap the camera frame
Mat cropped = image_cv(cv::Rect(0, 190, 640, 110)); // crop the ROI
imshow("origin", cropped);
Mat croppedCon = CropConveyor(cropped);
cv::cvtColor(croppedCon, croppedCon, CV_RGB2HSV);   // RGB -> HSV
medianBlur(croppedCon, croppedCon, 3);              // denoise, roughly edge-preserving
cv::Mat binRect;
cv::inRange(croppedCon, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), binRect);
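Step 5 from the list above is not shown in the post; a minimal sketch of it, assuming a small 3x3 elliptical kernel (my choice), could look like this:

// Assumed implementation of step 5: open to drop small specks,
// close to fill small holes in the inRange mask.
cv::Mat k = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
cv::morphologyEx(binRect, binRect, cv::MORPH_OPEN, k);
cv::morphologyEx(binRect, binRect, cv::MORPH_CLOSE, k);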
This is the code for detecting squares:
vector<vector<Point>> contours;
findContours(binarizedIm, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
vector<Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
    // Note: this epsilon is an absolute pixel tolerance; a fraction of
    // arcLength (e.g. 0.04 * arcLength) would be scale-invariant.
    approxPolyDP(Mat(contours[i]), approx, 3.245, true); //0.04 for wood
    if (approx.size() != 4) continue; // keep quadrilaterals only
    if (isContourConvex(Mat(approx)) && contourArea(Mat(approx)) > 250)
    {
        // Reject quads whose corner angles are far from 90 degrees.
        double MaxCos = 0;
        for (int j = 2; j < 5; j++)
        {
            double cos = angle(approx[j % 4], approx[j - 1], approx[j - 2]);
            MaxCos = MAX(cos, MaxCos);
        }
        if (MaxCos < 0.2)
            squares.push_back(approx);
    }
}
I think noise in the HSV image is the main reason. Here are some images illustrating my problem. I see a lot of noise in the HSV image, which is why I apply a median filter to reduce the noise while preserving edges, because I think edge information is very important for the findContours function.
HSV and HSV in separate channels
My question is: what is the noise in the HSV image (refer to the image above), and how can I enhance my image's quality?
The reason for the noise in your saturation image is noise in your input image, caused by a bad camera/optics and further increased by JPEG compression.
That's by far the worst image I have seen in years. You shouldn't invest another second in processing it, unless you live on Mars and need results tomorrow.
Your input image is super noisy, undersampled, defocused, underexposed, full of aliasing and compression artifacts, and pretty much anything else you can do wrong with an image.
First rule of signal processing:
crap in = crap out
You can get much better cameras basically for free. Find and use one.
Part of the problem is that you're doing the noise reduction in HSV space. In your example you can see that the V channel is better behaved than H and S. It would be better to do the noise reduction in RGB, which is more linear and closer (though not identical, given gamma correction) to the camera's native colour space, where the noise originates.
Also consider a stronger edge-preserving noise-reduction filter, such as a bilateral filter.
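A minimal sketch of that suggestion, applied to the RGB frame before the HSV conversion; the variable name follows the question's code, and the diameter and sigma values are assumptions to tune:

// Assumed parameters: d=9, sigmaColor=sigmaSpace=75. bilateralFilter
// cannot run in place, hence the separate output Mat.
cv::Mat denoised;
cv::bilateralFilter(croppedCon, denoised, 9, 75.0, 75.0);
// ...then convert `denoised` to HSV and run inRange as before.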
I don't get why you are using HSV for segmenting the objects; the RGB image is good enough. Split the image into its 3 channels (r, g, b) and apply an adaptive threshold to each. Dilate and erode the results, then add (not merge) those 3 binary images into one binary image. Finally, do step 6 of your recipe to extract the objects. If noise still affects the result, apply a bilateral filter to the r, g, b channels before thresholding.
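A minimal sketch of this recipe; the block size, C constant, and kernel size are my assumptions to tune:

#include <opencv2/opencv.hpp>
#include <vector>

// Assumed helper implementing the recipe above: per-channel adaptive
// threshold, dilate/erode cleanup, then OR the three binary masks
// (for 0/255 masks, OR behaves like a saturating add).
cv::Mat segmentByChannels(const cv::Mat& bgr)
{
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch); // ch[0], ch[1], ch[2] are the colour channels
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::Mat combined;
    for (int i = 0; i < 3; ++i) {
        cv::Mat bin;
        cv::adaptiveThreshold(ch[i], bin, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                              cv::THRESH_BINARY, 11, 2);
        cv::dilate(bin, bin, kernel);
        cv::erode(bin, bin, kernel);
        if (combined.empty()) combined = bin;
        else                  combined = combined | bin;
    }
    return combined; // feed this to findContours / approxPolyDP (step 6)
}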
I am pretty new to programming, and I want to make a program that filters an image so that small objects and non-convex objects are removed and only shapes such as rectangles, triangles, circles, etc. remain.
What have I done so far?
I managed to obtain a binary image in two separate ways (color detection and the Canny function), then created contours with the findContours function. That part works flawlessly.
here's the code:
vector<Point> approxShape;
vector<vector<Point>> FiltredContours;
vector<vector<Point>> TRI;
vector<vector<Point>> RECT;
vector<vector<Point>> PENTA;
Mat WSO = Mat::zeros(im.size(), CV_8UC3); //Without Small Objects
for (size_t j = 0; j < contours.size(); j++)
{
    // Only contours larger than 100 px^2 go on to the convexity test;
    // the braces must enclose the approximation, not just the drawing.
    if (fabs(contourArea(contours[j])) > 100)
    {
        drawContours(WSO, contours, j, Scalar(128,128,128), 2, 8, hiearchy, 0, Point()); // to see how it looks before it goes further
        approxPolyDP(Mat(contours[j]), approxShape, arcLength(Mat(contours[j]), true) * 0.02, true);
        if (isContourConvex(approxShape))
        {
            FiltredContours.push_back(approxShape);
        }
    }
}
///--------Show image after filtering small objects-----
imshow("WSO", WSO);
////--------Filtered-Image-Drawing---------------------
Mat approxmat = Mat::zeros(imHSV.size(), CV_8UC3);
drawContours(approxmat, FiltredContours, -1, barva, 2, 8, hiearchy, 0, Point());
namedWindow("Filtred objects", CV_WINDOW_AUTOSIZE);
imshow("Filtred objects", approxmat);
I tried changing the parameters of contourArea and approxPolyDP as well. It still doesn't work the way I thought it would.
I'm currently making a program to track 4 paddles in 3 different colors. I'm having trouble understanding how best to proceed with the knowledge I have now, and how to reduce the computational cost of running the project. There are code examples of the steps listed at the end of this post.
The program contains a class file called Controllers, that has simple get and set functions for things such as X and Y position, and which HSV values are used for thresholding.
The program, in its unoptimized state, does the following:
Reads an image from the webcam
Converts the image to HSV colorspace
Uses OpenCV's inRange function, together with previously defined max/min HSV values, to threshold the HSV image 3 times, once for each colored paddle. The results are saved to separate Mat arrays.
(This step is problematic for me) Performs erosion and dilation on EACH of the three thresholded images.
Passes each image into a function that uses findContours to get vectors of points describing the contours, then uses moments to calculate the X and Y location, which is saved in a paddle object and pushed into a vector of these objects.
Everything technically works at this point, but the morphological operations, performed three times on every pass through the while loop that reads images from the webcam, slow the program immensely (2 iterations of erosion and 3 of dilation on three 640*480 images, at an acceptable frame rate).
Threshold Images for different paddles
inRange(HSV, playerOne.getHSVmin(), playerOne.getHSVmax(), threshold1);
inRange(HSV, playerTwo.getHSVmin(), playerTwo.getHSVmax(), threshold2);
inRange(HSV, powerController.getHSVmin(), powerController.getHSVmax(), threshold3);
Perform morphological operations
morphOps(threshold1);
void morphOps(Mat &thresh)
{
    //Create structuring elements to be used for the morph operations.
    Mat erodeElement = getStructuringElement(MORPH_RECT, Size(3, 3));
    Mat dilateElement = getStructuringElement(MORPH_RECT, Size(6, 6));
    //Perform the morphological operations, using multiple iterations because the noise is horrible.
    erode(thresh, thresh, erodeElement, Point(-1, -1), 3);
    dilate(thresh, thresh, dilateElement, Point(-1, -1), 2);
}
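One way the cost of this step might be cut (an assumption on my part, not from the post): run the morphology on a downscaled mask, since erosion and dilation cost scales with pixel count, then resize back. A sketch:

// Hypothetical variant: morph at half resolution, then upscale the mask.
// Kernel sizes and iteration counts would need retuning at the smaller scale.
void morphOpsFast(cv::Mat& thresh)
{
    cv::Mat small;
    cv::resize(thresh, small, cv::Size(), 0.5, 0.5, cv::INTER_NEAREST);
    cv::Mat k3 = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::erode(small, small, k3, cv::Point(-1, -1), 1);
    cv::dilate(small, small, k3, cv::Point(-1, -1), 1);
    cv::resize(small, thresh, thresh.size(), 0, 0, cv::INTER_NEAREST);
}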
Track the image
trackFilteredObject(playerOne, threshold1, cameraFeed);
trackFilteredObject(playerTwo, threshold2, cameraFeed);
trackFilteredObject(powerController, threshold3, cameraFeed);
void trackFilteredObject(Controllers theControllers, Mat threshold, Mat &cameraFeed)
{
    vector<Controllers> players;
    Mat temp;
    threshold.copyTo(temp); // findContours modifies its input, so work on a copy
    //these vectors are needed to save the output of findContours
    vector< vector<Point> > contours;
    vector<Vec4i> hierarchy;
    //Find the contours of the image
    findContours(temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    //Moments are used to locate the filtered objects.
    bool objectFound = false;
    if (hierarchy.size() > 0)
    {
        int numObjects = hierarchy.size();
        //If there are more objects than the maximum number of objects we want to track, the filter may be noisy.
        if (numObjects < MAX_NUM_OBJECTS)
        {
            //Walk the top-level contours via hierarchy[i][0] (the "next" link).
            for (int i = 0; i >= 0; i = hierarchy[i][0])
            {
                Moments moment = moments((Mat)contours[i]);
                double area = moment.m00;
                //If the area is less than the minimum area, it is probably noise
                if (area > MIN_AREA)
                {
                    Controllers player;
                    player.setXPos(moment.m10 / area);
                    player.setYPos(moment.m01 / area);
                    player.setType(theControllers.getType());
                    player.setColor(theControllers.getColor());
                    players.push_back(player);
                    objectFound = true;
                }
                else objectFound = false;
            }
            //Draw the object locations on screen if an object was found
            if (objectFound)
            {
                drawObject(players, cameraFeed);
            }
        }
    }
}
The idea is that I want to isolate each object and use the X and Y positions as the points of a triangle, using that information to calculate the angle and power of an arrow shot. So I want to know whether there is a better way to isolate the colored paddles and remove the noise, one that doesn't require performing these morphological operations for each color.
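One possibility (my suggestion, not from the post): instead of eroding and dilating each mask just to suppress speckle before findContours, cv::connectedComponentsWithStats (OpenCV 3+) labels a mask in a single pass and reports each blob's area and centroid directly, so small noise blobs can simply be skipped by area. A sketch, reusing the post's MIN_AREA idea:

#include <opencv2/opencv.hpp>

// Assumed helper: one labeling pass per mask, no morphology needed.
void trackByComponents(const cv::Mat& mask, double minArea)
{
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);
    for (int i = 1; i < n; ++i) { // label 0 is the background
        if (stats.at<int>(i, cv::CC_STAT_AREA) < minArea) continue; // noise
        double cx = centroids.at<double>(i, 0);
        double cy = centroids.at<double>(i, 1);
        // use (cx, cy) as the paddle position
    }
}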
I am working in C++ and OpenCV.
I am detecting the biggest contour in an image, because the image has a black area in it.
In this case the black area is only horizontal, but it can be anywhere.
Mat resultGray;
cvtColor(result, resultGray, COLOR_BGR2GRAY);
medianBlur(resultGray, resultGray, 3);
Mat resultTh;
Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Canny(resultGray, canny_output, 100, 100 * 2, 3);
findContours(canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
// Keep the contour with the largest area.
vector<Point> best = contours[0];
double max_area = -1;
for (size_t i = 0; i < contours.size(); i++) {
    double area = contourArea(contours[i]);
    if (area > max_area)
    {
        max_area = area;
        best = contours[i];
    }
}
Mat approxCurve;
approxPolyDP(Mat(best), approxCurve, 0.01 * arcLength(Mat(best), true), true);
With this I have the big contour and its approximation (in approxCurve). Now I want to obtain the corners of this approximation and get the image inside this contour, but I don't know how to do it.
I am using this: How to remove black part from the image?
But I don't understand the last part very well.
Does anyone know how I can obtain the corners? Is there a simpler way than this?
Thanks for your time.
One much simpler way you could do that is to check the image pixels and find the minimum/maximum coordinates of non-black pixels.
Something like this:
#include <limits>

int maxx, maxy, minx, miny;
// Start max at the smallest int and min at the largest, so the first
// non-black pixel initializes all four bounds.
maxx = maxy = std::numeric_limits<int>::min();
minx = miny = std::numeric_limits<int>::max();
for (int y = 0; y < img.rows; ++y)
{
    for (int x = 0; x < img.cols; ++x)
    {
        const cv::Vec3b &px = img.at<cv::Vec3b>(y, x);
        if (px(0) == 0 && px(1) == 0 && px(2) == 0)
            continue; // skip black pixels
        if (x < minx) minx = x;
        if (x > maxx) maxx = x;
        if (y < miny) miny = y;
        if (y > maxy) maxy = y;
    }
}
cv::Mat subimg;
// +1 because the second corner of a cv::Rect is exclusive.
img(cv::Rect(cv::Point(minx, miny), cv::Point(maxx + 1, maxy + 1))).copyTo(subimg);
In my opinion, this approach is more reliable since you don't have to detect any contour, which could lead to false detections depending on the input image.
For a much more efficient version, you can sample the original image until you find a pixel that is on, and from there move along a row and along a column to find the first (0,0,0) pixel. This will work unless the good part of the image can also contain (0,0,0) pixels. If that is the case (e.g. a dead pixel), you can add a double check on the neighbourhood of each (0,0,0) pixel (it should contain other (0,0,0) pixels).
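A minimal sketch of that sampling idea, assuming the non-black region is a single axis-aligned rectangle; the function name and the step size are mine:

#include <opencv2/opencv.hpp>

static bool isBlack(const cv::Mat& img, int y, int x) {
    const cv::Vec3b& px = img.at<cv::Vec3b>(y, x);
    return px[0] == 0 && px[1] == 0 && px[2] == 0;
}

// Coarsely sample until a non-black pixel is hit, then walk along its
// row and column to the first black pixels, which bound the rectangle.
cv::Rect findRectBySampling(const cv::Mat& img, int step = 16)
{
    for (int y = 0; y < img.rows; y += step)
        for (int x = 0; x < img.cols; x += step) {
            if (isBlack(img, y, x)) continue;
            int l = x, r = x, t = y, b = y;
            while (l > 0 && !isBlack(img, y, l - 1)) --l;            // left
            while (r < img.cols - 1 && !isBlack(img, y, r + 1)) ++r; // right
            while (t > 0 && !isBlack(img, t - 1, x)) --t;            // up
            while (b < img.rows - 1 && !isBlack(img, b + 1, x)) ++b; // down
            return cv::Rect(l, t, r - l + 1, b - t + 1);
        }
    return cv::Rect(); // no non-black pixel found
}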