Obstacle detection for ground robot using aerial image - C++

I want to perform obstacle detection for a ground robot by using a picture, taken by a drone, of the area the ground robot will cover. Since I have limited background in image processing, I am not sure how to carry this out. I tried the following method, but the result is not very accurate: it also detects very small edges, and it does not work well with aerial images.
#include <string>
#include <iostream>
#include <vector>
#include "opencv2/opencv.hpp"

using namespace std;
using namespace cv;

//----------------------------------------------------------
// MAIN
//----------------------------------------------------------
int main(int argc, char* argv[])
{
    Mat src;      // source image
    Mat gray;     // grayscale image
    Mat edges;    // edge map
    Mat dst;      // free-space mask
    Mat erosion;  // eroded mask
    Mat result;   // smoothed result

    //----------------------------------------------------------
    // Image loading
    //----------------------------------------------------------
    src = imread("C:/Users/HP/Desktop/SDP/obstacle detection/obstacle detection/obstacle detection/shapes.jpg");
    if (src.empty())
    {
        cerr << "Could not load the source image" << endl;
        return -1;
    }

    //----------------------------------------------------------
    // Specifying size and type of the intermediate images
    //----------------------------------------------------------
    edges   = Mat::zeros(src.size(), CV_8UC1);
    dst     = Mat::zeros(src.size(), CV_8UC1);
    gray    = Mat::zeros(src.size(), CV_8UC1);
    erosion = Mat::zeros(src.size(), CV_8UC1);
    result  = Mat::zeros(src.size(), CV_8UC1);

    //----------------------------------------------------------
    // Converting from BGR to grayscale
    //----------------------------------------------------------
    cvtColor(src, gray, COLOR_BGR2GRAY);

    //----------------------------------------------------------
    // Edge detection using the OpenCV Canny edge detector
    //----------------------------------------------------------
    Canny(gray, edges, 80, 255);

    //----------------------------------------------------------
    // Filling in the non-obstacle areas with white: scan each
    // column from the bottom up and stop at the first edge pixel
    //----------------------------------------------------------
    for (int i = 0; i < edges.cols; ++i)
    {
        int j = edges.rows - 1;
        for (; j > 0; --j)
        {
            if (edges.at<uchar>(j, i) > 0)
            {
                break;
            }
        }
        // Range is exclusive at the end, so use dst.rows to include the bottom row
        dst(Range(j, dst.rows), Range(i, i + 1)) = 255;
    }

    //----------------------------------------------------------
    // Applying erosion to remove noise
    //----------------------------------------------------------
    Mat element = getStructuringElement(MORPH_RECT, Size(10, 10));
    erode(dst, erosion, element);

    //----------------------------------------------------------
    // Smoothing the edges to get the result
    //----------------------------------------------------------
    GaussianBlur(erosion, result, Size(5, 5), 4);

    //----------------------------------------------------------
    // Displaying the intermediate and final resulting images
    //----------------------------------------------------------
    namedWindow("src", WINDOW_NORMAL);
    imshow("src", src);
    namedWindow("edges", WINDOW_NORMAL);
    imshow("edges", edges);
    namedWindow("dst", WINDOW_NORMAL);
    imshow("dst", dst);
    namedWindow("erosion", WINDOW_NORMAL);
    imshow("erosion", erosion);
    namedWindow("result", WINDOW_NORMAL);
    imshow("result", result);

    //----------------------------------------------------------
    // Wait for a key press
    //----------------------------------------------------------
    waitKey(0);
    destroyAllWindows();
    return 0;
}
The code takes in an image and converts it to grayscale. Canny edge detection is then used to detect the edges of all the objects in the image. The edge-detected image is filled with white, column by column, starting from the bottom until an edge is detected; the process continues until the whole image is covered. The result is a binary image with white for areas without obstacles and black for obstacles. The OpenCV function erode is then used to remove unnecessary noise.
I would really appreciate suggestions on how to improve this, or on any other technique.

I suggest thresholding the image for a color range matching the ground. This approach works well if the color of your ground does not change too much (which is the case in your src image). You might want to check out this OpenCV example (Python).
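A minimal C++ sketch of that idea, reusing the src image from the question; the HSV bounds are placeholders you would tune by sampling a few ground pixels from your aerial image:
cv::Mat hsv, groundMask, obstacleMask;
cv::cvtColor(src, hsv, cv::COLOR_BGR2HSV);
// Placeholder HSV range for the ground colour - adjust to your data
cv::inRange(hsv, cv::Scalar(10, 30, 30), cv::Scalar(40, 200, 255), groundMask);
cv::bitwise_not(groundMask, obstacleMask); // everything that is not ground counts as obstacle
// Remove speckle with a morphological opening
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7));
cv::morphologyEx(obstacleMask, obstacleMask, cv::MORPH_OPEN, kernel);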

Related

C++ OpenCV - Find biggest object in a webcam stream and sort it by size

My goal is to find the biggest contour in a captured webcam frame and then, once it is found, measure its size and decide whether it should be rejected or accepted.
To explain the objective of this project: I am currently working for a hygiene products manufacturer. We have a total of 6 workers responsible for sorting defective soap bars out of the production line, so in order to free this workforce for other activities, I am trying to write an algorithm to "replace" their eyes.
I've tried several methods along the way (findContours, SimpleBlobDetector, Canny, object tracking), but the problem I've been facing is that I can't seem to find a way to effectively find the biggest object in a webcam image, measure its size, and then decide whether to discard or accept it.
Below is my newest code to find the biggest contour in a webcam stream:
#include <iostream>
#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

int main(int argc, const char** argv)
{
    Mat src;
    Mat imgGrayScale;
    Mat imgCanny;
    Mat imgBlurred;

    /// Open the webcam
    VideoCapture capWebcam(0);
    if (capWebcam.isOpened() == false)
    {
        cout << "Could not open webcam!" << endl;
        return 0;
    }

    while (capWebcam.isOpened())
    {
        bool blnframe = capWebcam.read(src);
        if (!blnframe || src.empty())
        {
            cout << "Error! Frame not read!\n";
            break;
        }

        double largest_area = 0;
        int largest_contour_index = -1;
        Rect bounding_rect;
        Mat dst(src.rows, src.cols, CV_8UC1, Scalar::all(0));

        cvtColor(src, imgGrayScale, COLOR_BGR2GRAY);             // Convert to gray
        GaussianBlur(imgGrayScale, imgBlurred, Size(5, 5), 1.8); // Reduce noise before edge detection
        Canny(imgBlurred, imgCanny, 45, 90);                     // Edge map of the blurred gray image

        vector<vector<Point>> contours;  // Vector for storing contours
        vector<Vec4i> hierarchy;
        findContours(imgCanny, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE); // Find the contours in the image

        for (size_t i = 0; i < contours.size(); i++) // Iterate through each contour
        {
            double a = contourArea(contours[i], false); // Find the area of the contour
            if (a > largest_area)
            {
                largest_area = a;
                largest_contour_index = (int)i;            // Store the index of the largest contour
                bounding_rect = boundingRect(contours[i]); // Bounding rectangle of the biggest contour
            }
        }

        if (largest_contour_index >= 0) // Draw only if at least one contour was found
        {
            Scalar color(255, 255, 255);
            drawContours(dst, contours, largest_contour_index, color, FILLED, 8, hierarchy); // Draw the largest contour using the previously stored index
            rectangle(src, bounding_rect, Scalar(0, 255, 0), 1, 8, 0);
        }

        imshow("src", src);
        imshow("largest Contour", dst);
        if (waitKey(30) == 27) // Exit on ESC
            break;
    }
    return 0;
}
And here are the result windows that the program generates, along with the image of the object that I want to detect and sort.
Thank you all in advance for any clues on how to achieve my goal.
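For what it's worth, once the largest contour's area is known (largest_area in the loop above), the accept/reject decision can be a simple area window; a minimal sketch, where the bounds are hypothetical values that would need calibrating against a known-good soap bar:
const double minArea = 5000.0;  // hypothetical lower bound, in pixels
const double maxArea = 20000.0; // hypothetical upper bound, in pixels
bool accepted = (largest_area >= minArea && largest_area <= maxArea);
cout << (accepted ? "ACCEPT" : "REJECT") << " (area = " << largest_area << " px)" << endl;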

Segmentation of foreground from background

I'm currently working on a project that uses a Lacatan banana, and I would like to know how to further separate the foreground from the background:
I already got a segmented image of it using erosion, dilation, and thresholding only. The problem is that it is still not properly segmented.
Here is my code:
cv::Mat imggray, imgthresh, fg, bgt, bg;
cv::cvtColor(src, imggray, cv::COLOR_BGR2GRAY);              // Convert the image from BGR to grayscale
cv::threshold(imggray, imgthresh, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU); // Inverted binary image via Otsu
cv::erode(imgthresh, fg, cv::Mat(), cv::Point(-1, -1), 1);   // Erode the binary image to get sure-foreground markers
cv::dilate(imgthresh, bgt, cv::Mat(), cv::Point(-1, -1), 4); // Dilate the binary image to shrink the background region
cv::threshold(bgt, bg, 1, 128, cv::THRESH_BINARY);           // Mark the background region with the value 128
cv::Mat markers = cv::Mat::zeros(src.size(), CV_32SC1);      // Markers with the same size as the source image, 32-bit single channel
cv::add(fg, bg, markers);                                    // Combine the foreground and background markers
cv::Mat mask = cv::Mat::zeros(markers.size(), CV_8UC1);
markers.convertTo(mask, CV_8UC1);                            // Convert the 32-bit single-channel markers to 8-bit single channel
cv::Mat mthresh;
cv::threshold(mask, mthresh, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU); // Threshold the mask again to reduce noise
// cv::erode(mthresh, mthresh, cv::Mat(), cv::Point(-1, -1), 2);
cv::Mat result;
cv::bitwise_and(src, src, result, mthresh);                  // Use the mask to extract the banana from the background
for (int x = 0; x < result.rows; x++) {                      // Change the black background to white
    for (int y = 0; y < result.cols; y++) {
        if (result.at<cv::Vec3b>(x, y) == cv::Vec3b(0, 0, 0)) {
            result.at<cv::Vec3b>(x, y)[0] = 255;
            result.at<cv::Vec3b>(x, y)[1] = 255;
            result.at<cv::Vec3b>(x, y)[2] = 255;
        }
    }
}
This is my result:
As the background is nearly gray, try using the Hue and Saturation channels instead of the grayscale image.
You can get them easily:
cv::Mat hsv;
cv::cvtColor(src, hsv, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> channels;
cv::split(hsv, channels); // split the HSV image, not the BGR source
cv::Mat hue = channels[0];
cv::Mat saturation = channels[1];

// If you want to combine those channels, use this code.
cv::Mat hs = cv::Mat::zeros(src.size(), CV_8U);
for (int r = 0; r < src.rows; r++) {
    for (int c = 0; c < src.cols; c++) {
        int hp = hue.at<uchar>(r, c);
        int sp = saturation.at<uchar>(r, c);
        hs.at<uchar>(r, c) = static_cast<uchar>((hp + sp) >> 1); // mean of hue and saturation
    }
}
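As a side note, the per-pixel loop above can be replaced by a single cv::addWeighted call:
// Average the hue and saturation channels without an explicit loop
cv::Mat hs;
cv::addWeighted(hue, 0.5, saturation, 0.5, 0.0, hs);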
adaptiveThreshold() should work better than a plain level-cut threshold(), because it does not consider absolute intensity levels, but rather the change in intensity in a small area around the point being checked.
Try replacing your thresholding with an adaptive one.
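A minimal sketch of that change, reusing imggray from the question; blockSize (51) and C (5) are assumed values to tune:
// Per-pixel threshold = mean of the local neighbourhood minus C,
// so uneven lighting matters much less than with a global threshold
cv::Mat imgthresh_adaptive;
cv::adaptiveThreshold(imggray, imgthresh_adaptive, 255,
                      cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY_INV,
                      51, 5);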
Use a top-hat transform instead of just erosion/dilation; it will take care of the background variations at the same time.
Then, in your case, a simple thresholding should be good enough to get an accurate segmentation. Otherwise, you can couple it with a watershed.
(I will share some images asap.)
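A minimal sketch of the top-hat idea, reusing imggray from the question; the kernel size is an assumption to tune, and for a dark object on a bright background the black-hat variant (cv::MORPH_BLACKHAT) is the one to use:
// Top-hat = image minus its morphological opening: it flattens slow
// background variation while keeping small bright structures
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(31, 31)); // assumed size
cv::Mat tophat, tophatBw;
cv::morphologyEx(imggray, tophat, cv::MORPH_TOPHAT, kernel);
cv::threshold(tophat, tophatBw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);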
Thanks guys, I tried to apply your advice and was able to come up with this.
However, as you can see, there are still bits of the background. Any ideas how to "clean" these further? I tried thresholding further, but it would still keep the bits. The code I came up with is below; I apologize in advance if the variables and coding style are somewhat confusing, as I didn't have time to properly sort them.
#include <stdio.h>
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;
using namespace std;

const Scalar COLOR_MAX(65, 255, 255);
const Scalar COLOR_MIN(15, 45, 45);

int main(int argc, char** argv)
{
    Mat src, hsv_img, mask, gray_img, initial_thresh;
    Mat second_thresh, add_res, and_thresh, xor_thresh;
    Mat result_thresh, rr_thresh, final_thresh;

    // Load source image
    src = imread("sample11.jpg");
    if (src.empty())
    {
        cout << "Could not load sample11.jpg" << endl;
        return -1;
    }
    imshow("Original Image", src);

    cvtColor(src, hsv_img, COLOR_BGR2HSV);
    imshow("HSV Image", hsv_img);
    //imwrite("HSV Image.jpg", hsv_img);

    inRange(hsv_img, COLOR_MIN, COLOR_MAX, mask);
    imshow("Mask Image", mask);

    cvtColor(src, gray_img, COLOR_BGR2GRAY);
    adaptiveThreshold(gray_img, initial_thresh, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY_INV, 257, 2);
    imshow("AdaptiveThresh Image", initial_thresh);

    add(mask, initial_thresh, add_res);
    erode(add_res, add_res, Mat(), Point(-1, -1), 1);
    dilate(add_res, add_res, Mat(), Point(-1, -1), 5);
    imshow("Bitwise Res", add_res);

    threshold(gray_img, second_thresh, 170, 255, THRESH_BINARY_INV | THRESH_OTSU);
    imshow("ThreshImage", second_thresh);

    bitwise_and(add_res, second_thresh, and_thresh);
    imshow("andthresh", and_thresh);

    bitwise_xor(add_res, second_thresh, xor_thresh);
    imshow("xorthresh", xor_thresh);

    bitwise_or(and_thresh, xor_thresh, result_thresh);
    imshow("Result image", result_thresh);

    bitwise_and(add_res, result_thresh, final_thresh);
    imshow("Final Thresh", final_thresh);
    erode(final_thresh, final_thresh, Mat(), Point(-1, -1), 5);

    bitwise_and(src, src, rr_thresh, final_thresh);
    imshow("Segmented Image", rr_thresh);
    imwrite("Segmented Image.jpg", rr_thresh);

    waitKey(0);
    return 0;
}

Drawing Rectangle around difference area

I have a question which I am unable to resolve. I am taking the difference of two images using OpenCV and getting the output in a separate Mat. The difference method used is absdiff. Here is the code.
Mat img1 = imread(image_path1, IMREAD_COLOR); // note: imread's second argument is an imread flag, not a colour-conversion code
Mat img2 = imread(image_path2, IMREAD_COLOR);
/*cvtColor(img1, img1, COLOR_BGR2GRAY);
cvtColor(img2, img2, COLOR_BGR2GRAY);*/

cv::Mat diffImage;
cv::absdiff(img2, img1, diffImage);

// Single channel, since it is written below as unsigned char
cv::Mat foregroundMask = cv::Mat::zeros(diffImage.rows, diffImage.cols, CV_8UC1);
float threshold = 30.0f;
float dist;

for (int j = 0; j < diffImage.rows; ++j)
{
    for (int i = 0; i < diffImage.cols; ++i)
    {
        cv::Vec3b pix = diffImage.at<cv::Vec3b>(j, i);
        dist = (pix[0] * pix[0] + pix[1] * pix[1] + pix[2] * pix[2]);
        dist = sqrt(dist);
        if (dist > threshold)
        {
            foregroundMask.at<unsigned char>(j, i) = 255;
        }
    }
}

imwrite("D:/outputer/d.jpg", diffImage);
I want to bound the difference part in a rectangle. findContours is drawing too many contours, but I only need a particular portion. My diff image is:
I want to draw a single rectangle around all the five dials.
Please point me in the right direction.
Regards,
I would search for the highest value of the i index giving a non-black pixel; that's the right border.
The lowest non-black i is the left border. Similarly for j.
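A minimal sketch of that scan, assuming the foregroundMask computed in the question (non-zero pixels mark a difference):
// Track the extreme non-black coordinates; together they define the box
int left = foregroundMask.cols, right = -1;
int top = foregroundMask.rows, bottom = -1;
for (int j = 0; j < foregroundMask.rows; ++j)
{
    for (int i = 0; i < foregroundMask.cols; ++i)
    {
        if (foregroundMask.at<unsigned char>(j, i) > 0)
        {
            if (i < left)   left = i;
            if (i > right)  right = i;
            if (j < top)    top = j;
            if (j > bottom) bottom = j;
        }
    }
}
if (right >= left) // at least one non-black pixel was found
{
    cv::Rect box(left, top, right - left + 1, bottom - top + 1);
    cv::rectangle(diffImage, box, cv::Scalar(0, 0, 255), 2);
}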
You can:
1) Binarize the image with a threshold; the background will be 0.
2) Use findNonZero to retrieve all points that are not 0, i.e. all foreground points.
3) Use boundingRect on the retrieved points.
Result:
Code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Load image (grayscale)
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Binarize image
    Mat1b bin = img > 70;

    // Find non-black points
    std::vector<Point> points;
    findNonZero(bin, points);

    // Get bounding rect
    Rect box = boundingRect(points);

    // Draw (in color)
    Mat3b out;
    cvtColor(img, out, COLOR_GRAY2BGR);
    rectangle(out, box, Scalar(0, 255, 0), 3);

    // Show
    imshow("Result", out);
    waitKey();

    return 0;
}
Find contours; it will output a set of contours as std::vector<std::vector<cv::Point>>, let us call it contours:
std::vector<cv::Point> all_points;
size_t points_count{0};
for (const auto& contour : contours) {
    points_count += contour.size();
}
all_points.reserve(points_count); // note: reserve takes the point count, not the vector
for (const auto& contour : contours) {
    std::copy(contour.begin(), contour.end(),
              std::back_inserter(all_points));
}
auto bounding_rectangle = cv::boundingRect(all_points);

Detection of objects in nonuniform illumination in OpenCV C++

I am performing feature detection in a video/live stream/image using OpenCV C++. The lighting condition varies in different parts of the video, leading to some parts getting ignored while transforming the RGB images to binary images.
The lighting condition in a particular portion of the video also changes over the course of the video. I tried the 'Histogram equalization' function, but it didn't help.
I got a working solution in MATLAB in the following link:
http://in.mathworks.com/help/images/examples/correcting-nonuniform-illumination.html
However, most of the functions used in the above link aren't available in OpenCV.
Can you suggest the alternative of this MATLAB code in OpenCV C++?
OpenCV has the adaptive threshold paradigm available in the framework: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#adaptivethreshold
The function prototype looks like:
void adaptiveThreshold(InputArray src, OutputArray dst,
double maxValue, int adaptiveMethod,
int thresholdType, int blockSize, double C);
The first two parameters are the input image and a place to store the output thresholded image. maxValue is the thresholded value assigned to an output pixel should it pass the criteria, adaptiveMethod is the method to use for adaptive thresholding, thresholdType is the type of thresholding you want to perform (more later), blockSize is the size of the windows to examine (more later), and C is a constant to subtract from each window. I've never really needed to use this and I usually set this to 0.
The default method for adaptiveThreshold is to analyze blockSize x blockSize windows and calculate the mean intensity within each window, minus C. If the centre of the window is above this mean intensity, the corresponding location in the output image is set to maxValue, else the same position is set to 0. This combats the non-uniform illumination issue: instead of applying a global threshold to the image, you perform the thresholding on local pixel neighbourhoods.
You can read the documentation on the other methods for the other parameters, but to get you started, you can do something like this:
// Include libraries
#include <opencv2/opencv.hpp>

// For convenience
using namespace cv;

// Example function to adaptively threshold an image
void threshold()
{
    // Load in an image - Change "image.jpg" to whatever your image is called
    Mat image;
    image = imread("image.jpg", IMREAD_COLOR);

    // Convert image to grayscale and show the image
    // Wait for user key before continuing
    Mat gray_image;
    cvtColor(image, gray_image, COLOR_BGR2GRAY);
    namedWindow("Gray image", WINDOW_AUTOSIZE);
    imshow("Gray image", gray_image);
    waitKey(0);

    // Adaptive threshold the image
    int maxValue = 255;
    int blockSize = 25;
    int C = 0;
    adaptiveThreshold(gray_image, gray_image, maxValue,
                      ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY,
                      blockSize, C);

    // Show the thresholded image
    // Wait for user key before continuing
    namedWindow("Thresholded image", WINDOW_AUTOSIZE);
    imshow("Thresholded image", gray_image);
    waitKey(0);
}

// Main function - Run the threshold function
int main(int argc, const char** argv)
{
    threshold();
    return 0;
}
adaptiveThreshold should be your first choice.
But here I report the "translation" from Matlab to OpenCV, so you can easily port your code. As you see, most of the functions are available both in Matlab and OpenCV.
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    // Step 1: Read Image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Step 2: Use Morphological Opening to Estimate the Background
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15, 15));
    Mat1b background;
    morphologyEx(img, background, MORPH_OPEN, kernel);

    // Step 3: Subtract the Background Image from the Original Image
    Mat1b img2;
    absdiff(img, background, img2);

    // Step 4: Increase the Image Contrast
    // Not needed here; the equivalent would be cv::equalizeHist

    // Step 5(1): Threshold the Image
    Mat1b bw;
    threshold(img2, bw, 50, 255, THRESH_BINARY);

    // Step 6: Identify Objects in the Image
    vector<vector<Point>> contours;
    findContours(bw.clone(), contours, RETR_LIST, CHAIN_APPROX_NONE);

    for (size_t i = 0; i < contours.size(); ++i)
    {
        // Step 5(2): bwareaopen - keep only contours with enough points
        if (contours[i].size() > 50)
        {
            // Step 7: Examine One Object
            Mat1b object(bw.size(), uchar(0));
            drawContours(object, contours, (int)i, Scalar(255), FILLED);
            imshow("Single Object", object);
            waitKey();
        }
    }

    return 0;
}

Image edge smoothing with OpenCV

I am trying to smooth the edges of an output image using the OpenCV framework, following the steps below. The steps are taken from here: https://stackoverflow.com/a/17175381/790842
int lowThreshold = 10;
int ratio = 3;
int kernel_size = 3;
Mat src_gray, detected_edges, dst, blurred;

/// Convert the image to grayscale
cvtColor(result, src_gray, COLOR_BGR2GRAY);

/// Reduce noise with a 5x5 kernel
cv::blur(src_gray, detected_edges, cv::Size(5, 5));

/// Canny detector
cv::Canny(detected_edges, detected_edges, lowThreshold, lowThreshold * ratio, kernel_size);
// Works fine up to here; I am getting a perfect edge mask

cv::dilate(detected_edges, blurred, result);
// I get "Assertion failed (src.channels() == 1 && func != 0) in countNonZero" ERROR while doing dilate

result.copyTo(blurred, blurred);
cv::blur(blurred, blurred, cv::Size(3, 3));
blurred.copyTo(result, detected_edges);

UIImage *image = [UIImageCVMatConverter UIImageFromCVMat:result];
I would like to know whether I am going the right way, or what I am missing.
Thanks for any suggestions and help.
Updated:
I have got an image like the one below from the GrabCut algorithm; now I want to apply edge smoothing to it, and as you can see the image is not smooth.
Do you want to get something like this?
If yes, then here is the code:
#include <iostream>
#include <vector>
#include <string>
#include <fstream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char **argv)
{
    cv::namedWindow("result");
    Mat img = imread("TestImg.png");
    Mat whole_image = imread("D:\\ImagesForTest\\lena.jpg");
    whole_image.convertTo(whole_image, CV_32FC3, 1.0 / 255.0);
    cv::resize(whole_image, whole_image, img.size());
    img.convertTo(img, CV_32FC3, 1.0 / 255.0);

    Mat bg = Mat(img.size(), CV_32FC3);
    bg = Scalar(1.0, 1.0, 1.0);

    // Prepare mask
    Mat mask;
    Mat img_gray;
    cv::cvtColor(img, img_gray, cv::COLOR_BGR2GRAY);
    img_gray.convertTo(mask, CV_32FC1);
    threshold(1.0 - mask, mask, 0.9, 1.0, cv::THRESH_BINARY_INV);

    cv::GaussianBlur(mask, mask, Size(21, 21), 11.0);
    imshow("result", mask);
    cv::waitKey(0);

    // Re-get the image fragment with the smoothed mask
    Mat res;
    vector<Mat> ch_img(3);
    vector<Mat> ch_bg(3);
    cv::split(whole_image, ch_img);
    cv::split(bg, ch_bg);
    ch_img[0] = ch_img[0].mul(mask) + ch_bg[0].mul(1.0 - mask);
    ch_img[1] = ch_img[1].mul(mask) + ch_bg[1].mul(1.0 - mask);
    ch_img[2] = ch_img[2].mul(mask) + ch_bg[2].mul(1.0 - mask);
    cv::merge(ch_img, res);
    cv::merge(ch_bg, bg);

    imshow("result", res);
    cv::waitKey(0);
    cv::destroyAllWindows();
    return 0;
}
And I think this link will be interesting for you too: Poisson Blending
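As an aside, OpenCV 3.0 and later ship Poisson blending directly as cv::seamlessClone in the photo module; a minimal sketch, with placeholder file names:
#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>

int main()
{
    // Placeholder file names: a source patch, a destination image, and a
    // white-on-black mask selecting the patch region
    cv::Mat src = cv::imread("patch.png");
    cv::Mat dst = cv::imread("background.png");
    cv::Mat mask = cv::imread("mask.png");
    if (src.empty() || dst.empty() || mask.empty())
        return -1;

    // Centre of the patch in destination coordinates
    cv::Point center(dst.cols / 2, dst.rows / 2);
    cv::Mat blended;
    cv::seamlessClone(src, dst, mask, center, blended, cv::NORMAL_CLONE);

    cv::imshow("blended", blended);
    cv::waitKey(0);
    return 0;
}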
I have followed the steps below to smooth the edges of the foreground I got from GrabCut (a sketch of steps 1-3 follows after the list).
1) Create a binary image from the mask I got from GrabCut.
2) Find the contour of the binary image.
3) Create an edge mask by drawing the contour points; it gives the boundary edges of the foreground image I got from GrabCut.
4) Then follow the steps defined in https://stackoverflow.com/a/17175381/790842
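A minimal sketch of steps 1-3, assuming grabcutMask is the CV_8UC1 mask returned by cv::grabCut:
// Step 1: binary image from the GrabCut mask (definite + probable foreground)
cv::Mat binMask = (grabcutMask == cv::GC_FGD) | (grabcutMask == cv::GC_PR_FGD);
// Step 2: find the contours of the binary image
std::vector<std::vector<cv::Point>> contours;
cv::findContours(binMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
// Step 3: draw the contour points as a thick band to get the edge mask
cv::Mat edgeMask = cv::Mat::zeros(binMask.size(), CV_8UC1);
cv::drawContours(edgeMask, contours, -1, cv::Scalar(255), 3);
// edgeMask now covers the foreground boundary and can be fed into the
// blur-and-copy steps from the linked answer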