Detection of objects in nonuniform illumination in opencv C++ - c++

I am performing feature detection in a video/live stream/image using OpenCV C++. The lighting condition varies in different parts of the video, leading to some parts getting ignored while transforming the RGB images to binary images.
The lighting condition in a particular portion of the video also changes over the course of the video. I tried the 'Histogram equalization' function, but it didn't help.
I got a working solution in MATLAB in the following link:
http://in.mathworks.com/help/images/examples/correcting-nonuniform-illumination.html
However, most of the functions used in the above link aren't available in OpenCV.
Can you suggest the alternative of this MATLAB code in OpenCV C++?

OpenCV has adaptive thresholding available in the framework: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#adaptivethreshold
The function prototype looks like:
void adaptiveThreshold(InputArray src, OutputArray dst,
                       double maxValue, int adaptiveMethod,
                       int thresholdType, int blockSize, double C);
The first two parameters are the input image and a place to store the output thresholded image. maxValue is the value assigned to an output pixel that passes the criterion, adaptiveMethod is the method to use for adaptive thresholding, thresholdType is the type of thresholding you want to perform (more later), blockSize is the size of the windows to examine (more later), and C is a constant subtracted from each window's statistic. I've rarely needed anything other than 0 here.
The default method, ADAPTIVE_THRESH_MEAN_C, analyzes blockSize x blockSize windows and computes the mean intensity within each window minus C as the local threshold. If the pixel at the centre of the window is above this local threshold, the corresponding location in the output image is set to maxValue; otherwise it is set to 0. This should combat the non-uniform illumination issue: instead of applying one global threshold to the image, you perform the thresholding on local pixel neighbourhoods.
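Concretely, for ADAPTIVE_THRESH_MEAN_C with THRESH_BINARY, the per-pixel rule is (a sketch of the documented behaviour):

dst(x, y) = maxValue   if src(x, y) > mean(blockSize x blockSize neighbourhood of (x, y)) - C
dst(x, y) = 0          otherwise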
You can read the documentation on the other methods for the other parameters, but to get you started, you can do something like this:
// Include libraries
#include <opencv2/opencv.hpp>

// For convenience
using namespace cv;

// Example function to adaptive threshold an image
void threshold()
{
    // Load in an image - Change "image.jpg" to whatever your image is called
    Mat image = imread("image.jpg", IMREAD_COLOR);

    // Convert image to grayscale and show the image
    // Wait for user key before continuing
    Mat gray_image;
    cvtColor(image, gray_image, COLOR_BGR2GRAY);
    namedWindow("Gray image", WINDOW_AUTOSIZE);
    imshow("Gray image", gray_image);
    waitKey(0);

    // Adaptive threshold the image
    int maxValue = 255;
    int blockSize = 25;
    int C = 0;
    adaptiveThreshold(gray_image, gray_image, maxValue,
                      ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY,
                      blockSize, C);

    // Show the thresholded image
    // Wait for user key before continuing
    namedWindow("Thresholded image", WINDOW_AUTOSIZE);
    imshow("Thresholded image", gray_image);
    waitKey(0);
}
// Main function - Run the threshold function
int main(int argc, const char** argv)
{
    threshold();
    return 0;
}

adaptiveThreshold should be your first choice.
But here I report the "translation" from MATLAB to OpenCV, so you can easily port your code. As you'll see, most of the functions are available in both MATLAB and OpenCV.
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    // Step 1: Read Image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Step 2: Use Morphological Opening to Estimate the Background
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15, 15));
    Mat1b background;
    morphologyEx(img, background, MORPH_OPEN, kernel);

    // Step 3: Subtract the Background Image from the Original Image
    Mat1b img2;
    absdiff(img, background, img2);

    // Step 4: Increase the Image Contrast
    // Not needed here; the equivalent would be cv::equalizeHist

    // Step 5(1): Threshold the Image
    Mat1b bw;
    threshold(img2, bw, 50, 255, THRESH_BINARY);

    // Step 6: Identify Objects in the Image
    vector<vector<Point>> contours;
    findContours(bw.clone(), contours, RETR_LIST, CHAIN_APPROX_NONE);

    for (size_t i = 0; i < contours.size(); ++i)
    {
        // Step 5(2): bwareaopen
        if (contours[i].size() > 50)
        {
            // Step 7: Examine One Object
            Mat1b object(bw.size(), uchar(0));
            drawContours(object, contours, (int)i, Scalar(255), FILLED);
            imshow("Single Object", object);
            waitKey();
        }
    }
    return 0;
}
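One caveat on Step 5(2): MATLAB's bwareaopen filters by area, while contours[i].size() is the number of boundary points (roughly the perimeter). For a closer match you can filter on cv::contourArea instead; a sketch to drop into the loop above (the 50-pixel minimum area is an assumption to tune):

// Step 5(2), closer to bwareaopen: filter by enclosed area, not boundary length
double area = contourArea(contours[i]);
if (area > 50) // hypothetical minimum area, as in bwareaopen(bw, 50)
{
    // Step 7: examine the object as before
}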

Related

OpenCV FAST Algorithm creating skewed keypoints on only part of an image

I'm trying to use OpenCV's FAST corner detection algorithm to get an outline of an image of a ball (Not my final project, I'm using it as a simple example). For some reason, it only works on a third of the input Mat, and stretches the Keypoints across the image. I'm not sure as to what could be going wrong here to make the FAST algorithm not apply to the entire Mat.
Code:
void featureDetection(const Mat& imgIn, std::vector<KeyPoint>& pointsOut) {
    int fast_threshold = 20;
    bool nonmaxSuppression = true;
    FAST(imgIn, pointsOut, fast_threshold, nonmaxSuppression);
}

int main(int argc, char** argv) {
    Mat out = imread("ball.jpg", IMREAD_COLOR);

    // Detect features
    std::vector<KeyPoint> keypoints;
    featureDetection(out.clone(), keypoints);
    Mat out2 = out.clone();

    // Draw features (Normal, missing right side)
    for (KeyPoint p : keypoints) {
        drawMarker(out, Point(p.pt.x / 3, p.pt.y), Scalar(0, 255, 0));
    }
    imwrite("out.jpg", out, std::vector<int>(0));

    // Draw features (Stretched)
    for (KeyPoint p : keypoints) {
        drawMarker(out2, Point(p.pt.x, p.pt.y), Scalar(127, 0, 255));
    }
    imwrite("out2.jpg", out2, std::vector<int>(0));
}
Input image
Output 1 (keypoint.x multiplied by a factor of 1/3, but missing right side)
Output 2 (Coordinates untouched)
I'm using OpenCV 4.5.4 on MinGW.
Most keypoint detectors use grayscale images as input.
If you interpret the memory of a BGR image as grayscale, you will have 3 times the number of pixels per row. The y axis is still OK if the algorithm uses the row stride (width offset per row), which most algorithms do, because that is needed when subimaging or padding is used.
I don't know whether it is a bug or a feature that FAST doesn't check the number of channels and doesn't throw an exception if the wrong number of channels is given.
You can convert the image to grayscale with cv::cvtColor using the flag cv::COLOR_BGR2GRAY.
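A minimal sketch of the fix, reusing the file name and FAST parameters from the question (the output name "out_fixed.jpg" is just a placeholder):

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int main()
{
    // Load in color as before, then convert to grayscale before detection
    Mat out = imread("ball.jpg", IMREAD_COLOR);
    Mat gray;
    cvtColor(out, gray, COLOR_BGR2GRAY);

    // FAST now sees a single-channel image with the correct geometry
    std::vector<KeyPoint> keypoints;
    FAST(gray, keypoints, 20, true);

    // Draw the keypoints with untouched coordinates
    for (const KeyPoint& p : keypoints) {
        drawMarker(out, Point((int)p.pt.x, (int)p.pt.y), Scalar(0, 255, 0));
    }
    imwrite("out_fixed.jpg", out);
    return 0;
}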

Obstacle detection for ground robot using aerial image

I want to perform obstacle detection for a ground robot using a picture taken by a drone of the area the robot will cover. Since I have a limited background in image processing, I am not sure how to carry this out. I tried the following method, but the result is not very accurate: it also detects very small edges, and it does not work well with aerial images.
#include <string>
#include <iostream>
#include <vector>
#include "opencv2/opencv.hpp"

using namespace std;
using namespace cv;

//----------------------------------------------------------
// MAIN
//----------------------------------------------------------
int main(int argc, char* argv[])
{
    Mat src;        // source image
    Mat gray;       // grayscale image
    Mat edges;      // edges image
    Mat dst;        // destination image
    Mat erosion;    // eroded image
    Mat result;     // smoothed result

    //----------------------------------------------------------
    // Image loading
    //----------------------------------------------------------
    src = imread("C:/Users/HP/Desktop/SDP/obstacle detection/obstacle detection/obstacle detection/shapes.jpg");

    //----------------------------------------------------------
    // Specifying size and type of image
    //----------------------------------------------------------
    edges = Mat::zeros(src.size(), CV_8UC1);
    dst = Mat::zeros(src.size(), CV_8UC1);
    gray = Mat::zeros(src.size(), CV_8UC1);
    erosion = Mat::zeros(src.size(), CV_8UC1);
    result = Mat::zeros(src.size(), CV_8UC1);

    //----------------------------------------------------------
    // Converting from BGR to grayscale
    //----------------------------------------------------------
    cvtColor(src, gray, COLOR_BGR2GRAY);

    //----------------------------------------------------------
    // Edge detection using the OpenCV Canny edge detector
    //----------------------------------------------------------
    Canny(gray, edges, 80, 255);

    //----------------------------------------------------------
    // Filling in the non-obstacle areas with white:
    // scan each column from the bottom up until an edge is hit
    //----------------------------------------------------------
    for (int i = 0; i < edges.cols; ++i)
    {
        int j;
        for (j = edges.rows - 1; j > 0; --j)
        {
            if (edges.at<uchar>(j, i) > 0)
            {
                break;
            }
        }
        dst(Range(j, dst.rows), Range(i, i + 1)) = 255;
    }

    //----------------------------------------------------------
    // Applying the erosion function to remove noise
    //----------------------------------------------------------
    Mat element = getStructuringElement(MORPH_RECT, Size(10, 10));
    erode(dst, erosion, element);

    //----------------------------------------------------------
    // Smoothing the edges to get the result
    //----------------------------------------------------------
    GaussianBlur(erosion, result, Size(5, 5), 4);

    //----------------------------------------------------------
    // Displaying the intermediate and final resulting images
    //----------------------------------------------------------
    namedWindow("src", WINDOW_NORMAL);
    imshow("src", src);
    namedWindow("edges", WINDOW_NORMAL);
    imshow("edges", edges);
    namedWindow("dst", WINDOW_NORMAL);
    imshow("dst", dst);
    namedWindow("erosion", WINDOW_NORMAL);
    imshow("erosion", erosion);
    namedWindow("result", WINDOW_NORMAL);
    imshow("result", result);

    //----------------------------------------------------------
    // Wait for key press
    //----------------------------------------------------------
    waitKey(0);
    destroyAllWindows();
    return 0;
}
The code takes in an image and converts it to grayscale. Canny edge detection is then used to detect the edges of all the objects in the image. The edge-detected image is filled with white color, column by column, starting from the bottom until an edge is detected; the process continues until the whole image is covered. The result is a binary image with white for areas without obstacles and black for obstacles. The OpenCV function erode is then used to remove unnecessary noise.
I would really appreciate it if I get suggestions on how to improve this or use any other technique.
I suggest thresholding the image for a color range matching the ground. This approach works well if the color of your ground does not change too much (which is the case in your src image). You might want to check out this OpenCV example (Python); a C++ sketch of the same idea follows below.
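A minimal C++ sketch of that idea (the HSV bounds and the file name "aerial.jpg" are placeholders; tune them to the actual ground color):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat src = imread("aerial.jpg");
    Mat hsv, groundMask;
    cvtColor(src, hsv, COLOR_BGR2HSV);

    // Hypothetical HSV range for the ground: everything inside the range
    // is classified as ground (white), everything else as obstacle (black)
    inRange(hsv, Scalar(10, 40, 40), Scalar(30, 255, 255), groundMask);

    // Clean up speckles with a morphological opening
    Mat kernel = getStructuringElement(MORPH_RECT, Size(5, 5));
    morphologyEx(groundMask, groundMask, MORPH_OPEN, kernel);

    namedWindow("ground mask", WINDOW_NORMAL);
    imshow("ground mask", groundMask);
    waitKey(0);
    return 0;
}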

OpenCv findcontours() too much contours

I have a shape that I want to extract contours from (I need the number of contours to be right: two), but in the hierarchy I get 4 or more contours instead of two. I just can't see why; the shape is obvious, there is no noise, and I used dilation and erosion beforehand.
I tried changing all the parameters, and nothing worked. I also tried with an image of a white square and that didn't work either. These are the relevant lines:
Mat I = imread("test.png", IMREAD_GRAYSCALE);
Mat B;
I.convertTo(B, CV_8U);
vector<vector<Point>> contour_vec;
vector<Vec4i> hierarchy;
findContours(B, contour_vec, hierarchy, RETR_TREE, CHAIN_APPROX_NONE);
Why is the contour so disconnected? What can I do to get 2 contours in the hierarchy?
In your image there are 5 contours: 2 external contours, 2 internal contours and 1 on the top right.
You can distinguish internal and external contours by checking whether they are oriented CW or CCW. You can do this with contourArea with the oriented flag:
oriented – Oriented area flag. If it is true, the function returns a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine orientation of a contour by taking the sign of an area. By default, the parameter is false, which means that the absolute value is returned.
So, drawing external contours in red, and internal in green, you get:
You can then store only external contours (see externalContours) in the code below:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    // Load grayscale image
    Mat1b B = imread("path_to_image", IMREAD_GRAYSCALE);

    // Find contours
    vector<vector<Point>> contours;
    findContours(B.clone(), contours, RETR_TREE, CHAIN_APPROX_NONE);

    // Create output image
    Mat3b out;
    cvtColor(B, out, COLOR_GRAY2BGR);

    vector<vector<Point>> externalContours;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        // Find orientation: CW or CCW
        double area = contourArea(contours[i], true);
        if (area >= 0)
        {
            // Internal contours
            drawContours(out, contours, (int)i, Scalar(0, 255, 0));
        }
        else
        {
            // External contours
            drawContours(out, contours, (int)i, Scalar(0, 0, 255));

            // Save external contours
            externalContours.push_back(contours[i]);
        }
    }

    imshow("Out", out);
    waitKey();
    return 0;
}
Please remember that findContours historically corrupted the input image (the second image you're showing is that garbage). Just pass a clone of the image to findContours to avoid corrupting the original. (Since OpenCV 3.2, findContours no longer modifies the source image, but cloning remains a safe habit.)
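As a side note: if you only ever need the outer boundaries, findContours can retrieve just those directly with the RETR_EXTERNAL retrieval mode, which avoids the orientation check entirely:

// Retrieves only the extreme outer contours
findContours(B.clone(), externalContours, RETR_EXTERNAL, CHAIN_APPROX_NONE);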

OpenCV filter2D inside non rectangular ROI

I have an image in which only a small non-rectangular portion is useful. I have a binary mask indicating the useful ROI.
How can I apply cv::filter2D in OpenCV to that ROI only, defined by the binary mask?
Edit
My pixels outside the ROI have a value of 0. The others have float values of around 300-500, so the problem with filter2D at the borders of the ROI is the high-value transitions.
It would also be acceptable to just set the pixel values outside the ROI to the value of the nearest pixel inside the ROI, similar to cv::BORDER_REPLICATE.
Maybe something like this: blur the masked image and the mask separately, then divide the former by the latter. Near the mask border the result is then renormalized by the fraction of the window that actually lies inside the mask, so there should be no problem at the border:
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
Mat m = imread("f:/lib/opencv/samples/data/lena.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat mask=Mat::zeros(m.size(), CV_8UC1),maskBlur,mc;
// mask is a disk
circle(mask, Point(200, 200), 100, Scalar(255),-1);
Mat negMask;
// neg mask
bitwise_not(mask, negMask);
circle(mask, Point(200, 200), 100, Scalar(255), -1);
Mat md,mdBlur,mdint;
m.copyTo(md);
// All pixels outside mask set to 0
md.setTo(0, negMask);
imshow("mask image", md);
// Convert image to int
md.convertTo(mdint, CV_32S);
Size fxy(13, 13);
blur(mdint, mdBlur, fxy);
mdBlur.convertTo(mc, CV_8U);
imshow("Blur without mask", mc);
imwrite("blurwithoutmask.jpg",mc);
mask.convertTo(maskBlur, CV_32S);
// blur mask
blur(maskBlur, maskBlur, fxy);
Mat mskB;
mskB.setTo(1, negMask);
divide(mdBlur,maskBlur/255,mdBlur);
mdBlur.convertTo(mc, CV_8U);
imshow("Blur with mask", mc);
imwrite("blurwithmask.jpg",mc);
waitKey();
}
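The same renormalization should carry over to cv::filter2D with an arbitrary non-negative kernel: filter the masked image and the [0, 1] mask with the same kernel, then divide. A sketch reusing mdFloat and mask from the code above (the box kernel is just a placeholder):

// Hypothetical kernel; any non-negative kernel works the same way
Mat kernel = Mat::ones(13, 13, CV_32F) / (13.0f * 13.0f);
Mat imgF, maskF;
filter2D(mdFloat, imgF, CV_32F, kernel);
mask.convertTo(maskF, CV_32F, 1.0 / 255.0);
filter2D(maskF, maskF, CV_32F, kernel);
// Kernel-weighted mean over masked pixels only
divide(imgF, maskF, imgF);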

OpenCV histogram of an irregular shape

I'm comparatively new to OpenCV. I was wondering whether it is possible to get the histogram of a contour (which can be a perfect rectangle or irregular in shape) found by findContours.
Thanks in advance.
Edit:
This is exactly what I'm trying to achieve. I want to analyse the area inside the contour to detect defects (by analyzing the histogram of that area?) and declare the piece defective or good. Images attached.
Good sample. (Contour detected is outlined in gray color)
Defective sample. (defect around top left corner)
You may be misusing the histogram.
The contour of an image is a binary-valued, colorless matrix which does not represent the grayscale levels of pixels, but the boundaries.
Meanwhile, a histogram is a tool for analyzing how the grayscale values of pixels are distributed in your 2D image, isn't it?
So why would you want to profile the histogram of a binary-valued matrix? It is unlikely to help you analyze the image; the histogram is not the right tool for contour analysis.
All you would get is a two-bar histogram, because the contour matrix contains only binary values. This is not helpful for analysis.
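That said, if what you are actually after is the histogram of the grayscale pixels inside a contour (rather than of the contour matrix itself), you can draw the contour filled into a mask and pass that mask to calcHist. A minimal sketch; the file name and threshold value are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    // Load the image and binarize it
    Mat img = imread("path_to_image", IMREAD_GRAYSCALE);
    Mat bw;
    threshold(img, bw, 128, 255, THRESH_BINARY);

    // Find the contours
    vector<vector<Point>> contours;
    findContours(bw.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return -1;

    // Fill the first contour into a mask
    Mat mask = Mat::zeros(img.size(), CV_8UC1);
    drawContours(mask, contours, 0, Scalar(255), FILLED);

    // Histogram of the grayscale values inside the contour only
    int channels[] = { 0 };
    int histSize[] = { 256 };
    float range[] = { 0, 256 };
    const float* ranges[] = { range };
    Mat hist;
    calcHist(&img, 1, channels, mask, hist, 1, histSize, ranges);

    return 0;
}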
Here's another way, using morphological operations.
#include <string>
#include <iostream>
#include <vector>
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

cv::Mat make_element(int morph_size, int elem_type)
{
    cv::Size sz{2 * morph_size + 1, 2 * morph_size + 1};
    cv::Point pt{morph_size, morph_size};
    cv::Mat element{cv::getStructuringElement(elem_type, sz, pt)};
    return element;
}

int main(int argc, char **argv)
{
    std::string fn{argv[1]};
    cv::Mat src{cv::imread(fn)}, dst, mask[3];
    if (!src.data) {
        std::cerr << "No image data :(" << std::endl;
        return -1;
    }

    // Clean out noise
    cv::Mat elem1{make_element(5, cv::MORPH_RECT)};
    cv::morphologyEx(src, dst, cv::MORPH_OPEN, elem1);

    // Close the hole, then XOR with original
    cv::Mat elem2{make_element(45, cv::MORPH_ELLIPSE)};
    cv::morphologyEx(dst, dst, cv::MORPH_CLOSE, elem2);
    cv::bitwise_xor(src, dst, dst);

    // Clean out noise (again)
    cv::Mat elem3{make_element(1, cv::MORPH_RECT)};
    cv::morphologyEx(dst, dst, cv::MORPH_OPEN, elem3);

    // Mark the hole
    cv::split(dst, mask);
    cv::bitwise_xor(src, dst, dst, mask[0]);

    // Overlay: put the defect in the red channel, on top of the source
    cv::split(dst, mask);
    cv::Mat empty = cv::Mat::zeros(dst.size(), CV_8UC1);
    std::vector<cv::Mat> v{empty, empty, mask[0]};
    cv::merge(v, dst);
    cv::bitwise_or(src, dst, dst);

    cv::namedWindow("Defect (ESC to quit)", cv::WINDOW_NORMAL);
    cv::startWindowThread();
    cv::imshow("Defect (ESC to quit)", dst);
    while (true) {
        int k = cv::waitKey(100) & 0xff;
        if (k == 27) {
            break;
        }
    }
    cv::destroyAllWindows();
    return 0;
}
Some additional reading:
Shapiro/Stockman, Finding gear defects, Chapter 3
OpenCV morphology tutorial