I have to write a program that detects three types of road signs (speed limit, no parking and warning). I know how to detect a circle using HoughCircles, but I have several images and the HoughCircles parameters are different for each one. Is there a general way to detect circles without changing the parameters for each image?
Moreover, I need to detect triangles (warning signs), so I'm searching for a general shape detector. Do you have any suggestions/code that could help me with this task?
Finally, to detect the number on speed limit signs, I thought of using SIFT and comparing the image with some templates in order to identify the number on the sign. Would that be a good approach?
Thank you for the answer!
I know this is a pretty old question, but I went through the same problem and here is how I solved it.
The following images show some of the most accurate results produced by the OpenCV program.
In the following images the detected street signs are circled with three different colors that distinguish the three kinds of street sign (warning, no parking, speed limit):
Red for warning signs
Blue for no parking signs
Fuchsia for speed limit signs
The speed limit value is written in green above the speed limit signs
[![example][1]][1]
[![example][2]][2]
[![example][3]][3]
[![example][4]][4]
As you can see, the program performs quite well: it is able to detect and distinguish the three kinds of sign and to recognize the speed limit value on speed limit signs. It does so without producing too many false positives, for instance when the image contains signs that do not belong to any of the three categories.
To achieve this result, the software performs the detection in three main steps.
The first step uses a color-based approach: the red objects in the image are detected and their regions are extracted for analysis. This step is particularly useful for preventing false positives, because only a small part of the image is processed.
The second step uses a machine learning algorithm: in particular, we use a cascade classifier to perform the detection. This first requires training the classifiers and, at a later stage, using them to detect the signs.
In the last step the speed limit values inside the speed limit signs are read, again with a machine learning algorithm, in this case the k-nearest neighbor algorithm.
Now we are going to see in detail each step.
COLOR BASED STEP
Since the street signs are always surrounded by a red frame, we can afford to extract and analyze only the regions where red objects are detected.
To select the red objects, we consider both hue ranges of the red color: even if this may produce some false positives, they will be easily discarded in the next steps.
inRange(image, Scalar(0, 70, 50), Scalar(10, 255, 255), mask1);
inRange(image, Scalar(170, 70, 50), Scalar(180, 255, 255), mask2);
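As a reference, here is a minimal sketch of the whole red-mask step, assuming a BGR input image: convert to HSV first, take the two hue ranges that wrap around red, and OR the two masks together (the function name and structure are just illustrative):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Mat redMask(const cv::Mat& bgr)
{
    cv::Mat hsv, mask1, mask2;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);                // the Scalar ranges below are HSV values
    cv::inRange(hsv, cv::Scalar(0, 70, 50),   cv::Scalar(10, 255, 255),  mask1);
    cv::inRange(hsv, cv::Scalar(170, 70, 50), cv::Scalar(180, 255, 255), mask2);
    return mask1 | mask2;                                     // red wraps around hue 0/180
}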
In the image below we can see an example of the red objects detected with this method.
After finding the red pixels we can group them into regions with a clustering algorithm; I use the method
cv::partition(points, labels, predicate)
After executing this method we can save all the points belonging to the same cluster in a vector (one for each cluster) and extract the bounding boxes, which represent the regions to be analyzed in the next step. A minimal sketch of this clustering step is shown below.
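Here is a minimal sketch of the clustering, assuming pts already holds the positions of the red pixels (the helper name and the 2-pixel radius are just illustrative):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Rect> clusterRedPixels(const std::vector<cv::Point>& pts, int radius = 2)
{
    std::vector<int> labels;
    const int r2 = radius * radius;
    // two pixels belong to the same cluster if they lie within `radius` of each other
    int nLabels = cv::partition(pts, labels, [r2](const cv::Point& a, const cv::Point& b) {
        return (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y) < r2;
    });

    // gather the points of each cluster and take their bounding boxes
    std::vector<std::vector<cv::Point>> clusters(nLabels);
    for (size_t i = 0; i < pts.size(); ++i)
        clusters[labels[i]].push_back(pts[i]);

    std::vector<cv::Rect> boxes;
    for (const auto& c : clusters)
        boxes.push_back(cv::boundingRect(c));
    return boxes;
}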
HAAR CASCADE CLASSIFIERS FOR SIGN DETECTION
This is the real detection step, where the street signs are detected. To train a cascade classifier, the first step consists in building a dataset of positive and negative images. Here is how I built my own datasets of images.
The first thing to note is that we need to train three different Haar cascades in order to distinguish between the three kinds of sign we have to detect, hence the following steps must be repeated for each of the three kinds of sign.
We need two datasets: one for the positive samples (which must be a set of images that contains the road signs that we are going to detect) and another one for the negative samples which can be any kind of image without street signs.
After collecting a set of 100 images for the positive samples and a set of 200 images for the negatives in two different folders, we need to write two text files:
Signs.info which contains a list of file names like the one below,
one for each positive sample in the positive folder.
pos/image_name.png 1 0 0 50 45
Here, the numbers after the file name represent, respectively, the number
of street signs in the image, the coordinates of the upper-left
corner of the street sign, and its width and height.
Bg.txt which contains a list of file names like the one below, one
for each image in the negative folder.
neg/street15.png
With the command line below we generate the .vec file, which contains all the information that the software retrieves from the positive samples.
opencv_createsamples -info sign.info -num 100 -w 50 -h 50 -vec signs.vec
Afterwards we train the cascade classifier with the following command:
opencv_traincascade -data data -vec signs.vec -bg bg.txt -numPos 60 -numNeg 200 -numStages 15 -w 50 -h 50 -featureType LBP
where the number of stages indicates the number of classifiers that will be generated in order to build the cascade.
At the end of this process we obtain a cascade.xml file, which will be used by the CascadeClassifier program in order to detect the objects in the image.
Now that the algorithm is trained, we can declare a CascadeClassifier for each kind of street sign; then we detect the signs in the image through
detectMultiScale(image, objects)
This method fills the vector of Rect with one rectangle around each object that has been detected; a rough usage sketch follows.
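A rough usage sketch (the cascade path and function name are placeholders; the parameter values are the ones used in the full code further down):

#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <string>
#include <vector>

std::vector<cv::Rect> detectSigns(const cv::Mat& bgr, const std::string& cascadePath)
{
    cv::CascadeClassifier cascade;
    if (!cascade.load(cascadePath))        // cascade.xml produced by opencv_traincascade
        return {};

    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);          // normalize brightness before detection

    std::vector<cv::Rect> signs;
    // scaleFactor 1.1, minNeighbors 3, minimum size 30x30
    cascade.detectMultiScale(gray, signs, 1.1, 3, 0, cv::Size(30, 30));
    return signs;
}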
It is important to note that, as with every machine learning algorithm, we need a large number of samples in the dataset in order to perform well. The dataset that I have built is not extremely large, so in some situations it is not able to detect all the signs. This mostly happens when a small part of the street sign is not visible in the image, like in the warning sign below:
I have expanded my dataset up to the point where I have obtained a fairly accurate result without
too many errors.
SPEED LIMIT VALUE DETECTION
As with the street sign detection, here too I used a machine learning algorithm, but with a different approach. After some work, I realized that an OCR (Tesseract) solution does not perform well, so I decided to build my own OCR software.
For the machine learning algorithm I took the image below as training data which contains some speed limit values:
The amount of training data is small, but since all the digits on speed limit signs have the same font, this is not a huge problem.
To prepare the data for training, I wrote a small piece of OpenCV code. It does the following things:
It loads the image on the left;
It selects the digits (obviously by contour finding and applying constraints on area and height of letters to avoid false detections).
It draws the bounding rectangle around one digit and waits for a key to be pressed manually; this time the user presses the digit key corresponding to the digit in the box.
Once the corresponding digit key is pressed, it saves the 100 pixel values (the 10x10 resized digit) in one array and the manually entered digit in another array.
Finally, it saves both arrays to separate files.
After this manual digit classification all the digits in the training data (train.png) are labeled, and the image will look like the one below.
Now we enter into training and testing part.
For training we do as follows:
Load the files we saved earlier
Create an instance of the classifier that we are going to use (KNearest)
Then we use the KNearest.train function to train on the data
Now the detection:
We load the image with the speed limit sign detected
Process the image as before and extract each digit using contour methods
Draw bounding box for it, then resize to 10x10, and store its pixel values in an array as done earlier.
Then we use the KNearest.findNearest() function to find the item nearest to the one we provided.
And it recognizes the correct digit. A minimal sketch of this train/predict cycle is shown below.
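A minimal sketch of this train/predict cycle, assuming samples is a CV_32FC1 matrix with one flattened 10x10 digit per row and labels holds the corresponding digit values (helper names are illustrative):

#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

cv::Ptr<cv::ml::KNearest> trainKnn(const cv::Mat& samples, const cv::Mat& labels)
{
    cv::Ptr<cv::ml::KNearest> knn = cv::ml::KNearest::create();
    knn->train(samples, cv::ml::ROW_SAMPLE, labels);   // one sample per row
    return knn;
}

int classifyDigit(const cv::Ptr<cv::ml::KNearest>& knn, const cv::Mat& digit10x10)
{
    cv::Mat sample, results;
    digit10x10.convertTo(sample, CV_32FC1);
    // k = 4 neighbours, the same value used in the full code below
    float prediction = knn->findNearest(sample.reshape(1, 1), 4, results);
    return static_cast<int>(prediction);
}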
I tested this little OCR on many images, and just with this small dataset I have obtained an accuracy of about 90%.
CODE
Below I post all my OpenCV C++ code in a single file; following my instructions, you should be able to achieve my result.
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <cmath>
#include <stdlib.h>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui.hpp"
#include <string.h>
#include <opencv2/ml/ml.hpp>
using namespace std;
using namespace cv;
std::vector<cv::Rect> getRedObjects(cv::Mat image);
vector<Mat> detectAndDisplaySpeedLimit( Mat frame );
vector<Mat> detectAndDisplayNoParking( Mat frame );
vector<Mat> detectAndDisplayWarning( Mat frame );
void trainDigitClassifier();
string getDigits(Mat image);
vector<Mat> loadAllImage();
int getSpeedLimit(string speed);
//path of the haar cascade files
String no_parking_signs_cascade = "/Users/giuliopettenuzzo/Desktop/cascade_classifiers/no_parking_cascade.xml";
String speed_signs_cascade = "/Users/giuliopettenuzzo/Desktop/cascade_classifiers/speed_limit_cascade.xml";
String warning_signs_cascade = "/Users/giuliopettenuzzo/Desktop/cascade_classifiers/warning_cascade.xml";
CascadeClassifier speed_limit_cascade;
CascadeClassifier no_parking_cascade;
CascadeClassifier warning_cascade;
int main(int argc, char** argv)
{
//train the classifier for digit recognition; this requires manual labeling, read the report for more details
trainDigitClassifier();
cv::Mat sceneImage;
vector<Mat> allImages = loadAllImage();
for(int i = 0; i < allImages.size(); i++){
sceneImage = allImages[i];
//load the haar cascade files
if( !speed_limit_cascade.load( speed_signs_cascade ) ){ printf("--(!)Error loading\n"); return -1; };
if( !no_parking_cascade.load( no_parking_signs_cascade ) ){ printf("--(!)Error loading\n"); return -1; };
if( !warning_cascade.load( warning_signs_cascade ) ){ printf("--(!)Error loading\n"); return -1; };
Mat scene = sceneImage.clone();
//detect the red objects
std::vector<cv::Rect> allObj = getRedObjects(scene);
//use the three cascade classifier for each object detected by the getRedObjects() method
for(int j = 0;j<allObj.size();j++){
Mat img = sceneImage(Rect(allObj[j]));
vector<Mat> warningVec = detectAndDisplayWarning(img);
if(warningVec.size()>0){
Rect box = allObj[j];
}
vector<Mat> noParkVec = detectAndDisplayNoParking(img);
if(noParkVec.size()>0){
Rect box = allObj[j];
}
vector<Mat> speedLimitVec = detectAndDisplaySpeedLimit(img);
if(speedLimitVec.size()>0){
Rect box = allObj[j];
for(int i = 0; i<speedLimitVec.size();i++){
//get the speed limit and sketch it on the image
int digit = getSpeedLimit(getDigits(speedLimitVec[i]));
if(digit > 0){
Point point = box.tl();
point.y = point.y + 30;
cv::putText(sceneImage,
"SPEED LIMIT " + to_string(digit),
point,
cv::FONT_HERSHEY_COMPLEX_SMALL,
0.7,
cv::Scalar(0,255,0),
1,
cv::LINE_AA);
}
}
}
}
imshow("currentobj",sceneImage);
waitKey(0);
}
}
/*
* detect the red object in the image given in the param,
* return a vector containing all the Rect of the red objects
*/
std::vector<cv::Rect> getRedObjects(cv::Mat image)
{
Mat3b res = image.clone();
std::vector<cv::Rect> result;
cvtColor(image, image, COLOR_BGR2HSV);
Mat1b mask1, mask2;
//ranges of red color
inRange(image, Scalar(0, 70, 50), Scalar(10, 255, 255), mask1);
inRange(image, Scalar(170, 70, 50), Scalar(180, 255, 255), mask2);
Mat1b mask = mask1 | mask2;
vector<Point> pts;
findNonZero(mask, pts); //collect the coordinates of all red pixels
int th_distance = 2; // radius tolerance
// Apply partition
// All pixels within the radius tolerance distance will belong to the same class (same label)
vector<int> labels;
// With lambda function (require C++11)
int th2 = th_distance * th_distance;
int n_labels = partition(pts, labels, [th2](const Point& lhs, const Point& rhs) {
return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < th2;
});
// You can save all points in the same class in a vector (one for each class), just like findContours
vector<vector<Point>> contours(n_labels);
for (int i = 0; i < pts.size(); ++i){
contours[labels[i]].push_back(pts[i]);
}
// Get bounding boxes
vector<Rect> boxes;
for (int i = 0; i < contours.size(); ++i)
{
Rect box = boundingRect(contours[i]);
if(contours[i].size()>500){ //previously 1000
boxes.push_back(box);
Rect enlarged_box = box + Size(100,100);
enlarged_box -= Point(30,30);
if(enlarged_box.x<0){
enlarged_box.x = 0;
}
if(enlarged_box.y<0){
enlarged_box.y = 0;
}
if(enlarged_box.height + enlarged_box.y > res.rows){
enlarged_box.height = res.rows - enlarged_box.y;
}
if(enlarged_box.width + enlarged_box.x > res.cols){
enlarged_box.width = res.cols - enlarged_box.x;
}
Mat img = res(Rect(enlarged_box));
result.push_back(enlarged_box);
}
}
if(boxes.empty()){ //no cluster was large enough
return result;
}
Rect largest_box = *max_element(boxes.begin(), boxes.end(), [](const Rect& lhs, const Rect& rhs) {
return lhs.area() < rhs.area();
});
//draw the rects in case you want to see them
for(int j=0;j<boxes.size();j++){
if(boxes[j].area() > largest_box.area()/3){
rectangle(res, boxes[j], Scalar(0, 0, 255));
Rect enlarged_box = boxes[j] + Size(20,20);
enlarged_box -= Point(10,10);
rectangle(res, enlarged_box, Scalar(0, 255, 0));
}
}
rectangle(res, largest_box, Scalar(0, 0, 255));
Rect enlarged_box = largest_box + Size(20,20);
enlarged_box -= Point(10,10);
rectangle(res, enlarged_box, Scalar(0, 255, 0));
return result;
}
/*
* code to detect the speed limit signs; it draws a circle around each one
*/
vector<Mat> detectAndDisplaySpeedLimit( Mat frame )
{
std::vector<Rect> signs;
vector<Mat> result;
Mat frame_gray;
cvtColor( frame, frame_gray, CV_BGR2GRAY );
//normalizes the brightness and increases the contrast of the image
equalizeHist( frame_gray, frame_gray );
//-- Detect signs
speed_limit_cascade.detectMultiScale( frame_gray, signs, 1.1, 3, 0|CV_HAAR_SCALE_IMAGE, Size(30, 30) );
cout << speed_limit_cascade.getFeatureType();
for( size_t i = 0; i < signs.size(); i++ )
{
Point center( signs[i].x + signs[i].width*0.5, signs[i].y + signs[i].height*0.5 );
ellipse( frame, center, Size( signs[i].width*0.5, signs[i].height*0.5), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );
Mat resultImage = frame(Rect(center.x - signs[i].width*0.5,center.y - signs[i].height*0.5,signs[i].width,signs[i].height));
result.push_back(resultImage);
}
return result;
}
/*
* code to detect the warning signs; it draws a circle around each one
*/
vector<Mat> detectAndDisplayWarning( Mat frame )
{
std::vector<Rect> signs;
vector<Mat> result;
Mat frame_gray;
cvtColor( frame, frame_gray, CV_BGR2GRAY );
equalizeHist( frame_gray, frame_gray );
//-- Detect signs
warning_cascade.detectMultiScale( frame_gray, signs, 1.1, 3, 0|CV_HAAR_SCALE_IMAGE, Size(30, 30) );
cout << warning_cascade.getFeatureType();
Rect previus;
for( size_t i = 0; i < signs.size(); i++ )
{
Point center( signs[i].x + signs[i].width*0.5, signs[i].y + signs[i].height*0.5 );
Rect newRect = Rect(center.x - signs[i].width*0.5,center.y - signs[i].height*0.5,signs[i].width,signs[i].height);
if((previus & newRect).area()>0){
previus = newRect;
}else{
ellipse( frame, center, Size( signs[i].width*0.5, signs[i].height*0.5), 0, 0, 360, Scalar( 0, 0, 255 ), 4, 8, 0 );
Mat resultImage = frame(newRect);
result.push_back(resultImage);
previus = newRect;
}
}
return result;
}
/*
* code to detect the no parking signs; it draws a circle around each one
*/
vector<Mat> detectAndDisplayNoParking( Mat frame )
{
std::vector<Rect> signs;
vector<Mat> result;
Mat frame_gray;
cvtColor( frame, frame_gray, CV_BGR2GRAY );
equalizeHist( frame_gray, frame_gray );
//-- Detect signs
no_parking_cascade.detectMultiScale( frame_gray, signs, 1.1, 3, 0|CV_HAAR_SCALE_IMAGE, Size(30, 30) );
cout << no_parking_cascade.getFeatureType();
Rect previus;
for( size_t i = 0; i < signs.size(); i++ )
{
Point center( signs[i].x + signs[i].width*0.5, signs[i].y + signs[i].height*0.5 );
Rect newRect = Rect(center.x - signs[i].width*0.5,center.y - signs[i].height*0.5,signs[i].width,signs[i].height);
if((previus & newRect).area()>0){
previus = newRect;
}else{
ellipse( frame, center, Size( signs[i].width*0.5, signs[i].height*0.5), 0, 0, 360, Scalar( 255, 0, 0 ), 4, 8, 0 );
Mat resultImage = frame(newRect);
result.push_back(resultImage);
previus = newRect;
}
}
return result;
}
/*
* train the classifier for digit recognition; this needs to be done only once, since the method saves the result to a file
* that can be reused in later executions.
* To train, the user must manually enter the digit corresponding to the one the program shows; press space if the red box is just a point (false positive)
*/
void trainDigitClassifier(){
Mat thr,gray,con;
Mat src=imread("/Users/giuliopettenuzzo/Desktop/all_numbers.png",1);
cvtColor(src,gray,CV_BGR2GRAY);
threshold(gray,thr,125,255,THRESH_BINARY_INV); //Threshold to find contour
imshow("ci",thr);
waitKey(0);
thr.copyTo(con);
// Create sample and label data
vector< vector <Point> > contours; // Vector for storing contour
vector< Vec4i > hierarchy;
Mat sample;
Mat response_array;
findContours( con, contours, hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE ); //Find contour
for( int i = 0; i< contours.size(); i=hierarchy[i][0] ) // iterate through first hierarchy level contours
{
Rect r= boundingRect(contours[i]); //Find bounding rect for each contour
rectangle(src,Point(r.x,r.y), Point(r.x+r.width,r.y+r.height), Scalar(0,0,255),2,8,0);
Mat ROI = thr(r); //Crop the image
Mat tmp1, tmp2;
resize(ROI,tmp1, Size(10,10), 0,0,INTER_LINEAR ); //resize to 10X10
tmp1.convertTo(tmp2,CV_32FC1); //convert to float
imshow("src",src);
int c=waitKey(0); // Read corresponding label for contour from keyboard
c-=0x30; // Convert ASCII to integer value
response_array.push_back(c); // Store label to a mat
rectangle(src,Point(r.x,r.y), Point(r.x+r.width,r.y+r.height), Scalar(0,255,0),2,8,0);
sample.push_back(tmp2.reshape(1,1)); // Store sample data
}
// Store the data to file
Mat response,tmp;
tmp=response_array.reshape(1,1); //make continuous
tmp.convertTo(response,CV_32FC1); // Convert to float
FileStorage Data("TrainingData.yml",FileStorage::WRITE); // Store the sample data in a file
Data << "data" << sample;
Data.release();
FileStorage Label("LabelData.yml",FileStorage::WRITE); // Store the label data in a file
Label << "label" << response;
Label.release();
cout<<"Training and Label data created successfully....!! "<<endl;
imshow("src",src);
waitKey(0);
}
/*
* get digit from the image given in param, using the classifier trained before
*/
string getDigits(Mat image)
{
Mat thr1,gray1,con1;
Mat src1 = image.clone();
cvtColor(src1,gray1,CV_BGR2GRAY);
threshold(gray1,thr1,125,255,THRESH_BINARY_INV); // Threshold to create input
thr1.copyTo(con1);
// Read stored sample and label for training
Mat sample1;
Mat response1,tmp1;
FileStorage Data1("TrainingData.yml",FileStorage::READ); // Read traing data to a Mat
Data1["data"] >> sample1;
Data1.release();
FileStorage Label1("LabelData.yml",FileStorage::READ); // Read label data to a Mat
Label1["label"] >> response1;
Label1.release();
Ptr<ml::KNearest> knn(ml::KNearest::create());
knn->train(sample1, ml::ROW_SAMPLE,response1); // Train with sample and responses
cout<<"Training compleated.....!!"<<endl;
vector< vector <Point> > contours1; // Vector for storing contour
vector< Vec4i > hierarchy1;
//Create input sample by contour finding and cropping
findContours( con1, contours1, hierarchy1,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
Mat dst1(src1.rows,src1.cols,CV_8UC3,Scalar::all(0));
string result;
for( int i = 0; i< contours1.size(); i=hierarchy1[i][0] ) // iterate through each contour for first hierarchy level .
{
Rect r= boundingRect(contours1[i]);
Mat ROI = thr1(r);
Mat tmp1, tmp2;
resize(ROI,tmp1, Size(10,10), 0,0,INTER_LINEAR );
tmp1.convertTo(tmp2,CV_32FC1);
Mat bestLabels;
float p=knn -> findNearest(tmp2.reshape(1,1),4, bestLabels);
char name[4];
sprintf(name,"%d",(int)p);
cout << "num = " << (int)p;
result = result + to_string((int)p);
putText( dst1,name,Point(r.x,r.y+r.height) ,0,1, Scalar(0, 255, 0), 2, 8 );
}
imwrite("dest.jpg",dst1);
return result ;
}
/*
* from the digits detected, it returns a speed limit if it is detected correctly, -1 otherwise
*/
int getSpeedLimit(string numbers){
if ((numbers.find("30") != std::string::npos) || (numbers.find("03") != std::string::npos)) {
return 30;
}
if ((numbers.find("50") != std::string::npos) || (numbers.find("05") != std::string::npos)) {
return 50;
}
if ((numbers.find("80") != std::string::npos) || (numbers.find("08") != std::string::npos)) {
return 80;
}
if ((numbers.find("70") != std::string::npos) || (numbers.find("07") != std::string::npos)) {
return 70;
}
if ((numbers.find("90") != std::string::npos) || (numbers.find("09") != std::string::npos)) {
return 90;
}
if ((numbers.find("100") != std::string::npos) || (numbers.find("001") != std::string::npos)) {
return 100;
}
if ((numbers.find("130") != std::string::npos) || (numbers.find("031") != std::string::npos)) {
return 130;
}
return -1;
}
/*
* load all the images from the folder with the path hard-coded below
*/
vector<Mat> loadAllImage(){
vector<cv::String> fn;
glob("/Users/giuliopettenuzzo/Desktop/T1/dataset/*.jpg", fn, false);
vector<Mat> images;
size_t count = fn.size(); //number of jpg files in the images folder
for (size_t i=0; i<count; i++)
images.push_back(imread(fn[i]));
return images;
}
Maybe you should try implementing the RANSAC algorithm. If you are using color images, it might be a good idea (if you are in Europe) to use the red channel only, since the speed limit signs are surrounded by a red circle (or a thin white one too, I think).
For that you need to filter the image to get the edges (Canny filter); a rough sketch of this idea is shown below.
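A rough sketch of that idea (the thresholds 50/150 are placeholders to tune per image):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat redChannelEdges(const cv::Mat& bgr)
{
    std::vector<cv::Mat> channels;
    cv::split(bgr, channels);               // channels[2] is the red channel in BGR order
    cv::Mat edges;
    cv::Canny(channels[2], edges, 50, 150);
    return edges;
}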
Here are some useful links:
OpenCV detect partial circle with noise
https://hal.archives-ouvertes.fr/hal-00982526/document
Finally, for the number detection I think your approach is fine. Another approach is to use something like the Viola-Jones algorithm to detect the signs, with existing pre-trained models... It's up to you!
I have the following code to detect contours in an image using cvThreshold and cvFindContours:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contours = 0;
cvThreshold( processedImage, processedImage, thresh1, 255, CV_THRESH_BINARY );
nContours = cvFindContours(processedImage, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0) );
I would like to somehow extend this code to filter/ignore/remove any contours that touch the image boundaries. However I am unsure how to go about this. Should I filter the threshold image or can I filter the contours afterwards? Hope somebody knows an elegant solution, since surprisingly I could not come up with a solution by googling.
Update 2021-11-25
updates code example
fixes bugs with image borders
adds more images
adds Github repo with CMake support to build example app
Full out-of-the-box example can be found here:
C++ application with CMake
General info
I am using OpenCV 3.0.0
Using cv::findContours actually alters the input image, so make sure that you work either on a separate copy specifically for this function or do not further use the image at all
Update 2019-03-07: "Since opencv 3.2 source image is not modified by this function." (see corresponding OpenCV documentation)
General solution
All you need to know about a contour is whether any of its points touches the image border. This information can be extracted easily with one of the following two procedures:
Check each point of your contour regarding its location. If it lies at the image border (x = 0 or x = width - 1 or y = 0 or y = height - 1), simply ignore the contour; a sketch of this check is given right after this list.
Create a bounding box around the contour. If the bounding box lies along the image border, you know the contour does, too.
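A sketch of the first check (point by point) could look like this; the function name is just illustrative:

#include <opencv2/core.hpp>
#include <vector>

bool contourHasPointOnBorder(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    for (const cv::Point& p : contour)
    {
        // a point lies on the border if it sits on the first or last row/column
        if (p.x == 0 || p.y == 0 ||
            p.x == imageSize.width - 1 || p.y == imageSize.height - 1)
        {
            return true;
        }
    }
    return false;
}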
Code for the second solution (CMake):
cmake_minimum_required(VERSION 2.8)
project(SolutionName)
find_package(OpenCV REQUIRED)
set(TARGETNAME "ProjectName")
add_executable(${TARGETNAME} ./src/main.cpp)
include_directories(${CMAKE_CURRENT_BINARY_DIR} ${OpenCV_INCLUDE_DIRS} ${OpenCV2_INCLUDE_DIR})
target_link_libraries(${TARGETNAME} ${OpenCV_LIBS})
Code for the second solution (C++):
bool contourTouchesImageBorder(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
cv::Rect bb = cv::boundingRect(contour);
bool retval = false;
int xMin, xMax, yMin, yMax;
xMin = 0;
yMin = 0;
xMax = imageSize.width - 1;
yMax = imageSize.height - 1;
// Use less/greater comparisons to potentially support contours outside of
// image coordinates, possible future workarounds with cv::copyMakeBorder where
// contour coordinates may be shifted and just to be safe.
// However, note that bounding boxes of size 1 will have their start point
// included (of course) but also their width/height values set to 1,
// yet they should not count as containing 2 pixels.
// Which is why we subtract 1 from the "search grid"
int bbxEnd = bb.x + bb.width - 1;
int bbyEnd = bb.y + bb.height - 1;
if (bb.x <= xMin ||
bb.y <= yMin ||
bbxEnd >= xMax ||
bbyEnd >= yMax)
{
retval = true;
}
return retval;
}
Call it via:
...
cv::Size imageSize = processedImage.size();
for (auto c: contours)
{
if(contourTouchesImageBorder(c, imageSize))
{
// Do your thing...
int asdf = 0;
}
}
...
Full C++ example:
void testContourBorderCheck()
{
std::vector<std::string> filenames =
{
"0_single_pixel_top_left.png",
"1_left_no_touch.png",
"1_left_touch.png",
"2_right_no_touch.png",
"2_right_touch.png",
"3_top_no_touch.png",
"3_top_touch.png",
"4_bot_no_touch.png",
"4_bot_touch.png"
};
// Load example image
//std::string path = "C:/Temp/!Testdata/ContourBorderDetection/test_1/";
std::string path = "../Testdata/ContourBorderDetection/test_1/";
for (int i = 0; i < filenames.size(); ++i)
{
//std::string filename = "circle3BorderDistance0.png";
std::string filename = filenames.at(i);
std::string fqn = path + filename;
cv::Mat img = cv::imread(fqn, cv::IMREAD_GRAYSCALE);
cv::Mat processedImage;
img.copyTo(processedImage);
// Create copy for contour extraction since cv::findContours alters the input image
cv::Mat workingCopyForContourExtraction;
processedImage.copyTo(workingCopyForContourExtraction);
std::vector<std::vector<cv::Point>> contours;
// Extract contours
cv::findContours(workingCopyForContourExtraction, contours, cv::RetrievalModes::RETR_EXTERNAL, cv::ContourApproximationModes::CHAIN_APPROX_SIMPLE);
// Prepare image for contour drawing
cv::Mat drawing;
processedImage.copyTo(drawing);
cv::cvtColor(drawing, drawing, cv::COLOR_GRAY2BGR);
// Draw contours
cv::drawContours(drawing, contours, -1, cv::Scalar(255, 255, 0), 1);
//cv::imwrite(path + "processedImage.png", processedImage);
//cv::imwrite(path + "workingCopyForContourExtraction.png", workingCopyForContourExtraction);
//cv::imwrite(path + "drawing.png", drawing);
const auto imageSize = img.size();
bool liesOnBorder = contourTouchesImageBorder(contours.at(0), imageSize);
// std::cout << "lies on border: " << std::to_string(liesOnBorder);
std::cout << filename << " lies on border: "
<< liesOnBorder;
std::cout << std::endl;
std::cout << std::endl;
cv::imshow("processedImage", processedImage);
cv::imshow("workingCopyForContourExtraction", workingCopyForContourExtraction);
cv::imshow("drawing", drawing);
cv::waitKey();
//cv::Size imageSize = workingCopyForContourExtraction.size();
for (auto c : contours)
{
if (contourTouchesImageBorder(c, imageSize))
{
// Do your thing...
int asdf = 0;
}
}
for (auto c : contours)
{
if (contourTouchesImageBorder(c, imageSize))
{
// Do your thing...
int asdf = 0;
}
}
}
}
int main(int argc, char** argv)
{
testContourBorderCheck();
return 0;
}
Problem with contour detection near image borders
OpenCV seems to have a problem with correctly finding contours near image borders.
For both objects, the detected contour is the same (see images). However, in image 2 the detected contour is not correct since a part of the object lies along x = 0, but the contour lies in x = 1.
This seems like a bug to me.
There is an open issue regarding this here: https://github.com/opencv/opencv/pull/7516
There also seems to be a workaround with cv::copyMakeBorder (https://github.com/opencv/opencv/issues/4374), however it seems a bit complicated.
If you can be a bit patient, I'd recommend waiting for the release of OpenCV 3.2 which should happen within the next 1-2 months.
New example images:
Single pixel top left, objects left, right, top, bottom, each touching and not touching (1px distance)
Example images
Object touching image border
Object not touching image border
Contour for object touching image border
Contour for object not touching image border
Although this question is in C++, the same issue affects OpenCV in Python. A solution to the OpenCV '0-pixel' border issue in Python (which can likely be used in C++ as well) is to pad the image with 1 pixel on each border, call OpenCV with the padded image, and then remove the border afterwards. Something like:
img2 = np.pad(img.copy(), ((1,1), (1,1), (0,0)), 'edge')
# call openCV with img2, it will set all the border pixels in our new pad with 0
# now get rid of our border
img = img2[1:-1,1:-1,:]
# img is now set to the original dimensions, and the contours can be at the edge of the image
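A hedged C++ equivalent of the same workaround might look like the sketch below (pad by one pixel with cv::copyMakeBorder, find the contours on the padded copy, then shift the points back):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<std::vector<cv::Point>> findContoursWithBorder(const cv::Mat& binaryImage)
{
    cv::Mat padded;
    cv::copyMakeBorder(binaryImage, padded, 1, 1, 1, 1, cv::BORDER_REPLICATE);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(padded, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // undo the 1-pixel padding offset so the points refer to the original image
    for (auto& contour : contours)
        for (auto& p : contour)
            p -= cv::Point(1, 1);
    return contours;
}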
If anyone needs this in MATLAB, here is the function.
function [touch] = componentTouchesImageBorder(C,im_row_max,im_col_max)
%C is a bwconncomp instance
touch=0;
S = regionprops(C,'PixelList');
c_row_max = max(S.PixelList(:,1));
c_row_min = min(S.PixelList(:,1));
c_col_max = max(S.PixelList(:,2));
c_col_min = min(S.PixelList(:,2));
if (c_row_max==im_row_max || c_row_min == 1 || c_col_max == im_col_max || c_col_min == 1)
touch = 1;
end
end
In order to gain a better understanding of computer vision, I am trying to write a canny edge detection algorithm "from scratch" (not just using the canny function from the cv lib). In order to do this, I need to apply a hysteresis threshold to the image. However, I get an error when I try to modify an element in my matrix. Here is my function:
cv::Mat hysteresisThresh(cv::Mat originalImage, int lowThresh, int highThresh){
//initialize a 2d vector to keep track of whether or not the pixel is "in" the image
std::vector<std::vector<bool>>isIn(
originalImage.rows,
std::vector<bool>(originalImage.cols, true));
//loop through every row and col in the original image
//create a variable to hold the element being checked
unsigned short curElementValue;
for(int curRow = 0; curRow<originalImage.rows; curRow++){
for(int curCol = 0; curCol<originalImage.cols; curCol++){
curElementValue = originalImage.at<unsigned short>(curRow, curCol);
if(curElementValue > highThresh){
isIn.at(curRow).at(curCol) = true;
//do nothing to the returnImage since the correct value is already stored
}
else if(curElementValue < highThresh && curElementValue > lowThresh){
/* i need to check that this pixel is adjacent to another edge pixel
thus, I have 8 possibilities. Think of them as:
123
4*5
678
with the * being the current pixel. However, I can cut the possibilities
in half since pixels 5-8 (inclusive) have not been checked yet and will be
checked in the future. So, I will only check pixels 1-4
*/
//TODO there may be a more efficient way to check these values. Find out
//The first stage of this if is to be sure that you are checking values in the array that actually exist
if((curRow!=0 && curCol!=0 && curRow!=originalImage.rows-1 && curCol!=originalImage.cols-1)&&
(isIn[curRow][curCol-1] || isIn[curRow-1][curCol-1]|| isIn[curRow-1][curCol] || isIn[curRow-1][curCol+1])){
isIn.at(curRow).at(curCol) = true;
//do nothing to the returnImage since the correct value is already stored
}
else{
//none of the adjacent pixels are in, so it is not an edge
isIn.at(curRow).at(curCol) = false;
originalImage.at<unsigned short>(curRow, curCol) = 0; //same as above
}
}
else{
isIn.at(curRow).at(curCol) = false;
originalImage.at<unsigned short>(curRow, curCol) = 0; //same as above
}
}
}
return originalImage;
}
The output upon running the program is a classic seg fault:
Segmentation fault (core dumped)
I've explored in gdb a bit and found that the fault occurs at the line:
originalImage.at<unsigned short>(curRow, curCol) = 0;
What am I doing wrong? Thanks!
EDIT: I was asked in the comments to provide context for the function. This is the code surrounding the function call. I am calling all of the operations on a depth map (yes I am aware canny is designed for use with color maps but after researching I think the same general principles would apply to finding edges in a depth map). Anyway, here is the code:
void displayDepthEdges(cv::VideoCapture* capture){
//initialize the matricies for holding images
cv::Mat depthImage;
cv::Mat edgeImage;
cv::Mat bgrImage;
//initialize variables for sobel operations
cv::Mat yGradient;
cv::Mat xGradient;
//double theta;
//initialize variable for hysteresis thresh operation
cv::Mat threshedImage;
int cannyLowThresh = 0;
int cannyHighThresh = 500;
//initialize variables for erosion and dilation based noise cancelation.
//CURRENTLY NOT IN USE
//int erosionFactor = 0;
//int dilationFactor = 0;
//initialize the windows and trackbars used to control everything
//cv::namedWindow("EroDilMenu", CV_WINDOW_NORMAL);
cv::namedWindow("CannyMenu", CV_WINDOW_NORMAL);
cvCreateTrackbar("Low Thresh", "CannyMenu",&cannyLowThresh, 500);
cvCreateTrackbar("High Thresh", "CannyMenu",&cannyHighThresh, 500);
//cvCreateTrackbar("Dil Fac", "EroDilMenu",&dilationFactor, 25);
//cvCreateTrackbar("Ero Fac", "EroDilMenu", &erosionFactor, 25);
cv::namedWindow("Edges", CV_WINDOW_NORMAL);
for(;;){
if(!capture->grab()){
std::cerr << "Error: Could not grab an image\n";
}
else{
//initialization here will stop totalGradient from carrying values over (hopefully)
cv::Mat totalGradient;
capture->retrieve(depthImage, CV_CAP_OPENNI_DEPTH_MAP);
//blur the image so that only major edges are detected
GaussianBlur(depthImage, depthImage, cv::Size(3,3), 1, 0, cv::BORDER_DEFAULT);
//apply the sobel operator in the x and y directions, then average to approximate gradient
//arguments are(input, output, dataType, x order derivative, y order derivative
//matrix size, scale, delta)
Sobel(depthImage, xGradient, CV_16U, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
convertScaleAbs(xGradient, xGradient);
Sobel(depthImage, yGradient, CV_16U, 0, 1, 3, 1, 0, cv::BORDER_DEFAULT); //dx=0, dy=1 for the y gradient
convertScaleAbs(yGradient, yGradient);
addWeighted(yGradient, 0.5, xGradient, 0.5, 0.0, totalGradient);
//it is not clear that noise canceling does anything here...
//it seemed to only blur or sharpen edges in testing
//totalGradient = noiseCancel(totalGradient ,erosionFactor, dilationFactor);
//TODO implement direction based edge priority
//theta = atan2(xGradient, yGradient);//this is returned in radians
//times 10 since the threshes are in mm and trackbars are in cm
totalGradient = hysteresisThresh(totalGradient,cannyLowThresh*10, cannyHighThresh*10);
imshow("Edges", totalGradient);
if(cv::waitKey(30) == 27){break;}
}
}
}
Your mistake is in the surrounding code, not in the algorithm itself. According to the documentation (this also holds for version 3.0.0), cv::convertScaleAbs converts its result to an 8-bit image, i.e. type CV_8UC1. As a result your totalGradient is an 8-bit image, and its elements cannot be accessed as unsigned short.
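A minimal sketch of the fix, assuming you keep convertScaleAbs (so the gradient is CV_8UC1 and must be read and written as unsigned char); alternatively, you could drop convertScaleAbs and keep the Sobel output as CV_16U so the original unsigned short access stays valid:

#include <opencv2/core.hpp>

void zeroBelowThreshold(cv::Mat& gradient8u, int lowThresh)
{
    CV_Assert(gradient8u.type() == CV_8UC1);             // what convertScaleAbs produces
    for (int r = 0; r < gradient8u.rows; ++r)
        for (int c = 0; c < gradient8u.cols; ++c)
            if (gradient8u.at<unsigned char>(r, c) < lowThresh)
                gradient8u.at<unsigned char>(r, c) = 0;  // element access matches CV_8UC1
}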
I am still new to C++ and now I need to convert some parts of this old program of mine from C to C++ because I want to apply BackgroundSubtractorMOG2 in my program, since it is only available in C++. Basically this program detects contours from a video camera based on background subtraction and chooses the largest contour available.
I have a problem particularly on this part (taken from the old program):
double largestArea = 0; //Const. for the largest area
CvSeq* largest_contour = NULL; //Contour for the largest area
while (current_contour != NULL){ //If the current contour available
double area = fabs(cvContourArea(current_contour,CV_WHOLE_SEQ, false)); //Get the current contour's area as "area"
if(area > largestArea){ //If "area" is larger than the previous largest area
largestArea = area;
largest_contour = current_contour;
}
current_contour = current_contour->h_next; //Search for the next contour
}
This part is where the program will scan each contour available as current_contour, find its area and compare it to previous largest contour. My question is how to get the current_contour, its area and jump to the next contour in C++? Also, what is indicated by contours.size() in C++? Is it the number of contours scanned or the total area of the contours?
This is what I've done so far:
for(;;)
{
cap >> frame; // get a new frame from camera
if( frame.empty() )
break;
image=frame.clone();
mog(frame,foreground,-1);
threshold(foreground,foreground,lowerC,upperC,THRESH_BINARY);
medianBlur(foreground,foreground,9);
erode(foreground,foreground,Mat());
dilate(foreground,foreground,Mat());
findContours(foreground,contours,CV_RETR_LIST,CV_CHAIN_APPROX_SIMPLE);
if(contours.empty())
continue;
//Starting this part
double largest_area = 0;
for(int i= 0; i < contours.size(); i++){
double area = contourArea(contours);
if(area >= largest_area){
largest_area = area;
largest_contours = contours;
}
}
//Until this part
drawContours(image,largest_contours,-1,Scalar(0,0,255),2);
imshow( "Capture",image );
imshow("Contours",foreground);
if(waitKey(30) >= 0) break;
}
Thanks in advance.
PS: The old program has some bugs in it but the algorithm works just fine. Feel free to ask me if you need the updated program. Currently using OpenCV 2.4.3 + VS C++ 2010 Exp.
EDIT:
Thanks to everybody who tried to help me, but I already got the answer, which is from here. Still, for those who don't know: OpenCV in C IS NOT EXACTLY THE SAME AS OpenCV in C++.
This is the part of the code where I find all the contours in the image and calculate their perimeter and area:
IplImage* bin = cvCreateImage( cvGetSize(_image), IPL_DEPTH_8U, 1);
cvConvertImage(_image, bin, CV_BGR2GRAY);
cvCanny(bin, bin, 50, 200);
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contours=0;
//Number of all contours on image #contoursCont#
int contoursCont = cvFindContours( bin, storage,&contours,sizeof(CvContour),CV_RETR_LIST,CV_CHAIN_APPROX_SIMPLE,cvPoint(0,0));
assert(contours!=0);
// iterate through all contours --> current = current->h_next
for( CvSeq* current = contours; current != NULL; current = current->h_next )
{
//calculate perimeter and area of each contour
double area = fabs(cvContourArea(current));
double perim = cvContourPerimeter(current);
cvDrawContours(_image, current, cvScalar(0, 0, 255), cvScalar(0, 255, 0), -1, 1, 8);
//the rest code
}
From OpenCV documentation:
The function cvFindContours retrieves contours from the binary image and returns the number of retrieved contours. The pointer CvSeq* contours=0 is filled by the function. It will contain a pointer to the first outermost contour or NULL if no contours are detected (if the image is completely black). Other contours may be reached from first_contour using the h_next and v_next links.
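For comparison, a rough C++ equivalent of the loop above might look like the sketch below (using the C++ findContours interface; cv::arcLength replaces cvContourPerimeter, and the function name is just illustrative):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

void processContours(const cv::Mat& gray, cv::Mat& colorImage)
{
    cv::Mat bin;
    cv::Canny(gray, bin, 50, 200);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i)
    {
        // calculate perimeter and area of each contour
        double area  = std::fabs(cv::contourArea(contours[i]));
        double perim = cv::arcLength(contours[i], true);
        cv::drawContours(colorImage, contours, static_cast<int>(i), cv::Scalar(0, 0, 255), 1);
        (void)area; (void)perim; // the rest of the processing goes here
    }
}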
I am using OpenCV 2.4.2 on Linux. I am writing in C++. I want to track simple objects (e.g. black rectangle on the white background). Firstly I am using goodFeaturesToTrack and then calcOpticalFlowPyrLK to find those points on another image. The problem is that calcOpticalFlowPyrLK doesn't find those points.
I have found code that does it in C, which does not work in my case: http://dasl.mem.drexel.edu/~noahKuntz/openCVTut9.html
I have converted it into C++:
int main(int, char**) {
Mat imgAgray = imread("ImageA.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat imgBgray = imread("ImageB.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat imgC = imread("ImageC.png", CV_LOAD_IMAGE_UNCHANGED);
vector<Point2f> cornersA;
goodFeaturesToTrack(imgAgray, cornersA, 30, 0.01, 30);
for (unsigned int i = 0; i < cornersA.size(); i++) {
drawPixel(cornersA[i], &imgC, 2, blue);
}
// I have no idea what it does
// cornerSubPix(imgAgray, cornersA, Size(15, 15), Size(-1, -1),
// TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 20, 0.03));
vector<Point2f> cornersB;
vector<uchar> status;
vector<float> error;
// winsize has to be 11 or 13, otherwise nothing is found
int winsize = 11;
int maxlvl = 5;
calcOpticalFlowPyrLK(imgAgray, imgBgray, cornersA, cornersB, status, error,
Size(winsize, winsize), maxlvl);
for (unsigned int i = 0; i < cornersB.size(); i++) {
if (status[i] == 0 || error[i] > 0) {
drawPixel(cornersB[i], &imgC, 2, red);
continue;
}
drawPixel(cornersB[i], &imgC, 2, green);
line(imgC, cornersA[i], cornersB[i], Scalar(255, 0, 0));
}
namedWindow("window", 1);
moveWindow("window", 50, 50);
imshow("window", imgC);
cvWaitKey(0);
return 0;
}
ImageA: http://oi50.tinypic.com/14kv05v.jpg
ImageB: http://oi46.tinypic.com/4l3xom.jpg
ImageC: http://oi47.tinypic.com/35n3uox.jpg
I have found out that it works only for winsize = 11. I have tried using it on a moving rectangle to check how far it is from the origin. It hardly ever detects all four corners.
int main(int, char**) {
std::cout << "Compiled at " << __TIME__ << std::endl;
Scalar white = Scalar(255, 255, 255);
Scalar black = Scalar(0, 0, 0);
Scalar red = Scalar(0, 0, 255);
Rect rect = Rect(50, 100, 100, 150);
Mat org = Mat(Size(640, 480), CV_8UC1, white);
rectangle(org, rect, black, -1, 0, 0);
vector<Point2f> features;
goodFeaturesToTrack(org, features, 30, 0.01, 30);
std::cout << "POINTS FOUND:" << std::endl;
for (unsigned int i = 0; i < features.size(); i++) {
std::cout << "Point found: " << features[i].x;
std::cout << " " << features[i].y << std::endl;
}
bool goRight = 1;
while (1) {
if (goRight) {
rect.x += 30;
rect.y += 30;
if (rect.x >= 250) {
goRight = 0;
}
} else {
rect.x -= 30;
rect.y -= 30;
if (rect.x <= 50) {
goRight = 1;
}
}
Mat frame = Mat(Size(640, 480), CV_8UC1, white);
rectangle(frame, rect, black, -1, 0, 0);
vector<Point2f> found;
vector<uchar> status;
vector<float> error;
calcOpticalFlowPyrLK(org, frame, features, found, status, error,
Size(11, 11), 5);
Mat display;
cvtColor(frame, display, CV_GRAY2BGR);
for (unsigned int i = 0; i < found.size(); i++) {
if (status[i] == 0 || error[i] > 0) {
continue;
} else {
line(display, features[i], found[i], red);
}
}
namedWindow("window", 1);
moveWindow("window", 50, 50);
imshow("window", display);
if (cvWaitKey(300) > 0) {
break;
}
}
}
OpenCV implementation of Lucas-Kanade seems to be unable to track a rectangle on a binary image. Am I doing something wrong or does this function just not work?
The Lucas-Kanade method estimates the motion of a region by using the gradients in that region. It is essentially a gradient descent method, so if you don't have gradients in both the x AND y directions the method will fail. The second important note is that the Lucas-Kanade equation
E = sum_{winsize} (Ix * u + Iy * v + It)²
is a first-order Taylor approximation of the intensity constancy constraint
I(x,y,t) = I(x+u,y+v,t+1)
so a restriction of the method without levels (image pyramids) is that the image needs to be close to a linear function. In practice this means only small motions can be estimated, depending on the winsize you choose. That's why you use the levels, which linearize the images. So a level of 5 is a bit too high; 3 should be enough. In your case the top-level image has a size of 640x480 / 2^5 = 20 x 15.
Finally the problem in your code is the line:
if (status[i] == 0 || error[i] > 0) {
the error you get back from the Lucas-Kanade method is the resulting SSD, that is:
error = sum_{winsize} (I(x,y,0) - I(x+u,y+v,1))² / (winsize * winsize)
It is very unlikely that the error is 0, so in the end you skip all the features. I have had good experiences ignoring the error, which is just a confidence measure. There are very good alternative confidence measures, such as the forward/backward check. You could also experiment with ignoring the status flag if too many features are discarded; a small sketch of the filtering is shown below.
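A small sketch of that filtering, using the variable names from the question's second example (maxError is an assumed, loose threshold):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

void drawTrackedPoints(cv::Mat& display,
                       const std::vector<cv::Point2f>& features,
                       const std::vector<cv::Point2f>& found,
                       const std::vector<unsigned char>& status,
                       const std::vector<float>& error,
                       float maxError = 1000.0f)
{
    for (size_t i = 0; i < found.size(); ++i)
    {
        if (!status[i])
            continue;                        // tracking failed for this feature
        if (error[i] > maxError)
            continue;                        // optional, loose confidence threshold
        cv::line(display, features[i], found[i], cv::Scalar(0, 0, 255));
    }
}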
KLT does point tracking by finding a transformation between two sets of points within a certain window. The window size is the area over which each point is searched in order to match it in the other frame.
It is another gradient-based algorithm, used here to find the good features to track.
Normally KLT uses a pyramidal approach in order to maintain tracking even with big movements; it probably searches over the "window size" you specified at up to "maxLevel" pyramid levels.
I have never tried KLT on binary images. The problem might be that the KLT implementation begins the search in a wrong direction and then simply loses the points; when you change the window size, the search behavior changes too. In your picture you have at most 4 interest points, each only 1 pixel in size.
These are the parameters you're interested in:
winSize – Size of the search window at each pyramid level
maxLevel – 0-based maximal pyramid level number. If 0, pyramids are not used (single level), if 1, two levels are used etc.
criteria – Specifies the termination criteria of the iterative search algorithm (after the specified maximum number of iterations criteria.maxCount or when the search window moves by less than criteria.epsilon)
Suggestion:
Did you try with natural pictures (two photos, for instance)? You'll have many more features to track; 4 or fewer is quite hard to keep. I would try this first.