ASL hand sign detection opencv - c++

I am a bit new to opencv and could use some help. I want to detect ASL hand signs.
For detecting hands, I can use either skin-color detection or a Haar classifier. I already detect hands; the problem is detecting the hand shape.
I can get the current hand shape using the algorithm described here, so the problem is: how do I compare this shape against my database of shapes?
I tried comparing them using the algorithm described here, which finds features that the images have in common. The problem is that this will match it with all the hands, since... well, it detects them as hands. For instance, check this image: it should point only to V, but it detects features in W and R, too.
I want my final result to be like here, so how can I compare image shapes? Is my approach wrong?
I was thinking that detecting by convex hull won't work, because most of the signs are closed fists. Check O, for instance: it has no open fingers, so I thought that trying to compare contours would be the best approach. How do I compare them, though? FLANN doesn't seem to work. Or I'm doing it wrong.
Would a Haar cascade classifier work? Or would it detect two hands in different positions as hands as well?
Or is there another way to match shapes? That could solve my problem, but I couldn't find any example that does it for custom shapes, only for ones like rectangles, circles and triangles.
Update
Ok, I've been playing a bit with matchShapes as berak told me. Here's my code below (it's a bit messy, as I'm still testing).
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
Mat src; Mat src_gray;
int thresh = 10;
int max_thresh = 300;
/// Function header
void thresh_callback(int, void* );
/** #function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
/// Convert image to gray and blur it
cvtColor( src, src_gray, CV_BGR2GRAY );
blur( src_gray, src_gray, Size(3,3) );
/// Create Window
char* source_window = "Source";
namedWindow( source_window, CV_WINDOW_AUTOSIZE );
imshow( source_window, src );
createTrackbar( " Canny thresh:", "Source", &thresh, max_thresh, thresh_callback );
thresh_callback( 0, 0 );
waitKey(0);
return(0);
}
/** #function thresh_callback */
void thresh_callback(int, void* )
{
Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
double largest_area=0;
int largest_contour_index=0;
Rect bounding_rect;
/// Detect edges using canny
Canny( src_gray, canny_output, thresh, thresh*2, 3 );
/// Find contours
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
cout<<contours.size()<<endl;
/// Draw contours
Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
vector<vector<Point> >hull( contours.size() );
for( int i = 0; i< contours.size(); i++ )
{ Scalar color = Scalar( 255,255,255 );
convexHull( Mat(contours[i]), hull[i], false );
// imshow("conturul"+to_string(i), drawing );
double a=contourArea( hull[i],false); // Find the area of contour
if(a>largest_area){
largest_area=a;
largest_contour_index=i; //Store the index of largest contour
bounding_rect=boundingRect(hull[i]);}
}
cout<<"zaindex "<<largest_contour_index<<endl;
Scalar color = Scalar( 255,255,255 );
drawContours( drawing, hull, largest_contour_index, color, 2, 8, hierarchy, 0, Point() );
namedWindow( "maxim", CV_WINDOW_AUTOSIZE );
imshow( "maxim", drawing );
Mat rects=imread( "scene.png", 1 );
rectangle(rects, bounding_rect, Scalar(0,255,0),1, 8,0);
imshow( "maxim2", rects );
/// Show in a window
}
The problem with it is the definition of a contour. These hand 'contours' are actually made of multiple contours themselves, and the image that I showed earlier is made of those multiple contours overlapping each other. matchShapes accepts arrays of Points as parameters, but my contours are arrays of arrays of Points.
So my question is: how can I merge the contours in my vector into something I can pass to matchShapes? In other words, how can I make a single contour from multiple overlapping contours?
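For reference, what I imagine doing inside thresh_callback, right after findContours, is something like this (just a rough sketch; templateContour is a placeholder for a contour loaded from my database, and I'm assuming that simply concatenating the points of the overlapping contours is acceptable input for matchShapes):

// Concatenate the points of all detected contours into one point set
// (assumption: matchShapes only uses Hu moments, so point order shouldn't matter).
vector<Point> merged;
for (size_t i = 0; i < contours.size(); i++)
    merged.insert(merged.end(), contours[i].begin(), contours[i].end());

// templateContour is a placeholder for one of the shapes in my database.
double similarity = matchShapes(merged, templateContour, CV_CONTOURS_MATCH_I1, 0);
cout << "similarity to template: " << similarity << endl;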

Related

trying to detect a rectangle using find contours

I am trying to detect a rectangle using findContours, but I don't get any contours from the following image.
I can't detect any contours in the image. Is findContours bad with this kind of image, or should I use a Hough transform?
UPDATE: I have updated the source code to use an approximated polygon,
but I still get the outlier bounding rect; I can't find the smallest rectangle that is in the screenshot.
I have another case for which the current solution doesn't work, even when adding erosion or dilation.
image 2
and here is the code
using namespace cv;
using namespace std;

cv::Mat input = cv::imread("heightmap.png");
RNG rng(12345);

// convert to grayscale (you could load as grayscale instead)
cv::Mat gray;
cv::cvtColor(input, gray, CV_BGR2GRAY);

// compute mask (you could use a simple threshold if the image is always as good as the one you provided)
cv::Mat mask;
cv::threshold(gray, mask, 0, 255, CV_THRESH_OTSU);
cv::namedWindow("threshold");
cv::imshow("threshold", mask);

// find contours (if always so easy to segment as your image, you could just add the black/rect pixels to a vector)
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

cv::Mat drawing = cv::Mat::zeros( mask.size(), CV_8UC3 );
vector<vector<cv::Point> > contours_poly( contours.size() );
vector<cv::Rect> boundRect( contours.size() );

for( int i = 0; i < contours.size(); i++ )
{
    approxPolyDP( cv::Mat(contours[i]), contours_poly[i], 3, true );
    boundRect[i] = boundingRect( cv::Mat(contours_poly[i]) );
}

for( int i = 0; i < contours.size(); i++ )
{
    cv::Scalar color = cv::Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
    rectangle( drawing, boundRect[i].tl(), boundRect[i].br(), color, 2, 8, 0 );
}

// display
cv::imshow("input", input);
cv::imshow("drawing", drawing);
cv::waitKey(0);
The code you are using looks like it's from this question.
It uses a BinaryInv threshold because it's detecting a black shape on a white background.
Your example is the opposite, so you should tweak your code to use the Binary threshold type instead (or negate the image).
Without this fix, findContours will detect the perimeter of the image, which will be the biggest contour.
So I don't think the code is failing to detect contours, it's just not finding the "biggest contour" you expect.
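Something like this, reusing the gray/mask names from your snippet (just a sketch of the tweak):

// Plain Binary threshold type here (the linked code used BinaryInv for its
// black-on-white shape); alternatively keep your call and negate the result.
cv::threshold(gray, mask, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
// or: cv::bitwise_not(mask, mask);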
Even with that fixed, the code you posted won't fit a rectangle to the rectangle in your example image, as the most obvious rectangular feature doesn't have a clean border. The approxPolyDP suggestion in the linked question might help, but you'll have to improve the source image.
See this question for a comparison of this and Hough methods for finding rectangles.
Edit
You should be able to separate the rectangle in your example image from the other blob by calling Erode (3x3) twice.
You'll have to replace selecting the biggest contour with selecting the squarest.
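Something along these lines, as a rough sketch (names follow your snippet; the "squareness" score below is just one possible choice):

// Erode twice with the default 3x3 kernel to split the rectangle from the other blob,
// then find contours on the eroded mask instead of the original one.
cv::Mat eroded;
cv::erode(mask, eroded, cv::Mat(), cv::Point(-1, -1), 2);
cv::findContours(eroded, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

// Pick the contour that best fills a roughly square bounding rect,
// instead of simply the biggest one.
int bestIdx = -1;
double bestScore = 0;
for( int i = 0; i < contours.size(); i++ )
{
    cv::Rect r = cv::boundingRect(contours[i]);
    double extent = cv::contourArea(contours[i]) / (double)r.area();
    double aspect = r.width < r.height ? (double)r.width / r.height
                                       : (double)r.height / r.width;
    double score = extent * aspect;
    if( score > bestScore )
    {
        bestScore = score;
        bestIdx = i;
    }
}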

Get coordinates of contours in OpenCV

Let's say that I have the following output image:
Basically, I have video stream and I want to get coordinates of rectangle only in the output image. Here's my C++ code:
while(1)
{
    capture >> frame;
    if(frame.empty())
        break;

    cv::cvtColor( frame, gray, CV_BGR2GRAY ); // Grayscale image
    Canny( gray, canny_output, thresh, thresh * 2, 3 );

    // Find contours
    findContours( canny_output, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );

    // Draw contours
    Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
    for( int i = 0; i < contours.size(); i++ )
    {
        Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
        drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );
    }

    cv::imshow( "w", drawing );
    waitKey(20); // waits to display frame
}
Thanks.
Look at the definition of the findContours function in the OpenCV documentation and check its parameters (link):
void findContours(InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset=Point())
Parameters: here
Look at contours: as Rafa said, each contour is stored as a vector of points, and each vector of points is stored in an outer vector, so by walking the outer vector and then walking each inner vector you'll find the points you want.
However, if you want to detect only the bigger contour, you might want to use CV_RETR_EXTERNAL as the mode parameter, because it will detect only the most external contour (the big rectangle).
If you still wish to keep the smaller contours, then you can use CV_RETR_TREE and work with the hierarchy structure: Using hierarchy contours
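For instance, something along these lines for your case (just a sketch; it assumes the rectangle ends up being the biggest external contour in the frame):

// Keep only outer contours, take the biggest one and print the coordinates
// of its bounding rectangle.
findContours( canny_output, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );

int biggest = -1;
double maxArea = 0;
for( int i = 0; i < contours.size(); i++ )
{
    double area = contourArea( contours[i] );
    if( area > maxArea ) { maxArea = area; biggest = i; }
}

if( biggest >= 0 )
{
    Rect box = boundingRect( contours[biggest] );
    cout << "x = " << box.x << " y = " << box.y
         << " width = " << box.width << " height = " << box.height << endl;
}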
Looking at the documentation, the OutputArrayOfArrays contours is the key.
contours – Detected contours. Each contour is stored as a vector of points.
so you've got a vector< vector<Point> > contours. The inner vector<Point> holds the coordinates of one contour, and every contour is stored in the outer vector.
So, for instance, to get the 5th contour: vector<Point> fifthcontour = contours.at(4);
and you have the coordinates in that vector.
You can access to those coordinates as:
for (int i = 0; i < fifthcontour.size(); i++) {
    Point coordinate_i_ofcontour = fifthcontour[i];
    cout << endl << "contour with coordinates: x = " << coordinate_i_ofcontour.x << " y = " << coordinate_i_ofcontour.y;
}

Search for contours within a contour / OpenCV c++

I am trying to track a custom circular marker in an image, and I need to check that a circle contains a minimum number of other circles/objects. My code for finding circles is below:
void findMarkerContours( int, void* )
{
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    vector<Point> approx;

    cv::Mat dst = src.clone();
    cv::Mat src_gray;
    cv::cvtColor(src, src_gray, CV_BGR2GRAY);

    // Reduce noise with a 3x3 kernel
    blur( src_gray, src_gray, Size(3,3));

    // Convert to binary using canny
    cv::Mat bw;
    cv::Canny(src_gray, bw, thresh, 3*thresh, 3);
    imshow("bw", bw);

    findContours(bw.clone(), contours, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

    Mat drawing = Mat::zeros( bw.size(), CV_8UC3 );
    for (int i = 0; i < contours.size(); i++)
    {
        Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
        // contour
        drawContours( drawing, contours, i, color, 1, 8, vector<Vec4i>(), 0, Point() );

        // Approximate the contour with accuracy proportional to contour perimeter
        cv::approxPolyDP(cv::Mat(contours[i]), approx, cv::arcLength(cv::Mat(contours[i]), true) * 0.02, true);

        // Skip small or non-convex objects
        if(fabs(cv::contourArea(contours[i])) < 100 || !cv::isContourConvex(approx))
            continue;

        if (approx.size() >= 8) // More than 6-8 vertices means it's likely a circle
        {
            drawContours( dst, contours, i, Scalar(0,255,0), 2, 8);
        }

        imshow("Hopefully we should have circles! Yay!", dst);
    }

    namedWindow( "Contours", CV_WINDOW_AUTOSIZE );
    imshow( "Contours", drawing );
}
As you can see, the code to detect circles works quite well:
But now I need to filter out markers that I do not want. My marker is the bottom one. So once I have found a contour that is a circle, I want to check whether other circular contours exist within the region of that first circle, and finally check the color of the smallest circle.
What method can I take to say if (circle contains 3+ smaller circles || smallest circle is [color] ) -> do stuff?
Take a look at the documentation for
findContours(InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset=Point())
You'll see that there's an optional hierarchy output vector which should be handy for your problem.
hierarchy – Optional output vector, containing information about the image topology. It has as many elements as the number of contours.
For each i-th contour contours[i], the elements hierarchy[i][0], hierarchy[i][1], hierarchy[i][2], and hierarchy[i][3] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.
When calling findContours with CV_RETR_TREE you'll be getting the full hierarchy of each contour that was found.
This doc explains the hierarchy format pretty well.
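As a rough sketch of the idea (it assumes you pass your hierarchy vector to findContours with CV_RETR_TREE, and that i is the index of the candidate circle):

// Count the direct children of contour i by walking the hierarchy:
// hierarchy[i][2] is the first child, hierarchy[child][0] the next sibling.
int childCount = 0;
for (int child = hierarchy[i][2]; child >= 0; child = hierarchy[child][0])
    childCount++;

if (childCount >= 3)
{
    // contour i contains at least 3 nested contours -> likely your marker
    drawContours(dst, contours, i, Scalar(0, 0, 255), 2, 8);
}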
You are already searching for circles of a certain size
// Skip small or non-convex objects
if(fabs(cv::contourArea(contours[i])) < 100 || !cv::isContourConvex(approx))
    continue;
So you can use that to look for circles smaller than the one you've got: instead of checking for area < 100, compare against the size of the contour you already found.
I imagine you can do something similar for color as well...
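For example, something like this for the color part (a sketch only; smallest is assumed to be the index of the smallest inner contour, and src the original BGR image):

// Average color inside the smallest circle, using its filled contour as a mask.
Mat circleMask = Mat::zeros(src.size(), CV_8UC1);
drawContours(circleMask, contours, smallest, Scalar(255), -1); // -1 = filled
Scalar meanColor = mean(src, circleMask);                      // BGR order
cout << "B=" << meanColor[0] << " G=" << meanColor[1] << " R=" << meanColor[2] << endl;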

Transforming an Image with OpenCV

I'm new to using OpenCV and I'm testing it out by trying to grab a licence plate from a car. I'm stuck on how to go about doing that. For example, I will start off with an image like this:
and I want my final result to be something like:
I know how to use adaptiveThreshold and the like; I'm confused about the steps needed to go from 1 to 2. Thanks for the help!
too many hardcoded thresholds but will this work?
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace cv;
int main( int argc, char** argv )
{
Mat src = imread( "C:/test/single/license.jpg");
cvtColor(src,src,CV_BGR2GRAY);
blur( src, src, Size(3,3) );
Canny( src, src, 130, 130*4, 3 );
imshow( "edge", src );
GaussianBlur(src,src,Size(3,3),60);
threshold(src,src,0,255,CV_THRESH_OTSU);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(src, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Mat todraw=Mat::zeros(src.size(), CV_8UC1);
for(size_t i = 0; i < contours.size(); i++)
{
double area = fabs(contourArea(Mat(contours[i])));
if(area<600)
drawContours(todraw,contours,i,Scalar(255),-1);
}
imshow( "plate", todraw );
waitKey(0);
return 0;
}
This is exactly what you want - https://github.com/MasteringOpenCV/code/tree/master/Chapter5_NumberPlateRecognition
It's from the Mastering OpenCV book. It segments number plates and also does rudimentary OCR to recognise the characters.

OpenCV pointer to bitmap processing

I've created a shared library for contour detection that is loaded from a Delphi/Lazarus application. The main app passes a pointer to a bitmap to be processed by a function inside the library.
Here's the function inside the library. The parameter "img" is the pointer to my bitmap.
extern "C" {
void detect_contour(int imgWidth, int imgHeight, unsigned char * img, int &x, int &y, int &w, int &h)
{
Mat threshold_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Mat src_gray;
int thresh = 100;
int max_thresh = 255;
RNG rng(12345);
/// Load source image and convert it to gray
Mat src(imgHeight, imgWidth, CV_8UC4);
int idx;
src.data = img;
/// Convert image to gray and blur it
cvtColor( src, src_gray, CV_BGRA2GRAY );
blur( src_gray, src_gray, Size(10,10) );
/// Detect edges using Threshold
threshold( src_gray, threshold_output, thresh, 255, THRESH_BINARY );
/// Find contours
findContours( threshold_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
/// Approximate contours to polygons + get bounding rects and circles
vector<vector<Point> > contours_poly( contours.size() );
vector<Rect> boundRect( contours.size() );
vector<Point2f>center( contours.size() );
vector<float>radius( contours.size() );
int lArea = 0;
int lBigger = -1;
for( int i = 0; i < contours.size(); i++ )
{
approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
boundRect[i] = boundingRect( Mat(contours_poly[i]) );
if(lArea < boundRect[i].width * boundRect[i].height)
{
lArea = boundRect[i].width * boundRect[i].height;
lBigger = i;
}
}
if(lBigger > -1)
{
x = boundRect[lBigger].x;
y = boundRect[lBigger].y;
w = boundRect[lBigger].width;
h = boundRect[lBigger].height;
}
}
}
From the Delphi side, I'm passing a pointer to an array of this structure:
TBGRAPixel = packed record
blue, green, red, alpha: byte;
end;
I need to process the bitmap in-memory, that's why I'm not loading the file from inside the library.
The question is: Is this the right way to assign a bitmap to a cv::Mat ?
I ask this because the code works without problems on Linux, but fails on Windows when compiled with MinGW.
Note: it fails with a SIGSEGV on this line:
blur( src_gray, src_gray, Size(10,10) );
EDIT: The SIGSEGV is raised only if I compile OpenCV in Release mode, in Debug mode it works ok.
Thanks in advance,
Leonardo.
So you are creating an image this way:
Mat src(imgHeight, imgWidth, CV_8UC4);
int idx;
src.data = img;
The first declaration and instantiation,
Mat src(imgHeight, imgWidth, CV_8UC4), will allocate memory for a new image and a reference counter that automatically keeps track of the number of references to the allocated memory.
Then you mutate an instance variable through
src.data = img;
When the instance src goes out of scope, the destructor is called and most likely tries to deallocate the memory at src.data, which you assigned; this can cause a segmentation fault. The right way to do it is not to change the instance variables of an object, but simply to use the right constructor when you instantiate src:
Mat src(imgHeight, imgWidth, CV_8UC4, img);
This way, you just create a matrix header and no reference counter or deallocation will be performed by the destructor of src.
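So the start of detect_contour would become something like this (sketch only, the rest stays the same):

// Wrap the caller-owned BGRA buffer without copying it; this Mat is only a
// header, so its destructor will not try to free the Delphi-side memory.
Mat src(imgHeight, imgWidth, CV_8UC4, img);

Mat src_gray;
cvtColor( src, src_gray, CV_BGRA2GRAY );
blur( src_gray, src_gray, Size(10,10) );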
Good luck!
EDIT: I am not sure that the segfault is actually caused by an attempt to deallocate memory incorrectly, but it is a good practice not to break data abstraction by assigning directly to instance variables.