OpenCV pointer to bitmap processing - c++

I've created a shared library for contour detection that is loaded from a Delphi/Lazarus application. The main app passes a pointer to a bitmap to be processed by a function inside the library.
Here's the function inside the library. The parameter "img" is the pointer to my bitmap.
extern "C" {
void detect_contour(int imgWidth, int imgHeight, unsigned char * img, int &x, int &y, int &w, int &h)
{
Mat threshold_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Mat src_gray;
int thresh = 100;
int max_thresh = 255;
RNG rng(12345);
/// Load source image and convert it to gray
Mat src(imgHeight, imgWidth, CV_8UC4);
int idx;
src.data = img;
/// Convert image to gray and blur it
cvtColor( src, src_gray, CV_BGRA2GRAY );
blur( src_gray, src_gray, Size(10,10) );
/// Detect edges using Threshold
threshold( src_gray, threshold_output, thresh, 255, THRESH_BINARY );
/// Find contours
findContours( threshold_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
/// Approximate contours to polygons + get bounding rects and circles
vector<vector<Point> > contours_poly( contours.size() );
vector<Rect> boundRect( contours.size() );
vector<Point2f>center( contours.size() );
vector<float>radius( contours.size() );
int lArea = 0;
int lBigger = -1;
for( int i = 0; i < contours.size(); i++ )
{
approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
boundRect[i] = boundingRect( Mat(contours_poly[i]) );
if(lArea < boundRect[i].width * boundRect[i].height)
{
lArea = boundRect[i].width * boundRect[i].height;
lBigger = i;
}
}
if(lBigger > -1)
{
x = boundRect[lBigger].x;
y = boundRect[lBigger].y;
w = boundRect[lBigger].width;
h = boundRect[lBigger].height;
}
}
}
From the Delphi side, I'm passing a pointer to an array of this structure:
TBGRAPixel = packed record
blue, green, red, alpha: byte;
end;
I need to process the bitmap in-memory, that's why I'm not loading the file from inside the library.
The question is: is this the right way to assign a bitmap to a cv::Mat?
I ask this because the code works without problems on Linux, but fails on Windows when compiled with MinGW.
Note: it fails with a SIGSEGV on this line:
blur( src_gray, src_gray, Size(10,10) );
EDIT: The SIGSEGV is raised only if I compile OpenCV in Release mode; in Debug mode it works fine.
Thanks in advance,
Leonardo.

So you are creating an image this way:
Mat src(imgHeight, imgWidth, CV_8UC4);
int idx;
src.data = img;
The first declaration and instantiation,
Mat src(imgHeight, imgWidth, CV_8UC4),
will allocate memory for a new image, plus a reference counter that automatically keeps track of the number of references to the allocated memory.
Then you mutate an instance variable through
src.data = img;
When the instance src goes out of scope, the destructor is called and most likely tries to deallocate the memory at src.data, which you reassigned, and this may cause a segmentation fault. The right way is not to change the instance variables of an object, but simply to use the right constructor when you instantiate src:
Mat src(imgHeight, imgWidth, CV_8UC4, img);
This way, you just create a matrix header and no reference counter or deallocation will be performed by the destructor of src.
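For illustration, the start of detect_contour would then look roughly like this (a sketch; the extra step argument is only needed if the Delphi bitmap has padded rows, which I am assuming it does not):
// Wrap the caller's pixel buffer without copying and without taking ownership.
// If the rows are padded, pass the row size in bytes as a fifth argument:
//     Mat src(imgHeight, imgWidth, CV_8UC4, img, bytesPerRow);
Mat src(imgHeight, imgWidth, CV_8UC4, img);
Mat src_gray;
cvtColor( src, src_gray, CV_BGRA2GRAY );
blur( src_gray, src_gray, Size(10,10) );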
Good luck!
EDIT: I am not sure that the segfault is actually caused by an attempt to deallocate memory incorrectly, but it is a good practice not to break data abstraction by assigning directly to instance variables.

Related

ASL hand sign detection opencv

I am a bit new to opencv and could use some help. I want to detect ASL hand signs.
For detecting hands, I can use either detection by skin color or a haar classifier. I already detect hands, but the problem is detecting the hand shape.
I can get the current hand shape using the algorithm described here, so the problem is: how do I compare this shape to my database of shapes?
I tried comparing them using the algorithm described here, which detects similar features images have. The problem is that this will match it with all the hands, since...well it detects them as hands. For instance, check this image, it should point only to V, but it detects features in W and R, too.
I want my final result to be like here, so how can I compare image shapes? Is my approach wrong?
I was thinking that detection by convex hull won't work, because most of the signs are closed fists. Check O, for instance: it has no open fingers, so I thought that trying to compare contours would be best. How do I compare them, though? FLANN doesn't seem to work, or maybe I'm doing it wrong.
Would a Haar cascade classifier work? Or would it detect two hands in different positions as hands as well?
Or is there another way to match shapes? That could solve my problem, but I couldn't find any example that does for custom shapes, only for ones like rectangles, circles and triangles.
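For reference, the shape-matching route usually means cv::matchShapes, which compares two single contours through their Hu moments (this is what the update below experiments with). A minimal sketch, assuming handContour and templateContour are already-extracted vector<Point> contours (hypothetical names); lower return values mean more similar shapes:
// Compare one detected hand contour against one template contour.
// matchShapes works on two single contours, not on vectors of contours.
double score = matchShapes(handContour, templateContour, CV_CONTOURS_MATCH_I1, 0);
// a score close to 0 means very similar outlines; compare against each template and pick the smallest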
Update
Ok, I've been playing a bit with matchShapes as berak told me. Here's my code below (it's a bit messy, as I'm testing at the moment).
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
Mat src; Mat src_gray;
int thresh = 10;
int max_thresh = 300;
/// Function header
void thresh_callback(int, void* );
/** @function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
/// Convert image to gray and blur it
cvtColor( src, src_gray, CV_BGR2GRAY );
blur( src_gray, src_gray, Size(3,3) );
/// Create Window
char* source_window = "Source";
namedWindow( source_window, CV_WINDOW_AUTOSIZE );
imshow( source_window, src );
createTrackbar( " Canny thresh:", "Source", &thresh, max_thresh, thresh_callback );
thresh_callback( 0, 0 );
waitKey(0);
return(0);
}
/** @function thresh_callback */
void thresh_callback(int, void* )
{
Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
double largest_area=0;
int largest_contour_index=0;
Rect bounding_rect;
/// Detect edges using canny
Canny( src_gray, canny_output, thresh, thresh*2, 3 );
/// Find contours
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
cout<<contours.size()<<endl;
/// Draw contours
Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
vector<vector<Point> >hull( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{
    Scalar color = Scalar( 255, 255, 255 );
    convexHull( Mat(contours[i]), hull[i], false );
    // imshow("conturul"+to_string(i), drawing );
    double a = contourArea( hull[i], false ); // Find the area of the contour
    if( a > largest_area )
    {
        largest_area = a;
        largest_contour_index = i; // Store the index of the largest contour
        bounding_rect = boundingRect( hull[i] );
    }
}
cout<<"zaindex "<<largest_contour_index<<endl;
Scalar color = Scalar( 255,255,255 );
drawContours( drawing, hull, largest_contour_index, color, 2, 8, hierarchy, 0, Point() );
namedWindow( "maxim", CV_WINDOW_AUTOSIZE );
imshow( "maxim", drawing );
Mat rects=imread( "scene.png", 1 );
rectangle(rects, bounding_rect, Scalar(0,255,0),1, 8,0);
imshow( "maxim2", rects );
/// Show in a window
}
The problem with it is the definition of a contour. These hand 'contours' are actually made up of multiple contours themselves, and the image I showed earlier really consists of these multiple contours overlapping each other. matchShapes accepts arrays of Points as parameters, but the contours are arrays of arrays of Points.
So my question is: how can I combine the contours in my vector into a single contour so I can pass it to matchShapes? In other words, how can I make a single contour from multiple overlapping contours?
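One possible way to get a single point set out of several overlapping contours (a minimal sketch, not necessarily the best definition of a merged 'shape'; the helper name is made up) is simply to concatenate them:
vector<Point> mergeContours( const vector<vector<Point> >& contours )
{
    // Append every contour's points into one flat vector<Point>,
    // which is the kind of single contour matchShapes expects.
    vector<Point> merged;
    for( size_t i = 0; i < contours.size(); i++ )
        merged.insert( merged.end(), contours[i].begin(), contours[i].end() );
    return merged;
}
// usage, e.g.:
// double d = matchShapes( mergeContours(contoursA), mergeContours(contoursB), CV_CONTOURS_MATCH_I1, 0 );
Since matchShapes only looks at the Hu moments of the point set, the order of the concatenated points does not matter, but very different contours lumped together can of course blur the comparison.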

How to find contours from a webcam frame using opencv and c++?

My goal is to find contours by capturing frames from a webcam. I was able to do it with static images, but when I tried to use the same approach on a webcam frame it gives me this error:
"OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in cv::_OutputArray::create, file C:\builds\2_4_PackSlave-win64-vc11-shared\opencv\modules\core\src\matrix.cpp, line 1486"
This is the code that I used to find the contours in my program:
RNG rng(12345);
Mat captureframe,con,threshold_output;
vector<vector<Point> > contours; //
vector<Vec4i> hierarchy;
while(true)
{
capturedevice>>captureframe;
con = captureframe.clone();
cvtColor(captureframe,con,CV_BGR2GRAY);
threshold( con, threshold_output, thresh, 255, THRESH_BINARY );
findContours( threshold_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
Mat drawing = Mat::zeros( threshold_output.size(), CV_8UC3 );
for( int i = 0; i< contours.size(); i++ )
{
Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
drawContours( drawing, contours, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
}
imshow("contour drawing",drawing);
}
I think that the problem is in the following two lines:
con = captureframe.clone();
cvtColor(captureframe,con,CV_BGR2GRAY);
In the first line you are making con a clone of captureframe, which means that con is a 3-channel image; in the second line you are trying to make con a grayscale image, which has 1 channel, and that is why you are getting the fault related to the image type.
You should try the following (I am not sure whether your code will run after this or not, but you should not get the current error any more):
con.create(captureframe.rows , captureframe.cols, CV_8UC1);
cvtColor(captureframe,con,CV_BGR2GRAY);
Guys, thank you so much for your help. I finally figured out my mistake: there was a problem in my declaration. I was looking online for some references and stumbled upon this code for object detection. The author actually declared "contour" like this: "std::vector < std::vector < cv::Point > > contours;", whereas my declaration was "vector contours". My declaration worked for static images, but it gave me this error while finding contours from the webcam. Can anyone explain the difference between the above two declarations? Also, as suggested by skm, I converted my frame capture to a 1-channel image by using con.create(frame.rows, frame.cols, CV_8UC1) and then converting it to a grayscale image. This step is really crucial. So, here is my complete working code. Thanks!
VideoCapture capturedevice;
capturedevice.open(0);
Mat frame,con;
Mat grayframe;
std::vector < std::vector < cv::Point > > contours; // this is a very important declaration
while(true)
{
capturedevice>>frame;
con.create(frame.rows,frame.cols,CV_8UC1);
cvtColor(frame,con,CV_BGR2GRAY);
cv::findContours (con, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
cv::drawContours (frame, contours, -1, cv::Scalar (0, 0, 255), 2);
imshow("frame", frame);
waitKey(33);
}
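As a side note on the question above about the two declarations (assuming the earlier, failing one was a single-level vector; the forum formatting seems to have eaten its angle brackets): findContours returns one point vector per contour, so its output parameter has to be a vector of point vectors, and anything flatter trips the _OutputArray::create assertion quoted above. A minimal sketch of the working shape:
std::vector<std::vector<cv::Point> > contours;  // one inner vector<Point> per contour found
std::vector<cv::Vec4i> hierarchy;               // optional: one Vec4i per contour
cv::findContours(con, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// contours.size() == number of contours; contours[i] holds the i-th contour's points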

C++ function call by value does not work

I have a problem with this code:
The problem is that the original image ends up modified by borrarFondo(), even though that function is only called from segmentarHoja(), where img comes in by value; yet img still gets modified.
void borrarFondo(Mat& img){
img = ~img;
Mat background;
medianBlur(img, background, 45);
GaussianBlur(background, background, Size(203,203),101,101);
img = img - background;
img = ~img;
}
void segmentarHoja(Mat img, Mat& imsheet){
Mat imgbw;
borrarFondo(img); //borrarFondo is called from here where img is a copy
cvtColor(img, imgbw, CV_BGR2GRAY);
threshold(imgbw, imgbw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Mat element = getStructuringElement(MORPH_ELLIPSE, Size(21,21));
erode(imgbw, imgbw, element);
vector<vector<Point> > contoursSheet;
findContours(imgbw, contoursSheet, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
vector<Rect> boundSheet(contoursSheet.size());
int largest_area=0;
for( int i = 0; i < contoursSheet.size(); i++ )
{
    double a = contourArea( contoursSheet[i], false );
    if( a > largest_area )
    {
        largest_area = a;
        boundSheet[i] = boundingRect( contoursSheet[i] );
        imsheet = img( boundSheet[i] ).clone();
    }
}
borrarFondo(imsheet);
}
int main()
{
Mat imsheet;
Mat image = imread("c:/imagen.jpg");
segmentarHoja(image, imsheet);
imshow("imsheet",imsheet);
imshow("imagen",image); //original image by amending borrarFondo
waitKey(0);
}
I don't want to change the original image.
OpenCV's Mat is a counted reference (i.e. like std::shared_ptr, except with different syntax) where copy construction or assignment does not copy the pixel data. Use the clone method to copy. Read the documentation, always a good idea.
if you're doing something like this:
Mat a;
Mat b = a;
or like this:
void func(Mat m) {...}
or :
vector<Mat> vm;
vm.push_back(m);
All of it is a shallow copy: the Mat header will be a copy, but the pointers inside point to the same data.
So, e.g. in the 1st example, b and a share the same size and data members.
This might explain why passing a Mat by value still results in the pixels being manipulated through the 'shallow' copy.
To avoid that you will have to make a 'deep' copy instead:
Mat c = a.clone(); // c has its own pixels now.
And again, if you don't want your Mat to be manipulated, pass it as a const Mat& and be very careful about how you use it, as illustrated below.
#include <opencv2/opencv.hpp>
void foo( cv::Mat const& image )
{
cv::Mat result = image;
cv::ellipse(
result, // img
cv::Point( 300, 300 ), // center
cv::Size( 50, 50 ), // axes (bounding box size)
0.0, // angle
0.0, // startAngle
360.0, // endAngle
cv::Scalar_<int>( 0, 0, 255 ), // color
6 // thickness
);
}
auto main() -> int
{
auto window_name = "Display";
cv::Mat lenna = cv::imread( "lenna.png" );
foo( lenna );
imshow( window_name, lenna );
cv::waitKey( 0 );
}
The Mat const& lied about mutability, and Lenna's nose is correspondingly long, here marked by a big fat circle placed by the foo function above (screenshot not reproduced here).
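Applied to the code in the question, one way to leave the caller's image untouched is to make the deep copy inside segmentarHoja before anything mutating happens (a sketch; only the first lines change, and the rest of the function then works on 'work' instead of 'img'):
void segmentarHoja(Mat img, Mat& imsheet)
{
    Mat work = img.clone();   // deep copy: its pixels are no longer shared with the caller
    borrarFondo(work);        // so borrarFondo now modifies only the copy
    Mat imgbw;
    cvtColor(work, imgbw, CV_BGR2GRAY);
    // ... rest of the function unchanged, using 'work' wherever 'img' was used
}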

OpenCV findContours - 2.4.5 Heap Corruption

I have a heap corruption happening with 100% certainty on the findContours function. When I do not use it, everything works fine.
unsigned char* UCFromMatUC(cv::Mat& input)
{
int size = input.size.p[0] * input.size.p[1];
unsigned char* result = new unsigned char[size];
memcpy(result, input.data, size);
return result;
}
unsigned char* CannyEdgeCV(unsigned char* input, int width, int height)
{
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::RNG rng(12345);
cv::Mat inp(cv::Size(width, height), CV_8UC1, input);
cv::Mat canny_output;
cv::Mat outp;
cv::blur(inp, outp, cv::Size(3,3));
cv::Canny(outp, canny_output, 4.0, 8.0);
if(canny_output.type()!=CV_8UC1){
return NULL;
}
cv::findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0) );
cv::Mat drawing = cv::Mat::zeros( canny_output.size(), CV_8UC3 );
for( int i = 0; i< contours.size(); i++ )
{
cv::Scalar color = cv::Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, cv::Point() );
}
cv::imwrite( "contours.jpg", drawing );
unsigned char* result = UCFromMatUC(canny_output);
return result;
}
Originally I was only using the canny edge map, but later on I wanted to test the results of the contour functionality.
The Canny edge detection works fine, and I get an image as expected, but findContours (both the call in the code and the commented-out version) fails with a heap corruption error. What causes this?
The entry point for this is CannyEdgeCV(), and it is called with a 640x480 8-bit grey image.
Edit: updated code.
Edit2: when I tried to create a minimal example to reproduce this, my code failed even at imread("imagename.bmp"), which was really odd, so I began investigating what might cause it. Someone else wrote in a corresponding SO question that you cannot mix debug and release mode libraries: if you build in debug mode, you have to use the debug DLLs. That worked for me, and I now get the expected results.
The main culprit is your use of cv::Mat* and new. This is a Bad Idea. It is unnecessary and often problematic (as you have discovered) to dynamically allocate cv::Mat objects. A better solution is to pass them by value or const reference, since the underlying image data is refcounted and cv::Mat objects are shallow-copied.
The first specific problem is that you manually assign the data member in CVMatFromUC():
resultMat->data = input
You should not do this. cv::Mat has other members which also reference the data location, and you are asking for trouble. If you need to create a cv::Mat header for external data, you should create a cv::Mat like so:
cv::Mat inp(cv::Size(width, height), CV_8UC1, input); //Create cv::Mat header, no memory copied
Also, your type check for canny_output is incorrect. !canny_output.type() is evaluated first, and implicitly converts to true, as does CV_8UC1. So the expression is always true. The condition you want is: canny_output.type() != CV_8UC1
Given this, it turns out that the CVMatFromUC() function is unnecessary. An improved version of your function follows:
uchar* CannyEdgeCV(uchar* input, int width, int height)
{
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::Mat inp(cv::Size(width, height), CV_8UC1, input); //Create cv::Mat header, no memory copied
cv::Mat canny_output;
cv::Mat outp;
cv::blur(inp, outp, cv::Size(3,3));
cv::Canny(outp, canny_output, 10.0, 15.0);
if(canny_output.type()!=CV_8UC1){
return NULL;
}
cv::findContours(canny_output, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE );
//cv::findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0) );
unsigned char* result = UCFromMatUC(inp);
return result;
}
I should note that it appears that CannyEdgeCV returns the same data that it receives, so it may be possible to remove the call to UCFromMatUC() and the associated data copy entirely. However, I tried and got memory errors, so there might be other problems lurking elsewhere.
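Building on that remark (a hypothetical variant, assuming the caller can supply the output buffer as well; the function name is made up), the new[] allocation and the memcpy can be avoided entirely by letting Canny write into a Mat header that wraps caller-owned memory:
// Sketch: both buffers are owned by the caller, so no allocation or
// deallocation crosses the library boundary and no extra copy is needed.
void CannyEdgeInto(unsigned char* input, unsigned char* output, int width, int height)
{
    cv::Mat inp(cv::Size(width, height), CV_8UC1, input);    // header over caller memory
    cv::Mat outp(cv::Size(width, height), CV_8UC1, output);  // header over caller memory
    cv::Mat blurred;
    cv::blur(inp, blurred, cv::Size(3,3));
    cv::Canny(blurred, outp, 10.0, 15.0);                    // outp already has the right size and
                                                             // type, so OpenCV writes into 'output'
                                                             // instead of reallocating
}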
Change findContours as follows:
findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
If that doesn't work either, your problem is probably uninitialized memory. You can try to create a new Mat outP2 and clone the original one into it.
Then use outP2 from the following step:
cv::Canny(*outP2, canny_output, 10.0, 15.0);
...
...
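If I read that suggestion correctly, it amounts to something like this (a sketch; outP2 is whatever name you give the deep copy):
cv::Mat outP2 = outp.clone();                 // owns freshly allocated pixels
cv::Canny(outP2, canny_output, 10.0, 15.0);   // work on the copy from here on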

Drawing contours retrieved from the binary image

I want to use findContours with a binary image, but the callback function causes an error:
Invalid address specified to RtlFreeHeap
when returning.
When I want to use clear() to free the vector<vector<Point> > value, it causes the same exception, and the code crashes in free.c at the line:
if (retval == 0) errno = _get_errno_from_oserr(GetLastError());
For example:
void onChangeContourMode(int, void *)
{
Mat m_frB = imread("3.jpg", 0);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(m_frB, contours, hierarchy, g_contour_mode, CV_CHAIN_APPROX_SIMPLE);
for( int idx = 0 ; idx >= 0; idx = hierarchy[idx][0] )
drawContours( m_frB, contours, idx, Scalar(255,255,255),
CV_FILLED, 8, hierarchy );
imshow( "Contours", m_frB );
}
Can anyone help me? Thank you very much!
Mat m_frB = imread("3.jpg", CV_LOAD_IMAGE_GRAYSCALE);
loads 3.jpg as an 8bpp grayscale image, so it is not a binary image. It is specific to the findContours function that "non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary". Also note that this "function modifies the image while extracting the contours".
The actual problem here is that although the destination image is 8bpp, you should make sure that it has 3 channels by using CV_8UC3 before you draw RGB contours into it. Try this:
// find contours:
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(m_frB, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
// draw contours:
Mat imgWithContours = Mat::zeros(m_frB.rows, m_frB.cols, CV_8UC3);
RNG rng(12345);
for (int i = 0; i < contours.size(); i++)
{
Scalar color = Scalar(rng.uniform(50, 255), rng.uniform(50,255), rng.uniform(50,255));
drawContours(imgWithContours, contours, i, color, 1, 8, hierarchy, 0);
}
imshow("Contours", imgWithContours);