OpenCV3 Finger Detection - C++

I am quite new to OpenCV and have just played around with it for a while, doing basic things like thresholding images. I'm using Visual Studio 2015 in C++ with OpenCV3. I'm trying to detect the number of fingers of my hand that are being held up using the camera. For example, if I hold up 4 fingers, I would like the program to tell me that 4 fingers are detected. So far I have been able to detect the edges of objects, such as my entire hand, in the camera using contours. Here is the code:
#include "opencv2\opencv.hpp"
using namespace cv;
void on_trackbar(int, void*) {
// Dummy function
}
int main(int argv, char** argc) {
Mat frame;
Mat grayFrame;
Mat hsvFrame;
Mat thesholdFrame;
VideoCapture capture;
//Trackbar variables (H,S,V)
int H_MIN = 0;
int H_MAX = 180;
int S_MIN = 0;
int S_MAX = 255;
int V_MIN = 0;
int V_MAX = 255;
namedWindow("trackbar", 0);
//create memory to store trackbar name on window
char TrackbarName[50];
sprintf(TrackbarName, "H_MIN");
sprintf(TrackbarName, "H_MAX");
sprintf(TrackbarName, "S_MIN");
sprintf(TrackbarName, "S_MAX");
sprintf(TrackbarName, "V_MIN");
sprintf(TrackbarName, "V_MAX");
createTrackbar("H_MIN", "trackbar", &H_MIN, H_MAX, on_trackbar);
createTrackbar("H_MAX", "trackbar", &H_MAX, H_MAX, on_trackbar);
createTrackbar("S_MIN", "trackbar", &S_MIN, S_MAX, on_trackbar);
createTrackbar("S_MAX", "trackbar", &S_MAX, S_MAX, on_trackbar);
createTrackbar("V_MIN", "trackbar", &V_MIN, V_MAX, on_trackbar);
createTrackbar("V_MAX", "trackbar", &V_MAX, V_MAX, on_trackbar);
capture.open(0);
std::vector<std::vector<cv::Point> > contours;
while (true){
capture >> frame;
waitKey(10);
cvtColor(frame, hsvFrame, COLOR_BGR2HSV);
//imshow("HSV", hsvFrame);
inRange(hsvFrame, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), thesholdFrame);
findContours(thesholdFrame, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
int largest_contour_area = 0;
int largest_contour_area_index = 1;
/*for (int i = 0; i < contours.size(); i++) {
double contour_area = contourArea(contours[i], false);
if (contour_area > largest_contour_area) {
largest_contour_area = contour_area;
largest_contour_area_index = i;
}
}*/
drawContours(frame, contours, -1, (0, 255, 0), 3);
putText(frame, "NO DETECTION", Point(25, 40), 2, 1, CV_RGB(255, 255, 0), 1, 8, false);
imshow("Threshold", thesholdFrame);
imshow("Camera", frame);
}
}
After searching around for a while, I realised that it might be useful to isolate the contours of the hand only. The 'for' loop that is commented out is my attempt at implementing that. However, it doesn't seem to work. I do realise that once I uncomment that section I have to change
drawContours(frame, contours, -1, Scalar(0, 255, 0), 3);
to
drawContours(frame, contours[largest_contour_area_index], -1, Scalar(0, 255, 0), 3);
This does not seem to work. I keep getting this error (when I uncomment the for loop and replace the drawContours command with the one above):
Unhandled exception at 0x00007FF8C1537788 in FingerDetection.exe: Microsoft C++ exception: cv::Exception at memory location 0x0000000C4CAFDB50.
Furthermore, if I did somehow manage to get the program to find the edges of the largest contour in the frame (i.e. the hand), how would I proceed to detect the number of fingers? I've heard things about convex hulls, etc.; however, I cannot find any good explanation of what those exactly are. Any clarification or advice as to what to do? Keep in mind that I am very new to OpenCV.

Your second question ("how would I proceed to detect the number of fingers") might be a bit too broad to be answered here.
But the first one (i.e. why you are getting the exception) seems to have an easy answer, which I derive from the OpenCV documentation (version 2.4) for the drawContours function I found here.
According to that, the second parameter must be an array (of contours), so you cannot simply pass a single element contours[largest_contour_area_index].
If you want to draw one contour only, you still have to pass the entire array, and then give the index of the contour you want to draw as the third parameter (instead of -1):
drawContours(frame, contours, largest_contour_area_index, Scalar(0, 255, 0), 3);
As an additional note, it would be a good idea to ensure that largest_contour_area_index has a value that is smaller than the number of contours in the array. Given your current code, there could be situations where the body of the for loop is never executed, and largest_contour_area_index, which is initialized to a value of 1, could be too large.
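For completeness, here is a minimal sketch of how that guarded index could look, combined with one common (though by no means the only) approach to the finger-counting part: counting convexity defects of the largest contour's convex hull. This is not from the answer above; it assumes the HSV threshold already isolates the hand, and the 20-pixel defect-depth threshold is a hypothetical value you would tune.

// Sketch only: guard the index, then count fingers via convexity defects.
// Assumes `contours` was filled from the HSV-thresholded hand image and
// this runs inside the capture loop of the question's code.
int largest_index = -1;
double largest_area = 0;
for (size_t i = 0; i < contours.size(); i++) {
    double area = contourArea(contours[i]);
    if (area > largest_area) { largest_area = area; largest_index = (int)i; }
}
if (largest_index >= 0) {
    drawContours(frame, contours, largest_index, Scalar(0, 255, 0), 3);

    // Convex hull as point *indices*, which convexityDefects requires.
    std::vector<int> hull;
    convexHull(contours[largest_index], hull, false, false);

    std::vector<Vec4i> defects;
    if (hull.size() > 3)
        convexityDefects(contours[largest_index], hull, defects);

    // Deep defects roughly correspond to the gaps between extended fingers.
    int deepDefects = 0;
    for (size_t j = 0; j < defects.size(); j++)
        if (defects[j][3] / 256.0 > 20.0)   // depth in pixels; hypothetical, tune it
            deepDefects++;

    int fingers = deepDefects > 0 ? deepDefects + 1 : 0;
    putText(frame, std::to_string(fingers) + " FINGERS", Point(25, 40),
            FONT_HERSHEY_DUPLEX, 1, CV_RGB(255, 255, 0));
}

N extended fingers produce roughly N-1 gaps between them, hence the +1; a closed fist yields no deep defects, so it reports 0.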

Related

C++ OpenCV - Find biggest object in a webcam stream and sort it by size

My goal is to find the biggest contour of a captured webcam frame, then, after it's found, find its size and determine whether it should be rejected or accepted.
Just to explain the objective of this project: I am currently working for a hygiene products manufacturer. There we have, in total, 6 workers who are responsible for sorting the defective soap bars out of the production line. So, in order to free up this workforce for other activities, I am trying to write an algorithm to "replace" their eyes.
I've tried several methods along the way (findContours, SimpleBlobDetector, Canny, object tracking), but the problem I've been facing is that I can't seem to find a way to effectively find the biggest object in a webcam image, find its size, and then determine whether to discard or accept it.
Below follows my newest code to find the biggest contour in a webcam stream:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv/cv.h"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;
using namespace std;

int main(int argc, const char** argv)
{
    Mat src;
    Mat imgGrayScale;
    Mat imgCanny;
    Mat imgBlurred;
    /// Load source image
    VideoCapture capWebcam(0);
    if (capWebcam.isOpened() == false)
    {
        cout << "Could not open webcam!" << endl;
        return(0);
    }
    while (capWebcam.isOpened())
    {
        bool blnframe = capWebcam.read(src);
        if (!blnframe || src.empty())
        {
            cout << "Error! Frame not read!\n";
            break;
        }
        double largest_area = 0;
        int largest_contour_index = 0;
        Rect bounding_rect;
        Mat dst(src.rows, src.cols, CV_8UC1, Scalar::all(0));
        cvtColor(src, imgGrayScale, CV_BGR2GRAY);            // Convert to gray
        GaussianBlur(imgGrayScale, imgBlurred, Size(5, 5), 1.8);
        Canny(imgBlurred, imgCanny, 45, 90);                 // Edge-detect the blurred gray image
        vector<vector<Point>> contours;                      // Vector for storing contours
        vector<Vec4i> hierarchy;
        findContours(imgCanny, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE); // Find the contours in the image
        for (int i = 0; i < contours.size(); i++)            // Iterate through each contour
        {
            double a = contourArea(contours[i], false);      // Find the area of the contour
            if (a > largest_area)
            {
                largest_area = a;
                largest_contour_index = i;                   // Store the index of the largest contour
                bounding_rect = boundingRect(contours[i]);   // Find the bounding rectangle of the biggest contour
            }
        }
        Scalar color(255, 255, 255);
        drawContours(dst, contours, largest_contour_index, color, CV_FILLED, 8, hierarchy); // Draw the largest contour using the stored index
        rectangle(src, bounding_rect, Scalar(0, 255, 0), 1, 8, 0);
        imshow("src", src);
        imshow("largest Contour", dst);
        if (waitKey(30) == 27) break;                        // ESC quits
    }
    return(0);
}
And here are the result windows that the program generates, and the image of the object that I want to detect and sort.
Thank you all in advance for any clues on how to achieve my goal.
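As a sketch of the accept/reject step the question asks about (this is not from the thread itself): since the camera distance on a production line is fixed, pixel area is a reasonable proxy for physical size, so a calibrated area window can make the decision. MIN_BAR_AREA and MAX_BAR_AREA below are hypothetical values you would measure from a few known-good bars; it assumes the largest_area and bounding_rect variables from the code above.

// Sketch only: accept or reject the largest object by its pixel area.
// MIN_BAR_AREA / MAX_BAR_AREA are hypothetical; calibrate them by
// measuring known-good bars at your fixed camera distance.
const double MIN_BAR_AREA = 5000.0;
const double MAX_BAR_AREA = 20000.0;
if (largest_area > 0)
{
    bool accepted = largest_area >= MIN_BAR_AREA && largest_area <= MAX_BAR_AREA;
    putText(src, accepted ? "ACCEPTED" : "REJECTED",
            bounding_rect.tl() + Point(0, -10), FONT_HERSHEY_SIMPLEX, 0.8,
            accepted ? Scalar(0, 255, 0) : Scalar(0, 0, 255), 2);
}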

Performance Issues while capturing and processing a video

I'm currently working on a project where I need to display a processed live video capture. Therefore, I'm using something similar to this:
cv::VideoCapture cap(0);
if (!cap.isOpened())
    return -1;
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
cv::namedWindow("Current Capture");
for (;;)
{
    cv::Mat frame;
    cap >> frame;
    cv::Mat mirrored;
    cv::flip(frame, mirrored, 1);
    cv::imshow("Current Capture", process_image(mirrored));
    if (cv::waitKey(30) >= 0) break;
}
The problem I have is that process_image, which performs a circle detection on the image, needs some time to finish and causes the display to be more of a slideshow than a video.
My question is: how can I speed up the processing without modifying the process_image function?
I thought about performing the image processing in another thread, but I'm not really sure how to start. Do you have any other ideas?
P.S.: I'm not expecting you to write code for me, I only need a point to start from ;)
EDIT:
OK, if there is nothing I can do about the performance while capturing, I will need to change the process_image function.
cv::Mat process_image(cv::Mat img)
{
    cv::Mat hsv;
    cv::medianBlur(img, img, 7);
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
    cv::Mat lower_hue_range; // lower and upper hue range, needed for red
    cv::Mat upper_hue_range;
    cv::inRange(hsv, cv::Scalar(LOWER_HUE1, 100, 100), cv::Scalar(UPPER_HUE1, 255, 255), lower_hue_range);
    cv::inRange(hsv, cv::Scalar(LOWER_HUE2, 100, 100), cv::Scalar(UPPER_HUE2, 255, 255), upper_hue_range);
    /// Combine the above two images
    cv::Mat hue_image;
    cv::addWeighted(lower_hue_range, 1.0, upper_hue_range, 1.0, 0.0, hue_image);
    /// Reduce the noise so we avoid false circle detection
    cv::GaussianBlur(hue_image, hue_image, cv::Size(13, 13), 2, 2);
    /// Store all found circles here
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(hue_image, circles, CV_HOUGH_GRADIENT, 1, hue_image.rows / 8, 100, 20, 0, 0);
    for (size_t i = 0; i < circles.size(); i++)
    {
        /// Circle center
        cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), 3, cv::Scalar(0, 255, 0), -1, 8, 0);
        /// Circle outline
        cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), circles[i][2], cv::Scalar(0, 0, 255), 3, 8, 0);
    }
    cv::Mat newI;
    cv::cvtColor(hsv, newI, cv::COLOR_HSV2BGR);
    return newI;
}
Is there a huge performance issue here that I can do anything about?
If you are sure that the process_image function is what is causing the bottleneck in your program, but you can't modify it, then there's not really a lot you can do. If that function takes longer to execute than the duration of a video frame, then you will never get what you need.
How about reducing the quality of the video capture, or reducing its size? At the moment I can see you have it set to 1280×720. If the process_image function has less data to work with, it should execute faster.
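One possible starting point for the threading idea from the question (not part of the original answer): grab frames in the main loop, hand the newest one to a single worker thread, and always display the most recent processed result. Frames that arrive while the worker is busy are simply dropped, so the display stays smooth even when process_image is slow. This is a sketch under the assumption that process_image is the function shown above, not a drop-in solution:

#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>
#include <opencv2/opencv.hpp>

cv::Mat process_image(cv::Mat img); // assumed: the slow function from the question

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    std::mutex mtx;
    cv::Mat latest_input, latest_output;
    std::atomic<bool> running(true);

    // Worker: repeatedly takes the newest frame and processes it off the UI loop.
    std::thread worker([&] {
        cv::Mat in, out;
        while (running) {
            {
                std::lock_guard<std::mutex> lock(mtx);
                if (!latest_input.empty()) {
                    latest_input.copyTo(in);
                    latest_input.release();
                }
            }
            if (in.empty()) {
                std::this_thread::sleep_for(std::chrono::milliseconds(1));
                continue;
            }
            out = process_image(in);
            in.release();
            std::lock_guard<std::mutex> lock(mtx);
            out.copyTo(latest_output);
        }
    });

    for (;;) {
        cv::Mat frame, shown;
        cap >> frame;
        if (frame.empty()) break;
        {
            std::lock_guard<std::mutex> lock(mtx);
            frame.copyTo(latest_input);   // overwrite: the worker always gets the newest frame
            latest_output.copyTo(shown);
        }
        cv::imshow("Current Capture", shown.empty() ? frame : shown);
        if (cv::waitKey(1) >= 0) break;
    }
    running = false;
    worker.join();
    return 0;
}

The display then runs at camera rate, while the detections update at whatever rate process_image can sustain.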

Find Color inside a shape C++ OpenCV

I have code that finds the contours of a video and tracks the shape that I choose. In this code I am searching for triangles and rectangles by looking at contours with 3 or 4 vertices.
I need help with 2 questions:
1- Using this method, how can I detect circles?
2- How can I check the color? (The shape detection is done, so in my "if" I need to verify the color too, but how? If I, for example, want to find a red triangle.)
Thank you so much
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp""
#include "opencv2/imgproc/imgproc.hpp"
#include <opencv\cv.h>
#include <math.h>
using namespace cv;
using namespace std;
int iLastX = -1;
int iLastY = -1;
IplImage* imgTracking = 0;
int lastX1 = -1;
int lastY1 = -1;
int lastX2 = -1;
int lastY2 = -1;
void trackObject(IplImage* imgThresh){
CvSeq* contour; //hold the pointer to a contour
CvSeq* result; //hold sequence of points of a contour
CvMemStorage *storage = cvCreateMemStorage(0); //storage area for all contours
//finding all contours in the image
cvFindContours(imgThresh, storage, &contour, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
//iterating through each contour
while (contour)
{
//obtain a sequence of points of the countour, pointed by the variable 'countour'
result = cvApproxPoly(contour, sizeof(CvContour), storage, CV_POLY_APPROX_DP, cvContourPerimeter(contour)*0.02, 0);
//if there are 3 vertices in the contour and the area of the triangle is more than 100 pixels
if (result->total == 3 && fabs(cvContourArea(result, CV_WHOLE_SEQ))>100)
{
//iterating through each point
CvPoint *pt[3];
for (int i = 0; i < 3; i++){
pt[i] = (CvPoint*)cvGetSeqElem(result, i);
}
//drawing lines around the triangle
cvLine(imgTracking, *pt[0], *pt[1], cvScalar(255, 0, 0), 4);
cvLine(imgTracking, *pt[1], *pt[2], cvScalar(255, 0, 0), 4);
cvLine(imgTracking, *pt[2], *pt[0], cvScalar(255, 0, 0), 4);
}
else if (result->total == 4 && fabs(cvContourArea(result, CV_WHOLE_SEQ))>100)
{
//iterating through each point
CvPoint *pt[4];
for (int i = 0; i < 4; i++){
pt[i] = (CvPoint*)cvGetSeqElem(result, i);
}
//drawing lines around the quadrilateral
cvLine(imgTracking, *pt[0], *pt[1], cvScalar(0, 255, 0), 4);
cvLine(imgTracking, *pt[1], *pt[2], cvScalar(0, 255, 0), 4);
cvLine(imgTracking, *pt[2], *pt[3], cvScalar(0, 255, 0), 4);
cvLine(imgTracking, *pt[3], *pt[0], cvScalar(0, 255, 0), 4);
}
else if (CIRCLE???)
{
}
//obtain the next contour
contour = contour->h_next;
}
cvReleaseMemStorage(&storage);
}
int main(){
//load the video file to the memory
CvCapture *capture = cvCaptureFromAVI("F:/TCC/b2.avi");
if (!capture){
printf("Capture failure\n");
return -1;
}
IplImage* frame = 0;
frame = cvQueryFrame(capture);
if (!frame) return -1;
//create a blank image and assigned to 'imgTracking' which has the same size of original video
imgTracking = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 3);
cvZero(imgTracking); //covert the image, 'imgTracking' to black
cvNamedWindow("Video");
//iterate through each frames of the video
while (true){
cvSet(imgTracking, cvScalar(0, 0, 0));
frame = cvQueryFrame(capture);
if (!frame) break;
frame = cvCloneImage(frame);
//smooth the original image using Gaussian kernel
cvSmooth(frame, frame, CV_GAUSSIAN, 3, 3);
//converting the original image into grayscale
IplImage* imgGrayScale = cvCreateImage(cvGetSize(frame), 8, 1);
cvCvtColor(frame, imgGrayScale, CV_BGR2GRAY);
//thresholding the grayscale image to get better results
cvThreshold(imgGrayScale, imgGrayScale, 100, 255, CV_THRESH_BINARY_INV);
//track the possition of the ball
trackObject(imgGrayScale);
// Add the tracking image and the frame
cvAdd(frame, imgTracking, frame);
cvShowImage("Video", frame);
//Clean up used images
cvReleaseImage(&imgGrayScale);
cvReleaseImage(&frame);
//Wait 10mS
int c = cvWaitKey(10);
//If 'ESC' is pressed, break the loop
if ((char)c == 27) break;
}
cvDestroyAllWindows();
cvReleaseImage(&imgTracking);
cvReleaseCapture(&capture);
return 0;
}
AFAIK, this will be a relatively complex project, not answerable in a single question, but here are my 2 cents. I'll try to sketch the big picture that comes to my mind right now and hope it helps you somehow.
Basically, the process can be decoupled into:
1. Thresholding the image to get promising pixels out. As you say, you need to specify the color, so you can use that to set the threshold, i.e.: if you're looking for red objects, you can use a threshold that keeps a high R channel level and low levels on the other channels (see the color-check sketch at the end of this answer).
2. Once you have promising figures, you need a labeling algorithm to separate them. You can use this OpenCV addon library for that. This way, you will have all the figures you found separated.
3. Now comes the hardest part, IMHO. You can use a classifier to classify the figures (there are a lot of docs on OpenCV classifiers out there, like this one). When you use a classifier for this task, you compare an input shape (in this case, each figure you labeled previously) with several classes of shapes (triangles and circles in your case) and decide which class fits your input shape best. The comparison is what the classifier does for you. Having said that, you need two things:
3.1. Define what properties you'll extract from the shapes in order to compare them. You can use Hu moments for that.
3.2. A dataset extracted from a training set of figures whose classes you already know. That is, you basically take a bunch of well-known triangles, circles, squares, and whatever shapes come to mind, and extract the properties from step 3.1 out of them.
This way, to check which class a figure fits, you feed the classifier the trained dataset of properties of well-known classes plus the properties (e.g. Hu moments) of an input shape (the one you got from labeling).
Note that when using a classifier, if you detect a shape that you didn't provide data for in the training dataset, the classifier will try to fit it as best it can to the classes you did provide. For example, if you built the training dataset from triangles and squares, then when you provide a circle shape, the classifier will tell you it's a square or a triangle.
Sorry for not providing more details, but I'm sure you can find info on the net on how to use a classifier, and check the OpenCV docs. Maybe come back later with more specific issues, but AFAIK, this is what I can tell you.
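To make question 2 from the original post concrete, here is a minimal sketch of the color check from step 1 above, written with the C++ API rather than the question's C API. Given a contour detected on the thresholded image, it builds a mask from the contour and measures how much of the original BGR frame inside that mask is red. The HSV bounds are typical textbook values for red, not calibrated ones:

#include <opencv2/opencv.hpp>

// Sketch only: returns true if most pixels inside `contour` are red in `bgrFrame`.
bool contourIsRed(const cv::Mat& bgrFrame, const std::vector<cv::Point>& contour) {
    cv::Mat mask = cv::Mat::zeros(bgrFrame.size(), CV_8UC1);
    std::vector<std::vector<cv::Point> > wrapped(1, contour);
    cv::drawContours(mask, wrapped, 0, cv::Scalar(255), CV_FILLED);

    cv::Mat hsv, redLo, redHi;
    cv::cvtColor(bgrFrame, hsv, CV_BGR2HSV);
    // Red wraps around hue 0, so test two hue ranges and combine them.
    cv::inRange(hsv, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), redLo);
    cv::inRange(hsv, cv::Scalar(160, 100, 100), cv::Scalar(180, 255, 255), redHi);
    cv::Mat red = (redLo | redHi) & mask;

    // Call the shape red if most of its pixels pass the color test.
    return cv::countNonZero(red) > 0.5 * cv::countNonZero(mask);
}

For question 1 (circles), one simple option in the same spirit as the vertex counting is to compute the circularity 4*pi*area/perimeter^2 of each contour and treat values close to 1 as circles.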

Derivatives in OpenCV

I'm writing a program using OpenCV that does text detection and extraction.
I'm using the Sobel derivative in order to do edge detection and have gotten the following result:
But I wish to get the following result:
(I apologize for the blurry image.)
The problem I'm having is that the "blank areas" inside the edges "confuse" the algorithm I'm using, so when the algorithm detects the "blank part" separating two lines from the lines themselves, it gets confused and starts running into the letters themselves instead of keeping between two lines. This error, I believe, would be solved by achieving the second result.
Does anyone know what changes I need to make? In the Sobel derivative? Maybe use a different derivative?
Code:
Mat ProfileSeamTextLineExtractor::computeDerivative(){
    Mat img = _image;
    Mat gradiant_mat;
    int scale = 2;
    int delta = 0;
    int ddepth = CV_16S;
    GaussianBlur(img, img, Size(3, 3), 0, 0, BORDER_DEFAULT);
    Mat grad_x, grad_y;
    Mat abs_grad_x, abs_grad_y;
    Sobel(img, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(grad_x, abs_grad_x);
    Sobel(img, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(grad_y, abs_grad_y);
    /// Total gradient (approximate)
    addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, gradiant_mat);
    return gradiant_mat;
}
Regards,
Try using the second Sobel derivative, then add, normalize (this may do the same as addWeighted), and threshold optimally. I had results similar to yours with different threshold values.
Here's an example:
cv::Mat gray, result;
cvtColor(image, gray, CV_BGR2GRAY);
cv::medianBlur(gray, gray, 3);
cv::Mat sobel_x, sobel_y;
cv::Sobel(gray, sobel_x, CV_32FC1, 2, 0, 5);
cv::Sobel(gray, sobel_y, CV_32FC1, 0, 2, 5);
cv::Mat sum = sobel_x + sobel_y;
cv::normalize(sum, result, 0, 255, CV_MINMAX, CV_8UC1);
// Determine optimal threshold value using THRESH_OTSU.
// This didn't give me optimal results, but was a good starting point.
cv::Mat temp, final;
double threshold = cv::threshold(result, temp, 0, 255, CV_THRESH_BINARY + CV_THRESH_OTSU);
cv::threshold(result, final, threshold * 0.9, 255, CV_THRESH_BINARY);
I was able to clearly extract both light text on a dark background, and dark text on a light background.
If you need the final image to consistently be white background with black text, you can do this:
cv::Scalar avgPixelIntensity = cv::mean(final);
if (avgPixelIntensity[0] < 127.0)
    cv::bitwise_not(final, final);
I tried a lot of different text extraction methods and couldn't find any that worked across the board, but this seems to. This took a lot of trial and error to figure out, so I hope this helps.
I don't really understand what your final aim is. Do you eventually want a nice, filled-in version of the text so you can recognise the characters? I can give that a shot if that's what you are looking for.
This is what I did while trying to remove inner holes:
For this one I didn't bother:
It fails at the edges where the text is cut off.
Obviously, I had to work with the image that had already gone through some processing. I might be able to give you more help if I had the original and produce a better output. You might not even need to use derivatives at all if the background is clean enough.
Here is the code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

void printInnerContours (int contourPos, Mat &filled, vector<vector<Point2i > > &contours, vector<Vec4i> &hierarchy, int area);

int main() {
    int areaThresh;
    vector<vector<Point2i > > contours;
    vector<Vec4i> hierarchy;
    Mat text = imread ("../wHWHA.jpg", 0); // read as greyscale
    threshold (text, text, 50, 255, THRESH_BINARY);
    imwrite ("../text1.jpg", text);
    areaThresh = (0.01 * text.rows * text.cols) / 100;
    findContours (text, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE);
    Mat filled = Mat::zeros(text.rows, text.cols, CV_8U);
    cout << contours.size() << endl;
    for (int i = 0; i < contours.size(); i++) {
        int area = contourArea(contours[i]);
        if (area > areaThresh) {
            if ((hierarchy[i][2] != -1) && (hierarchy[i][3] == -1)) {
                drawContours (filled, contours, i, 255, -1);
                if (hierarchy[i][2] != -1) {
                    printInnerContours (hierarchy[i][2], filled, contours, hierarchy, area);
                }
            }
        }
    }
    imwrite("../output.jpg", filled);
    return 0;
}

void printInnerContours (int contourPos, Mat &filled, vector<vector<Point2i > > &contours, vector<Vec4i> &hierarchy, int area) {
    int areaFrac = 5;
    if (((contourArea (contours[contourPos]) * 100) / area) < areaFrac) {
        //drawContours (filled, contours, contourPos, 0, -1);
    }
    if (hierarchy[contourPos][2] != -1) {
        printInnerContours (hierarchy[contourPos][2], filled, contours, hierarchy, area);
    }
    if (hierarchy[contourPos][0] != -1) {
        printInnerContours (hierarchy[contourPos][0], filled, contours, hierarchy, area);
    }
}

Detect triangles and rectangles from image with OpenCV

I have this image:
And I want to extract the triangles and rectangles from the image. I have 2 algorithms, one for triangles and another for rectangles, in the code below, but they are very similar. The problem is that this way I can only extract the brightest triangle. Can anyone help me, please?
IplImage* DetectAndDrawTriang(IplImage* img){
    CvSeq* contours;
    CvSeq* result;
    CvMemStorage *storage = cvCreateMemStorage(0);
    int d = 30;
    IplImage* ret = cvCreateImage(cvGetSize(img), 8, 3);
    IplImage* temp = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    cvSet(ret, cvScalar(0, 0, 0));
    cvCvtColor(img, temp, CV_BGR2GRAY);
    cvThreshold(temp, temp, 180, 255, CV_THRESH_BINARY);
    //cvSmooth(temp, temp, CV_GAUSSIAN, 9, 9, 0, 0);
    cvNamedWindow("thre");
    cvShowImage("thre", temp);
    cvFindContours(temp, storage, &contours, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    while (contours)
    {
        result = cvApproxPoly(contours, sizeof(CvContour), storage, CV_POLY_APPROX_DP, cvContourPerimeter(contours)*0.1, 0);
        if (result->total == 3)
        {
            CvPoint *pt[3];
            for (int i = 0; i < 3; i++)
                pt[i] = (CvPoint*)cvGetSeqElem(result, i);
            // only accept triangles whose sides are all at least d pixels long
            int side02 = (int)sqrt((pt[0]->x - pt[2]->x)*(pt[0]->x - pt[2]->x) + (pt[0]->y - pt[2]->y)*(pt[0]->y - pt[2]->y));
            int side01 = (int)sqrt((pt[0]->x - pt[1]->x)*(pt[0]->x - pt[1]->x) + (pt[0]->y - pt[1]->y)*(pt[0]->y - pt[1]->y));
            int side12 = (int)sqrt((pt[1]->x - pt[2]->x)*(pt[1]->x - pt[2]->x) + (pt[1]->y - pt[2]->y)*(pt[1]->y - pt[2]->y));
            if (side02 >= d && side01 >= d && side12 >= d)
            {
                cvLine(ret, *pt[0], *pt[1], cvScalar(255, 255, 255));
                cvLine(ret, *pt[1], *pt[2], cvScalar(255, 255, 255));
                cvLine(ret, *pt[2], *pt[0], cvScalar(255, 255, 255));
            }
        }
        contours = contours->h_next;
    }
    cvReleaseImage(&temp);
    cvReleaseMemStorage(&storage);
    return ret;
}
One idea I can think of is using the cv::matchShapes function (I suggest using the C++ API with Mat instead of IplImage). matchShapes takes a Mat of the object you want to detect and a Mat of the object you want to compare it against. So, in your case, you can make a Mat of the contours of a triangle and a square and compare those with each contour in the image you are searching through.
You may also consider simply doing template matching, since your objects are static. Check out cv::matchTemplate; it's pretty much the same idea as the above paragraph.
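A minimal sketch of the matchShapes idea, assuming templateTriangle is a binary image of a reference triangle you supply and contours holds the contours found in the scene; lower scores mean closer matches, and the 0.1 cutoff is a guess to tune:

std::vector<std::vector<cv::Point> > tmplContours;
cv::findContours(templateTriangle, tmplContours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

for (size_t i = 0; i < contours.size(); i++) {
    double score = cv::matchShapes(tmplContours[0], contours[i], CV_CONTOURS_MATCH_I1, 0);
    if (score < 0.1) {
        // contours[i] is triangle-like; handle it here
    }
}

matchShapes compares Hu moments under the hood, so it is reasonably robust to rotation and scale.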
ApproxPoly is a good solution if you can be sure that your contours are complete.
If the contour is a square but doesn't close, then after approximation it will be a line with four segments and 3 corners.
Another solution is to fit a box around the contour points (there is a function to do this) and check the width/height ratio. You can then test the individual line segments in the contour list to see if they match the box sides.
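A minimal sketch of that box-fitting idea, using cv::minAreaRect from the C++ API (the question's code uses the C API); the ratio tolerance is a guess to tune:

// Sketch only: a near-1 width/height ratio of the minimum-area box
// suggests a square-like contour. `contourPoints` is one contour as a
// std::vector<cv::Point>.
cv::RotatedRect box = cv::minAreaRect(contourPoints);
float w = box.size.width, h = box.size.height;
float ratio = (h > 0.0f) ? w / h : 0.0f;
bool squareLike = (ratio > 0.8f && ratio < 1.25f);

An equilateral triangle, by contrast, fits a box with a width/height ratio of about 1.15 but fills only half of it, so combining the ratio with the filled-area fraction helps separate the two shapes.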