Find color inside a shape C++ OpenCV - c++

I have code that finds the contours in a video and tracks the shapes I choose. In this code I am searching for triangles and rectangles by looking for contours with 3 or 4 vertices.
I need help with 2 questions:
1- Using this method, how can I detect circles?
2- How can I check the color? (The shape detection is done, so in my "if" I need to verify the color too, but how? For example, if I want to find a red triangle.)
Thank you so much
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp""
#include "opencv2/imgproc/imgproc.hpp"
#include <opencv/cv.h>
#include <math.h>
using namespace cv;
using namespace std;
int iLastX = -1;
int iLastY = -1;
IplImage* imgTracking = 0;
int lastX1 = -1;
int lastY1 = -1;
int lastX2 = -1;
int lastY2 = -1;
void trackObject(IplImage* imgThresh){
CvSeq* contour; //hold the pointer to a contour
CvSeq* result; //hold sequence of points of a contour
CvMemStorage *storage = cvCreateMemStorage(0); //storage area for all contours
//finding all contours in the image
cvFindContours(imgThresh, storage, &contour, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
//iterating through each contour
while (contour)
{
//obtain a sequence of points of the contour, pointed to by the variable 'contour'
result = cvApproxPoly(contour, sizeof(CvContour), storage, CV_POLY_APPROX_DP, cvContourPerimeter(contour)*0.02, 0);
//if there are 3 vertices in the contour and the area of the triangle is more than 100 pixels
if (result->total == 3 && fabs(cvContourArea(result, CV_WHOLE_SEQ))>100)
{
//iterating through each point
CvPoint *pt[3];
for (int i = 0; i < 3; i++){
pt[i] = (CvPoint*)cvGetSeqElem(result, i);
}
//drawing lines around the triangle
cvLine(imgTracking, *pt[0], *pt[1], cvScalar(255, 0, 0), 4);
cvLine(imgTracking, *pt[1], *pt[2], cvScalar(255, 0, 0), 4);
cvLine(imgTracking, *pt[2], *pt[0], cvScalar(255, 0, 0), 4);
}
else if (result->total == 4 && fabs(cvContourArea(result, CV_WHOLE_SEQ))>100)
{
//iterating through each point
CvPoint *pt[4];
for (int i = 0; i < 4; i++){
pt[i] = (CvPoint*)cvGetSeqElem(result, i);
}
//drawing lines around the quadrilateral
cvLine(imgTracking, *pt[0], *pt[1], cvScalar(0, 255, 0), 4);
cvLine(imgTracking, *pt[1], *pt[2], cvScalar(0, 255, 0), 4);
cvLine(imgTracking, *pt[2], *pt[3], cvScalar(0, 255, 0), 4);
cvLine(imgTracking, *pt[3], *pt[0], cvScalar(0, 255, 0), 4);
}
else if (CIRCLE???)
{
}
//obtain the next contour
contour = contour->h_next;
}
cvReleaseMemStorage(&storage);
}
int main(){
//load the video file to the memory
CvCapture *capture = cvCaptureFromAVI("F:/TCC/b2.avi");
if (!capture){
printf("Capture failure\n");
return -1;
}
IplImage* frame = 0;
frame = cvQueryFrame(capture);
if (!frame) return -1;
//create a blank image, 'imgTracking', with the same size as the original video
imgTracking = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 3);
cvZero(imgTracking); //convert the image 'imgTracking' to black
cvNamedWindow("Video");
//iterate through each frames of the video
while (true){
cvSet(imgTracking, cvScalar(0, 0, 0));
frame = cvQueryFrame(capture);
if (!frame) break;
frame = cvCloneImage(frame);
//smooth the original image using Gaussian kernel
cvSmooth(frame, frame, CV_GAUSSIAN, 3, 3);
//converting the original image into grayscale
IplImage* imgGrayScale = cvCreateImage(cvGetSize(frame), 8, 1);
cvCvtColor(frame, imgGrayScale, CV_BGR2GRAY);
//thresholding the grayscale image to get better results
cvThreshold(imgGrayScale, imgGrayScale, 100, 255, CV_THRESH_BINARY_INV);
//track the position of the shapes
trackObject(imgGrayScale);
// Add the tracking image and the frame
cvAdd(frame, imgTracking, frame);
cvShowImage("Video", frame);
//Clean up used images
cvReleaseImage(&imgGrayScale);
cvReleaseImage(&frame);
//Wait 10mS
int c = cvWaitKey(10);
//If 'ESC' is pressed, break the loop
if ((char)c == 27) break;
}
cvDestroyAllWindows();
cvReleaseImage(&imgTracking);
cvReleaseCapture(&capture);
return 0;
}

Afaik, this will be a relatively complex project, not answerable in a single question, but here are my 2 cents. I'll try to sketch the big picture that comes to mind and hope it helps you somehow.
Basically, the process can be decoupled into:
Thresholding the image to get promising pixels out. As you say, you need to specify the color, so you can use that to set the threshold, e.g. if you're looking for red objects, you can use a threshold that requires a high R channel level and low levels on the others.
Once you have promising pixels out, you need a labeling algorithm to separate all the figures. You can use this OpenCV addon library for that. This way, you will have all the figures you found, separated.
Now comes the hardest part imho. You can use a classifier to classify figures (there are a lot of docs on OpenCV classifiers out there, like this one). When you use a classifier for this task, you're comparing an input shape (in this case each figure you labeled previously) with several classes of shapes (triangles and circles in your case), and deciding which class best fits your input shape. The comparison is what the classifier does for you. That said, you need two things:
Define what properties you'll extract from the shapes to be able to compare them. You can use Hu moments for that (see the small sketch after this list).
A dataset extracted from a training set of figures whose classes you already know. That is, you basically take a bunch of well-known triangles, circles, squares and whatever shapes come to mind, and extract the properties from the previous point out of them.
This way, to check which class a figure fits, you feed the classifier the trained dataset of properties of well-known classes, plus the properties (e.g. Hu moments) of an input shape (which you got from the labeling step).
Note that when using a classifier, if you detect a shape that you didn't provide data for in the training dataset, the classifier will try to fit it as best it can to the classes you did provide. For example, if you made the training dataset for triangles and squares, when you provide a circle shape the classifier will tell you it's a square or a triangle.
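For the Hu-moments step, a minimal sketch (C++ API; the helper name huFeatures is mine):

#include <opencv2/opencv.hpp>
#include <vector>

// Turn one labeled contour into a 7-element feature vector that is
// invariant to translation, scale and rotation.
std::vector<double> huFeatures(const std::vector<cv::Point>& contour)
{
    cv::Moments m = cv::moments(contour);
    double hu[7];
    cv::HuMoments(m, hu);
    return std::vector<double>(hu, hu + 7);
}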
Sorry for not providing details, but I'm sure you can find info on the net on how to use a classifier, and check the OpenCV docs. Maybe come back later with more specific issues, but afaik, this is what I can tell you.
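To make the original two questions concrete, here is a minimal sketch using the C++ API instead of the C API above; all HSV bounds and thresholds are guesses you would tune for your video. It verifies the color first by masking red pixels, so every contour found is already red, and it flags circles by vertex count plus circularity (cv::HoughCircles would be an alternative for circle detection):

#include <opencv2/opencv.hpp>
#include <vector>

// Mask red pixels first, so every contour found is already red, then
// classify each contour by its approximated vertex count; circles get
// many vertices and a circularity (4*pi*area/perimeter^2) close to 1.
void findRedShapes(const cv::Mat& frame)
{
    cv::Mat hsv, mask1, mask2, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    // Red wraps around hue 0, so combine two hue ranges (bounds are guesses).
    cv::inRange(hsv, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), mask1);
    cv::inRange(hsv, cv::Scalar(170, 100, 100), cv::Scalar(180, 255, 255), mask2);
    mask = mask1 | mask2;

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        if (area < 100) continue; // same size filter as in the question

        std::vector<cv::Point> approx;
        double perimeter = cv::arcLength(contours[i], true);
        cv::approxPolyDP(contours[i], approx, perimeter * 0.02, true);
        double circularity = 4 * CV_PI * area / (perimeter * perimeter);

        if (approx.size() == 3) { /* red triangle */ }
        else if (approx.size() == 4) { /* red quadrilateral */ }
        else if (approx.size() > 6 && circularity > 0.8) { /* red circle */ }
    }
}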

Related

OpenCV3 Finger Detection

I am quite new to OpenCV and have just played around with it for a while, doing basic things like thresholding images, etc. I'm using Visual Studio 2015 in C++ with OpenCV3. I'm trying to detect the number of fingers of my hand that are being held up using the camera. For example, if I hold up 4 fingers, I would like the program to tell me that 4 fingers are detected. So far I have been able to detect the edges of objects, such as my entire hand, in the camera using contours. Here is the code:
#include "opencv2\opencv.hpp"
using namespace cv;
void on_trackbar(int, void*) {
// Dummy function
}
int main(int argc, char** argv) {
Mat frame;
Mat grayFrame;
Mat hsvFrame;
Mat thresholdFrame;
VideoCapture capture;
//Trackbar variables (H,S,V)
int H_MIN = 0;
int H_MAX = 180;
int S_MIN = 0;
int S_MAX = 255;
int V_MIN = 0;
int V_MAX = 255;
namedWindow("trackbar", 0);
//create memory to store trackbar name on window
char TrackbarName[50];
sprintf(TrackbarName, "H_MIN");
sprintf(TrackbarName, "H_MAX");
sprintf(TrackbarName, "S_MIN");
sprintf(TrackbarName, "S_MAX");
sprintf(TrackbarName, "V_MIN");
sprintf(TrackbarName, "V_MAX");
createTrackbar("H_MIN", "trackbar", &H_MIN, H_MAX, on_trackbar);
createTrackbar("H_MAX", "trackbar", &H_MAX, H_MAX, on_trackbar);
createTrackbar("S_MIN", "trackbar", &S_MIN, S_MAX, on_trackbar);
createTrackbar("S_MAX", "trackbar", &S_MAX, S_MAX, on_trackbar);
createTrackbar("V_MIN", "trackbar", &V_MIN, V_MAX, on_trackbar);
createTrackbar("V_MAX", "trackbar", &V_MAX, V_MAX, on_trackbar);
capture.open(0);
std::vector<std::vector<cv::Point> > contours;
while (true){
capture >> frame;
waitKey(10);
cvtColor(frame, hsvFrame, COLOR_BGR2HSV);
//imshow("HSV", hsvFrame);
inRange(hsvFrame, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), thresholdFrame);
findContours(thresholdFrame, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
int largest_contour_area = 0;
int largest_contour_area_index = 1;
/*for (int i = 0; i < contours.size(); i++) {
double contour_area = contourArea(contours[i], false);
if (contour_area > largest_contour_area) {
largest_contour_area = contour_area;
largest_contour_area_index = i;
}
}*/
drawContours(frame, contours, -1, (0, 255, 0), 3);
putText(frame, "NO DETECTION", Point(25, 40), 2, 1, CV_RGB(255, 255, 0), 1, 8, false);
imshow("Threshold", thesholdFrame);
imshow("Camera", frame);
}
}
After searching around for a while, I realised that it might be useful to isolate the contours of the hand only. The 'for' loop that is commented out is my attempt at implementing that. However, it doesn't seem to work. I do realise that once I uncomment that section I have to change
drawContours(frame, contours, -1, (0, 255, 0), 3);
to
drawContours(frame, contours[largest_contour_area_index], -1, (0, 255, 0), 3);
This does not seem to work. I keep getting this error (when I uncomment the for loop and replace the drawContours command with the one above):
Unhandled exception at 0x00007FF8C1537788 in FingerDetection.exe: Microsoft C++ exception: cv::Exception at memory location 0x0000000C4CAFDB50.
Furthermore, if I did somehow manage to get the program to find the edges of the largest contour in the frame (i.e. the hand), how would I proceed to detect the number of fingers? I've heard things about convex hulls, etc., however I cannot find any good explanation of what those exactly are. Any clarification or advice as to what to do? Keep in mind that I am very new to OpenCV.
Your second question ("how would I proceed to detect the number of fingers") might be a bit too broad to be answered here.
But the first one (i.e. why you are getting the exception) seems to have an easy answer, which I derive from the OpenCV documentation (version 2.4) for the drawContours function I found here.
According to that, the second parameter must be an array (of contours), so you cannot simply pass a single element contours[largest_contour_area_index].
If you want to draw one contour only, you still have to pass the entire array, and then the index of the contour you want to draw as the third parameter (instead of -1):
drawContours(frame, contours, largest_contour_area_index, Scalar(0, 255, 0), 3);
(As a side note, (0, 255, 0) in your call is the C++ comma operator and evaluates to plain 0, i.e. black; wrap the color in Scalar(0, 255, 0) as above.)
As an additional note, it would be a good idea to ensure that largest_contour_area_index has a value that is smaller than the number of contours in the array. Given your current code, there could be situations where the body of the for-loop is never executed, and largest_contour_area_index, which is initialized to 1, could be out of range.
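Putting both points together, a sketch of the guarded version of your commented-out loop (same variable names; contourArea returns a double, so the accumulator is a double here):

double largest_contour_area = 0;     // contourArea returns double
int largest_contour_area_index = -1; // -1 = nothing found yet
for (int i = 0; i < (int)contours.size(); i++) {
    double contour_area = contourArea(contours[i], false);
    if (contour_area > largest_contour_area) {
        largest_contour_area = contour_area;
        largest_contour_area_index = i;
    }
}
// Only draw when at least one contour exists; Scalar(...) avoids the
// comma-operator pitfall of writing (0, 255, 0).
if (largest_contour_area_index >= 0)
    drawContours(frame, contours, largest_contour_area_index, Scalar(0, 255, 0), 3);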

Performance issues while capturing and processing a video

I'm currently working on a project where I need to display a processed live video capture. Therefore, I'm using something similar to this:
cv::VideoCapture cap(0);
if (!cap.isOpened())
return -1;
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
cv::namedWindow("Current Capture");
for (;;)
{
cv::Mat frame;
cap >> frame;
cv::Mat mirrored;
cv::flip(frame, mirrored, 1);
cv::imshow("Current Capture", process_image(mirrored));
if (cv::waitKey(30) >= 0) break;
}
The problem I have is that process_image, which performs circle detection on the image, needs some time to finish and causes the display to be more of a slideshow than a video.
My question is: how can I speed up the processing without manipulating the process_image function?
I thought about performing the image processing in another thread, but I'm not really sure how to start. Do you have any other idea than this?
PS.: I'm not expecting you to write code for me, I only need a point to start from ;)
EDIT:
OK, if there is nothing I can do about the performance while capturing, I will need to change the process_image function.
cv::Mat process_image(cv::Mat img)
{
cv::Mat hsv;
cv::medianBlur(img, img, 7);
cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
cv::Mat lower_hue_range; // lower and upper hue range in case of red color
cv::Mat upper_hue_range;
cv::inRange(hsv, cv::Scalar(LOWER_HUE1, 100, 100), cv::Scalar(UPPER_HUE1, 255, 255), lower_hue_range);
cv::inRange(hsv, cv::Scalar(LOWER_HUE2, 100, 100), cv::Scalar(UPPER_HUE2, 255, 255), upper_hue_range);
/// Combine the above two images
cv::Mat hue_image;
cv::addWeighted(lower_hue_range, 1.0, upper_hue_range, 1.0, 0.0, hue_image);
/// Reduce the noise so we avoid false circle detection
cv::GaussianBlur(hue_image, hue_image, cv::Size(13, 13), 2, 2);
/// store all found circles here
std::vector<cv::Vec3f> circles;
cv::HoughCircles(hue_image, circles, CV_HOUGH_GRADIENT, 1, hue_image.rows / 8, 100, 20, 0, 0);
for (size_t i = 0; i < circles.size(); i++)
{
/// circle center
cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), 3, cv::Scalar(0, 255, 0), -1, 8, 0);
/// circle outline
cv::circle(hsv, cv::Point(circles[i][0], circles[i][1]), circles[i][2], cv::Scalar(0, 0, 255), 3, 8, 0);
}
cv::Mat newI;
cv::cvtColor(hsv, newI, cv::COLOR_HSV2BGR);
return newI;
}
Is there a huge performance issue I can do anything about?
If you are sure that the process_image function is what is causing the bottleneck in your program, but you can't modify it, then there's not really a lot you can do. If that function takes longer to execute than the duration of a video frame, then you will never get what you need.
How about reducing the quality of the video capture, or reducing the size? At the moment I can see you have it set to 1280×720. If the process_image function has less data to work with, it should execute faster.
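If you still want to try the threading idea from the question, here is a minimal sketch, under the assumption that dropping frames is acceptable: one thread keeps grabbing, and the main thread processes only the most recent frame, so the display stays live even when process_image is slow (process_image is the question's existing function; requires C++11):

#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

cv::Mat process_image(cv::Mat img); // your existing function

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    cv::Mat shared;
    std::mutex m;
    std::atomic<bool> running(true);

    // Grabber thread: always keeps 'shared' at the newest camera frame.
    std::thread grabber([&] {
        cv::Mat frame;
        while (running) {
            cap >> frame;
            if (frame.empty()) continue;
            std::lock_guard<std::mutex> lock(m);
            frame.copyTo(shared);
        }
    });

    // Main thread: process only the most recent frame; frames that
    // arrived while process_image was busy are silently dropped.
    cv::namedWindow("Current Capture");
    for (;;) {
        cv::Mat latest;
        {
            std::lock_guard<std::mutex> lock(m);
            shared.copyTo(latest);
        }
        if (!latest.empty()) {
            cv::flip(latest, latest, 1);
            cv::imshow("Current Capture", process_image(latest));
        }
        if (cv::waitKey(1) >= 0) break;
    }
    running = false;
    grabber.join();
    return 0;
}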

Find 4 specific corner pixels and use them with warp perspective

I'm playing around with OpenCV and I want to know how you would build a simple version of a perspective-transform program. I have an image of a parallelogram, and each of its corners consists of a pixel with a specific color that appears nowhere else in the image. I want to iterate through all pixels and find these 4 pixels. Then I want to use them as corner points in a new image in order to warp the perspective of the original image. In the end I should have a zoomed, straight-on view of the square.
Point2f src[4]; //Is this the right datatype to use here?
int lineNumber=0;
//iterating through the pixels
for(int y = 0; y < image.rows; y++)
{
for(int x = 0; x < image.cols; x++)
{
Vec3b colour = image.at<Vec3b>(Point(x, y));
if(colour.val[1]==245 && colour.val[2]==111 && colour.val[0]==10) {
src[lineNumber] = Point2f(x, y); // something like Point2f(x,y), I guess
lineNumber++;
}
}
}
/* I also need to get the dst points for getPerspectiveTransform
and afterwards warpPerspective, how do I get those? Take the other
points, check the biggest distance somehow and use it as the maxlength to calculate
the rest? */
How should you use OpenCV in order to solve the problem? (I just guess I'm not doing it the "normal and clever" way.) Also, how do I do the next step, which would be using more than one pixel as a "marker" and calculating the average point in the middle of multiple points? Is there something more efficient than running through each pixel?
Something like this basically:
Starting from an image with colored circles as markers. Note that the source is a PNG image, i.e. with loss-less compression, which preserves the actual colors. If you use a lossy compression like JPEG, the colors will change a little, and you cannot segment them with an exact match, as done here.
You need to find the center of each marker.
Segment the (known) color, using inRange
Find all connected components with the given color, with findContours
Find the largest blob, here done with max_element with a lambda function, and distance. You can use a for loop for this.
Find the center of mass of the largest blob, here done with moments. You could also do this with a loop, if needed.
Add the center to your source vertices.
Your destination vertices are just the four corners of the destination image.
You can then use getPerspectiveTransform and warpPerspective to find and apply the warping.
The resulting image is a straightened 300×300 view of the region inside the markers.
Code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>
using namespace std;
using namespace cv;
int main()
{
// Load image
Mat3b img = imread("path_to_image");
// Create a black output image
Mat3b out(300,300,Vec3b(0,0,0));
// The color of your markers, in order
vector<Scalar> colors{ Scalar(0, 0, 255), Scalar(0, 255, 0), Scalar(255, 0, 0), Scalar(0, 255, 255) }; // red, green, blue, yellow
vector<Point2f> src_vertices(colors.size());
vector<Point2f> dst_vertices = { Point2f(0, 0), Point2f(0, out.rows - 1), Point2f(out.cols - 1, out.rows - 1), Point2f(out.cols - 1, 0) };
for (int idx_color = 0; idx_color < colors.size(); ++idx_color)
{
// Detect color
Mat1b mask;
inRange(img, colors[idx_color], colors[idx_color], mask);
// Find connected components
vector<vector<Point>> contours;
findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
// Find largest
int idx_largest = distance(contours.begin(), max_element(contours.begin(), contours.end(), [](const vector<Point>& lhs, const vector<Point>& rhs) {
return lhs.size() < rhs.size();
}));
// Find centroid of largest component
Moments m = moments(contours[idx_largest]);
Point2f center(m.m10 / m.m00, m.m01 / m.m00);
// Found marker center, add to source vertices
src_vertices[idx_color] = center;
}
// Find transformation
Mat M = getPerspectiveTransform(src_vertices, dst_vertices);
// Apply transformation
warpPerspective(img, out, M, out.size());
imshow("Image", img);
imshow("Warped", out);
waitKey();
return 0;
}

Image differencing: How to find minor differences between images?

I want to find how to get the difference between 2 similar grayscale images, for implementation in a system for security purposes. I want to check whether any difference has occurred between them. For object tracking, I have implemented Canny detection in the program below. I easily get the outline of structured objects, which can later be subtracted to give only the outline of the difference in the delta image. But what if there's a non-structural difference, such as smoke or fire, in the second image? I have increased the contrast for clearer edge detection, and have also modified the threshold values in the Canny function parameters, yet got no suitable results.
Also, Canny detects shadow edges too. If my two similar images were taken at different times of the day, the shadows will vary, so the edges will vary and give undesirable false alarms.
How should I work around this? Can anyone help? Thanks!
Using the C-language API of OpenCV 2.4 in Visual Studio 2010.
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include "cxcore.h"
#include <math.h>
#include <iostream>
#include <stdio.h>
using namespace cv;
using namespace std;
int main()
{
IplImage* img1 = NULL;
if ((img1 = cvLoadImage("libertyH1.jpg"))== 0)
{
printf("cvLoadImage failed\n");
}
IplImage* gray1 = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1); //contains greyscale image
CvMemStorage* storage1 = cvCreateMemStorage(0); //struct for storage
cvCvtColor(img1, gray1, CV_BGR2GRAY); //convert to greyscale
cvSmooth(gray1, gray1, CV_GAUSSIAN, 7, 7); // This is done so as to prevent a lot of false circles from being detected
IplImage* canny1 = cvCreateImage(cvGetSize(gray1),IPL_DEPTH_8U,1);
IplImage* rgbcanny1 = cvCreateImage(cvGetSize(gray1),IPL_DEPTH_8U,3);
cvCanny(gray1, canny1, 50, 100, 3); //cvCanny( const CvArr* image, CvArr* edges (output edge map), double threshold1, double threshold2, int aperture_size CV_DEFAULT(3) );
cvNamedWindow("Canny before hough");
cvShowImage("Canny before hough", canny1);
CvSeq* circles1 = cvHoughCircles(gray1, storage1, CV_HOUGH_GRADIENT, 1, gray1->height/3, 250, 100);
cvCvtColor(canny1, rgbcanny1, CV_GRAY2BGR);
cvNamedWindow("Canny after hough");
cvShowImage("Canny after hough", rgbcanny1);
for (int i = 0; i < circles1->total; i++)
{
// round the floats to an int
float* p = (float*)cvGetSeqElem(circles1, i);
cv::Point center(cvRound(p[0]), cvRound(p[1]));
int radius = cvRound(p[2]);
// draw the circle center
cvCircle(rgbcanny1, center, 3, CV_RGB(0,255,0), -1, 8, 0 );
// draw the circle outline
cvCircle(rgbcanny1, center, radius+1, CV_RGB(0,0,255), 2, 8, 0 );
printf("x: %d y: %d r: %d\n",center.x,center.y, radius);
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
IplImage* img2 = NULL;
if ((img2 = cvLoadImage("liberty_wth_obj.jpg"))== 0)
{
printf("cvLoadImage failed\n");
}
IplImage* gray2 = cvCreateImage(cvGetSize(img2), IPL_DEPTH_8U, 1);
CvMemStorage* storage = cvCreateMemStorage(0);
cvCvtColor(img2, gray2, CV_BGR2GRAY);
// This is done so as to prevent a lot of false circles from being detected
cvSmooth(gray2, gray2, CV_GAUSSIAN, 7, 7);
IplImage* canny2 = cvCreateImage(cvGetSize(img2),IPL_DEPTH_8U,1);
IplImage* rgbcanny2 = cvCreateImage(cvGetSize(img2),IPL_DEPTH_8U,3);
cvCanny(gray2, canny2, 50, 100, 3);
CvSeq* circles2 = cvHoughCircles(gray2, storage, CV_HOUGH_GRADIENT, 1, gray2->height/3, 250, 100);
cvCvtColor(canny2, rgbcanny2, CV_GRAY2BGR);
for (int i = 0; i < circles2->total; i++)
{
// round the floats to an int
float* p = (float*)cvGetSeqElem(circles2, i);
cv::Point center(cvRound(p[0]), cvRound(p[1]));
int radius = cvRound(p[2]);
// draw the circle center
cvCircle(rgbcanny2, center, 3, CV_RGB(0,255,0), -1, 8, 0 );
// draw the circle outline
cvCircle(rgbcanny2, center, radius+1, CV_RGB(0,0,255), 2, 8, 0 );
printf("x: %d y: %d r: %d\n",center.x,center.y, radius);
}
You want code help here? This is not an easy task. There are few algorithms available on the internet, or you can try to invent a new one; a lot of research is going on in this area. I have some idea of a process: you can find the edges via the Y channel of the YCbCr color system. Subtract this Y value from the blurred image's Y value, and you will get the edges. Now make an array representation: divide the image into blocks and check block against block. A block may be shifted, rotated, twisted, etc., so compare with array matching. Object tracking is difficult due to the background; take care to omit unnecessary objects.
I think the way to go could be background subtraction. It lets you cope with changes in lighting conditions.
See the Wikipedia entry for an intro. The basic idea is that you have to build a model of the scene background; then all differences are computed relative to that background.
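A minimal sketch of that idea with OpenCV's built-in MOG2 model (written against the 2.4 C++ API mentioned in the question; in 3.x you would create it via createBackgroundSubtractorMOG2() and call apply(); the video path is a placeholder):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("your_video.avi"); // placeholder input
    cv::BackgroundSubtractorMOG2 bg;        // learns the background over time
    cv::Mat frame, fgmask;
    while (cap.read(frame))
    {
        bg(frame, fgmask); // white where the frame differs from the model
        // morphological clean-up so small noise is not reported as change
        cv::erode(fgmask, fgmask, cv::Mat());
        cv::dilate(fgmask, fgmask, cv::Mat());
        cv::imshow("foreground", fgmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}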
I have done some analysis on image differencing, but the code was written for Java. Kindly look at the link below; it may help:
How to find rectangle of difference between two images
Cheers !
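For the plain two-image case in the question, the core differencing step is only a few lines. A sketch, reusing the file names from the question; the threshold value is a guess to tune, and shadows or lighting drift will still leak through, which is why the background-subtraction suggestion above is more robust:

#include <opencv2/opencv.hpp>

// Absolute per-pixel difference of two aligned grayscale images,
// thresholded to keep only significant changes.
int main()
{
    cv::Mat a = cv::imread("libertyH1.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("liberty_wth_obj.jpg", cv::IMREAD_GRAYSCALE);
    if (a.empty() || b.empty()) return -1;

    cv::Mat diff, changed;
    cv::absdiff(a, b, diff);
    cv::threshold(diff, changed, 25, 255, cv::THRESH_BINARY);

    cv::imshow("difference", changed);
    cv::waitKey();
    return 0;
}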

Detect triangles and rectangles in an image with OpenCV

I have this image:
I want to take the triangles and rectangles out of the image. I have 2 algorithms, one for triangles and another for rectangles, in the code below, but they are very similar. The problem is that this way I can only extract the brightest triangle. Can anyone help me, please?
IplImage* DetectAndDrawTriang(IplImage* img){
CvSeq* contours;
CvSeq* result;
CvMemStorage *storage = cvCreateMemStorage(0);
int d=30;
IplImage* ret = cvCreateImage(cvGetSize(img), 8, 3);
IplImage* temp = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
cvSet(ret,cvScalar(0,0,0));
cvCvtColor(img, temp, CV_BGR2GRAY);
cvThreshold( temp, temp, 180, 255, CV_THRESH_BINARY );
//cvSmooth(temp, temp, CV_GAUSSIAN, 9, 9, 0,0);
cvNamedWindow("thre");
cvShowImage("thre", temp);
cvFindContours(temp, storage, &contours, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
while(contours)
{
result = cvApproxPoly(contours, sizeof(CvContour), storage, CV_POLY_APPROX_DP, cvContourPerimeter(contours)*0.1, 0);
if(result->total==3)
{
CvPoint *pt[3];
for(int i=0;i<3;i++)
pt[i] = (CvPoint*)cvGetSeqElem(result, i);
if((int)sqrt((pt[0]->x-pt[2]->x)*(pt[0]->x-pt[2]->x)+(pt[0]->y-pt[2]->y)*(pt[0]->y-pt[2]->y))>=d &&
(int)sqrt((pt[0]->x-pt[1]->x)*(pt[0]->x-pt[1]->x)+(pt[0]->y-pt[1]->y)*(pt[0]->y-pt[1]->y))>=d &&
(int)sqrt((pt[1]->x-pt[2]->x)*(pt[1]->x-pt[2]->x)+(pt[1]->y-pt[2]->y)*(pt[1]->y-pt[2]->y))>=d)
{
cvLine(ret, *pt[0], *pt[1], cvScalar(255,255,255));
cvLine(ret, *pt[1], *pt[2], cvScalar(255,255,255));
cvLine(ret, *pt[2], *pt[0], cvScalar(255,255,255));
}
}
contours = contours->h_next;
}
cvReleaseImage(&temp);
cvReleaseMemStorage(&storage);
return ret;
}
One idea I can think of is using the cv::matchShapes function (I suggest using the C++ API with Mat instead of IplImage). matchShapes takes a Mat of the object you want to detect and a Mat of the object you want to compare it against. So in your case you can make a Mat of the contours of a triangle and a square, and compare those with each contour in the image you are searching through.
You may also consider simply doing template matching, since your objects are static. Check out cv::matchTemplate; it's pretty much the same idea as the above paragraph.
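A rough sketch of the matchShapes idea (file names and the 0.1 cutoff are placeholders; matchShapes compares Hu moments internally, so lower scores mean more similar shapes):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // The reference image would contain one well-formed triangle
    // on a dark background.
    cv::Mat scene = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);
    cv::Mat reference = cv::imread("triangle.png", cv::IMREAD_GRAYSCALE);
    if (scene.empty() || reference.empty()) return -1;

    cv::threshold(scene, scene, 180, 255, cv::THRESH_BINARY);
    cv::threshold(reference, reference, 180, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point> > sceneContours, refContours;
    cv::findContours(scene, sceneContours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);
    cv::findContours(reference, refContours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);
    if (refContours.empty()) return -1;

    for (size_t i = 0; i < sceneContours.size(); i++)
    {
        // Hu-moment based similarity: 0 = identical shapes.
        double score = cv::matchShapes(refContours[0], sceneContours[i],
                                       CV_CONTOURS_MATCH_I1, 0);
        if (score < 0.1) { /* sceneContours[i] is triangle-like */ }
    }
    return 0;
}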
ApproxPoly is a good solution if you can be sure that your contours are complete.
If the contour is a square but doesn't close, then after approximating it will be a line with four segments and three corners.
Another solution is to fit a box around the contour points (there is a function to do this) and check the width/height ratio. You can then test the individual line segments in the contour list to see if they match the box sides.
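A small sketch of that box-fitting idea (cv::minAreaRect is one such fitting function, cv::boundingRect the upright variant; the ratio and fill bounds are guesses to tune):

#include <opencv2/opencv.hpp>
#include <vector>

// Fit a rotated box around the contour points and check the
// width/height ratio, plus how much of the box the contour fills.
bool looksLikeSquare(const std::vector<cv::Point>& contour)
{
    cv::RotatedRect box = cv::minAreaRect(contour);
    if (box.size.width == 0 || box.size.height == 0) return false;
    double ratio = box.size.width / box.size.height;
    double boxArea = box.size.width * box.size.height;
    // square-ish box that the contour actually fills
    return ratio > 0.8 && ratio < 1.25 &&
           cv::contourArea(contour) > 0.8 * boxArea;
}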