Plot Centroid of Specific Blob in C++ OpenCV

I'm trying to plot the centroid of a specific blob detected using contour techniques. I don't wish to loop through all the blobs detected in an image; I only want to plot the centroid of one (i.e. contours[2]). Ideally I'd like to accomplish this using the most efficient / fastest method.
Here's my code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/opencv.hpp"
#include <iostream>
#define _USE_MATH_DEFINES
#include <math.h>
using namespace cv;
using namespace std;
int main(int argc, const char** argv)
{
cv::Mat src = cv::imread("frame-1.jpg");
if (src.empty())
return -1;
cv::Mat gray;
cv::cvtColor(~src, gray, CV_BGR2GRAY);
cv::threshold(gray, gray, 160, 255, cv::THRESH_BINARY);
// Find all contours
std::vector<std::vector<cv::Point> > contours;
cv::findContours(gray.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// Fill holes in each contour
cv::drawContours(gray, contours, -1, CV_RGB(255, 255, 255), -1);
cout << contours.size();
double avg_x(0), avg_y(0); // average of contour points
for (int j = 0; j < contours[2].size(); ++j)
{
avg_x += contours[2][j].x;
avg_y += contours[2][j].y;
}
avg_x /= contours[2].size();
avg_y /= contours[2].size();
cout << avg_x << " " << avg_y << endl;
cv::circle(gray, {avg_x, avg_y}, 5, CV_RGB(5, 100, 100), 5);
namedWindow("MyWindow", CV_WINDOW_AUTOSIZE);
imshow("MyWindow", gray);
waitKey(0);
destroyWindow("MyWindow");
return 0;
}
However, plotting the circle using the coordinates (avg_x, avg_y) results in the compile error: 'no instance of constructor "cv::Point_<_Tp>::Point [with _Tp=int]" matches the argument list -- argument types are: (double, double)'.

Use minEnclosingCircle:

float radius;
Point2f center;
minEnclosingCircle(contours[i], center, radius);
cv::circle(gray, center, 5, CV_RGB(5, 100, 100), 5);
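If you'd rather keep your centroid-of-points approach, note that the compile error comes from brace-initializing a cv::Point (which stores ints) with two doubles, so rounding the averages to int fixes the original line. For a single blob's centroid, cv::moments is also a standard tool; a minimal sketch, assuming contours[2] is the blob you want:

// Centroid of one contour via image moments (sketch; assumes contours[2]
// exists and encloses a nonzero area).
cv::Moments m = cv::moments(contours[2]);
if (m.m00 != 0.0)
{
    cv::Point center(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));
    cv::circle(gray, center, 5, CV_RGB(5, 100, 100), 5);
}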

Related

Splitting individual contour points into their HSV channels to perform additional operations

I am currently playing around with the idea of calculating an average HSV value for the points in a contour. I did some research and came across the split function, which allows a Mat to be broken into its channels; however, a contour is a vector of points, not a Mat. Here is an example of the code.
findContours(detected_edges, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<vector<Point>> ContourHsvChannels(3);
split(contours, ContourHsvChannels);
Basically the goal is to split each point of a contour into its HSV channels so I can perform operations on them. Any guidance would be appreciated.
You can simply draw the contours onto a blank image the same size as your original image to create a mask, and then use that to mask your image (in HSV or whatever colorspace you want). The mean() function takes in a mask parameter so that you only get the mean of the values highlighted by the mask.
If you also want the standard deviation you can use the meanStdDev() function, it also accepts a mask.
Here's an example in Python:
import cv2
import numpy as np

# read image, ensure binary
img = cv2.imread('fg.png', 0)
img[img > 0] = 255

# find contours in the image
contours = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[1]

# create an array of blank images to draw contours on
n_contours = len(contours)
contour_imgs = [np.zeros_like(img) for i in range(n_contours)]

# draw each contour on a new image
for i in range(n_contours):
    cv2.drawContours(contour_imgs[i], contours, i, 255)

# color image of where the HSV values are coming from
color_img = cv2.imread('image.png')
hsv = cv2.cvtColor(color_img, cv2.COLOR_BGR2HSV)

# find the means and standard deviations of the HSV values for each contour
means = []
stddevs = []
for cnt_img in contour_imgs:
    mean, stddev = cv2.meanStdDev(hsv, mask=cnt_img)
    means.append(mean)
    stddevs.append(stddev)

print('First mean:')
print(means[0])
print('First stddev:')
print(stddevs[0])
First mean:
[[ 146.3908046 ]
[ 51.2183908 ]
[ 202.95402299]]
First stddev:
[[ 7.92835204]
[ 11.78682811]
[ 9.61549043]]
There are three values: one for each channel.
The other option is to just look up all the values: a contour is an array of points, so you can index the image with those points for each contour in your contour array, store them in individual arrays, and then find the meanStdDev() or mean() over those (and not bother with the mask). For example (again in Python, sorry about that):
# color image of where the HSV values are coming from
color_img = cv2.imread('image.png')
hsv = cv2.cvtColor(color_img, cv2.COLOR_BGR2HSV)

# read image, ensure binary
img = cv2.imread('fg.png', 0)
img[img > 0] = 255

# find contours in the image
contours = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[1]

means = []
stddevs = []
for contour in contours:
    contour_colors = []
    n_points = len(contour)
    for point in contour:
        x, y = point[0]
        contour_colors.append(hsv[y, x])
    contour_colors = np.array(contour_colors).reshape(1, n_points, 3)
    mean, stddev = cv2.meanStdDev(contour_colors)
    means.append(mean)
    stddevs.append(stddev)

print('First mean:')
print(means[0])
print('First stddev:')
print(stddevs[0])
First mean:
[[ 146.3908046 ]
[ 51.2183908 ]
[ 202.95402299]]
First stddev:
[[ 7.92835204]
[ 11.78682811]
[ 9.61549043]]
So this gives the same values. In Python I simply created blank lists for the means and standard deviations and appended to them; in C++ those can be std::vectors. Inside the loop, the blank list holding each contour's colors would be a std::vector<cv::Vec3b> (assuming a uint8 image; otherwise Vec3f or whatever is appropriate); run meanStdDev() on that vector in each iteration and append the results to the means and standard-deviations vectors. You don't have to append: you can grab the number of contours and the number of points in each contour up front and preallocate for speed, then index into those vectors instead of appending.
In Python there's virtually no speed difference between the two methods. The second example is of course more memory-efficient: instead of storing a whole blank Mat per contour, we store only a few values. However, the backend OpenCV methods work really quickly for masking operations, so you'll have to test the speed difference yourself in C++ and see which way is better. As the number of contours increases, I imagine the benefits of the second method increase. If you do time both approaches, please let us know your results!
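To make the C++ translation concrete, here is a minimal sketch of the second (point-sampling) approach; it assumes hsv and contours have already been computed as in the Python examples:

// Per-contour HSV mean/stddev by sampling the HSV image at each contour
// point (sketch; assumes `hsv` is CV_8UC3 and `contours` is populated).
std::vector<cv::Scalar> means, stddevs;
for (size_t i = 0; i < contours.size(); ++i)
{
    // gather the HSV color at every point of this contour
    std::vector<cv::Vec3b> colors;
    colors.reserve(contours[i].size());
    for (size_t j = 0; j < contours[i].size(); ++j)
        colors.push_back(hsv.at<cv::Vec3b>(contours[i][j])); // Mat::at(Point) indexes (y, x)
    // mean and standard deviation over this contour's colors
    cv::Scalar mean, stddev;
    cv::meanStdDev(colors, mean, stddev);
    means.push_back(mean);
    stddevs.push_back(stddev);
}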
Here is the solution written in C++:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
#include <cmath>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    // Mat Declarations
    // Mat img = imread("white.jpg");
    // Mat src = imread("Rainbro.png");
    Mat src = imread("multi.jpg");
    // Mat src = imread("DarkRed.png");
    Mat Hist;
    Mat HSV;
    Mat Edges;
    Mat Grey;

    vector<vector<Vec3b>> hueMEAN;
    vector<vector<Point>> contours;

    // Variables
    int edgeThreshold = 1;
    int const max_lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;
    int lowThreshold = 0;

    // Windows
    namedWindow("img", WINDOW_NORMAL);
    namedWindow("HSV", WINDOW_AUTOSIZE);
    namedWindow("Edges", WINDOW_AUTOSIZE);
    namedWindow("contours", WINDOW_AUTOSIZE);

    // Color Transforms
    cvtColor(src, HSV, CV_BGR2HSV);
    cvtColor(src, Grey, CV_BGR2GRAY);

    // Perform Hist Equalization to help equalize Red hues so they stand out for
    // better Edge Detection
    equalizeHist(Grey, Grey);

    // Image Transforms
    blur(Grey, Edges, Size(3, 3));
    Canny(Edges, Edges, max_lowThreshold, lowThreshold * ratio, kernel_size);
    findContours(Edges, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

    //Rainbro MAT
    //Mat drawing = Mat::zeros(432, 700, CV_8UC1);
    //Multi MAT
    Mat drawing = Mat::zeros(630, 1200, CV_8UC1);
    //Red variation Mat
    //Mat drawing = Mat::zeros(600, 900, CV_8UC1);

    vector<vector<Point>> ContourPoints;

    /* This loop goes through all contours and uses each point's y coordinate as
       the row index into the HSV mat. The Vec3b pixel at that position is
       accessed and stored for any Hue value that is between 0-10 or 165-179,
       i.e. red-only contours. */
    for (int i = 0; i < contours.size(); i++) {
        vector<Vec3b> vf;
        vector<Point> points;
        bool isContourRed = false;

        for (int j = 0; j < contours[i].size(); j++) {
            //Row Y-Coordinate of Mat from Y-Coordinate of Contour
            int MatRow = int(contours[i][j].y);
            //Column X-Coordinate of Mat from X-Coordinate of Contour
            int MatCol = int(contours[i][j].x);

            Vec3b* HsvRow = HSV.ptr<Vec3b>(MatRow);

            int h = int(HsvRow[MatCol][0]);
            int s = int(HsvRow[MatCol][1]);
            int v = int(HsvRow[MatCol][2]);

            cout << "Coordinate: ";
            cout << contours[i][j].x;
            cout << ",";
            cout << contours[i][j].y << endl;
            cout << "Hue: " << h << endl;

            // Get contours that are only in the red spectrum Hue 0-10, 165-179
            if ((h <= 10 || (h >= 165 && h <= 180)) && (s > 0) && (v > 0)) {
                cout << "Coordinate: ";
                cout << contours[i][j].x;
                cout << ",";
                cout << contours[i][j].y << endl;
                cout << "Hue: " << h << endl;

                vf.push_back(Vec3b(h, s, v));
                points.push_back(contours[i][j]);
                isContourRed = true;
            }
        }

        if (isContourRed == true) {
            hueMEAN.push_back(vf);
            ContourPoints.push_back(points);
        }
    }

    drawContours(drawing, ContourPoints, -1, Scalar(255, 255, 255), 2, 8);

    // Calculate Mean and STD for each Contour
    cout << "contour Means & STD of Vec3b:" << endl;
    for (int i = 0; i < hueMEAN.size(); i++) {
        Scalar meanTemp = mean(hueMEAN.at(i));
        Scalar sdTemp;
        cout << i << ": " << endl;
        cout << meanTemp << endl;
        cout << " " << endl;
        meanStdDev(hueMEAN.at(i), meanTemp, sdTemp);
        cout << sdTemp << endl;
        cout << " " << endl;
    }

    cout << "Actual Contours: " << contours.size() << endl;
    cout << "# Contours: " << hueMEAN.size() << endl;

    imshow("img", src);
    imshow("HSV", HSV);
    imshow("Edges", Edges);
    imshow("contours", drawing);
    waitKey(0);
    return 0;
}

How to publish a message of type vector<Point2d> to topic in ROS?

I've written C++ code that takes a video feed and divides it into frames, runs HSV segmentation, finds the contours, and then performs a PCA analysis on them, which yields two eigenvectors and their corresponding eigenvalues.
I then find the larger of the two eigenvalues, take the eigenvector that corresponds to it, and place it in a vector of type vector<Point2d>. It all runs great, but I can't seem to publish the vector to a ROS topic.
My question is: how do I publish this vector of type vector<Point2d> to a ROS topic? Since I do all the calculations in my .cpp file, I don't think I can make a .msg file for the vector.
The ROS code is in main().
#include<opencv2/highgui/highgui.hpp>
#include "opencv2/core/core_c.h"
#include "opencv2/core/core.hpp"
#include "opencv2/flann/miniflann.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/video/video.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/ml/ml.hpp"
#include "opencv2/highgui/highgui_c.h"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/opencv.hpp"
#include <unistd.h>
#include <iostream>
#include <algorithm>
#include "ros/ros.h"
#include "std_msgs/String.h"
#include <vector>
using namespace cv;
using namespace std;
//Taking the contour points and image from main() by reference
vector<Point2d> getOrientation(vector<Point> &pts, Mat &img)
{
    //if (pts.size() == 0) return false;

    //First the data need to be arranged in a matrix of size n x 2, where n is the
    //number of data points we have. Then we can perform the PCA analysis. The
    //calculated mean (i.e. center of mass) is stored in the "pos" variable and the
    //eigenvectors and eigenvalues are stored in the corresponding std::vectors.

    //Construct a buffer called data_pts used by the pca analysis
    Mat data_pts = Mat(pts.size(), 2, CV_64FC1); //pts.size() rows, 2 columns, type CV_64FC1 (holds doubles)
    for (int i = 0; i < data_pts.rows; ++i)
    {
        data_pts.at<double>(i, 0) = pts[i].x;
        data_pts.at<double>(i, 1) = pts[i].y;
    }

    //Perform PCA analysis. Principal Component Analysis finds the direction along
    //which our data varies the most. The result of running PCA on the set of points
    //consists of 2 eigenvectors, which are the principal components of the data set.
    PCA pca_analysis(data_pts, Mat(), CV_PCA_DATA_AS_ROW);

    //Store the position of the object
    Point pos = Point(pca_analysis.mean.at<double>(0, 0),
                      pca_analysis.mean.at<double>(0, 1));

    //Store the eigenvalues and eigenvectors
    vector<Point2d> eigen_vecs(2);
    vector<double> eigen_val(2);
    for (int i = 0; i < 2; ++i)
    {
        eigen_vecs[i] = Point2d(pca_analysis.eigenvectors.at<double>(i, 0), pca_analysis.eigenvectors.at<double>(i, 1));
        eigen_val[i] = pca_analysis.eigenvalues.at<double>(0, i);
        cout << "Eigen Vector: " << eigen_vecs[i] << endl;
        cout << "Eigen Value: " << eigen_val[i] << endl;
    }

    // Acquire the highest eigenvalue and the vector associated with it, and find a
    // way to pass it on to the motor controller (the PIC 24)
    double valueMAX = *max_element(eigen_val.begin(), eigen_val.end());
    double index = find(eigen_val.begin(), eigen_val.end(), valueMAX) - eigen_val.begin();
    cout << "\nMax value is: " << valueMAX << endl;
    cout << "\nThe index of the largest value is: " << index << endl;

    vector<Point2d> correctVector(2);
    for (int i = 0; i < 2; i++)
    {
        if (i == index)
        {
            correctVector[0] = eigen_vecs[i];
        }
    }
    cout << " \nThe vector that corresponds with the value is: " << correctVector[0] << endl;

    float degrees = ((atan2(eigen_vecs[0].y, eigen_vecs[0].x) * 180) / 3.14159265);
    cout << " \nThe degrees using ArcTangent of the vector(x,y) is: " << degrees << endl;

    //ros::Publisher vector_pub = node.advertise<std_msgs::String>("vector", 1000);

    // Draw the principal components: each eigenvector is multiplied by its
    // eigenvalue and translated to the mean position.
    circle(img, pos, 3, CV_RGB(255, 0, 255), 2);
    line(img, pos, pos + 0.02 * Point(eigen_vecs[0].x * eigen_val[0], eigen_vecs[0].y * eigen_val[0]), CV_RGB(255, 255, 0));
    line(img, pos, pos + 0.02 * Point(eigen_vecs[1].x * eigen_val[1], eigen_vecs[1].y * eigen_val[1]), CV_RGB(0, 255, 255));

    //return degrees;
    return correctVector;
}

int main(int argc, char **argv)
{
    VideoCapture cap(0); //capture the video from web cam/USB cam. (0) for webcam, (1) for USB.
    if (!cap.isOpened()) // if not success, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }

    //Values for Hue, Saturation, and Value. I found that the color red will be near
    //110-180 Hue, 60-255 Saturation, and 0-255 Value. This combination seems to pick
    //up any red object I give it, as well as a few pink ones too.
    int count = 0;
    int iLowH = 113;
    int iHighH = 179;
    int iLowS = 60;
    int iHighS = 255;
    int iLowV = 0;
    int iHighV = 255;

    /**
     * The ros::init() function needs to see argc and argv so that it can perform
     * any ROS arguments and name remapping that were provided at the command line.
     * For programmatic remappings you can use a different version of init() which takes
     * remappings directly, but for most command-line programs, passing argc and argv is
     * the easiest way to do it. The third argument to init() is the name of the node.
     *
     * You must call one of the versions of ros::init() before using any other
     * part of the ROS system.
     */
    // Initiate a new ROS node named "OpenCV"
    ros::init(argc, argv, "OpenCV");

    /**
     * NodeHandle is the main access point to communications with the ROS system.
     * The first NodeHandle constructed will fully initialize this node, and the last
     * NodeHandle destructed will close down the node.
     */
    // creating a node handle: it is the reference assigned to a new node. EVERY NODE MUST HAVE A REFERENCE!
    ros::NodeHandle node;

    while (true)
    {
        Mat frame;
        bool bSuccess = cap.read(frame); // read a new frame from video
        if (!bSuccess) //if not success, break loop
        {
            cout << "Cannot read a frame from video stream" << endl;
            break;
        }

        Mat imgHSV;
        cvtColor(frame, imgHSV, COLOR_BGR2HSV); //Convert the captured frame from BGR to HSV

        Mat imgThresholded;
        inRange(imgHSV, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), imgThresholded); //Threshold the image

        //morphological opening (remove small objects from the foreground)
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        dilate(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));

        //morphological closing (fill small holes in the foreground)
        dilate(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));

        // Find all objects of interest: find all contours in the thresholded image
        vector<vector<Point> > contours; //vector of vectors of points
        vector<Vec4i> hierarchy; // vector of 4 integers
        vector<Point2d> result;

        //findContours essentially traces all continuous points captured by the
        //thresholding. I feel this gives accuracy to the upcoming eigen analysis and
        //also helps in detecting what is actually an object versus what is not. The
        //arguments to findContours() are: (binary image, contour retrieval mode,
        //contour approximation method)
        findContours(imgThresholded, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

        ros::Publisher vector_pub = node.advertise<std_msgs::UInt8MultiArray("vector", 1000); // THIS IS THE ERROR IN MY CODE, HOW DO I PUBLISH THE VECTOR OF TYPE vector<Point2d>!?!?

        // For each object
        for (size_t i = 0; i < contours.size(); ++i)
        {
            // Calculate the area of each contour
            double area = contourArea(contours[i]);
            // Ignore if too small or too large
            if (area < 1e2 || 1e5 < area) continue;

            // Draw the contour for visualisation purposes
            drawContours(frame, contours, i, CV_RGB(255, 0, 0), 2, 8, hierarchy, 0);
            count++;
            cout << "This is frame: " << count << endl;
            //usleep(500000);
            result = getOrientation(contours[i], frame);
            cout << "THE VECTOR BEING PASSED TO THE TOPIC IS: " << result << endl;
        }

        imshow("Thresholded Image", imgThresholded); //show the thresholded image with contours
        imshow("Original", frame); //show the original image

        if (waitKey(30) == 27) //wait 30ms for the 'esc' key press; if pressed, break loop
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }
    return 0;
}
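One way to publish the points without defining a custom .msg file is to flatten them into a std_msgs::Float64MultiArray, a standard ROS message whose data field is a variable-length array of doubles. A minimal sketch (the topic name and queue size follow the question's code; publishPoints is a hypothetical helper):

#include "ros/ros.h"
#include "std_msgs/Float64MultiArray.h"
#include <opencv2/core/core.hpp>
#include <vector>

// Flatten a vector<Point2d> into [x0, y0, x1, y1, ...] and publish it
// (hypothetical helper; sketch only).
void publishPoints(ros::Publisher& pub, const std::vector<cv::Point2d>& pts)
{
    std_msgs::Float64MultiArray msg;
    msg.data.reserve(pts.size() * 2);
    for (size_t i = 0; i < pts.size(); ++i)
    {
        msg.data.push_back(pts[i].x);
        msg.data.push_back(pts[i].y);
    }
    pub.publish(msg);
}

// In main(), advertise the matching type before the loop:
//   ros::Publisher vector_pub = node.advertise<std_msgs::Float64MultiArray>("vector", 1000);
// and after getOrientation():
//   publishPoints(vector_pub, result);

A subscriber then reads the (x, y) pairs back out of msg.data two at a time; if you need timestamps or frame IDs, a geometry_msgs type such as PolygonStamped is the usual next step.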

Speeding up processing speed for HoughCircles

I am working on the HoughCircles function.
When I use this function, the processing speed is quite slow: the real-time video feed lags by 1 second, or even 10 seconds, when I move the camera.
The code is as follows:
#include <sstream>
#include <string>
#include <iostream>
#include <vector>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <opencv\cv.h>
#include <opencv\highgui.h>
#include <stdlib.h>
#include <stdio.h>
using namespace std;
using namespace cv;
int main(int argc, char** argv) {
    //Create a window for trackbars
    namedWindow("Trackbar Window", CV_WINDOW_AUTOSIZE);

    //Create trackbar to change brightness
    int iSliderValue1 = 50;
    createTrackbar("Brightness", "Trackbar Window", &iSliderValue1, 100);

    //Create trackbar to change contrast
    int iSliderValue2 = 50;
    createTrackbar("Contrast", "Trackbar Window", &iSliderValue2, 100);

    //Create trackbar to change param1 in HoughCircles
    int param1 = 150;
    createTrackbar("param1", "Trackbar Window", &param1, 300);

    //Create trackbar to change param2 in HoughCircles
    int param2 = 200;
    createTrackbar("param2", "Trackbar Window", &param2, 300);

    //Create trackbar to change min radius in HoughCircles
    int minR = 0;
    createTrackbar("minRadius", "Trackbar Window", &minR, 300);

    //Create trackbar to change max radius in HoughCircles
    int maxR = 0;
    createTrackbar("maxRadius", "Trackbar Window", &maxR, 300);

    //Debugging purpose
    cout << "All trackbars created" << endl;

    //Create a variable to store the image
    Mat src;
    //Create video capture
    VideoCapture capture;
    //open video from either a file or webcam
    //capture.open("C:\\Users\\Student-ID\\Downloads\\SPACe Mission IIIA\\GOPR0503.mp4");
    capture.open(0);
    //store whatever is captured to src
    capture.read(src);
    //set frame height and width
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 320);

    //Debugging purpose
    cout << "Video opened" << endl;

    if (!src.data) {
        std::cout << "ERROR:\topening image" << std::endl;
        return -1;
    }

    //Create window to display video
    cv::namedWindow("image1", CV_WINDOW_AUTOSIZE);

    while (true) {
        capture.read(src);

        //Code for changing brightness and contrast
        int iBrightness = iSliderValue1 - 50;
        double dContrast = iSliderValue2 / 50.0;
        src.convertTo(src, -1, dContrast, iBrightness);

        //Debugging purpose
        cout << "1" << endl;

        //Create variable to store the processed image
        Mat src_gray2;
        //Convert the colour to grayscale
        cvtColor(src, src_gray2, CV_BGR2GRAY);
        //Smooth and blur the image to reduce noise
        GaussianBlur(src_gray2, src_gray2, cv::Size(9, 9), 2, 2);

        vector<Vec3f> circles;
        //Change param1 and param2 from integer to double
        double dparam1 = param1 / 1.0;
        double dparam2 = param2 / 1.0;

        //Debugging purpose
        cout << "2" << endl;

        //Apply the HoughCircles function
        HoughCircles(src_gray2, circles, CV_HOUGH_GRADIENT,
                     2,           // accumulator resolution (size of the image / 2)
                     5,           // minimum distance between two circles
                     dparam1,     // Canny high threshold
                     dparam2,     // minimum number of votes
                     minR, maxR); // min and max radius

        //Debugging purpose
        cout << "3" << endl;

        //Draw the circles
        for (size_t i = 0; i < circles.size(); i++)
        {
            Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
            int radius = cvRound(circles[i][2]);
            circle(src, center, 3, Scalar(0, 255, 0), -1, 8, 0);
            // circle outline
            circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);
            //Display words in the top left hand corner
            putText(src, "Circle Found", Point(0, 50), 1, 2, Scalar(0, 255, 0), 2);
        }

        //display video
        imshow("image1", src);
        //debugging purpose
        cout << "5" << endl;
        //delay to refresh the picture
        cvWaitKey(33);
    }
    return 0;
}
The debugging numbering stops at "2" for about 10 seconds before jumping to "3".
I was told before to increase param1 and param2, but if I max both out at 300, or even 200, no circles are detected.
Note: my circles are of various sizes, so we can forget about min and max radius.
This amount of lag is too much for me. Is there a way to improve the processing speed, with respect to the code or otherwise?
I am using OpenCV 3.0.0 C++ on Microsoft Visual Studio 2013, running a Windows 8 64-bit system.
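Since the cost of HoughCircles grows with the image resolution and the number of edge pixels, one common mitigation (a sketch, not a tested fix for this exact setup) is to run the detection on a downscaled copy of the frame inside the loop and map the results back up:

Mat small;
const double scale = 0.5; // assumption: half resolution is enough for your circles
resize(src_gray2, small, Size(), scale, scale, INTER_AREA);

// detect on the small image (replaces the HoughCircles call in the loop;
// `circles`, `dparam1`, `dparam2`, `minR`, `maxR` as declared there)
HoughCircles(small, circles, CV_HOUGH_GRADIENT,
             2,       // accumulator resolution
             5,       // minimum distance between circle centers
             dparam1, dparam2,
             cvRound(minR * scale), cvRound(maxR * scale));

// map centers and radii back to the full-resolution frame
for (size_t i = 0; i < circles.size(); i++)
{
    circles[i][0] /= scale;
    circles[i][1] /= scale;
    circles[i][2] /= scale;
}

Narrowing minR/maxR once the expected circle size is known, and raising dparam2 so fewer candidate centers survive, should also cut the accumulator work considerably.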

HoughCircle with trackbar opencv

There are a few parameters in the HoughCircles function.
Is there a way to use trackbars to change these parameters, so that I would not need to re-run the program every time I want to change them?
Thank you.
Using OpenCV 3.0.0 C++ on MS VS 2013 on a Windows 8 laptop.
#include <sstream>
#include <string>
#include <iostream>
#include <vector>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <opencv\cv.h>
#include <opencv\highgui.h>
#include <stdlib.h>
#include <stdio.h>
using namespace std;
using namespace cv;
int main(int argc, char** argv) {
    //Create a window for trackbars
    namedWindow("Trackbar Window", CV_WINDOW_AUTOSIZE);

    //Create trackbar to change brightness
    int iSliderValue1 = 50;
    createTrackbar("Brightness", "Trackbar Window", &iSliderValue1, 100);

    //Create trackbar to change contrast
    int iSliderValue2 = 50;
    createTrackbar("Contrast", "Trackbar Window", &iSliderValue2, 100);

    int param1 = 10;
    createTrackbar("param1", "Trackbar Window", &param1, 300);

    int param2 = 10;
    createTrackbar("param2", "Trackbar Window", &param2, 300);

    cout << "All trackbars created" << endl;

    Mat src;
    VideoCapture capture;
    capture.open("movingBall.wmv");
    capture.read(src);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);

    cout << "Video opened" << endl;

    if (!src.data) {
        std::cout << "ERROR:\topening image" << std::endl;
        return -1;
    }

    cv::namedWindow("image1", CV_WINDOW_AUTOSIZE);
    cv::namedWindow("image2", CV_WINDOW_AUTOSIZE);

    while (true) {
        capture.read(src);

        Mat dst;
        int iBrightness = iSliderValue1 - 50;
        double dContrast = iSliderValue2 / 50.0;
        src.convertTo(src, -1, dContrast, iBrightness);
        cout << "1" << endl;
        cv::imshow("image1", src);

        Mat src_gray2;
        cvtColor(src, src_gray2, CV_BGR2GRAY);
        GaussianBlur(src_gray2, src_gray2, cv::Size(9, 9), 2, 2);

        vector<Vec3f> circles;
        cout << "2" << endl;

        double dparam1 = param1 / 1.0;
        double dparam2 = param2 / 1.0;
        HoughCircles(src_gray2, circles, CV_HOUGH_GRADIENT,
                     2,       // accumulator resolution (size of the image / 2)
                     5,       // minimum distance between two circles
                     dparam1, // Canny high threshold
                     dparam2, // minimum number of votes
                     0, 0);   // min and max radius

        cout << "3" << endl;
        std::cout << circles.size() << std::endl;
        std::cout << "end of test" << std::endl;

        for (size_t i = 0; i < circles.size(); i++)
        {
            Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
            int radius = cvRound(circles[i][2]);
            circle(src, center, 3, Scalar(0, 255, 0), -1, 8, 0);
            // circle outline
            circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);
        }

        /*std::vector<cv::Vec3f>::const_iterator itc = circles.begin();
        while (itc != circles.end()) {
            cv::circle(src_gray2,
                       cv::Point((*itc)[0], (*itc)[1]), // circle centre
                       (*itc)[2],                       // circle radius
                       cv::Scalar(0, 0, 0),             // color
                       2);                              // thickness
            ++itc;
        }*/

        cv::imshow("image2", src_gray2);
        cout << "5" << endl;
        cvWaitKey(33);
    }
    return 0;
}
I added the cout expressions to help with locating the bug.
There is a problem with the HoughCircles function: the code stops at number 2, and after about 10 seconds it jumps to 3 and loops over, or the program simply stops responding. I can't see what the problem is. The code itself (without talking about the detection part) was working fine before adding the trackbars.
What could be the problem?
Thank you.
I think there is no problem in HoughCircles, but the parameters you selected give a lot of circles, which take time to process; this is the reason why it waits for 10 seconds before going to 3. Try increasing param2: it will give faster results.
Here I have tried it using images in Python; you can try to port from it...
import cv2
import numpy as np

img = cv2.imread('C:/Python34/images/2.jpg', 0)
cv2.namedWindow('image')

def nothing(x):
    pass

cv2.createTrackbar('Param 1', 'image', 0, 100, nothing)
cv2.createTrackbar('Param 2', 'image', 0, 100, nothing)
switch = '0 : OFF \n1 : ON'
cv2.createTrackbar(switch, 'image', 0, 1, nothing)

while(1):
    cv2.imshow('image', img)
    k = cv2.waitKey(1) & 0xFF
    if k == 27:
        break

    # To get parameter values from trackbar positions
    para1 = cv2.getTrackbarPos('Param 1', 'image')
    para2 = cv2.getTrackbarPos('Param 2', 'image')
    s = cv2.getTrackbarPos(switch, 'image')

    if s == 0:
        cv2.imshow('image', img)
    else:
        # For finding Hough circles according to trackbar parameters
        # (param1/param2 passed by keyword so they are not mistaken for
        # the optional `circles` output argument)
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
                                   param1=para1, param2=para2,
                                   minRadius=0, maxRadius=0)
        circles = np.uint16(np.around(circles))
        # For drawing Hough circles
        for i in circles[0, :]:
            cv2.circle(img, (i[0], i[1]), i[2], (0, 255, 0), 2)
            cv2.circle(img, (i[0], i[1]), 2, (0, 0, 255), 3)
        cv2.imshow('image', img)
        cv2.waitKey(0)
        # reload the image so each parameter change draws on a fresh copy
        img = cv2.imread('C:/Python34/images/2.jpg', 0)

cv2.destroyAllWindows()
You can use the above code as your reference. First it creates a window and trackbars for the switch and the two Hough circle parameters.
Then, in the while loop, para1 and para2 store the trackbar positions to use as the Canny parameter values.
These are then used in the cv2.HoughCircles function and the circles are drawn.
The image is loaded again each time so that whenever you change a parameter the output is drawn on a fresh image, to avoid confusion.
Hope this might be useful.
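A rough C++ equivalent of the per-frame parameter read (a sketch; it assumes the "Trackbar Window" and the param1/param2 sliders from the question's code, read with cv::getTrackbarPos inside the while loop):

// Read the current slider positions each frame (an alternative to relying
// on the int* bound in createTrackbar).
int p1 = cv::getTrackbarPos("param1", "Trackbar Window");
int p2 = cv::getTrackbarPos("param2", "Trackbar Window");
// HoughCircles needs positive thresholds, so clamp zero slider positions to 1
double dparam1 = (p1 > 0) ? p1 : 1;
double dparam2 = (p2 > 0) ? p2 : 1;
HoughCircles(src_gray2, circles, CV_HOUGH_GRADIENT, 2, 5, dparam1, dparam2, 0, 0);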
You have missed a library, math.h. Add #include <math.h> to the headers.

Square detection doesn't find squares

I'm using the program squares.c available in the samples of the OpenCV libraries. It works well with every image, but I really can't figure out why it doesn't recognize the square drawn in this image:
http://desmond.imageshack.us/Himg12/scaled.php?server=12&filename=26725680.jpg&res=medium
After CANNY:
After DILATE:
The RESULT image (in red)
http://img267.imageshack.us/img267/8016/resultuq.jpg
As you can see, the square is NOT detected.
After the detection I need to extract the area contained in the square... How is it possible without a ROI?
The source code below presents a small variation of the Square Detector program. It's not perfect, but it illustrates one way to approach your problem.
You can diff this code against the original and check all the changes that were made, but the main ones are:
Decrease the number of threshold levels to 2.
At the beginning of findSquares(), dilate the image to detect the thin white square, and then blur the entire image so the algorithm doesn't detect the sea and the sky as individual squares.
Once compiled, run the application with the following syntax: ./app <image>
// The "Square Detector" program.
// It loads several images sequentially and tries to find squares in
// each image
#include "highgui.h"
#include "cv.h"
#include <iostream>
#include <math.h>
#include <string.h>
using namespace cv;
using namespace std;
void help()
{
cout <<
"\nA program using pyramid scaling, Canny, contours, contour simpification and\n"
"memory storage (it's got it all folks) to find\n"
"squares in a list of images pic1-6.png\n"
"Returns sequence of squares detected on the image.\n"
"the sequence is stored in the specified memory storage\n"
"Call:\n"
"./squares\n"
"Using OpenCV version %s\n" << CV_VERSION << "\n" << endl;
}
int thresh = 50, N = 2; // karlphillip: decreased N to 2, was 11.
const char* wndname = "Square Detection Demo";
// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
double angle( Point pt1, Point pt2, Point pt0 )
{
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
// returns sequence of squares detected on the image.
// the sequence is stored in the specified memory storage
void findSquares( const Mat& image, vector<vector<Point> >& squares )
{
squares.clear();
Mat pyr, timg, gray0(image.size(), CV_8U), gray;
// karlphillip: dilate the image so this technique can detect the white square,
Mat out(image);
dilate(out, out, Mat(), Point(-1,-1));
// then blur it so that the ocean/sea become one big segment to avoid detecting them as 2 big squares.
medianBlur(out, out, 7);
// down-scale and upscale the image to filter out the noise
pyrDown(out, pyr, Size(out.cols/2, out.rows/2));
pyrUp(pyr, timg, out.size());
vector<vector<Point> > contours;
// find squares in every color plane of the image
for( int c = 0; c < 3; c++ )
{
int ch[] = {c, 0};
mixChannels(&timg, 1, &gray0, 1, ch, 1);
// try several threshold levels
for( int l = 0; l < N; l++ )
{
// hack: use Canny instead of zero threshold level.
// Canny helps to catch squares with gradient shading
if( l == 0 )
{
// apply Canny. Take the upper threshold from slider
// and set the lower to 0 (which forces edges merging)
Canny(gray0, gray, 0, thresh, 5);
// dilate canny output to remove potential
// holes between edge segments
dilate(gray, gray, Mat(), Point(-1,-1));
}
else
{
// apply threshold if l!=0:
// tgray(x,y) = gray(x,y) < (l+1)*255/N ? 255 : 0
gray = gray0 >= (l+1)*255/N;
}
// find contours and store them all as a list
findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<Point> approx;
// test each contour
for( size_t i = 0; i < contours.size(); i++ )
{
// approximate contour with accuracy proportional
// to the contour perimeter
approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);
// square contours should have 4 vertices after approximation
// relatively large area (to filter out noisy contours)
// and be convex.
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
if( approx.size() == 4 &&
fabs(contourArea(Mat(approx))) > 1000 &&
isContourConvex(Mat(approx)) )
{
double maxCosine = 0;
for( int j = 2; j < 5; j++ )
{
// find the maximum cosine of the angle between joint edges
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
// if cosines of all angles are small
// (all angles are ~90 degree) then write quandrange
// vertices to resultant sequence
if( maxCosine < 0.3 )
squares.push_back(approx);
}
}
}
}
}
// the function draws all the squares in the image
void drawSquares( Mat& image, const vector<vector<Point> >& squares )
{
for( size_t i = 0; i < squares.size(); i++ )
{
const Point* p = &squares[i][0];
int n = (int)squares[i].size();
polylines(image, &p, &n, 1, true, Scalar(0,255,0), 3, CV_AA);
}
imshow(wndname, image);
}
int main(int argc, char** argv)
{
if (argc < 2)
{
cout << "Usage: ./program <file>" << endl;
return -1;
}
// static const char* names[] = { "pic1.png", "pic2.png", "pic3.png",
// "pic4.png", "pic5.png", "pic6.png", 0 };
static const char* names[] = { argv[1], 0 };
help();
namedWindow( wndname, 1 );
vector<vector<Point> > squares;
for( int i = 0; names[i] != 0; i++ )
{
Mat image = imread(names[i], 1);
if( image.empty() )
{
cout << "Couldn't load " << names[i] << endl;
continue;
}
findSquares(image, squares);
drawSquares(image, squares);
imwrite("out.jpg", image);
int c = waitKey();
if( (char)c == 27 )
break;
}
return 0;
}
Outputs:
I would suggest that the square in this image is too thin. The first step in squares.c is to scale the image down and back up to reduce noise before passing it to the Canny edge detector.
The scaling convolves with a 5x5 kernel, so in your case this could lose the gradient across such a thin edge.
Try making your square's edges at least 5 pixels wide if you are going to overlay them on a continuous background.
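Regarding the extraction part of the question: once findSquares() has filled squares, the simplest route is to crop the bounding rectangle of the detected polygon. A minimal sketch, assuming squares[0] is the square of interest:

// Crop the region enclosed by a detected square (sketch; assumes `image`
// and a non-empty `squares` from findSquares() above).
if (!squares.empty())
{
    cv::Rect box = cv::boundingRect(squares[0]);   // axis-aligned bounding box
    box &= cv::Rect(0, 0, image.cols, image.rows); // clamp to image bounds
    cv::Mat cropped = image(box).clone();          // deep copy of the region
    cv::imwrite("cropped.jpg", cropped);
}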