C++ vector subscript out of range with OpenCV

I have a problem with a vector subscript going out of range. eyes[0] refers to the left eye while eyes[1] refers to the right eye. The code below uses an OpenCV cascade classifier (eyeCascade) to track eyes, and I have edited it to print the eye coordinates. However, I believe the access to eyes[1] for the right eye is what causes the vector subscript to go out of range.
vector<Rect> eyes;
eyeCascade.detectMultiScale(faceROI, eyes);
unsigned int x = eyes.size();
{
    Point eye_center(eyes[0].x + eyes[0].width / 2, eyes[0].y + eyes[0].height / 2);
    int radius = cvRound((eyes[0].width + eyes[0].height) * 0.25);
    circle(frame, eye_center, radius, Scalar(255, 0, 0), 5);
    printf("eyes0_x;%d", eyes[0].x);
    printf(" eyes0_y;%d\n", eyes[0].y);
    circle(frame, eye_center, radius, Scalar(255, 0, 0), 5);
    printf("eyes1_x;%d", eyes[1].x);   // out of range when fewer than two eyes are detected
    printf(" eyes1_y;%d\n", eyes[1].y);

Related

OpenCV 3D point to 2D image point projection

I'm converting LiDAR points to a camera image.
Both the LiDAR points and the camera image come from a simulator.
For simplicity I placed them at the exact same location, facing the same direction, without any roll, pitch, or yaw (so the camera coordinate system is the same as the LiDAR coordinate system).
If I understood correctly, I can then just use all-zero t_vec/r_vec/d_vec (as there is also no distortion in the camera image).
The image is 785x785.
// added for debugging purposes
std::vector<cv::Point3d> points_lidar {
    cv::Point3d {0, 0, 4.5},
    cv::Point3d {1.88, -0.42, 4.50},
    cv::Point3d {1.85, -0.42, 4.49},
    cv::Point3d {1.84, -0.42, 4.49},
    cv::Point3d {1.83, -0.42, 4.51},
    cv::Point3d {1.82, -0.42, 4.52},
    cv::Point3d {1.81, -0.41, 4.52},
};
cv::Mat d_vec = cv::Mat::zeros(4, 1, cv::DataType<double>::type);
cv::Mat r_vec = cv::Mat::zeros(3, 1, cv::DataType<double>::type);
cv::Mat t_vec = cv::Mat::zeros(3, 1, cv::DataType<double>::type);
double camera_mat[3][3] = {
    {785, 0, 0},
    {0, 785, 0},
    {0, 0, 1}
};
cv::Mat camera(3, 3, cv::DataType<double>::type, camera_mat);
std::vector<cv::Point2i> points_camera {};
cv::projectPoints(points_lidar, r_vec, t_vec, camera, d_vec, points_camera);
for (const auto& p : points_camera) {
    cv::Point2i pp;
    pp.x = (int)(p.x + 785 / 2);
    pp.y = (int)((1 - p.y) + 785 / 2);
    cv::circle(image->image, pp, 5, cv::Scalar(0, 0, 255), -1);
}
Unfortunately the projected points don't come close to matching the 3D points; they end up too far toward the right/bottom of the image.
Does anyone see an issue?
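One thing worth checking (an observation based on the code above, not a confirmed diagnosis): the camera matrix has its principal point (cx, cy) at 0, so projectPoints returns coordinates relative to the optical axis, and the manual + 785/2 and 1 - p.y adjustments afterwards mix pixel and normalized-coordinate conventions. Baking the image-center principal point into the intrinsics lets projectPoints produce pixel coordinates directly:

// Sketch: intrinsics with the principal point at the image center.
// fx = fy = 785 is taken from the post; the correct focal length
// depends on the simulator camera's field of view.
double camera_mat[3][3] = {
    {785, 0,   785 / 2.0},   // fx, 0,  cx
    {0,   785, 785 / 2.0},   // 0,  fy, cy
    {0,   0,   1}
};
cv::Mat camera(3, 3, CV_64F, camera_mat);

std::vector<cv::Point2d> points_camera;  // floating-point output for projectPoints
cv::projectPoints(points_lidar, r_vec, t_vec, camera, d_vec, points_camera);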
Edit:
After @ema's comment I tested a really simple solution without OpenCV, which actually results in the correct image pixels but is far slower (~5 ms compared to ~1 ms with OpenCV).
for (const auto& p_lidar : points_lidar) {
    cv::Point2d p {p_lidar.x / p_lidar.z, p_lidar.y / p_lidar.z};
    p.x = (p.x + cam.canvas_width / 2) / 2;
    p.y = (p.y + cam.canvas_height / 2) / 2;
    cv::Point2i p_raster;
    p_raster.x = std::floor(p.x * cam.image_width);
    p_raster.y = std::floor((1 - p.y) * cam.image_height);
    cv::circle(image->image, p_raster, 3, cv::Scalar(0, 0, 255), -1);
}
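The cam object above isn't defined in the post; a minimal struct consistent with the fields it accesses might look like the following. The canvas values of 2.0 are an assumption (they correspond to a 90-degree field of view and make the math above reduce to (p.x + 1) / 2), not something stated in the original.

// Hypothetical definition of the cam object referenced above.
struct Camera {
    double canvas_width  = 2.0;  // assumed normalized canvas span
    double canvas_height = 2.0;
    int    image_width   = 785;  // from the post: 785x785 image
    int    image_height  = 785;
};
Camera cam;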
As the pipeline has about 100 ms until the next processing iteration begins, I would still prefer to use OpenCV for its optimized calculations.

Dlib not detecting face in Kurento OpenCV filter

I have created an OpenCV filter for Kurento (the WebRTC framework) that can detect whether a person blinks. My code works in a standalone OpenCV app; however, once I converted it into an OpenCV filter for Kurento, it started misbehaving. When the module/filter was compiled without optimisation flags, it would briefly detect the face and draw contours around the eyes. After compiling the module/filter with optimisation flags, performance improved, but no face was being detected. Here's the code I have in the filter:
void BlinkDetectorOpenCVImpl::process(cv::Mat &mat) {
    std::vector<dlib::rectangle> faces;
    // Just resize the input image if you want
    resize(mat, mat, Size(800, 450));
    cv_image<rgb_alpha_pixel> cimg(mat);
    dlib::array2d<unsigned char> img_gray;
    dlib::assign_image(img_gray, cimg);
    faces = detector(img_gray);
    std::cout << "XXXXXXXXXXXXXXXXXXXXX FACES: " << faces.size() << std::endl;
    std::vector<full_object_detection> shapes;
    for (unsigned long i = 0; i < faces.size(); ++i) {
        full_object_detection shape = pose_model(cimg, faces[i]);
        std::vector<Point> left_eye_points = get_points_for_eye(shape, LEFT_EYE_START, LEFT_EYE_END);
        std::vector<Point> right_eye_points = get_points_for_eye(shape, RIGHT_EYE_START, RIGHT_EYE_END);
        double left_eye_ear = get_eye_aspect_ratio(left_eye_points);
        double right_eye_ear = get_eye_aspect_ratio(right_eye_points);
        double ear = (left_eye_ear + right_eye_ear) / 2.0;

        // Draw left eye
        std::vector<std::vector<Point>> contours;
        contours.push_back(left_eye_points);
        std::vector<std::vector<Point>> hull(1);
        convexHull(contours[0], hull[0]);
        drawContours(mat, hull, -1, Scalar(0, 255, 0));

        // Draw right eye
        contours[0] = right_eye_points;
        convexHull(contours[0], hull[0]);
        drawContours(mat, hull, -1, Scalar(0, 255, 0));

        if (ear < EYE_AR_THRESH) {
            counter++;
        } else {
            if (counter >= EYE_AR_CONSEC_FRAMES) {
                total++;
                /* std::string sJson = "{\"blink\": \"blink\"}";
                try {
                    onResult event(getSharedFromThis(), onResult::getName(), sJson);
                    signalonResult(event);
                } catch (std::bad_weak_ptr &e) {
                } */
            }
            counter = 0;
        }

        cv::putText(mat, (boost::format{"Blinks: %d"} % total).str(), cv::Point(10, 30),
                    cv::FONT_HERSHEY_SIMPLEX, 0.7, Scalar(0, 0, 255), 2);
        cv::putText(mat, (boost::format{"EAR: %.2f"} % ear).str(), cv::Point(300, 30),
                    cv::FONT_HERSHEY_SIMPLEX, 0.7, Scalar(0, 0, 255), 2);
    }
}
} /* blinkdetector */
I was able to fix my own problem. I found that instead of resizing the image to an arbitrary resolution, you should resize it to half the width and half the height of the actual image. Resizing the image to a smaller size makes Dlib's face detection fast. So here's what I did to solve the issue:
Mat tmpMat = mat.clone();
resize(tmpMat, tmpMat, Size(tmpMat.size().width / 2, tmpMat.size().height / 2));
I had to clone the image Kurento sends to my method because, for some odd reason, the original Mat doesn't show the contours when converted to a Dlib image with cv_image.
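Putting the fix together: since detection now runs on a half-size clone, any rectangle Dlib returns has to be scaled back up before drawing on the full-size mat. A sketch of that pattern (the 2x factor mirrors the half-size resize; this is an illustration, not the poster's exact filter code):

// Detect on a half-size clone, then map results back to full resolution.
cv::Mat tmpMat = mat.clone();
cv::resize(tmpMat, tmpMat,
           cv::Size(tmpMat.size().width / 2, tmpMat.size().height / 2));

cv_image<rgb_alpha_pixel> cimg(tmpMat);
dlib::array2d<unsigned char> img_gray;
dlib::assign_image(img_gray, cimg);
std::vector<dlib::rectangle> faces = detector(img_gray);

for (const auto &f : faces) {
    // Scale the detection back up before drawing on the original image.
    cv::Rect face_full((int)(f.left() * 2), (int)(f.top() * 2),
                       (int)(f.width() * 2), (int)(f.height() * 2));
    cv::rectangle(mat, face_full, cv::Scalar(0, 255, 0), 2);
}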

Circle-detection issue

1. Some information: I would like to develop a kind of circle recognition with the help of OpenCV. I successfully set up a bridge between Swift and Objective-C++, but strangely I have some problems with the circle recognition algorithm: not all of the circles in my image get detected!
2. Have a look at my code:
+ (UIImage *)ConvertImage:(UIImage *)image {
    cv::Mat matImage;
    UIImageToMat(image, matImage);
    cv::Mat modImage;
    cv::medianBlur(matImage, matImage, 5);
    cv::cvtColor(matImage, modImage, CV_RGB2GRAY);
    cv::GaussianBlur(modImage, modImage, cv::Size(9, 9), 2, 2);
    vector<Vec3f> circles;
    cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);
    for (auto i = circles.begin(); i != circles.end(); ++i)
        std::cout << *i << ' ';
    for (size_t i = 0; i < circles.size(); i++)
    {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        circle(matImage, center, 3, Scalar(0, 255, 0), -1, 8, 0);
        circle(matImage, center, radius, Scalar(0, 0, 255), 3, 8, 0);
    }
    UIImage *binImg = MatToUIImage(matImage);
    return binImg;
}
As you can see in the image [click], this issue appears:
Only 3 of the 7 circles get detected!
So in the docs I found this explanation of the parameters for this line:
cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT, 1, 1, 100, 50, 0, 0);
dp = 1: The inverse ratio of resolution.
min_dist = modImage.rows/8: Minimum distance between detected centers.
param_1 = 200: Upper threshold for the internal Canny edge detector.
param_2 = 100: Threshold for center detection.
min_radius = 0: Minimum radius to be detected. If unknown, put zero as the default.
max_radius = 0: Maximum radius to be detected. If unknown, put zero as the default.
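Note that those values are the tutorial's examples, while the call above actually passes min_dist = 1, param_1 = 100, and param_2 = 50; a min_dist of 1 lets detected centers sit a single pixel apart. For comparison, a call closer to the documented suggestions (the radius limits and thresholds here are starting points to tune, not a verified fix) would be:

// Sketch: HoughCircles with tutorial-style parameters; the radius limits
// and thresholds are assumptions to tune against the actual image.
cv::HoughCircles(modImage, circles, CV_HOUGH_GRADIENT,
                 1,                  // dp: inverse accumulator resolution ratio
                 modImage.rows / 8,  // min_dist between detected centers
                 100,                // param_1: Canny upper threshold
                 30,                 // param_2: accumulator threshold (lower finds more circles)
                 10,                 // min_radius
                 100);               // max_radius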
3. My question:
How do I get rid of the issue mentioned above?
Any help would be very appreciated :)
For issue number 2 ("The outline should be colored, not white!"): what color should it be? At any rate, you draw that circle in your code with this line:
circle( matImage, center, radius, Scalar(0,0,255), 3, 8, 0 );
If you want to change the color, you can change the values you have declared in Scalar(0,0,255).
If you don't want the circle there at all, you can remove that line of code.
Your images seem to be noise free. If the image always contains circles, you can extract the contours and fit circles using least squares.
You can get the circle-fit equations here. It is a straightforward implementation: create a structure for the circle parameters (center and radius), fit the circle, store the parameters in the structure, and use it to draw the circle with OpenCV.
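As a concrete illustration of that approach, here is a minimal sketch of an algebraic (Kasa-style) least-squares circle fit; it is one standard formulation, not necessarily the exact equations the answer links to:

#include <cmath>
#include <vector>
#include <opencv2/opencv.hpp>

// Fit x^2 + y^2 = a*x + b*y + c in a least-squares sense; the circle is
// then center (a/2, b/2), radius sqrt(c + (a^2 + b^2)/4).
struct CircleFit { cv::Point2f center; float radius; };

CircleFit fitCircle(const std::vector<cv::Point> &pts) {
    cv::Mat A((int)pts.size(), 3, CV_64F), rhs((int)pts.size(), 1, CV_64F);
    for (int i = 0; i < (int)pts.size(); i++) {
        double x = pts[i].x, y = pts[i].y;
        A.at<double>(i, 0) = x;
        A.at<double>(i, 1) = y;
        A.at<double>(i, 2) = 1.0;
        rhs.at<double>(i, 0) = x * x + y * y;
    }
    cv::Mat sol;
    cv::solve(A, rhs, sol, cv::DECOMP_SVD);  // least-squares solution
    double a = sol.at<double>(0), b = sol.at<double>(1), c = sol.at<double>(2);
    return { cv::Point2f((float)(a / 2), (float)(b / 2)),
             (float)std::sqrt(c + (a * a + b * b) / 4) };
}

Each contour returned by findContours can be passed through fitCircle and the result drawn with cv::circle.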
You can also generate points on the circle using the "ellipse2poly" function.

Edge Extraction Suggestions OpenCV

I'm looking for suggestions to improve my algorithm for finding parts in the following image.
So far I have the following:
GaussianBlur(canny, canny, Size(5, 5), 2, 2);
Canny(canny, canny, 100, 200, 5);
HoughCircles(canny, Part_Centroids, CV_HOUGH_GRADIENT, 2, 30, 100, 50, 50, 60);
My edge-detection output looks like this,
and I'm using HoughCircles to try to find the parts. I haven't been having great success, though, because HoughCircles seems very fussy and often returns a circle that isn't really the best match for a part.
Any suggestions on improving this search algorithm?
EDIT:
I have tried the suggestions in the comments below. The normalization made some improvements, but removing the Canny step before HoughCircles changed the required settings without improving stability.
I now think I need to run HoughCircles with very open thresholds and then find a way to score the results. Are there any good methods for scoring HoughCircles results, or for correlating the results with the Canny output to get a percentage match?
I thought I would post my solution, as someone may find my lessons learned valuable.
I started by taking several frames and averaging them. This solved some of the noise issues I was having while preserving the strong edges. Next I applied a basic filter and Canny edge detection to extract a decent edge map.
Scalar cannyThreshold = mean(filter);
// Canny edge detection, with thresholds at 2/3 and 4/3 of the mean intensity
// (the divisions must be floating-point; integer 2/3 evaluates to 0)
Canny(filter, canny, cannyThreshold[0] * (2.0 / 3.0), cannyThreshold[0] * (4.0 / 3.0), 3);
Next I use cross-correlation with templates of increasing diameter and store matches that score over a threshold.
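The CircleScore type used in the loops below isn't shown in the post; a minimal definition consistent with the fields the code accesses (X, Y, Radius, Score, Layer) might be:

// Hypothetical definition matching the usage in the snippets below.
struct CircleScore {
    float X, Y;     // circle center in image coordinates
    int Radius;     // template radius that produced the match
    double Score;   // normalized cross-correlation score
    int Layer;      // stacking order; incremented for parts underneath others
    CircleScore(float x, float y, int r, double s, int layer)
        : X(x), Y(y), Radius(r), Score(s), Layer(layer) {}
};
std::vector<CircleScore> matches;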
// Iterate through diameter ranges
for (int r = 40; r < 70; r++)
{
    Mat _mask, _template(Size((r * 2) + 4, (r * 2) + 4), CV_8U);
    _template = Scalar(0, 0, 0);
    _mask = _template.clone();
    _mask = Scalar(0, 0, 0);
    circle(_template, Point(r + 4, r + 4), r, Scalar(255, 255, 255), 2, CV_AA);
    circle(_template, Point(r + 4, r + 4), r / 3.592, Scalar(255, 255, 255), 2, CV_AA);
    circle(_mask, Point(r + 4, r + 4), r + 4, Scalar(255, 255, 255), -1);
    Mat res_32f(canny.rows, canny.cols, CV_32FC1);
    matchTemplate(canny, _template, res_32f, CV_TM_CCORR_NORMED, _mask);
    Mat resize(canny.rows, canny.cols, CV_32FC1);
    resize = Scalar(0, 0, 0);
    res_32f.copyTo(resize(Rect((resize.cols - res_32f.cols) / 2, (resize.rows - res_32f.rows) / 2, res_32f.cols, res_32f.rows)));
    // Store well-scoring results
    double minVal, maxVal;
    double threshold = .25;
    do
    {
        Point minLoc, maxLoc;
        minMaxLoc(resize, &minVal, &maxVal, &minLoc, &maxLoc);
        if (maxVal > threshold)
        {
            matches.push_back(CircleScore(maxLoc.x, maxLoc.y, r, maxVal, 1));
            // Suppress this peak so the next minMaxLoc finds a different one
            circle(resize, maxLoc, 30, Scalar(0, 0, 0), -1);
        }
    } while (maxVal > threshold);
}
I then filter the circles, keeping the best match in each zone:
// Sort matches, keeping the best match in each zone
for (size_t i = 0; i < matches.size(); i++)
{
    size_t j = i + 1;
    while (j < matches.size())
    {
        if (norm(Point2f(matches[i].X, matches[i].Y) - Point2f(matches[j].X, matches[j].Y)) - abs(matches[i].Radius - matches[j].Radius) < 15)
        {
            if (matches[j].Score > matches[i].Score)
            {
                matches[i] = matches[j];
            }
            matches[j] = matches[matches.size() - 1];
            matches.pop_back();
            j = i + 1;
        }
        else j++;
    }
}
Next was the tricky part: I wanted to determine which part was likely to be on top. I did this by examining every pair of parts whose centers are closer than the sum of their radii, then checking whether the edges in the overlap zone are a stronger match for one circle or the other. A covered circle should have few strong edges in the overlap zone.
// Layer sort on intersection
for (size_t i = 0; i < matches.size(); i++)
{
    size_t j = i + 1;
    while (j < matches.size())
    {
        double distance = norm(Point2f(matches[i].X, matches[i].Y) - Point2f(matches[j].X, matches[j].Y));
        // Potentially overlapping parts
        if (distance < ((matches[i].Radius + matches[j].Radius) - 10))
        {
            int score_i = 0, score_j = 0;
            Mat intersect_a(canny.rows, canny.cols, CV_8UC1);
            Mat intersect_b(canny.rows, canny.cols, CV_8UC1);
            intersect_a = Scalar(0, 0, 0);
            intersect_b = Scalar(0, 0, 0);
            circle(intersect_a, Point(cvRound(matches[i].X), cvRound(matches[i].Y)), cvRound(matches[i].Radius) + 4, Scalar(255, 255, 255), -1);
            circle(intersect_a, Point(cvRound(matches[i].X), cvRound(matches[i].Y)), cvRound(matches[i].Radius / 3.592 - 4), Scalar(0, 0, 0), -1);
            circle(intersect_b, Point(cvRound(matches[j].X), cvRound(matches[j].Y)), cvRound(matches[j].Radius) + 4, Scalar(255, 255, 255), -1);
            circle(intersect_b, Point(cvRound(matches[j].X), cvRound(matches[j].Y)), cvRound(matches[j].Radius / 3.592 - 4), Scalar(0, 0, 0), -1);
            bitwise_and(intersect_a, intersect_b, intersect_a);
            // Standard circle-circle intersection: a is the distance from circle i's
            // center to the chord between the two intersection points, h is the
            // half-chord length.
            double a, h;
            a = (matches[i].Radius * matches[i].Radius - matches[j].Radius * matches[j].Radius + distance * distance) / (2 * distance);
            h = sqrt(matches[i].Radius * matches[i].Radius - a * a);
            Point2f p0((matches[j].X - matches[i].X) * (a / distance) + matches[i].X, (matches[j].Y - matches[i].Y) * (a / distance) + matches[i].Y);
            // Mask out the two intersection points themselves
            circle(intersect_a, Point2f(p0.x + h * (matches[j].Y - matches[i].Y) / distance, p0.y - h * (matches[j].X - matches[i].X) / distance), 6, Scalar(0, 0, 0), -1);
            circle(intersect_a, Point2f(p0.x - h * (matches[j].Y - matches[i].Y) / distance, p0.y + h * (matches[j].X - matches[i].X) / distance), 6, Scalar(0, 0, 0), -1);
            bitwise_and(intersect_a, canny, intersect_a);
            // Score each circle by how many edge pixels fall on its rim inside the overlap
            intersect_b = Scalar(0, 0, 0);
            circle(intersect_b, Point(cvRound(matches[i].X), cvRound(matches[i].Y)), cvRound(matches[i].Radius), Scalar(255, 255, 255), 6);
            bitwise_and(intersect_a, intersect_b, intersect_b);
            score_i = countNonZero(intersect_b);
            intersect_b = Scalar(0, 0, 0);
            circle(intersect_b, Point(cvRound(matches[j].X), cvRound(matches[j].Y)), cvRound(matches[j].Radius), Scalar(255, 255, 255), 6);
            bitwise_and(intersect_a, intersect_b, intersect_b);
            score_j = countNonZero(intersect_b);
            // The circle with fewer rim edges in the overlap is the one underneath
            if (score_i < score_j) matches[i].Layer = matches[j].Layer + 1;
            if (score_j < score_i) matches[j].Layer = matches[i].Layer + 1;
        }
        j++;
    }
}
After that it was easy to extract the best part to pick (I'm correlating with depth data as well).
The blue circles are parts, the green circle is the tallest stack, and the red circles are parts that are under other parts.
I hope this helps someone else working on similar problems.

OpenCV Drawing a Line from a set of points

I am trying to draw a line that links up the center points of a bounding box; the points are stored in a vector as the center moves from frame to frame.
Now I am trying to use cvLine to link these points together with a line. I am following this OpenCV documentation, but the cvLine function isn't happy with the parameters I give it.
Here is the code:
vector<Point> Rightarm(20);
vector<Point> Leftarm(20);
vector<Point>::const_iterator RightIter;
vector<Point>::const_iterator LeftIter;
Point center = Point(oko[0].x + (oko[0].width / 2), oko[0].y + (oko[0].height / 2));
cout << "Center Point of Box: 0 is: " << center << endl;
double area = (oko[0].width * oko[0].height);
cout << "The Area of Box: 0 is: " << area << endl;
Point center1 = Point(oko[1].x + (oko[1].width / 2), oko[1].y + (oko[1].height / 2));
cout << "Center Point of Box: 1 is: " << center1 << endl;
double area1 = (oko[1].width * oko[1].height);
cout << "The Area of Box: 1 is: " << area1 << endl;
Rightarm.push_back(center);
Leftarm.push_back(center1);
if (oko[0].x > oko[1].x)
{
}
else
{
}
for (RightIter = Rightarm.begin(); RightIter != Rightarm.end(); ++RightIter)
{
    circle(drawing, *RightIter, 3, Scalar(0, 0, 255), CV_FILLED);
}
if (Rightarm.size() == 20)
{
    Rightarm.clear();
}
for (LeftIter = Leftarm.begin(); LeftIter != Leftarm.end(); ++LeftIter)
{
    circle(drawing, *LeftIter, 3, Scalar(0, 255, 0), CV_FILLED);
}
if (Rightarm.size() == 20)
{
    Leftarm.clear();
}
cvLine(drawing, center.x, center.y, Scalar(255, 255, 255), 1, 8, CV_AA);
imshow(window_Input, frame);
imshow(window_Output, drawing);
Can anyone see where I am going wrong with this?
You are passing the wrong arguments, plus one extra argument, to the line function. The documentation you pointed to is for the Python interface, and the older one using cv at that. Assuming you have a recent version of OpenCV, it is better to use the new C++ interface (or the cv2 interface in Python).
You have to use the line function like this:
line(
    img,                // image to draw on
    center,             // one end point of the line segment, of type cv::Point
    center1,            // other end of the line segment
    Scalar(0, 255, 0),  // green colour
    1,                  // thickness of the line
    CV_AA               // anti-aliased line type
);
The documentation is here.
Maybe like this:
struct centerpoint {
    int x;
    int y;
} center1, center2;
(...) // Define values for the centers.
cvLine(drawing,
       cvPoint(center1.x, center1.y),
       cvPoint(center2.x, center2.y),
       Scalar(255, 255, 255), 1, 8, CV_AA);
Don't forget to upvote the answers you like and accept the one that works.
cvLine draws a line between two points; you should give it two cv::Point arguments, not center.x and center.y.
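To connect the whole history of stored centers rather than just one pair, a short sketch using the C++ interface (reusing the drawing Mat and Rightarm vector from the question; this is not from any of the answers above) could look like the following. Note the vector should start empty rather than be constructed as Rightarm(20), which pre-fills it with twenty (0,0) points:

std::vector<cv::Point> Rightarm;     // start empty, not Rightarm(20)
// ... each frame: Rightarm.push_back(center);

// Draw a segment between each pair of consecutive stored centers.
for (std::size_t i = 1; i < Rightarm.size(); ++i) {
    cv::line(drawing, Rightarm[i - 1], Rightarm[i],
             cv::Scalar(255, 255, 255), 1, CV_AA);
}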