I need an explanation of the following loop for face detection in OpenCV.
VideoCapture capture("DSC_0772.avi"); // a device id (-1, 0, 1) could be passed instead to open a camera
Mat cap_img,gray_img;
vector<Rect> faces, eyes;
while(1)
{
capture >> cap_img;
if (cap_img.empty()) break; // stop when the video ends
waitKey(10);
cvtColor(cap_img, gray_img, CV_BGR2GRAY);
cv::equalizeHist(gray_img,gray_img);
face_cascade.detectMultiScale(gray_img, faces, 1.1, 5, CV_HAAR_SCALE_IMAGE | CV_HAAR_DO_CANNY_PRUNING, cvSize(0,0), cvSize(300,300));
for (size_t i = 0; i < faces.size(); i++)
{
Point pt1(faces[i].x+faces[i].width, faces[i].y+faces[i].height);
Point pt2(faces[i].x,faces[i].y);
rectangle(cap_img, pt1, pt2, cvScalar(0,255,0), 2, 8, 0);
}
} // end of the while loop
I don't understand faces[i].x and the other parameters used inside the for loop, or how they are selected for face detection.
Thanks for the help.
faces is a std::vector of Rect, so the for loop goes through each Rect in the vector and creates two points. A Rect stores not only an x and y (of the top-left corner) but also the width and height of the rectangle. So faces[i].x+faces[i].width takes the x coordinate of the rectangle plus its width, and faces[i].y+faces[i].height takes the y coordinate of the rectangle plus its height; together these give the opposite corner of the rectangle. You then feed those two points, plus the image, into the rectangle() function.
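As a side note, cv::rectangle also has an overload that accepts a cv::Rect directly, so the two corner points are not strictly required. A minimal sketch of the same loop using it:
for (size_t i = 0; i < faces.size(); i++)
{
// faces[i] already carries x, y, width and height, so it can be passed as-is
rectangle(cap_img, faces[i], Scalar(0, 255, 0), 2, 8, 0);
}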
How is it possible to calculate the black/white ratio of the pixels inside the outline of a contour (not the bounding box)?
The image is pre-processed with cv::threshold(src, img, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU); and then inverted with img = 255 - img;.
I look for the rectangular outline of the table (the contour) via cv::RETR_EXTERNAL. I want to count the black pixels inside the contour.
There can be other components in the image, so I can't just count all non-zero pixels.
This is the original image, before it is binarized and inverted.
I think there's some confusion about terminology. A contour is simply a sequence of points. If you draw them as a closed polygon (e.g. with cv::drawContours), all the points inside the polygon will be white.
You can however use this mask to count the white or black pixels on your thresholded image:
cv::Mat1b bw_image = ...
std::vector<std::vector<cv::Point>> contours;
cv::findContours(bw_image, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); ++i)
{
// build a filled mask of the current contour
cv::Mat1b contour_mask(bw_image.rows, bw_image.cols, uchar(0));
cv::drawContours(contour_mask, contours, (int)i, cv::Scalar(255), cv::FILLED);
int total_white_inside_contour = cv::countNonZero(contour_mask);
int white_on_image_inside_contour = cv::countNonZero(bw_image & contour_mask);
int black_on_image_inside_contour = total_white_inside_contour - white_on_image_inside_contour;
}
You cannot directly calculate the white and black ratio of a contour, because what is a contour? A group of connected white pixels is called a contour, so a contour itself does not contain any black pixels; if it does, those pixels are called holes inside the contour.
Also, a contour does not have a specific shape.
So you can do it with a bounding rectangle: take the rectangle around the contour, and then you will be able to calculate the black and white ratio inside that rectangle, as sketched below.
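A minimal sketch of that idea, reusing bw_image and contours from the snippet above:
// count black vs. white pixels inside the bounding rectangle of each contour
for (size_t i = 0; i < contours.size(); ++i)
{
cv::Rect box = cv::boundingRect(contours[i]);
cv::Mat1b roi = bw_image(box);
int white = cv::countNonZero(roi); // white pixels inside the rectangle
int black = box.area() - white; // the remaining pixels are black
double ratio = white > 0 ? (double)black / white : 0.0;
}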
I am using ArUco markers to get the location of a robot. After getting the pose from estimatePoseSingleMarkers I obtain the rvecs and tvecs for a given marker. From these, how could I obtain the rotation angle of the marker about each axis?
I used the code below to detect and draw the ArUco markers along with their axes:
while(true)
{
vector< vector<Point2f>> corners; //All the Marker corners
vector<int> ids;
cap >> frame;
cvtColor(frame, gray, CV_BGR2GRAY);
aruco::detectMarkers(gray, dictionary, corners, ids);
aruco::drawDetectedMarkers(frame,corners,ids);
aruco::estimatePoseSingleMarkers(corners, arucoMarkerLength, cameraMatrix, distanceCoefficients, rvecs, tvecs);
for (size_t i = 0; i < ids.size(); i++)
{
aruco::drawAxis(frame, cameraMatrix, distanceCoefficients, rvecs[i], tvecs[i], 0.1f);
}
imshow("Markers", frame);
int key = waitKey(10);
if((char)key == 'q')
break;
}
The rotation of the marker with respect to the camera can be obtained by first converting the rotation vector (rvec) to a rotation matrix, and then extracting the Euler angles from that matrix.
Converting a rotation matrix to Euler angles is described here.
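A minimal sketch of that conversion (this assumes the common X-Y-Z roll/pitch/yaw decomposition; adapt it to whatever angle convention your robot uses):
#include <opencv2/calib3d.hpp>
#include <cmath>
// convert a single rvec to Euler angles in radians
cv::Vec3d rvecToEuler(const cv::Vec3d& rvec)
{
cv::Mat R;
cv::Rodrigues(rvec, R); // 3x1 rotation vector -> 3x3 rotation matrix
double sy = std::sqrt(R.at<double>(0,0) * R.at<double>(0,0) + R.at<double>(1,0) * R.at<double>(1,0));
bool singular = sy < 1e-6; // gimbal-lock check
double x, y, z;
if (!singular)
{
x = std::atan2(R.at<double>(2,1), R.at<double>(2,2)); // roll
y = std::atan2(-R.at<double>(2,0), sy); // pitch
z = std::atan2(R.at<double>(1,0), R.at<double>(0,0)); // yaw
}
else
{
x = std::atan2(-R.at<double>(1,2), R.at<double>(1,1));
y = std::atan2(-R.at<double>(2,0), sy);
z = 0;
}
return cv::Vec3d(x, y, z);
}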
Currently I am using OpenCV to process images from an AVCaptureSession. The app takes these images and draws circles (via cv::circle) on the blobs. The tracking is working, but when I draw the circle it comes out as a gray, distorted circle when it should be green. Is it that OpenCV drawing functions don't work properly in iOS apps, or is there something I can do to fix it?
Any help would be appreciated.
Here is a screenshot (ignore the giant green circle at the bottom):
The cv::circle is drawn around the outside of the black circle.
Here is where I converted the CMSampleBuffer into a cv::Mat:
CVImageBufferRef pixelBuff = CMSampleBufferGetImageBuffer(sampleBuffer);
cv::Mat cvMat;
CVPixelBufferLockBaseAddress(pixelBuff, 0);
int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuff);
int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuff);
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuff);
cvMat = cv::Mat(bufferHeight, bufferWidth, CV_8UC4, pixel);
cv::Mat grayMat;
cv::cvtColor(cvMat, grayMat, CV_BGRA2GRAY); // the capture buffer is 4-channel BGRA
CVPixelBufferUnlockBaseAddress(pixelBuff, 0);
This is the cv::circle call:
if (keypoints.size() > 0) {
cv::Point p(keypoints[0].pt.x, keypoints[0].pt.y);
printf("x: %f, y: %f\n",keypoints[0].pt.x, keypoints[0].pt.y);
cv::circle(cvMat, p, keypoints[0].size/2, cv::Scalar(0,255,0), 2, 8, 0);
}
Keypoints is the vector of blobs that have been detected.
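One thing worth checking (an assumption on my part, since the capture buffer above is 4-channel BGRA): cv::Scalar(0,255,0) leaves the fourth (alpha) component at 0, which can render as a washed-out or distorted color when the BGRA mat is displayed. A sketch of the same call with an explicit opaque alpha:
// hypothetical fix: pass a 4-component scalar so the alpha channel is fully opaque
cv::circle(cvMat, p, keypoints[0].size/2, cv::Scalar(0, 255, 0, 255), 2, 8, 0);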
I am currently working on face detection, and thereafter eyes, mouth, nose and other facial features. For the above detection I have used Haar cascades (frontal face, eyes, right ear, left ear and mouth). Everything works perfectly if the face is frontal and straight, but I am not getting good results if the face is in side view or rotated. For the side view I have used lbpcascade_profile.xml (it works only for the right side of the face), but for a rotated face I am not able to detect the face at all. Can anyone help me out in the above context? I am adding my code here for better understanding.
P.S.: Thanks in advance, and pardon me for the childish question (it might be because I am very new to programming).
void detectAndDisplay( Mat frame)
{
// create a vector array to store the face found
std::vector<Rect> faces;
Mat frame_gray;
bool mirror_image = false;
// convert the frame image into gray image file
cvtColor( frame, frame_gray, CV_BGR2GRAY);
//equalize the gray image file
equalizeHist( frame_gray, frame_gray);
//find the frontal faces and store them in vector array
face_cascade1.detectMultiScale(frame_gray,
faces,
1.1, 2,
0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT,
Size(40, 40),
Size(200, 200));
// find the right side face and store that in the face vector
if(!(faces.size()))
{
profileface_cascade.detectMultiScale( frame_gray,
faces,
1.2, 3,
0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT,
Size(40, 40),
Size(200, 200));
}
// find whether left face exist or not by flipping the frame and checking through lbsprofile
if(!faces.size())
{
cv::flip(frame_gray, frame_gray, 1);
profileface_cascade.detectMultiScale( frame_gray,
faces,
1.2, 3,
0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT,
Size(40, 40),
Size(200, 200));
mirror_image = true;
}
// if the face was found on the mirrored frame, flip the frame back and
// map the detected rectangle into the original (un-flipped) coordinates
if(mirror_image and faces.size())
{
// flip the frame back
cv::flip(frame_gray, frame_gray, 1);
// mirror the x coordinate of the detected rectangle
faces[0].x = frame_gray.cols - faces[0].x - faces[0].width;
}
if(faces.size())
{
//draw rectangle for the faces detected
rectangle(frame, faces[0], cvScalar(0, 255, 0, 0), 1, 8, 0);
}
// check whether any face is present in frame or not
else
image_not_found++;
imshow("Face Detection", frame);
}
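For completeness, a minimal sketch of how the cascades used above might be loaded and the function driven from a capture loop (the cascade file names are assumptions; substitute the paths of your own cascade files):
CascadeClassifier face_cascade1;
CascadeClassifier profileface_cascade;
int image_not_found = 0;
int main()
{
// hypothetical cascade paths; replace with your own files
if (!face_cascade1.load("haarcascade_frontalface_alt.xml") || !profileface_cascade.load("lbpcascade_profileface.xml"))
return -1;
VideoCapture cap(0);
Mat frame;
while (cap.read(frame))
{
detectAndDisplay(frame);
if (waitKey(10) == 27) break; // press Esc to quit
}
return 0;
}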
Flandmark will be your friend, then! I've been using it quite often recently, and it has turned out to be a successful tool for head pose estimation, and in particular for detecting "rotated" faces. It works reasonably well within a range of angles: tilt (rotation around the axis parallel to the image's width) from -30 to +30 degrees, and pan (rotation around the axis parallel to the image's height) from -45 to +45 degrees. It is also a robust solution.
Assume that I have a detected circle with center coordinates (center.x, center.y), found using this HoughCircles-based code:
GaussianBlur( dis, dis, Size(3, 3), 2, 2 );
vector<Vec3f> circles;
HoughCircles( dis, circles, CV_HOUGH_GRADIENT, 1, dis.rows/8, 200, 100);
for( size_t i = 0; i < circles.size(); i++ ){
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
cout << "center" << center.x << ", " << center.y << endl;
// coordinates of center points
V.push_back(std::make_pair(center.x,center.y));
int radius = cvRound(circles[i][2]);
// circle center
circle( dis, center, 3, 1, -1, 8, 0 );
// circle outline
circle( dis, center, radius, 1, 3, 8, 0 );
}
How do I draw a rectangle around this circle such that the center of the circle lies in the middle of the rectangle and the distance between the center and each side is "radius + x"?
I am completely new to image processing, sorry for the simple question.
I would appreciate any help.
Edit: here is the code I used:
cv::rectangle(dis, cv::Point(center.x - (radius+10), center.y - (radius+10)), cv::Point(center.x + (radius+10), center.y + (radius+10)), 1, 1, 8);
Assuming the centre is at x,y, you need to draw a rectangle with the following specifications:
top left corner : x-(radius+a),y-(radius+a)
bottom right corner : x+(radius+a),y+(radius+a)
where a is an arbitrary value that you want to add to the radius.
More generally:
given a centre point x,y and a known size LxH of a rectangle, you can draw the rectangle by specifying the top-left point as x-(L/2),y-(H/2) and the bottom-right point as x+(L/2),y+(H/2), as sketched below.
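A minimal sketch of both cases (center, radius and the margin a reuse the names from the snippets above; L, H and the drawing target dis are assumptions):
// square around the circle: each side is (radius + a) away from the centre
int a = 10; // arbitrary margin added to the radius
cv::Rect square(center.x - (radius + a), center.y - (radius + a), 2 * (radius + a), 2 * (radius + a));
cv::rectangle(dis, square, cv::Scalar(255), 1, 8);
// general case: a rectangle of size L x H centred on the same point
int L = 100, H = 60;
cv::Rect box(center.x - L / 2, center.y - H / 2, L, H);
cv::rectangle(dis, box, cv::Scalar(255), 1, 8);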