I am using the OpenCV findChessboardCorners function to find the corners of a chessboard, but the function returns false.
Following is my code:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    vector<vector<Point2f>> imagePoints;
    Mat view;
    bool found;
    vector<Point2f> pointBuf;
    Size boardSize; // The size of the board -> number of inner corners per row and column
    boardSize.width = 75;
    boardSize.height = 49;

    view = cv::imread("FraunhoferChessBoard.jpeg");
    namedWindow("Original Image", WINDOW_NORMAL); // Create a window for display.
    imshow("Original Image", view);

    found = findChessboardCorners(view, boardSize, pointBuf,
                                  CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FAST_CHECK | CV_CALIB_CB_NORMALIZE_IMAGE);
    if (found)
    {
        cout << "Corners of chess board detected";
    }
    else
    {
        cout << "Corners of chess board not detected";
    }

    waitKey(0);
    return 0;
}
I expect the return value of "findChessboardCorners" to be true, but I am getting false.
Please explain where I have made a mistake.
Many thanks :)
The function didn't find the pattern in your image, which is why it returns false. The exact same code may well work with a different image.
I cannot say for sure why the function did not find the pattern in your image, but I can recommend a few approaches to make the detection less sensitive to noise, so that the algorithm can detect your corners properly:
- Use findChessboardCornersSB instead of findChessboardCorners. According to the documentation it is more robust to noise and works faster for large images like yours, so it is probably what you are looking for. I tried it in Python and it works properly with the image you posted. See the result below.
- Change the pattern shapes as shown in the documentation for findChessboardCornersSB.
- Use fewer and bigger squares in your pattern. Having that many squares does not help.
For the next step you will need a non-symmetric pattern: if your top-left square is white, then the bottom-right square has to be black.
If you still have problems with the square pattern, you could also switch from corners to a circle pattern. All the required functions are available in OpenCV; in my case it worked better. See findCirclesGrid. If you use this method, you can run the blob detector to check how each circle is detected and tune its parameters to improve the accuracy.
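For illustration, a minimal C++ sketch of that circle-grid route (the file name, pattern size and blob-detector settings below are assumptions you would have to adapt to your own print):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("pattern.jpg"); // hypothetical file name

    // Tune the blob detector so the printed circles are accepted.
    SimpleBlobDetector::Params params;
    params.filterByArea = true;
    params.minArea = 50;      // adjust to the circle size in pixels
    params.maxArea = 10000;
    Ptr<FeatureDetector> blobDetector = SimpleBlobDetector::create(params);

    std::vector<Point2f> centers;
    Size patternSize(4, 11); // e.g. the asymmetric circle grid shipped with OpenCV
    bool found = findCirclesGrid(img, patternSize, centers,
                                 CALIB_CB_ASYMMETRIC_GRID, blobDetector);

    // drawChessboardCorners also works for circle grids.
    drawChessboardCorners(img, patternSize, Mat(centers), found);
    imshow("circles", img);
    waitKey(0);
    return 0;
}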
Hope this helps!
EDIT:
Here is the Python code that makes it work on the downloaded image.
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('img.jpg')
# Resize if needed; here the image is kept at its original size.
img_small = cv2.resize(img, (img.shape[1], img.shape[0]))

found, corners = cv2.findChessboardCornersSB(img_small, (75, 49), flags=0)

plt.imshow(cv2.cvtColor(img_small, cv2.COLOR_BGR2RGB), cmap='gray')
plt.scatter(corners[:, 0, 0], corners[:, 0, 1])
plt.show()
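Since your own code is in C++, the equivalent call there would look roughly like the sketch below (untested against your image; the board size and file name are taken from your question, and findChessboardCornersSB requires a reasonably recent OpenCV with the calib3d module):
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Mat view = imread("FraunhoferChessBoard.jpeg");
    std::vector<Point2f> pointBuf;
    Size boardSize(75, 49); // inner corners per row and column

    bool found = findChessboardCornersSB(view, boardSize, pointBuf, 0);
    std::cout << (found ? "detected" : "not detected") << std::endl;

    if (found)
    {
        drawChessboardCorners(view, boardSize, Mat(pointBuf), found);
        namedWindow("Corners", WINDOW_NORMAL);
        imshow("Corners", view);
        waitKey(0);
    }
    return 0;
}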
I'm currently using the OpenCV library with C++, and my goal is to remove the fisheye effect from an image ("make it planar").
I'm using the function "undistortImage" to cancel the effect, but first I need to perform camera calibration to find the parameters K, Knew, and D, and I didn't fully understand the documentation (link: http://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gga37375a2741e88052ce346884dfc9c6a0a0899eaa2f96d6eed9927c4b4f4464e05).
From my understanding, I should provide two lists of points and the function "calibrate" is supposed to return the arrays I need. So my question is the following: given a fisheye image, how am I supposed to pick the two lists of points to get the result? This is my code for the moment, very basic: it just takes the picture, displays it, performs the undistortion and displays the new image. The elements in the matrices are random, so currently the result is not as expected. Thanks for the answers.
#include "opencv2\core\core.hpp"
#include "opencv2\highgui\highgui.hpp"
#include "opencv2\calib3d\calib3d.hpp"
#include <stdio.h>
#include <iostream>
using namespace std;
using namespace cv;
int main(){
cout << " Usage: display_image ImageToLoadAndDisplay" << endl;
Mat image;
image = imread("C:/Users/Administrator/Downloads/eiffel.jpg", CV_LOAD_IMAGE_COLOR); // Read the file
if (!image.data) // Check for invalid input
{
cout << "Could not open or find the image" << endl;
return -1;
}
cout << "Input image depth: " << image.depth() << endl;
namedWindow("Display window", WINDOW_AUTOSIZE);// Create a window for display.
imshow("Display window", image); // Show our image inside it.
Mat Ka = Mat::eye(3, 3, CV_64F); // Creating distortion matrix
Mat Da = Mat::ones(1, 4, CV_64F);
Mat dstImage(image.rows, image.cols, CV_32F);
cout << "K matrix depth: " << Ka.depth() << endl;
cout << "D matrix depth: " << Da.depth() << endl;
Mat Knew = Mat::eye(3, 3, CV_64F);
std::vector<cv::Vec3d> rvec;
std::vector<cv::Vec3d> tvec;
int flag = 0;
std::vector<Point3d> objectPoints1 = { Point3d(0,0,0), Point3d(1,1,0), Point3d(2,2,0), Point3d(3,3,0), Point3d(4,4,0), Point3d(5,5,0),
Point3d(6,6,0), Point3d(7,7,0), Point3d(3,0,0), Point3d(4,1,0), Point3d(5,2,0), Point3d(6,3,0), Point3d(7,4,0), Point3d(8,5,0), Point3d(5,4,0), Point3d(0,7,0), Point3d(9,7,0), Point3d(9,0,0), Point3d(4,3,0), Point3d(7,2,0)};
std::vector<Point2d> imagePoints1 = { Point(107,84), Point(110,90), Point(116,96), Point(126,107), Point(142,123), Point(168,147),
Point(202,173), Point(232,192), Point(135,69), Point(148,73), Point(165,81), Point(189,93), Point(219,112), Point(248,133), Point(166,119), Point(96,183), Point(270,174), Point(226,56), Point(144,102), Point(206,75) };
std::vector<std::vector<cv::Point2d> > imagePoints(1);
imagePoints[0] = imagePoints1;
std::vector<std::vector<cv::Point3d> > objectPoints(1);
objectPoints[0] = objectPoints1;
fisheye::calibrate(objectPoints, imagePoints, image.size(), Ka, Da, rvec, tvec, flag); // Calibration
cout << Ka<< endl;
cout << Da << endl;
fisheye::undistortImage(image, dstImage, Ka, Da, Knew); // Performing distortion
namedWindow("Display window 2", WINDOW_AUTOSIZE);// Create a window for display.
imshow("Display window 2", dstImage); // Show our image inside it.
waitKey(0); // Wait for a keystroke in the window
return 0;
}
For calibration with cv::fisheye::calibrate you must provide
objectPoints vector of vectors of calibration pattern points in the calibration pattern coordinate space.
This means you provide the KNOWN real-world coordinates of the points (they must correspond to the points in imagePoints), but you can choose the position of the coordinate system arbitrarily (as long as it is Cartesian), so you must know your object - e.g. a planar test pattern.
imagePoints vector of vectors of the projections of calibration pattern points
These must be the same points as in objectPoints, but given in image coordinates, i.e. where the projections of the object points hit your image (read/extract the coordinates from your image).
For example, if your camera captured this image (taken from here):
you must know the dimensions of your test pattern (up to scale). For example, you could choose the top-left corner of the top-left square to be position (0,0,0), the top-right corner of the top-left square to be (1,0,0), and the bottom-left corner of the top-left square to be (0,1,0), so your whole test pattern would lie on the xy-plane.
Then you could extract these correspondences:
pixel       real-world
(144,103)   (4,3,0)
(206,75)    (7,2,0)
(109,151)   (2,5,0)
(253,159)   (8,6,0)
for these points (marked red):
The pixel positions could be your imagePoints list, while the real-world positions could be your objectPoints list.
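In code, filling those two lists (and, if your pattern is a chessboard, extracting the image points automatically instead of by hand) could look roughly like the sketch below - the board size, square size and file name are assumptions, not values from your setup:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat view = imread("calib_view.jpg"); // hypothetical calibration image
    Size boardSize(9, 6);                // inner corners of the printed board (assumption)
    double squareSize = 1.0;             // any unit; the calibration is only defined up to scale

    // imagePoints: where the pattern corners appear in the image
    std::vector<Point2d> imagePoints1;
    std::vector<Point2f> cornersF;
    if (findChessboardCorners(view, boardSize, cornersF))
        for (const Point2f& c : cornersF)
            imagePoints1.push_back(Point2d(c.x, c.y));

    // objectPoints: the same corners in the pattern's own (planar) coordinate system,
    // listed in the same row-by-row order as the detected corners
    std::vector<Point3d> objectPoints1;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            objectPoints1.push_back(Point3d(x * squareSize, y * squareSize, 0));

    // One inner vector per calibration view; use several views in practice.
    std::vector<std::vector<Point2d> > imagePoints(1, imagePoints1);
    std::vector<std::vector<Point3d> > objectPoints(1, objectPoints1);

    Mat K = Mat::eye(3, 3, CV_64F), D = Mat::zeros(1, 4, CV_64F);
    std::vector<Vec3d> rvecs, tvecs;
    fisheye::calibrate(objectPoints, imagePoints, view.size(), K, D, rvecs, tvecs, 0);
    return 0;
}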
Does this answer your question?
I am trying to write each frame from a camera into a video. Up to here it works fine. However, I also want the shape_predictor output to be included in each frame of the video, so that when it is played back the landmarks appear on the image. So far I have got this... Any ideas? Thank you
cap >> frame;
cv::VideoWriter oVideoWriter;
// . . .
cv_image<bgr_pixel> cimg(frame); //Mat to something dlib can deal with
frontal_face_detector detector = get_frontal_face_detector();
std::vector<rectangle> faces = detector(cimg);
pose_model(cimg, faces[0]);
oVideoWriter.write(dlib::toMat(cimg)); //Turn it into an Opencv Mat
The shape predictor is not the face detector. You have to first call the face detector, then the shape predictor.
See this example program: http://dlib.net/face_landmark_detection_ex.cpp.html
You initialized the face detector properly. Then you have to initialize the shape predictor, something like this:
shape_predictor sp;
deserialize("shape_predictor_68_face_landmarks.dat") >> sp;
The model can be found here: http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2
For the rest, you can just follow the example program I linked above. Here's the portion where the shape predictor is run. You have to pass it the bounding boxes returned by the detector for it to work. The code below iterates through all the boxes returned by the detector.
// Now tell the face detector to give us a list of bounding boxes
// around all the faces in the image.
std::vector<rectangle> dets = detector(img);
cout << "Number of faces detected: " << dets.size() << endl;
// Now we will go ask the shape_predictor to tell us the pose of
// each face we detected.
std::vector<full_object_detection> shapes;
for (unsigned long j = 0; j < dets.size(); ++j)
{
    full_object_detection shape = sp(img, dets[j]);
    cout << "number of parts: " << shape.num_parts() << endl;
    cout << "pixel position of first part:  " << shape.part(0) << endl;
    cout << "pixel position of second part: " << shape.part(1) << endl;
    // You get the idea, you can get all the face part locations if
    // you want them. Here we just store them in shapes so we can
    // put them on the screen.
    shapes.push_back(shape);
}
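To get the landmarks into the video you are writing, one option is to draw each detected part directly onto the OpenCV frame before writing it. A minimal sketch, assuming frame is the cv::Mat you grabbed from the camera and oVideoWriter is already opened:
// Draw the landmark points on the OpenCV frame and write it out.
// 'shapes' is the std::vector<full_object_detection> filled above.
for (const auto& shape : shapes)
{
    for (unsigned long k = 0; k < shape.num_parts(); ++k)
    {
        const dlib::point& p = shape.part(k);
        cv::circle(frame, cv::Point(p.x(), p.y()), 2, cv::Scalar(0, 255, 0), -1);
    }
}
oVideoWriter.write(frame); // the drawn landmarks are now part of the stored frame
Note that cv_image<bgr_pixel> only wraps the cv::Mat without copying, so drawing on frame is enough; there is no need to convert back with dlib::toMat.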
I am a beginner with OpenCV and I have read some tutorials and manuals but I couldn't quite make sense of some things.
Currently, I am trying to crop a binary image into two sections. I want to know which row has the largest number of white pixels, then crop out that row and everything above it, and redraw the image with just the data below that row.
What I've done so far is to find the coordinates of the white pixels using findNonZero and then store it into a Mat. The next step is where I get confused. I am unsure of how to access the elements in the Mat and figuring out which row occurs the most in the array.
I have used a test image with my code below. It gave me the pixel locations [2,0; 1,1; 2,1; 3,1; 0,2; 1,2; 2,2; 3,2; 4,2; 1,3; 2,3; 3,3; 2,4]. Each element holds the x and y coordinate of a white pixel. First of all, how do I access each element, and then read only the y-coordinate of each element to determine which row occurs most often? I have tried using the at<>() method, but I don't think I have been using it correctly.
Is this a good way of doing it, or is there a better and/or faster way? I have read about a different method here using the L1 norm, but I couldn't make sense of it; would that method be faster than mine?
Any help would be greatly appreciated.
Below is the code I have so far.
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    int Number_Of_Elements;
    Mat Grayscale_Image, Binary_Image, NonZero_Locations;

    Grayscale_Image = imread("Test Image 6 (640x480px).png", 0);
    if(!Grayscale_Image.data)
    {
        cout << "Could not open or find the image" << endl;
        return -1;
    }

    Binary_Image = Grayscale_Image > 128;
    findNonZero(Binary_Image, NonZero_Locations);
    cout << "Non-Zero Locations = " << NonZero_Locations << endl << endl;

    Number_Of_Elements = NonZero_Locations.total();
    cout << "Total Number Of Array Elements = " << Number_Of_Elements << endl << endl;

    namedWindow("Test Image", CV_WINDOW_AUTOSIZE);
    moveWindow("Test Image", 100, 100);
    imshow("Test Image", Binary_Image);

    waitKey(0);
    return(0);
}
I expect the following to work:
Point loc_i = NonZero_Locations.at<Point>(i);
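For the counting and cropping step you don't even need findNonZero: you can count the white pixels per row directly and take a ROI below the best row. A sketch (the threshold and file name are taken from your code; the rest is one possible reading of "crop everything above it"):
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Mat gray = imread("Test Image 6 (640x480px).png", 0);
    Mat binary = gray > 128;

    // Sum each row into a single 32-bit value (each white pixel contributes 255,
    // which does not change where the maximum is). Use REDUCE_SUM on newer OpenCV.
    Mat rowSum;
    reduce(binary, rowSum, 1, CV_REDUCE_SUM, CV_32S);

    // Find the row with the largest count.
    Point maxLoc;
    minMaxLoc(rowSum, 0, 0, 0, &maxLoc);
    int maxRow = maxLoc.y;
    std::cout << "Row with most white pixels: " << maxRow << std::endl;

    // Keep only the part below that row.
    Mat cropped = binary(Range(maxRow + 1, binary.rows), Range::all()).clone();
    imshow("Cropped", cropped);
    waitKey(0);
    return 0;
}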
I have previously asked a question, Marking an interest point in an image using c++. I used the same solution and got the required point using adaptive thresholding and a blob detection algorithm (growing regions). I have the original source image in which I want to detect the rectangular region at the center.
Original Image:
But after I used the algorithm, I got something like this (details are visible if you open it in a new tab):
Marked Image:
where, apart from the rectangular region, the spots illuminated by bright daylight are also visible. I have used bilateral filtering, but I am still not able to detect the rectangular region. The algorithm does work on the night image, where the background is much darker, as expected.
Can someone suggest whether the same algorithm with some modifications is sufficient, or whether other, more efficient approaches are available?
Thanks
Using a simple combination of blur & threshold I managed to get this result (resized for viewing purposes):
After that, applying erosion & the squares.cpp technique (which is a sample from OpenCV) outputs:
which is almost the result you are looking for: the bottom part of the rectangle was successfully detected. All you need to do is increase the height of the detected rectangle (red square) to fit your area of interest.
Code:
Mat img = imread(argv[1]);
// Blur
Mat new_img = img.clone();
medianBlur(new_img, new_img, 5);
// Perform threshold
double thres = 210;
double color = 255;
threshold(new_img, new_img, thres, color, CV_THRESH_BINARY);
imwrite("thres.png", new_img);
// Execute erosion to improve the detection
int erosion_size = 4;
Mat element = getStructuringElement(MORPH_CROSS,
Size(2 * erosion_size + 1, 2 * erosion_size + 1),
Point(erosion_size, erosion_size) );
erode(new_img, new_img, element);
imwrite("erode.png", new_img);
// find_squares() and draw_squares() are taken from OpenCV's squares.cpp sample
vector<vector<Point> > squares;
find_squares(new_img, squares);
std::cout << "squares: " << squares.size() << std::endl;
draw_squares(img, squares);
imwrite("area.png", img);
EDIT:
The find_squares() function returns a vector with all the squares found in the image. Because it iterates over every channel of the image, in your example it successfully detects the rectangular region in each of them, so printing squares.size() outputs 3.
A square can be seen as a vector of 4 (X,Y) coordinates, so OpenCV expresses this concept as vector<Point>, allowing you to access the X and Y parts of each coordinate.
Now, printing squares revealed that the points were detected in a counterclockwise direction:
1st ------ 4th
| |
| |
| |
2nd ------ 3rd
Following this example, it's fairly obvious that if you need to increase the height of the rectangle you need to change the Y of the 1st and 4th points:
for (size_t i = 0; i < squares.size(); i++)
{
    for (size_t j = 0; j < squares[i].size(); j++)
    {
        // std::cout << "# " << i << " " << squares[i][j].x << "," << squares[i][j].y << std::endl;
        if (j == 0 || j == 3)
            squares[i][j].y = 0;
    }
}
In the image shown above, I would suggest
- a normal thresholding operation, which should work pretty well, or
- a line-wise chain-code "calculation", or
- finding gradients in your histogram.
There would be plenty of other solutions as well.
I would also consider subtracting the background shading if it is consistent, as sketched below.
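A minimal sketch of that idea combined with a plain threshold (the blur kernel size and the threshold value are assumptions that would need tuning on your images):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat gray = imread("input.png", 0); // hypothetical file name

    // Estimate the slowly varying background shading with a large blur,
    // then subtract it so the illumination becomes roughly uniform.
    Mat background, flattened;
    GaussianBlur(gray, background, Size(51, 51), 0);
    subtract(gray, background, flattened);

    // A normal global threshold on the flattened image.
    Mat binary;
    threshold(flattened, binary, 30, 255, THRESH_BINARY);

    imshow("binary", binary);
    waitKey(0);
    return 0;
}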