Using cv::rgbd::Odometry::compute - c++

I am using C++ and OpenCV in combination with ROS. I work with live images from my camera (Intel RealSense R200), from which I get depth and RGB images. In my C++ code I want to use these images to compute odometry data and build a trajectory from it.
I am trying to use the cv::rgbd::Odometry::compute function for the odometry, but I always get false as the return value (the "isSuccess" value in the code is always 0), and I don't know which part I am doing wrong.
I read my images from the camera using ROS, and in the callback function I first convert all images to grayscale and then use the Surf function to detect features. Then I want to use "compute" to get the transformation between the current and previous frame.
As far as I understand, "Rt" and "initRt" are outputs of the function, so it should be enough to construct them with the correct size.
Can anyone see the problem? Am I missing anything?
boost::shared_ptr<rgbd::Odometry> odom;

Mat Rt = Mat(4,4, CV_64FC1);
Mat initRt = Mat(4,4, CV_64FC1);

Mat prevFtrM; //mask Matrix of previous image
Mat currFtrM; //mask Matrix of current image
Mat tempFtrM;

Mat imgprev;  // previous depth image
Mat imgcurr;  // current depth image
Mat imgprevC; // previous colored image
Mat imgcurrC; // current colored image

void Surf(Mat img) // detect features of the img and fill currFtrM
{
    int minHessian = 400;
    Ptr<SURF> detector = SURF::create( minHessian );
    vector<KeyPoint> keypoints_1;

    currFtrM = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
    Mat roi(currFtrM, cv::Rect(0, 0, img.size().width, img.size().height));
    roi = Scalar(255, 255, 255);
    detector->detect( img, keypoints_1, currFtrM );

    Mat img_keypoints_1;
    drawKeypoints( img, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
    //-- Show detected (drawn) keypoints
    imshow("Keypoints 1", img_keypoints_1 );
}

void Callback(const sensor_msgs::ImageConstPtr& clr, const sensor_msgs::ImageConstPtr& dpt)
{
    if(!imgcurr.data || !imgcurrC.data) // first frame
    {
        // depth image
        imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;

        // colored image
        imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
        cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);

        // find features in the image
        Surf(imgcurrC);
        prevFtrM = currFtrM;

        // scale color image to size of depth image
        resize(imgcurrC, imgcurrC, imgcurr.size());
        return;
    }

    odom = boost::make_shared<rgbd::RgbdOdometry>(imgcurrC, Odometry::DEFAULT_MIN_DEPTH(), Odometry::DEFAULT_MAX_DEPTH(), Odometry::DEFAULT_MAX_DEPTH_DIFF(), std::vector<int>(), std::vector<float>(), Odometry::DEFAULT_MAX_POINTS_PART(), Odometry::RIGID_BODY_MOTION);

    // depth image
    imgprev = imgcurr;
    imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;

    // colored image
    imgprevC = imgcurrC;
    imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
    cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);

    // scale color image to size of depth image
    resize(imgcurrC, imgcurrC, imgcurr.size());
    cv::imshow("Color resized", imgcurrC);

    tempFtrM = currFtrM;
    // detect new features in imgcurrC and save in a vector<Point2f>
    Surf( imgcurrC);
    prevFtrM = tempFtrM;
    // set camera matrix to the calibrated intrinsics (see update below)
    float vals[] = {619.137635, 0., 304.793791, 0., 625.407449, 223.984030, 0., 0., 1.};
    const Mat cameraMatrix = Mat(3, 3, CV_32FC1, vals);
    odom->setCameraMatrix(cameraMatrix);

    bool isSuccess = odom->compute( imgprevC, imgprev, prevFtrM, imgcurrC, imgcurr, currFtrM, Rt, initRt );

    if(isSuccess)
        cout << "isSuccess " << isSuccess << endl;
}
Update: I calibrated my camera and replaced the camera matrix with real values.

A bit late, but it could still be useful for someone.
It seems to me that you are missing the extrinsic calibration in your calculation: in my experiments, the R200 has a translation component between the RGB and depth cameras that you are not taking into account.
Furthermore, looking at the camera parameters, depth and RGB have different intrinsics, and the color frame has a MODIFIED_BROWN_CONRADY lens distortion (though this is minimal). Are you undistorting that?
Obviously, I could be wrong if you already do all those steps and save registered RGB and depth to files.
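If the frames are not registered yet, here is a minimal sketch of what I mean, assuming the opencv_contrib rgbd module's registerDepth is available; the depth intrinsics and the depth-to-color extrinsics below are placeholders, the real values have to come from the R200 calibration:
#include <opencv2/rgbd.hpp>
// Placeholder calibration data; replace with the values reported by the RealSense driver.
cv::Mat depthK = (cv::Mat_<double>(3,3) << 580.0, 0.0, 320.0, 0.0, 580.0, 240.0, 0.0, 0.0, 1.0);      // depth intrinsics (assumed)
cv::Mat colorK = (cv::Mat_<double>(3,3) << 619.14, 0.0, 304.79, 0.0, 625.41, 223.98, 0.0, 0.0, 1.0);  // color intrinsics (from the question)
cv::Mat colorDist = cv::Mat::zeros(1, 5, CV_64FC1);   // or the MODIFIED_BROWN_CONRADY coefficients
cv::Mat Rt_d2c = cv::Mat::eye(4, 4, CV_64FC1);
Rt_d2c.at<double>(0, 3) = -0.058;                     // example depth->color translation in meters, not a real value
// Reproject the raw depth (CV_32FC1, in meters) into the color camera's image plane.
cv::Mat registeredDepth;
cv::rgbd::registerDepth(depthK, colorK, colorDist, Rt_d2c, imgcurr, imgcurrC.size(), registeredDepth);
// registeredDepth is now pixel-aligned with imgcurrC and can be passed to odom->compute(...).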

Related

C++ and OpenCV 4.5.3 - (-215: Assertion failed)

Problem: Watershed algorithm
I started an app project for image processing, using OpenCV 4.5.3 and Swift (with C++). I have been fighting with the watershed algorithm for a really long time and I have no clue what I did wrong.
Error :
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.5.3)
/Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/segmentation.cpp:161:
error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'watershed'
In the definition of OpenCV's watershed we can find:
#param image Input 8-bit 3-channel image.
#param markers Input/output 32-bit single-channel image (map) of markers. It should have the same size as image .
Code
+(UIImage *) watershed:(UIImage *)src{
cv::Mat img, mask;
UIImageToMat(src, img);
// Change the background from white to black, since that will help later to extract
// better results during the use of Distance Transform
cv::inRange(img, cv::Scalar(255,255,255), cv::Scalar(255,255,255), mask);
img.setTo(cv::Scalar(0,0,0), mask);
// Create a kernel that we will use to sharpen our image
// an approximation of second derivative, a quite strong kernel
cv::Mat kernel = (cv::Mat_<float>(3,3) <<
1, 1, 1,
1, -8, 1,
1, 1, 1);
// do the laplacian filtering as it is
// well, we need to convert everything in something more deeper then CV_8U
// because the kernel has some negative values,
// and we can expect in general to have a Laplacian image with negative values
// BUT a 8bits unsigned int (the one we are working with) can contain values from 0 to 255
// so the possible negative number will be truncated
cv::Mat lapl;
cv::filter2D(img, lapl, CV_32F, kernel);
cv::Mat sharp;
img.convertTo(sharp, CV_32F);
cv::Mat result = sharp - lapl;
// convert back to 8bits gray scale
result.convertTo(result, CV_8UC3);
lapl.convertTo(lapl, CV_8UC3);
cv::Mat bw;
cv::cvtColor(result, bw, cv::COLOR_BGR2GRAY);
cv::threshold(bw, bw, 40, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
// Perform the distance transform algorithm
cv::Mat dist;
cv::distanceTransform(bw, dist, cv::DIST_L2, cv::DIST_MASK_3);
// Normalize the distance image for range = {0.0, 1.0}
// so we can visualize and threshold it
cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
// Threshold to obtain the peaks
// This will be the markers for the foreground objects
cv::threshold(dist, dist, 0.4, 1.0, cv::THRESH_BINARY);
// Dilate a bit the dist image
cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8U);
dilate(dist, dist, kernel1);
// Create the CV_8U version of the distance image
// It is needed for findContours()
cv::Mat dist_8u;
dist.convertTo(dist_8u, CV_8U);
// Find total markers
std::vector<std::vector<cv::Point> > contours;
findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
// Create the marker image for the watershed algorithm
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
// Draw the foreground markers
for (size_t i = 0; i < contours.size(); i++)
{
drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i)+1), -1);
}
// Draw the background marker
circle(markers, cv::Point(5,5), 3, cv::Scalar(255), -1);
cv::Mat markers8u;
markers.convertTo(markers8u, CV_8U, 10);
// Perform the watershed algorithm
watershed(result, markers);
return MatToUIImage(result);
}
You can clearly see that the variables have the proper types, as in the description of the function:
result.convertTo(result, CV_8UC3);
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
convertTo cannot add channels, nor can it reduce/convert an image to one with a smaller number of channels.
The key in this case is to use:
cvtColor(src, src, COLOR_BGRA2BGR); // change 4 to 3 channels
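For example, a minimal sketch of where that conversion would go (UIImageToMat usually hands you a 4-channel RGBA/BGRA matrix, so every later convertTo keeps 4 channels; the rest of the function can stay as it is):
cv::Mat img, mask;
UIImageToMat(src, img);                      // img is typically CV_8UC4 at this point
cv::cvtColor(img, img, cv::COLOR_BGRA2BGR);  // change 4 to 3 channels, so result ends up CV_8UC3
// ... the sharpening / distance transform / markers / watershed code stays unchanged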

OpenCV error: assertion failed in warpPerspective

I'm trying to make an AR app, using ArUco and OpenCV (I'm a newbie). It detects an ArUco marker and puts an image on it. I have tried to use the warpPerspective() function, but something is wrong: it returns the OpenCV error "Assertion failed ((M0.type() == CV_32F || M0.type() == CV_64F))" in warpPerspective. Please give me a way to solve it.
int main() {
cv::VideoCapture inputVideo;
inputVideo.open("gal.mp4");
cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
cv::Mat sq = imread("zhuz.jpg", CV_LOAD_IMAGE_UNCHANGED);
while (inputVideo.grab()) {
vector<Point2f> sqPoints;
vector<Point2f> p;
sqPoints.push_back(Point2f(0, 0));
sqPoints.push_back(Point2f(sq.cols, 0));
sqPoints.push_back(Point2f(sq.cols, sq.rows));
sqPoints.push_back(Point2f(0, sq.rows));
cv::Mat image, warp_matrix;
inputVideo.retrieve(image);
Mat cpy_img(image.rows, image.cols, image.type());
Mat neg_img(image.rows, image.cols, image.type());
Mat gray;
Mat blank(sq.rows, sq.cols, sq.type());
std::vector<int> ids;
std::vector<std::vector<cv::Point2f>> corners;
cv::aruco::detectMarkers(image, dictionary, corners, ids);
if (ids.size() > 0) {
p.push_back(corners[0][0]);
p.push_back(corners[0][1]);
p.push_back(corners[0][2]);
p.push_back(corners[0][3]);
Mat wrap_matrix = getPerspectiveTransform(sqPoints, p);
blank = Scalar(0);
neg_img = Scalar(0); // Image is white when pixel values are zero
cpy_img = Scalar(0); // Image is white when pixel values are zero
bitwise_not(blank, blank);
warpPerspective(sq, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows)); // Transform overlay Image to the position - [ITEM1]
warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, neg_img.rows)); // Transform a blank overlay image to position
bitwise_not(cpy_img, cpy_img); // Invert the copy paper image from white to black
bitwise_and(cpy_img, image, cpy_img); // Create a "hole" in the Image to create a "clipping" mask - [ITEM2]
bitwise_or(cpy_img, neg_img, image); // Finally merge both items [ITEM1 & ITEM2]
}
cv::imshow("out", image);
}
}
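One thing that stands out in the snippet itself: getPerspectiveTransform stores its result in wrap_matrix, but warpPerspective is called with warp_matrix, which was only declared and never assigned, so an empty Mat (wrong type and size) reaches the assertion. A minimal fix, sketched with the names from the snippet, is to use one and the same matrix:
// use the matrix actually returned by getPerspectiveTransform (a 3x3 CV_64F Mat)
warp_matrix = getPerspectiveTransform(sqPoints, p);
warpPerspective(sq, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows));
warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, cpy_img.rows));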

Why does cv::drawKeypoints resize my image?

I'm using OpenCV in C++ to process a cv::Mat before publishing it to a ROS topic. For some reason cv::drawKeypoints messes up my result by virtually stretching it over the width, beyond the image frame.
The blob in the right topic represents the one on the top left in the left topic.
Here's my code:
image_transport::Publisher pubthresh;
image_transport::Publisher pubkps;
cv::SimpleBlobDetector detector;
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
cv::Mat mat = cv_bridge::toCvShare(msg, "bgr8")->image;
cv::cvtColor(mat,mat, CV_BGR2GRAY );
cv::threshold(mat,mat,35,255,0);
std::vector<cv::KeyPoint> keypoints;
detector.detect(mat, keypoints);
cv::Mat kps;
cv::drawKeypoints( mat, keypoints, kps, cv::Scalar(0,0,255), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS );
sensor_msgs::ImageConstPtr ithresh,ikps;
ithresh = cv_bridge::CvImage(std_msgs::Header(), "mono8", mat).toImageMsg();
ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();
pubthresh.publish(ithresh);
pubkps.publish(ikps);
}
int main(int argc, char **argv)
{
...
image_transport::Subscriber sub = it.subscribe("/saliency_map", 1, imageCallback);
...
}
After the cv::drawKeypoints operation both cv::Mat are treated the same. According to the documentation the image shouldn't get resized either. What am I missing here?
It looks like your result image isn't grayscale but a color image.
The stretching here means that each pixel implicitly becomes three times as wide, because the image has 3 channels which are interpreted as grayscale values.
So try converting kps to grayscale before publishing it:
cv::cvtColor(kps,kps, CV_BGR2GRAY );
Or adjust the line
ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();
to publish a BGR color image instead of "mono8", though I'm not familiar with that publishing code.
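For the second option, a minimal sketch (only the keypoint-image lines change; the thresholded image can still go out as "mono8"):
// kps comes out of cv::drawKeypoints as a 3-channel BGR image, so advertise it as such
ikps = cv_bridge::CvImage(std_msgs::Header(), "bgr8", kps).toImageMsg();
pubkps.publish(ikps);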

OpenCV keep background transparent during warpAffine

I create a Bird-View-Image with the warpPerspective()-function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four channel image with alpha channel according to a question which was already asked on stackoverflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Am I forgetting a step? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency)
Find the contour that encloses the warped image
Use this contour to create a mask (white inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // for avoid artefacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that openCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
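A rough C++ sketch of that idea (src, M, dstSize and background are placeholder names, not variables from the code above):
// Warp once with a black constant border and once with a white one.
cv::Mat warp0, warp255;
cv::warpAffine(src, warp0,   M, dstSize, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(src, warp255, M, dstSize, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(255));
// Where both results agree the pixel came from the source image; where they differ it came from the border.
cv::Mat diff, mask;
cv::absdiff(warp0, warp255, diff);
cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);             // assuming a 3-channel source
cv::threshold(diff, mask, 1, 255, cv::THRESH_BINARY_INV); // 255 where the warped pixel is valid
// Copy only the valid warped pixels onto the background.
warp0.copyTo(background, mask);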
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat transparentInput, transparentWarped;
cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
//transparentInput = input.clone();
// create sample transformation mat
cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
// as a sample, just scale down and translate a little:
M.at<double>(0,0) = 0.3;
M.at<double>(0,2) = 100;
M.at<double>(1,1) = 0.3;
M.at<double>(1,2) = 100;
// warp to same size with transparent border:
cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
// NOW: merge image with background, here I use the original image as background:
cv::Mat background = input;
// create output buffer with same size as input
cv::Mat outputImage = input.clone();
for(int j=0; j<transparentWarped.rows; ++j)
for(int i=0; i<transparentWarped.cols; ++i)
{
cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
}
cv::imshow("warped", outputImage);
cv::imshow("input", input);
cv::imwrite("../outputData/TransparentWarped.png", outputImage);
cv::waitKey(0);
return 0;
}
I use this as input:
and get this output:
which looks like ALPHA channel isn't set to ZERO by warpAffine but to something like 205...
But in general this is the way I would do it (unoptimized)

Calibrate single camera using OpenCV 2.3.1 and C++

I'm trying to calibrate a webcam using OpenCV 2.3.1 and Visual Studio 2010 (c++ console app). I'm using this class:
class CameraCalibrator{
private:
std::vector<std::vector<cv::Point3f>> objectPoints;
std::vector<std::vector<cv::Point2f>> imagePoints;
//Square Lenght
float squareLenght;
//output Matrices
cv::Mat cameraMatrix; //intrinsic
cv::Mat distCoeffs;
//flag to specify how calibration is done
int flag;
//used in image undistortion
cv::Mat map1,map2;
bool mustInitUndistort;
public:
CameraCalibrator(): flag(0), squareLenght(36.0), mustInitUndistort(true){};
int addChessboardPoints(const std::vector<std::string>& filelist,cv::Size& boardSize){
std::vector<std::string>::const_iterator itImg;
std::vector<cv::Point2f> imageCorners;
std::vector<cv::Point3f> objectCorners;
//initialize the chessboard corners in the chessboard reference frame
//3d scene points
for(int i = 0; i<boardSize.height; i++){
for(int j=0;j<boardSize.width;j++){
objectCorners.push_back(cv::Point3f(float(i)*squareLenght,float(j)*squareLenght,0.0f));
}
}
//2D Image points:
cv::Mat image; //to contain chessboard image
int successes = 0;
//cv::namedWindow("Chess");
for(itImg=filelist.begin(); itImg!=filelist.end(); itImg++){
image = cv::imread(*itImg,0);
bool found = cv::findChessboardCorners(image, boardSize, imageCorners);
//cv::drawChessboardCorners(image, boardSize, imageCorners, found);
//cv::imshow("Chess",image);
//cv::waitKey(1000);
cv::cornerSubPix(image, imageCorners, cv::Size(5,5),cv::Size(-1,-1),
cv::TermCriteria(cv::TermCriteria::MAX_ITER+cv::TermCriteria::EPS,30,0.1));
//if we have a good board, add it to our data
if(imageCorners.size() == boardSize.area()){
addPoints(imageCorners,objectCorners);
successes++;
}
}
return successes;
}
void addPoints(const std::vector<cv::Point2f>& imageCorners,const std::vector<cv::Point3f>& objectCorners){
//2D image point from one view
imagePoints.push_back(imageCorners);
//corresponding 3D scene points
objectPoints.push_back(objectCorners);
}
double calibrate(cv::Size &imageSize){
mustInitUndistort = true;
std::vector<cv::Mat> rvecs,tvecs;
return
cv::calibrateCamera(objectPoints, //the 3D points
imagePoints,
imageSize,
cameraMatrix, //output camera matrix
distCoeffs,
rvecs,tvecs,
flag);
}
void remap(const cv::Mat &image, cv::Mat &undistorted){
std::cout << cameraMatrix;
if(mustInitUndistort){ //called once per calibration
cv::initUndistortRectifyMap(
cameraMatrix,
distCoeffs,
cv::Mat(),
cameraMatrix,
image.size(),
CV_32FC1,
map1,map2);
mustInitUndistort = false;
}
//apply mapping functions
cv::remap(image,undistorted,map1,map2,cv::INTER_LINEAR);
}
};
I'm using 10 chessboard images (supposing that's enough for calibration) with a resolution of 640x480. The main function looks like this:
int main(){
CameraCalibrator calibrateCam;
std::vector<std::string> filelist;
filelist.push_back("img10.jpg");
filelist.push_back("img09.jpg");
filelist.push_back("img08.jpg");
filelist.push_back("img07.jpg");
filelist.push_back("img06.jpg");
filelist.push_back("img05.jpg");
filelist.push_back("img04.jpg");
filelist.push_back("img03.jpg");
filelist.push_back("img02.jpg");
filelist.push_back("img01.jpg");
cv::Size boardSize(8,6);
double calibrateError;
int success;
success = calibrateCam.addChessboardPoints(filelist,boardSize);
std::cout<<"Success:" << success << std::endl;
cv::Size imageSize;
cv::Mat inputImage, outputImage;
inputImage = cv::imread("img10.jpg",0);
outputImage = inputImage.clone();
imageSize = inputImage.size();
calibrateError = calibrateCam.calibrate(imageSize);
std::cout<<"Calibration error:" << calibrateError << std::endl;
calibrateCam.remap(inputImage,outputImage);
cv::namedWindow("Original");
cv::imshow("Original",inputImage);
cv::namedWindow("Undistorted");
cv::imshow("Undistorted",outputImage);
cv::waitKey();
return 0;
}
Everything runs without errors. cameraMatrix looks like this (approximately):
685.65 0 365.14
0 686.38 206.98
0 0 1
Calibration error is 0.310157, which is acceptable.
But when I use remap, the output image looks even worse than the original. Here is a sample:
Original image:
Undistorted image:
So, the question is: am I doing something wrong in the calibration process? Are 10 different chessboard images enough for calibration? Do you have any suggestions?
The camera matrix doesn't undistort the lens; those 4 values are simply the focal length (horizontal and vertical) and the image centre (X and Y).
There is another row matrix (distCoeffs in your code, typically 4 or 5 values) which contains the lens distortion mapping; see Karl's answer for example code.
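For reference, a minimal sketch of an alternative to your remap() method that applies those coefficients directly with cv::undistort (which internally combines initUndistortRectifyMap and remap):
void undistortSimple(const cv::Mat &image, cv::Mat &undistorted){
    // cameraMatrix only describes focal length and principal point; distCoeffs holds the lens model
    std::cout << "cameraMatrix:\n" << cameraMatrix << "\ndistCoeffs:\n" << distCoeffs << std::endl;
    cv::undistort(image, undistorted, cameraMatrix, distCoeffs);
}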
The calibration is done with a numerical optimization that has a pretty shallow slope near the solution. Also, the function being minimized is very nonlinear. So, my guess is that your 10 images aren't enough. I calibrate cameras with very wide-angle lenses (i.e. very distorted images), and I try to get like 50 or 60 images.
I try to get images with the chessboard at 3 or 4 positions along each edge of the image, plus some in the middle, with multiple orientations relative to the camera and at 3 different distances (super close, typical, and as far as you can get and still resolve the checkerboard).
Getting the chessboard near the corners is very important. Your example images do not have the chessboard very near the corner of the image. It's those points that constrain the calibration to do the right thing in the very distorted parts of the image (the corners).