I'm currently using the OpenCV library with C++, and my goal is to cancel a fisheye effect on an image ("make it planar").
I'm using the function "undistortImage" to cancel the effect, but first I need to perform camera calibration in order to find the parameters K, Knew, and D, and I didn't exactly understand the documentation (link: http://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gga37375a2741e88052ce346884dfc9c6a0a0899eaa2f96d6eed9927c4b4f4464e05).
From my understanding, I should give two lists of points and the function "calibrate" is supposed to return the arrays I need. So my question is the following: given a fisheye image, how am I supposed to pick the two lists of points to get the result? This is my code for the moment, very basic: it just takes the picture, displays it, performs the undistortion and displays the new image. The elements in the matrices are random, so currently the result is not as expected. Thanks for the answers.
#include "opencv2\core\core.hpp"
#include "opencv2\highgui\highgui.hpp"
#include "opencv2\calib3d\calib3d.hpp"
#include <stdio.h>
#include <iostream>
using namespace std;
using namespace cv;
int main(){
cout << " Usage: display_image ImageToLoadAndDisplay" << endl;
Mat image;
image = imread("C:/Users/Administrator/Downloads/eiffel.jpg", CV_LOAD_IMAGE_COLOR); // Read the file
if (!image.data) // Check for invalid input
{
cout << "Could not open or find the image" << endl;
return -1;
}
cout << "Input image depth: " << image.depth() << endl;
namedWindow("Display window", WINDOW_AUTOSIZE);// Create a window for display.
imshow("Display window", image); // Show our image inside it.
Mat Ka = Mat::eye(3, 3, CV_64F); // Creating distortion matrix
Mat Da = Mat::ones(1, 4, CV_64F);
Mat dstImage(image.rows, image.cols, CV_32F);
cout << "K matrix depth: " << Ka.depth() << endl;
cout << "D matrix depth: " << Da.depth() << endl;
Mat Knew = Mat::eye(3, 3, CV_64F);
std::vector<cv::Vec3d> rvec;
std::vector<cv::Vec3d> tvec;
int flag = 0;
std::vector<Point3d> objectPoints1 = { Point3d(0,0,0), Point3d(1,1,0), Point3d(2,2,0), Point3d(3,3,0), Point3d(4,4,0), Point3d(5,5,0),
Point3d(6,6,0), Point3d(7,7,0), Point3d(3,0,0), Point3d(4,1,0), Point3d(5,2,0), Point3d(6,3,0), Point3d(7,4,0), Point3d(8,5,0), Point3d(5,4,0), Point3d(0,7,0), Point3d(9,7,0), Point3d(9,0,0), Point3d(4,3,0), Point3d(7,2,0)};
std::vector<Point2d> imagePoints1 = { Point(107,84), Point(110,90), Point(116,96), Point(126,107), Point(142,123), Point(168,147),
Point(202,173), Point(232,192), Point(135,69), Point(148,73), Point(165,81), Point(189,93), Point(219,112), Point(248,133), Point(166,119), Point(96,183), Point(270,174), Point(226,56), Point(144,102), Point(206,75) };
std::vector<std::vector<cv::Point2d> > imagePoints(1);
imagePoints[0] = imagePoints1;
std::vector<std::vector<cv::Point3d> > objectPoints(1);
objectPoints[0] = objectPoints1;
fisheye::calibrate(objectPoints, imagePoints, image.size(), Ka, Da, rvec, tvec, flag); // Calibration
cout << Ka<< endl;
cout << Da << endl;
fisheye::undistortImage(image, dstImage, Ka, Da, Knew); // Performing distortion
namedWindow("Display window 2", WINDOW_AUTOSIZE);// Create a window for display.
imshow("Display window 2", dstImage); // Show our image inside it.
waitKey(0); // Wait for a keystroke in the window
return 0;
}
For calibration with cv::fisheye::calibrate you must provide
objectPoints vector of vectors of calibration pattern points in the calibration pattern coordinate space.
This means providing the KNOWN real-world coordinates of the points (they must correspond to the points in imagePoints), but you can choose the position of the coordinate system arbitrarily (as long as it is Cartesian), so you must know your object, e.g. a planar test pattern.
imagePoints vector of vectors of the projections of calibration pattern points
These must be the same points as in objectPoints, but given in image coordinates, i.e. the positions where the projections of the object points hit your image (read/extract the coordinates from your image).
For example, if your camera captured this image (taken from here):
You must know the dimensions of your test pattern (up to scale). For example, you could choose the top-left corner of the top-left square to be position (0,0,0), the top-right corner of the top-left square to be (1,0,0), and the bottom-left corner of the top-left square to be (0,1,0), so your whole test pattern would lie on the xy-plane.
Then you could extract these correspondences:
pixel        real-world
(144,103)    (4,3,0)
(206,75)     (7,2,0)
(109,151)    (2,5,0)
(253,159)    (8,6,0)
for these points (marked red):
The pixel position could be your imagePoints list while the real-world positions could be your objectPoints list.
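In practice you would not click these correspondences by hand for every image; with a known planar pattern such as a chessboard, OpenCV can extract the image points for you. A minimal sketch of that workflow, assuming a 9x6 inner-corner chessboard and placeholder file names (neither is from your code), could look like this:

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

// Minimal sketch: collect object/image correspondences from a chessboard
// and feed them to cv::fisheye::calibrate. Board size and file names are
// assumptions, not taken from the original question.
int main() {
    const cv::Size boardSize(9, 6); // inner corners of the chessboard (assumption)
    std::vector<std::vector<cv::Point3d>> objectPoints;
    std::vector<std::vector<cv::Point2d>> imagePoints;

    // Known real-world layout of the pattern: a unit grid on the z = 0 plane.
    std::vector<cv::Point3d> boardModel;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            boardModel.emplace_back(x, y, 0.0);

    std::vector<std::string> files = {"view0.jpg", "view1.jpg", "view2.jpg"}; // placeholder file names
    cv::Size imageSize;
    for (const std::string& name : files) {
        cv::Mat img = cv::imread(name, cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(img, boardSize, corners)) continue;
        cv::cornerSubPix(img, corners, cv::Size(5, 5), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));

        std::vector<cv::Point2d> cornersD;
        for (const cv::Point2f& c : corners)
            cornersD.push_back(c); // Point2f converts to Point2d

        objectPoints.push_back(boardModel);
        imagePoints.push_back(cornersD);
    }
    if (objectPoints.empty()) return -1; // no view with a detected pattern

    cv::Mat K = cv::Mat::eye(3, 3, CV_64F);
    cv::Mat D = cv::Mat::zeros(1, 4, CV_64F);
    std::vector<cv::Vec3d> rvecs, tvecs;
    cv::fisheye::calibrate(objectPoints, imagePoints, imageSize, K, D, rvecs, tvecs,
                           cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC | cv::fisheye::CALIB_FIX_SKEW);
    return 0;
}

The unit-grid object points play the role of your objectPoints1 and the detected corners play the role of your imagePoints1; the pattern should be captured from several viewpoints to get a stable fisheye calibration.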
Does this answer your question?
I'm a beginner in OpenCV with C++. I have to draw a filled rectangle (10x10) in the middle of an image where every 5th pixel is black.
I know how to create a rectangle, but how can I fill it and change the color of every 5th pixel?
Would be nice if someone could help :/
void cv::rectangle ( InputOutputArray img,
Point pt1,
Point pt2,
const Scalar & color,
int thickness = 1,
int lineType = LINE_8,
int shift = 0
)
My code so far:
#include "opencv2/opencv.hpp"
#include<sstream>
using namespace std;
using namespace cv;
int main(void)
{
//Laden vom Bild
Mat img;
img = imread("C:\\Users\\Mehmet\\Desktop\\yoshi.png");
if (!img.data)
{
cout << "Could not find the image";
return -1;
}
namedWindow("window");
imshow("window", img);
imwrite("C:\\Users\\Max Mustermann\\Desktop\\11.png", img);
cv::Size sz = img.size();
int imageWidth = sz.width;
int imageHeight = sz.height;
cout <<"Es gibt " <<img.channels()<<" Farbkanäle" << endl;;
cout << "Die Breite betreagt: "<<sz.width << endl;
cout <<"Die Hoehe betreagt: " << sz.height<<endl;
std::cout << img.type();
Mat img1;
img.convertTo(img1, CV_32FC3, 1 / 255.0);
waitKey(0);
return 0;
}
You may be able to find the answer to your question in the OpenCV documentation.
To fill the rectangle, you can change the parameter 'thickness':
==> 'thickness Thickness of lines that make up the rectangle. Negative values, like FILLED, mean that the function has to draw a filled rectangle.'
Link:
https://docs.opencv.org/4.5.2/d6/d6e/group__imgproc__draw.html#ga07d2f74cadcf8e305e810ce8eed13bc9
And changing the color is done through the color parameter. Controlling this parameter is easy using cv::Scalar(BLUE, GREEN, RED).
For example, rectangle(~~~, cv::Scalar(255,0,0), ~~~); will draw a blue rectangle, depending on the other parameters. So if you want to change the color, change these values to whatever you want.
Consequently, if you want to change the color of the rectangle repeatedly, you can simply put these two parameters in a loop.
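To make that concrete, here is a minimal sketch (assuming a BGR image, and interpreting "every 5th pixel" as every 5th pixel inside the rectangle, which the question leaves open; the file name is a placeholder):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat img = imread("yoshi.png"); // placeholder path
    if (img.empty()) return -1;

    // 10x10 filled rectangle in the middle of the image
    Point center(img.cols / 2, img.rows / 2);
    Point pt1 = center - Point(5, 5);
    Point pt2 = center + Point(5, 5);
    rectangle(img, pt1, pt2, Scalar(255, 0, 0), FILLED); // FILLED (= -1) fills the rectangle

    // Make every 5th pixel inside the rectangle black
    int count = 0;
    for (int y = pt1.y; y < pt2.y; ++y)
        for (int x = pt1.x; x < pt2.x; ++x)
            if (++count % 5 == 0)
                img.at<Vec3b>(y, x) = Vec3b(0, 0, 0);

    imshow("result", img);
    waitKey(0);
    return 0;
}

cv::FILLED is simply the value -1, so passing any negative thickness has the same effect.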
I am trying to make a visual odometry algorithm work in real time (using my stereo camera). The camera feed gets returned as a single image (i420 pixel format), where I have to manually split the image into a left and right frame. One of the problems that I am running into is when I call cv::triangulatePoints. The function gives me an error saying that the input matrices (meaning the left and right frame) are not continuous.
When I receive the input image from the camera, using:
// Read camera feed
IMAGE_FORMAT fmt = {IMAGE_ENCODING_I420, 50};
BUFFER *buffer = arducam_capture(camera_instance, &fmt, 3000);
if (!buffer)
return -1;
// Store feed in image
cv::Mat image = cv::Mat(cv::Size(width,(int)(height * 1.5)), CV_8UC1, buffer->data);
arducam_release_buffer(buffer);
// Change image to grayscale (grayscale increases FPS)
cv::cvtColor(image, image, cv::COLOR_YUV2GRAY_I420);
if (!image.isContinuous())
std::cout << "image is not continuous" << std::endl;
The image passes the continuity check fine (meaning the image is continuous).
However, after I resize and split the image into a left and right frame, using:
double scale_factor = 640.0 / width;
int custom_width = int(width * scale_factor);
int custom_height = int(height * scale_factor);
// OpenCV resize
cv::Mat frame = cv::Mat(cv::Size(custom_width, (int)(custom_height * 1.5)), CV_8UC1);
cv::resize(image, frame, frame.size(), 0, 0);
// Split image into left and right frame
cv::Mat frame_left = frame(cv::Rect(0, 0, custom_width / 2, (int)(custom_height * 1.5)));
cv::Mat frame_right = frame(cv::Rect(custom_width / 2, 0, custom_width / 2, (int)(custom_height * 1.5)));
if (!frame.isContinuous())
std::cout << "frame is not continuous" << std::endl;
if (!frame_right.isContinuous())
std::cout << "right frame is not continuous" << std::endl;
if (!frame_left.isContinuous())
std::cout << "left frame is not continuous" << std::endl;
The resized image (frame) is continuous, but the left and right frames fail the continuity check (meaning they are not continuous).
So I guess my question is how can I split the image into two different images, while keeping them continuous?
The solution to this problem is actually quite simple:
if (!frame_right.isContinuous())
    frame_right = frame_right.clone();

if (!frame_left.isContinuous())
    frame_left = frame_left.clone();
By using the clone() function you copy the image data, and OpenCV treats the result as a new, standalone image. This way the right and left frames regain continuity (their isContinuous() check passes).
So splitting the image into ROIs destroys continuity, and cloning restores it.
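For completeness, a compact sketch of the same idea with the clone done right at the split (the sizes below are placeholders, not the ones from your camera):

#include <opencv2/opencv.hpp>

// Minimal sketch: split a side-by-side stereo frame into two continuous images
// by cloning the ROIs immediately. Sizes are assumptions, not from the question.
int main()
{
    const int custom_width = 640, custom_height = 480; // assumed values
    cv::Mat frame(static_cast<int>(custom_height * 1.5), custom_width, CV_8UC1, cv::Scalar(0));

    cv::Rect leftRoi (0,                0, custom_width / 2, frame.rows);
    cv::Rect rightRoi(custom_width / 2, 0, custom_width / 2, frame.rows);

    // clone() copies each ROI into its own contiguous buffer,
    // so isContinuous() is true for both views.
    cv::Mat frame_left  = frame(leftRoi).clone();
    cv::Mat frame_right = frame(rightRoi).clone();

    CV_Assert(frame_left.isContinuous() && frame_right.isContinuous());
    return 0;
}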
I want to find angles of rotation of the head using opencv and dlib. So, I tried to use this code from the tutorial:
cv::Mat im = imread("img.jpg");
matrix<bgr_pixel> dlibImage;
assign_image(dlibImage, cv_image<bgr_pixel>(im));
auto face = detector(dlibImage)[0];
auto shape = sp(dlibImage, face);
// 2D image points.
std::vector<cv::Point2d> image_points;
image_points.push_back(cv::Point2d(shape.part(30).x(), shape.part(30).y())); // Nose tip
image_points.push_back(cv::Point2d(shape.part(8).x(), shape.part(8).y())); // Chin
image_points.push_back(cv::Point2d(shape.part(36).x(), shape.part(36).y())); // Left eye left corner
image_points.push_back(cv::Point2d(shape.part(45).x(), shape.part(45).y())); // Right eye right corner
image_points.push_back(cv::Point2d(shape.part(48).x(), shape.part(48).y())); // Left Mouth corner
image_points.push_back(cv::Point2d(shape.part(54).x(), shape.part(54).y())); // Right mouth corner
// 3D model points.
std::vector<cv::Point3d> model_points;
model_points.push_back(cv::Point3d(0.0f, 0.0f, 0.0f)); // Nose tip
model_points.push_back(cv::Point3d(0.0f, -330.0f, -65.0f)); // Chin
model_points.push_back(cv::Point3d(-225.0f, 170.0f, -135.0f)); // Left eye left corner
model_points.push_back(cv::Point3d(225.0f, 170.0f, -135.0f)); // Right eye right corner
model_points.push_back(cv::Point3d(-150.0f, -150.0f, -125.0f)); // Left Mouth corner
model_points.push_back(cv::Point3d(150.0f, -150.0f, -125.0f)); // Right mouth corner
// Camera internals
double focal_length = im.cols; // Approximate focal length.
Point2d center = cv::Point2d(im.cols/2,im.rows/2);
cv::Mat camera_matrix = (cv::Mat_<double>(3,3) << focal_length, 0, center.x, 0 , focal_length, center.y, 0, 0, 1);
cv::Mat dist_coeffs = cv::Mat::zeros(4,1,cv::DataType<double>::type); // Assuming no lens distortion
cout << "Camera Matrix " << endl << camera_matrix << endl ;
// Output rotation and translation
cv::Mat rotation_vector; // Rotation in axis-angle form
cv::Mat translation_vector;
// Solve for pose
cv::solvePnP(model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);
// Project a 3D point (0, 0, 1000.0) onto the image plane.
// We use this to draw a line sticking out of the nose
std::vector<Point3d> nose_end_point3D;
std::vector<Point2d> nose_end_point2D;
nose_end_point3D.push_back(Point3d(0,0,1000.0));
projectPoints(nose_end_point3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs, nose_end_point2D);
for(int i=0; i < image_points.size(); i++)
{
circle(im, image_points[i], 3, Scalar(0,0,255), -1);
}
cv::line(im,image_points[0], nose_end_point2D[0], cv::Scalar(255,0,0), 2);
cout << "Rotation Vector " << endl << rotation_vector << endl;
cout << "Translation Vector" << endl << translation_vector << endl;
cout << nose_end_point2D << endl;
// Display image.
cv::imshow("Output", im);
cv::waitKey(0);
But, unfortunately, I get completely different results depending on the size of the same image!
If I use this img.jpg, which has size 299x299 px (many sizes are fine, but let's take the nearest one), then everything is OK and I get the right result:
Output:
Rotation Vector
[-0,04450161828760668;
-2,133664002574712;
-0,2208024002827168]
But if I use this img.jpg, which has size 298x298 px, then I get an absolutely wrong result:
Output:
Rotation Vector
[-2,999117288644056;
0,0777816930911016;
-0,7573144061217354]
I also figured out that it's due to the coordinates of the landmarks, not the size of the image, because the results stay the same for the same hardcoded landmarks even when the size of the image differs.
How can I always get a correct pose estimation, as in the first case?
P.S. I also want to note that this problem has very nondeterministic behaviour: now everything is OK with 298x298, but I get a wrong result with the 297x297 size.
I am trying to use OpenCV to detect a red round object and draw a circle around it. However, a segmentation fault occurs when I use the circle function to draw the circle. I don't know why this is happening or how to fix it. Thanks!!
#include <opencv/cvaux.h>
#include <opencv/highgui.h>
#include <opencv/cxcore.h>
#include <stdlib.h>
#include <cv.hpp>
#include <cxcore.hpp>
#include <highgui.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include<stdio.h>
#include<math.h>
#include<opencv/cv.h>
#include<opencv/highgui.h>
#include<opencv2/objdetect/objdetect.hpp>
#include<opencv2/highgui/highgui.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include<vector>
using namespace cv; // if you don't want to use the scope resolution operator (::) in the code to call classes or functions from the cv namespace, you need this line
using namespace std; // if you don't want to use the scope resolution operator (::) in the code to call classes or functions from the std namespace, you need this line
int main(int argc, char* argv[]){
VideoCapture capWebcam(0); //use scope resolution operator :: because VideoCapture is a class under namespace of cv
//use VideoCapture class to instantiate an object called capWebcam; here used the constructor of the object immediately to
//grab the only (0) camera
if(capWebcam.isOpened()==false){ //check whether the camera is detected and successfully grabbed
printf("Error: camera not detected!!\n");
cout<<"Error: camera not detected!!\n"<<endl;
return(1);
}
Mat matOriginal; // matrix object used to store image from webcam
Mat matProcessed;
vector<Vec3f> vecCircles; //declare a 3-element vector of type floats, this will be the pass by reference(i.e. a pointer) output of HoughCicles()
vector<Vec3f>::iterator itrCircles; //iterator for circles vector just a counter, but has the same data type from the itrCircles' data member
namedWindow("Original"); //window for original image
namedWindow("Processed"); //window for Processed image
char charCheckForEscKey =0;
while(charCheckForEscKey!=27){ //as long as ESC is not pressed, stays in the while
if(capWebcam.read(matOriginal) == false){ //check to see whether the image read from webcam correctly
cout<<"Error: image frame not read!!\n"<<endl;
break;
} //
inRange(matOriginal, //this time we don't need to pass a pointer; we pass the image as an object instead
Scalar(0,0,175), //specify the lower bound of BGR we want to keep
Scalar(100,100,256), //upper bound of BGR
matProcessed); //return the processed image to another object
GaussianBlur(matProcessed,matProcessed,Size(9,9),1.5,1.5); //take matProcessed image and blur by Gaussian filter(9x9 window with std of 1.5 in both x,y direction) and return to same object
HoughCircles(matProcessed,
vecCircles, //use vector element to receive the x,y,radius of the detected circle
CV_HOUGH_GRADIENT, //algorithms used to detect circles
2, //size of image divided by this value = "accumulator resolution"
matProcessed.rows/4, //min distance between the centers of two detected circles
100, //upper pixel value threshold for canny edge detection to interpret as edge
50, //lower pixel value threshold for canny edge detection to interpret as edge
10, //min radius of a circle can be detected
400); //max radius of a circle can be detected
for(itrCircles = vecCircles.begin();itrCircles != vecCircles.end();itrCircles++) //retrieve the x,y and radius of the detected circles from vecCircles object one by one
cout<< "circle position x = " << (*itrCircles)[0] //because itrCircles is a pointer(pass by reference), to get the value need to use * to dereference
<< ",y = " << (*itrCircles)[1]
<< ",r = " << (*itrCircles)[2] << "\n" << endl;
// draw the center of detected circle in green
circle(matOriginal,
Point((int)(*itrCircles)[0],(int)(*itrCircles)[1]),
3,
Scalar(0,255,0),
CV_FILLED);
// draw the circumference of detected circle
circle(matOriginal,
Point((int)(*itrCircles)[0],(int)(*itrCircles)[1]),
(int)(*itrCircles)[2],
Scalar(0,0,255),
3);
imshow("Original",matOriginal); //show the original mat(image) in Original window
imshow("Processed",matProcessed);// show the processed mat(image) in Processed window
charCheckForEscKey = waitKey(10); // delay 10 ms to allow a time gap to listen to any key pressed
} // end while
return(0);
} // end main
The crash is caused by the missing braces on the for loop: only the cout statement is the loop body, so the drawing calls run once after the loop has finished and dereference an iterator that equals vecCircles.end(), which is undefined behaviour. You should do:
for(itrCircles = vecCircles.begin();itrCircles != vecCircles.end();itrCircles++)
{
// your functions
}
May I suggest dropping the iterators and using a range-based for loop instead?
for (const auto& circ : vecCircles)
{
// your functions
}
Here is the full example, cleaned of all the useless stuff (especially the useless headers).
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace cv;
using namespace std;
int main(){
VideoCapture capWebcam(0);
if (capWebcam.isOpened() == false){
cout << "Error: camera not detected!!\n" << endl;
return -1;
}
Mat matOriginal; // matrix object used to store image from webcam
Mat matProcessed;
vector<Vec3f> vecCircles;
namedWindow("Original"); //window for original image
namedWindow("Processed"); //window for Processed image
char charCheckForEscKey = 0;
while (charCheckForEscKey != 27){ //as long as ESC is not pressed, stays in the while
if (!capWebcam.read(matOriginal)){
cout << "Error: image frame not read!!" << endl;
break;
} //
inRange(matOriginal, //this time we don't need to pass a pointer; we pass the image as an object instead
Scalar(0, 0, 175), //specify the lower bound of BGR we want to keep
Scalar(100, 100, 256), //upper bound of BGR
matProcessed); //return the processed image to another object
GaussianBlur(matProcessed, matProcessed, Size(9, 9), 1.5, 1.5); //take matProcessed image and blur by Gaussian filter(9x9 window with std of 1.5 in both x,y direction) and return to same object
HoughCircles(matProcessed,
vecCircles, //use vector element to receive the x,y,radius of the detected circle
CV_HOUGH_GRADIENT, //algorithms used to detect circles
2, //size of image divided by this value = "accumulator resolution"
matProcessed.rows / 4, //min distance between the centers of two detected circles
100, //upper pixel value threshold for canny edge detection to interpret as edge
50, //lower pixel value threshold for canny edge detection to interpret as edge
10, //min radius of a circle can be detected
400); //max radius of a circle can be detected
for (const auto& circ : vecCircles) //retrieve the x,y and radius of the detected circles from vecCircles object one by one
{
cout << "circle position x = " << circ[0] //because itrCircles is a pointer(pass by reference), to get the value need to use * to dereference
<< ",y = " << circ[1]
<< ",r = " << circ[2] << "\n" << endl;
// draw the center of detected circle in green
circle(matOriginal, Point(circ[0], circ[1]), 3, Scalar(0, 255, 0), CV_FILLED);
// draw the circumference of detected circle
circle(matOriginal, Point(circ[0], circ[1]), circ[2], Scalar(0, 0, 255), 3);
}
imshow("Original", matOriginal); //show the original mat(image) in Original window
imshow("Processed", matProcessed);// show the processed mat(image) in Processed window
charCheckForEscKey = waitKey(10); // delay 10 ms to allow a time gap to listen to any key pressed
} // end while
return(0);
} // end main
I'm converting code from Matlab to C++, and one of the functions that I don't understand is imtransform. I need to "register" an image, which basically means stretching, skewing, and rotating my image so that it overlaps correctly with another image.
Matlab's imtransform does the registration for you, but as I'm programming this in C++ I need to know what has been abstracted away. What is the usual math involved in image registration? How can I go from two arrays of data (which make up the images) to one array, the combined, overlapping image?
I recommend using OpenCV with C++; it offers a lot of image processing tools and functions you can call and use.
The Registration module implements parametric image registration. The implemented method is direct alignment, that is, it uses directly the pixel values for calculating the registration between a pair of images, as opposed to feature-based registration.
The OpenCV constants that represent these models have a prefix MOTION_ and are shown inside the brackets.
Translation (MOTION_TRANSLATION): The first image can be shifted (translated) by (x, y) to obtain the second image. There are only two parameters, x and y, that we need to estimate.
Euclidean (MOTION_EUCLIDEAN): The first image is a rotated and shifted version of the second image. So there are three parameters: x, y and angle. When a square undergoes a Euclidean transformation, its size does not change, parallel lines remain parallel, and right angles remain unchanged after the transformation.
Affine (MOTION_AFFINE): An affine transform is a combination of rotation, translation (shift), scale, and shear. This transform has six parameters. When a square undergoes an affine transformation, parallel lines remain parallel, but lines meeting at right angles no longer remain orthogonal.
Homography (MOTION_HOMOGRAPHY): All the transforms described above are 2D transforms. They do not account for 3D effects. A homography transform, on the other hand, can account for some 3D effects (but not all). This transform has 8 parameters. A square, when transformed with a homography, can change into any quadrilateral.
Reference: https://docs.opencv.org/3.4.2/db/d61/group__reg.html
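These MOTION_ constants are used by cv::findTransformECC from the video module (the reg module linked above uses its own Mapper classes). A minimal sketch of direct, pixel-based alignment with an affine model, assuming two roughly overlapping images and placeholder file names, could look like this:

#include <opencv2/opencv.hpp>

// Minimal sketch: estimate an affine motion model between two images with
// cv::findTransformECC and warp the second image onto the first.
// File names are placeholders.
int main()
{
    cv::Mat im1 = cv::imread("reference.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat im2 = cv::imread("moving.jpg", cv::IMREAD_GRAYSCALE);
    if (im1.empty() || im2.empty()) return -1;

    // 2x3 warp matrix for MOTION_AFFINE (use a 3x3 matrix for MOTION_HOMOGRAPHY).
    cv::Mat warp = cv::Mat::eye(2, 3, CV_32F);

    cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 1e-6);
    cv::findTransformECC(im1, im2, warp, cv::MOTION_AFFINE, criteria); // may throw if it fails to converge

    // Map im2 into im1's frame; WARP_INVERSE_MAP because the estimated warp
    // maps template (im1) coordinates to input (im2) coordinates.
    cv::Mat aligned;
    cv::warpAffine(im2, aligned, warp, im1.size(),
                   cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);

    cv::imwrite("aligned.jpg", aligned);
    return 0;
}

For MOTION_HOMOGRAPHY you would use cv::warpPerspective instead of cv::warpAffine.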
This is an example I found very useful for image registration:
#include <opencv2/opencv.hpp>
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/features2d.hpp"
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d;
const int MAX_FEATURES = 500;
const float GOOD_MATCH_PERCENT = 0.15f;
void alignImages(Mat &im1, Mat &im2, Mat &im1Reg, Mat &h)
{
Mat im1Gray, im2Gray;
cvtColor(im1, im1Gray, CV_BGR2GRAY);
cvtColor(im2, im2Gray, CV_BGR2GRAY);
// Variables to store keypoints and descriptors
std::vector<KeyPoint> keypoints1, keypoints2;
Mat descriptors1, descriptors2;
// Detect ORB features and compute descriptors.
Ptr<Feature2D> orb = ORB::create(MAX_FEATURES);
orb->detectAndCompute(im1Gray, Mat(), keypoints1, descriptors1);
orb->detectAndCompute(im2Gray, Mat(), keypoints2, descriptors2);
// Match features.
std::vector<DMatch> matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
matcher->match(descriptors1, descriptors2, matches, Mat());
// Sort matches by score
std::sort(matches.begin(), matches.end());
// Remove not so good matches
const int numGoodMatches = matches.size() * GOOD_MATCH_PERCENT;
matches.erase(matches.begin()+numGoodMatches, matches.end());
// Draw top matches
Mat imMatches;
drawMatches(im1, keypoints1, im2, keypoints2, matches, imMatches);
imwrite("matches.jpg", imMatches);
// Extract location of good matches
std::vector<Point2f> points1, points2;
for( size_t i = 0; i < matches.size(); i++ )
{
points1.push_back( keypoints1[ matches[i].queryIdx ].pt );
points2.push_back( keypoints2[ matches[i].trainIdx ].pt );
}
// Find homography
h = findHomography( points1, points2, RANSAC );
// Use homography to warp image
warpPerspective(im1, im1Reg, h, im2.size());
}
int main(int argc, char **argv)
{
// Read reference image
string refFilename("form.jpg");
cout << "Reading reference image : " << refFilename << endl;
Mat imReference = imread(refFilename);
// Read image to be aligned
string imFilename("scanned-form.jpg");
cout << "Reading image to align : " << imFilename << endl;
Mat im = imread(imFilename);
// The registered image will be stored in imReg.
// The estimated homography will be stored in h.
Mat imReg, h;
// Align images
cout << "Aligning images ..." << endl;
alignImages(im, imReference, imReg, h);
// Write aligned image to disk.
string outFilename("aligned.jpg");
cout << "Saving aligned image : " << outFilename << endl;
imwrite(outFilename, imReg);
// Print estimated homography
cout << "Estimated homography : \n" << h << endl;
}
Raw C++ does not have any of the concepts you refer to built into it. However, there are many image processing libraries for C++ that can do various transforms. DevIL and FreeImage should be able to do layering, as well as some transforms.