Inverse Perspective Mapping OpenCV C++ - c++

I am doing Inverse Perspective Mapping using OpenCV in C++. I am following this code to get the desired result; please have a look at the result.
I am using the OpenCV C++ remap function. In addition to the current result, I need to know how to project a pixel from the source image to the destination image, i.e. if I click on the pixel (320, 140), how would I get the corresponding pixel, e.g. (0, 0), in the destination picture?
void remap(InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode=BORDER_CONSTANT, const Scalar& borderValue=Scalar())
I have already calculated the arguments map1 and map2. I guess I have to use them, but I don't know how.

Related

OpenCV Explanation solvePnP

Can anyone give me more explanation about the OpenCV function solvePnP()?
The opencv documentation says
bool cv::solvePnP (
InputArray objectPoints,
InputArray imagePoints,
InputArray cameraMatrix,
InputArray distCoeffs,
OutputArray rvec,
OutputArray tvec,
bool useExtrinsicGuess = false,
int flags = SOLVEPNP_ITERATIVE)
I'm wondering what the objectPoints, imagePoints and cameraMatrix are. I calibrated my camera once and have a parameter XML file from it; can I use this?
It is used when you have, for example, a 3D model of an object and a view of it in the real world; it will give you an approximate position and orientation of the camera relative to the object.
For example:
objectPoints – Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector can be also passed here.
imagePoints – Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector can be also passed here.
You can find the rest at this link

Using cvWarpAffine to crop image in openCV

If I want to crop an image centered at (x, y) with window size ws using
void cvWarpAffine(const CvArr* src, CvArr* dst, const CvMat* map_matrix,
int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, CvScalar fillval=cvScalarAll(0) )
in OpenCV, how should I set the parameters of the transformation matrix? I am confused about how to set the parameters so that the matrix performs only a crop and no other transformation.
P.S. In my situation I also want to crop around points near the image edge, which requires padding to fill the fixed window, so using cv::Rect would be more complicated because of the edge handling.

OpenCV apply camera distortion - apply calibration

I have a program that detects objects in a live video stream. I am looking to compensate for the distortion in the camera, I have used the OpenCV calibration tool and produced an XML file with the relevant parameters.
However, I am unsure how to then apply this using the undistort function; my understanding is that it will need to be applied to each frame as it is captured.
void undistort(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray newCameraMatrix=noArray() )
I am having trouble identifying each of these parameters; below is my current understanding.
undistorted(currentFrame, resultsWindow, calibrationFile, notSure, notSure);
Is this function called as below:
if (captureOpen == false) {
    img_scene = cvCaptureFromFile(videoFeed);
}
while (1) {
    image = cvQueryFrame(img_scene);
    undistort(currentFrame, resultsWindow, calibrationFile, notSure, notSure);
}
No, that will not work. You need to manually read your XML file beforehand and fill the corresponding parameters with the data found in the file. The file should contain the camera matrix (look for cx, cy, fx, fy values) and the distortion parameters (k1, k2, k3, p1, p2, etc.).
The documentation for undistort for 2.4.x is here : http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistort
Typically src is a Mat containing the current frame, and dst an output Mat of the same size that will be filled with the undistorted image. You will have to convert that back to your preferred format or display it in the window. cameraMatrix is a 3x3 Mat that you have filled with your camera intrinsics. distCoeffs is usually a 1x4 or 1x5 Mat containing the distortion coefficients. (Note that p1 and p2 must be written right after k2.)

Opencv: 2D barcode (Data Matrix) detection

I am working on detecting a 2D barcode on a PCB board. The environment is Visual Studio 2012.
We have run into some problems and cannot filter out the 2D barcode image successfully.
Loading the figure: the original image size is 1600*1200.
After loading the figure, we start a series of processing steps:
1. Finding threshold value by auto-threshold method.
2. Doing binary threshold to image.
3. Doing Opening to make image clearly.
Opening:
dst = open(src,element) = dilate(erode(src, element))
4. Filter out the rectangles, keeping only the squares.
Then we can get a collection of squares.
As shown in the following image, after steps 1-4 we can find squares in the image.
5. Compare a similar Data Matrix template against each square using histogram analysis.
5.1 Calculate the histogram
void calcHist( const Mat* images, int nimages,
const int* channels, InputArray mask,
OutputArray hist, int dims, const int* histSize,
const float** ranges, bool uniform=true, bool accumulate=false );
5.2 Normalize the value range of an array
void normalize( InputArray src, OutputArray dst, double alpha=1, double beta=0,
int norm_type=NORM_L2, int dtype=-1, InputArray mask=noArray());
5.3 Compare two histograms with correlation.
double compareHist( InputArray H1, InputArray H2, int method );  // method = CV_COMP_CORREL
6. After this processing we still cannot pick the correct image out of the square collection.
6.1 We adjusted the number of histogram bins from 256 down to 64/32, but the results are not robust; the correlation values are very low, even below 0.5.
6.2 We also tried using the EMD (Earth Mover's Distance) to estimate the similarity of two squares, but it did not solve this problem.
[[Question]]:
Could you share some suggestions to improve our detection method?
Why not use libraries?
datamatrix opencv module
zxing Cpp
libdmtx
Otherwise, you can study the code in these libs and try to optimise your own code.

OpenCV apply my own filter to a color image

I'm working in OpenCV C++ to filter image colors. I want to filter the image using my own matrix. See this code:
string img = "c:/Test/tes.jpg";
Mat im = imread(img);
And then I want to filter/multiply it with my matrix (this matrix can be replaced with another 3x3 matrix):
Mat filter = (Mat_<double>(3, 3) <<17.8824, 43.5161, 4.11935,
3.45565, 27.1554, 3.86714,
0.0299566, 0.184309, 1.46709);
How do I multiply the img matrix by my own matrix? I still don't understand how to multiply a 3-channel (RGB) matrix by another, single-channel matrix to produce an image with new colors.
You should take a look at the OpenCV documentation. You could use this function:
filter2D(InputArray src, OutputArray dst, int ddepth, InputArray kernel, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT )
which would give you something like this in your code:
Mat output;
filter2D(im, output, -1, filter);
Regarding your question about the 3-channel matrix, it is specified in the documentation:
kernel – convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
So by default your "filter" matrix will be applied equally to each color plane.
EDIT: You can find a fully functional example on the OpenCV site: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/filter_2d/filter_2d.html