Translating a curve in OpenCV from ROI to frame - C++

I'm currently working on a project where I use OpenCV to find a curve in an image. To do this I set a region of interest (ROI) in which I look for the curve. My problem is that when I calculate the parameters of my polynomial (let's say 2nd degree), I use coordinates relative to the ROI, but I want to translate the parameters of the function (which are stored in a cv::Mat) to the original image.
The solution I'm looking for should work for a polynomial of any degree.
To be more precise: I have the function parameters of the polynomial relative to my ROI, but I want the parameters relative to the original image.

Let's assume that the polynomial is a function f(x). Then g(x) = f(x) + a moves it a units vertically (positive a moves the function up).
The function h(x) = f(x - b) moves it b units horizontally (positive b moves the function to the right).
Therefore, to move your polynomial b units horizontally and a units vertically, you should define the transform as T(x, a, b) = f(x - b) + a.
In your case, a = roi.y; and b = roi.x; provided that image coordinates start from (0, 0).
Here is a link to an interactive demo I made. You can test this for different functions and move the sliders.
https://www.desmos.com/calculator/fezybrsyhw
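Since the question stores the parameters in a cv::Mat, here is a minimal C++ sketch of that transform (the helper name, the CV_64F type, and the low-to-high coefficient order are assumptions on my part; adapt them to your own layout). It re-expands f(x - b) + a with the binomial theorem, so it works for any degree:

#include <opencv2/core.hpp>

// Hypothetical helper: given the coefficients of f(x) = c0 + c1*x + ... + cn*x^n
// (stored low-to-high in an (n+1)x1 CV_64F cv::Mat), return the coefficients
// of g(x) = f(x - b) + a, i.e. f shifted b units right and a units up.
cv::Mat shiftPolynomial(const cv::Mat& coeffs, double a, double b)
{
    const int n = coeffs.rows;                 // degree + 1
    cv::Mat out = cv::Mat::zeros(n, 1, CV_64F);

    for (int k = 0; k < n; ++k) {
        // Expand c_k * (x - b)^k: the x^j term picks up c_k * C(k,j) * (-b)^(k-j).
        const double c = coeffs.at<double>(k);
        double binom = 1.0;                    // C(k, k)
        double power = 1.0;                    // (-b)^0
        for (int j = k; j >= 0; --j) {
            out.at<double>(j) += c * binom * power;
            if (j > 0) {
                binom *= static_cast<double>(j) / (k - j + 1);  // C(k, j) -> C(k, j-1)
                power *= -b;                                    // next power of (-b)
            }
        }
    }
    out.at<double>(0) += a;                    // the vertical shift
    return out;
}

For the case in the question you would call shiftPolynomial(coeffs, roi.y, roi.x).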

Related

Estimate the camera pose in the reference system using one marker with ARUCO

I am currently working on a camera pose estimation project using only one marker with ARUCO.
I used ArUco's marker detector to detect markers and get the marker's Rvec and Tvec. I understand these two vectors represent the transform from the marker to the camera, which is the marker's pose w.r.t. the camera. I form a 4 by 4 matrix called T_marker_camera from these two vectors.
Then, I set up a world frame (left handed) and get the marker's world pose, which is a 4 by 4 transform matrix.
I want to calculate the pose of the camera w.r.t the world frame, and I use the following formula to calculate it:
T_camera_world = T_marker_world * T_marker_camera_inv
Before I apply the above formula, I convert the OpenCV coordinates to the left-handed ones (flipping the sign of the x axis).
However, I didn't get the correct x, y, z of the camera w.r.t the world frame.
What did I miss to get the correct answer?
Thanks
The one equation you gave looks right, so the issue is probably somewhere that you didn't show/describe.
A fix to your notation will help clarify things.
Write the pose/source frame on the right (input), the reference/destination frame on the left (output). Then your matrices "match up" like dominos.
rvec and tvec yield a matrix that should be called T_cam_marker.
If you want the pose of your camera in the world frame, that is
T_world_cam = T_world_marker * T_marker_cam
T_world_cam = T_world_marker * inv(T_cam_marker)
(equivalent to what you wrote, but domino)
Be sure that you do matrix multiplication, not element-wise multiplication.
To move between left-handed and right-handed coordinate systems, insert a matrix that maps coordinates accordingly. Frames:
OpenCV camera/screen: right-handed, {X right, Y down, Z far}
ARUCO (in OpenCV anyway): right-handed, {X right, Y far, Z up}, first corner is top left (-X+Y quadrant)
whatever leftie frame you have, let's say {X right, Y up, Z far} and it's a screen or something
The hand-change matrix for typical frames on screens is an identity but with the entry for Y being a -1. I don't know why you would flip the X but that's "equivalent", ignoring any rotations.
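As a rough C++ sketch of that composition (the helper names are mine, and T_world_marker is assumed to come from however you set up your world frame):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// Build the 4x4 pose T_cam_marker from the rvec/tvec returned by ArUco.
cv::Matx44d poseFromRvecTvec(const cv::Vec3d& rvec, const cv::Vec3d& tvec)
{
    cv::Matx33d R;
    cv::Rodrigues(rvec, R);                      // rotation vector -> 3x3 matrix
    cv::Matx44d T = cv::Matx44d::eye();
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c) T(r, c) = R(r, c);
        T(r, 3) = tvec[r];
    }
    return T;
}

// Compose the camera pose in the world frame, then convert handedness.
cv::Matx44d cameraPoseInWorld(const cv::Vec3d& rvec, const cv::Vec3d& tvec,
                              const cv::Matx44d& T_world_marker)
{
    cv::Matx44d T_cam_marker = poseFromRvecTvec(rvec, tvec);
    cv::Matx44d T_world_cam  = T_world_marker * T_cam_marker.inv();

    // Hand-change: identity with -1 in the Y entry, applied on both sides
    // of the transform (F is its own inverse), not a one-sided sign flip.
    cv::Matx44d F = cv::Matx44d::eye();
    F(1, 1) = -1.0;
    return F * T_world_cam * F;
}

Note that the multiplications here are real matrix products (cv::Matx operator*), not element-wise products.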

Get known position in one image to another using 8-point algorithm

I have two images and know the position of a point in the first image. Now I want to get the corresponding position in the second image.
This is my idea:
I can use algorithms such as SIFT to match keypoints (as seen in the image)
I know the camera matrix using calibration with e.g. chessboards
Using the 8-point algorithm I calculate the fundamental matrix F
Can I now use F to calculate the corresponding point?
Using the fundamental matrix F alone is not enough. If you have a point in one image, you can't find its position in the second image, because it depends not only on the configuration of the cameras, but also on the distance from the camera to that point.
This can also be seen from the equation x2^T * F * x1 = 0. If you know x1 and F, then for x2 you get the equation x2^T * b = 0, where b = F * x1. This is the equation of a point x2 lying on the line b (points x1, x2 and line b are in homogeneous coordinates). Although you can't find the exact position of the point in the second image, you know that it must lie somewhere on that line.
Hartley and Zisserman have a great explanation of these concepts in their book Multiple View Geometry in Computer Vision. Be sure to check it out for more details.
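For illustration, here is a small C++ sketch of computing that line with OpenCV (the function name is mine; F is assumed to be the 3x3 fundamental matrix you estimated, e.g. with cv::findFundamentalMat):

#include <opencv2/calib3d.hpp>
#include <vector>

// Given F and a point in image 1, compute the epipolar line in image 2
// on which the corresponding point must lie.
cv::Vec3f epipolarLineInImage2(const cv::Mat& F, const cv::Point2f& pt1)
{
    std::vector<cv::Point2f> pts1 = { pt1 };
    std::vector<cv::Vec3f> lines2;
    // whichImage = 1: the input points live in image 1.
    cv::computeCorrespondEpilines(pts1, 1, F, lines2);
    // lines2[0] = (a, b, c) encodes the line a*x + b*y + c = 0 in image 2;
    // where the corresponding point sits along it depends on the unknown depth.
    return lines2[0];
}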

OpenCV estimate distance & normal vector from homography

I'm matching a template to which I know my distance & my normal vector.
i.e. if my homography is the identity matrix then my camera is at Distance = 1.0m & my normal is at 0.
Now I have a second image in which I successfully aligned my template, giving the homography:
    [0.82072, 0.05685, 66.75024]
H = [0.02006, 0.86092, 39.34907]
    [0.00003, 0.00017,  1.00000]
I also have my camera matrix.
The OpenCV function:
cv::decomposeHomographyMat()
gives me 4 solutions for the rotation (3x3 mat), translation (3x1 mat) & normal vector (3x1 mat).
cv::warpPerspective()
is able to map the current view of the camera onto my template nearly perfectly.
So it should be possible to get the actual scaling (template to alignment) & the normal vector.
But I can't figure out how to actually choose the correct solution of cv::decomposeHomographyMat(). Am I missing something?
EDIT: Posted "question" without the question...
I figured it out.
Step one:
I create a set of points in the ROI that I can map to my template (points in the area defined by the corners of the ROI).
Step two:
Warp the points in the ROI (from step one; 8 points were enough in all my tests & use cases) with all the solutions of cv::decomposeHomographyMat().
Exclude all solutions that give a point3D(x, y, z) with a z value < 0 (i.e. a point behind the camera).
Step three:
At this point you should have one or two solutions left.
All rotation matrices should be the same; only the normal & translation vectors should differ.
The translations should satisfy:
Translation_Solution1 = -1 * Translation_Solution2
Then compare your ROI area to your template area.
If your ROI area is smaller than your template's, it means that your template has been "scaled down", i.e. your camera did a translation on z in the negative values.
Otherwise your camera did a translation on the positive z values.
Choose the appropriate solution.
My error was to think that warpPerspective() was actually solving the homography decomposition, but it's not.
This is covered in the paper: Faugeras O. D., Lustman F., "Motion and Structure from Motion in a Piecewise Planar Environment", 1988, page 9. https://www.researchgate.net/publication/243764888_Motion_and_Structure_from_Motion_in_a_Piecewise_Planar_Environment
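A rough C++ sketch of steps one and two (the helper name is mine, and the m . n > 0 test is one common way to express "the point is in front of the camera"; H and the camera matrix K are assumed CV_64F):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Decompose H, then keep only the solutions whose plane normal puts the
// ROI test points in front of the camera.
std::vector<int> plausibleSolutions(const cv::Mat& H, const cv::Mat& K,
                                    const std::vector<cv::Point2d>& roiPoints)
{
    std::vector<cv::Mat> rotations, translations, normals;
    int n = cv::decomposeHomographyMat(H, K, rotations, translations, normals);

    cv::Mat Kinv = K.inv();
    std::vector<int> keep;
    for (int i = 0; i < n; ++i) {
        bool inFront = true;
        for (const cv::Point2d& p : roiPoints) {
            // Back-project the pixel to a normalized ray m = K^-1 * [u v 1]^T;
            // the reference plane is in front of the camera iff m . n > 0.
            cv::Mat ph = (cv::Mat_<double>(3, 1) << p.x, p.y, 1.0);
            cv::Mat m = Kinv * ph;
            if (m.dot(normals[i]) <= 0.0) { inFront = false; break; }
        }
        if (inFront) keep.push_back(i);
    }
    return keep;  // typically one or two candidates remain, as described above
}

Recent OpenCV versions also provide cv::filterHomographyDecompByVisibleRefpoints, which performs this kind of visibility filtering for you.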

Panoramic Image Photogrammetry: How to calculate range?

Assume that I took two panoramic images with a vertical offset of H, and each image is presented in equirectangular projection with size Xm and Ym. To do this, I placed my panoramic camera at a position, say A, and took an image, then moved the camera H metres up and took another image.
I know that a point in image 1 with coordinates (X1, Y1) is the same point in image 2 with coordinates (X2, Y2) (assuming that X1 = X2, as we have only a vertical offset).
My question is: how can I calculate the range of the selected point (the point whose coordinates (X1, Y1) I know in image 1 and whose position in image 2 is (X2, Y2)) from point A (where the camera was when image 1 was taken)?
Yes, you can do it - hold on!!!
Key thing: y = the focal length of your lens - now I can do it!!!
So, I think your question can be re-stated more simply by saying that if you move your camera (on the right in the diagram) up H metres, a point moves down p pixels in the image taken from the new location.
Like this, if you imagine looking from the side, across from you taking the picture.
If you know the micron spacing of the camera's CCD from its specification, you can convert p from pixels to metres to match the units of H.
Your range from the camera to the plane of the scene is given by x + y (both in red at the bottom of the diagram), where
x = H / tan(alpha)
y = p / tan(alpha)
so your range is
R = x + y = H / tan(alpha) + p / tan(alpha)
and
alpha = arctan(p / y)
where y is the focal length of your lens. As y is likely to be something like 50mm, it is negligible, so, to a pretty reasonable approximation, your range is
R = H / tan(alpha)
with
alpha = arctan(p in metres / focal length)
Or, by similar triangles
        H x focal length of lens
Range = ---------------------------------
        (Y2 - Y1) x CCD photosite spacing
being very careful to put everything in metres.
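As a quick sanity check of that formula, here is a tiny C++ example with made-up numbers (the 50mm lens, 4.5 micron photosites, 1m baseline and 120-pixel disparity are all assumptions for illustration):

#include <cstdio>

int main()
{
    const double H = 1.0;            // vertical camera offset, metres
    const double focal = 0.050;      // focal length, metres (50mm)
    const double pitch = 4.5e-6;     // CCD photosite spacing, metres
    const double disparity = 120.0;  // Y2 - Y1, pixels

    // Range = H * focal length / ((Y2 - Y1) * photosite spacing)
    double range = H * focal / (disparity * pitch);
    std::printf("Range = %.2f m\n", range);   // prints about 92.59 m
    return 0;
}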
Here is a shot in the dark: given my understanding of the problem at hand, you want to do something similar to computer stereo vision, so I point you to http://en.wikipedia.org/wiki/Computer_stereo_vision to start. I'm not sure if this is still possible to do in the manner you are suggesting; it sounds like you may need some more physical constraints, but I do remember being able to correlate two 2d points in images after undergoing a strict translation. Think:
lambda * [x, y, 1]^T = W * [r1, tx; r2, ty; r3, tz] * [x; y; z; 1]^T
Here, lambda is a scale factor, W is a 3x3 matrix covering the intrinsic parameters of your camera, r1, r2, and r3 are the row vectors that make up the 3x3 rotation matrix (in your case you can assume the identity matrix, since you have only applied a translation), and tx, ty, and tz are your translation components.
Since you are looking at two 2d points of the same 3d point [x, y, z], that 3d point is shared by both 2d points. I cannot say whether you can recover the actual x, y, and z values, particularly for your depth calculation, but this is where I would start.

Crop image by detecting a specific large object or blob in image?

Can anyone please help me resolve my issue? I am working on an image-processing-based project and I am stuck at one point. I got this image after some processing, and for further processing I need to crop or detect only the deer and remove the other portions of the image.
This is my Initial image:
And my result should be something like this:
It would be even better if I could get only the single biggest blob in the image and save it as an image.
It looks like the deer in your image is pretty much connected and closed. What we can do is use regionprops to find all of the bounding boxes in your image. Once we do this, we can find the bounding box that gives the largest area, which will presumably be your deer. Once we find this bounding box, we can crop your image and focus on the deer entirely. As such, assuming your image is stored in im, do this:
im = im2bw(im); %// Just in case...
bound = regionprops(im, 'BoundingBox', 'Area');
%// Obtaining Bounding Box co-ordinates
bboxes = reshape([bound.BoundingBox], 4, []).';
%// Obtain the areas within each bounding box
areas = [bound.Area].';
%// Figure out which bounding box has the maximum area
[~,maxInd] = max(areas);
%// Obtain this bounding box
%// Ensure all floating point is removed
finalBB = floor(bboxes(maxInd,:));
%// Crop the image
out = im(finalBB(2):finalBB(2)+finalBB(4), finalBB(1):finalBB(1)+finalBB(3));
%// Show the images
figure;
subplot(1,2,1);
imshow(im);
subplot(1,2,2);
imshow(out);
Let's go through this code slowly. We first convert the image to binary just in case. Your image may be an RGB image with intensities of 0 or 255... I can't say for sure, so let's just do a binary conversion just in case. We then call regionprops with the BoundingBox property to find every bounding box of every unique object in the image. This bounding box is the minimum spanning bounding box to ensure that the object is contained within it. Each bounding box is a 4 element array that is structured like so:
[x y w h]
Each bounding box is delineated by its origin at the top left corner of the box, denoted as x and y, where x is the horizontal co-ordinate while y is the vertical co-ordinate. x increases positively from left to right, while y increases positively from top to bottom. w and h are the width and height of the bounding box. Because these points are in a structure, I extract them and place them into a single 1D vector, then reshape it so that it becomes an M x 4 matrix. Bear in mind that this is the only way I know of to extract the values in the arrays of each structure array element efficiently, without any for loops. This will make our search quicker. I have also done the same for the Area property. For each bounding box we have in our image, we also have the attribute of the total area encapsulated within the bounding box.
Thanks to @Shai for the spot: we can't simply use the bounding box co-ordinates to determine whether or not something has the biggest area within it, as we could have a thin diagonal line that could drive the bounding box co-ordinates to be higher. As such, we also need to rely on the total area that the object takes up within the bounding box as well. Simply put, it's just the sum of all of the pixels that are contained within the object.
Therefore, we search the entire area vector that we have created to see which entry has the maximum area. This corresponds to your deer. Once we find this location, we extract the bounding box co-ordinates, then use them to crop the image. Bear in mind that the bounding box values may be floating point numbers. As image co-ordinates are integer-based, we need to remove the floating point values before we crop; I decided to use floor. I then write code that displays the original image with the cropped result.
Bear in mind that this will only work if there is just one object in the image. If you want to find multiple objects, check bwboundaries in MATLAB. Otherwise, I believe this should get you started.
Just for completeness, we get the following result:
While object detection is a very general CV task, you can start with something simple if the assumptions are strong enough and you can guarantee that the input images will contain a single prominent white blob well described by a bounding box.
One very simple idea is to subdivide the picture into 3x3 = 9 patches, calculate the statistics of each patch and compute some objective function. In the simplest case you just do a grid search over various partitions and select the one with the highest objective metric. Here's an illustration:
If every line is a parameter (x_1, x_2, y_1 and y_2), then you want to optimize over those parameters,
either by
grid search (try all x_i, y_i in some quantization steps)
genetic-algorithm-like random search
gradient descent (move every parameter in the direction that improves the target function)
The target function F can be defined over statistics of the patches, e.g. like this:
F(9 patches) {
brightest_patch = max(patches)
others = patches \ brightest_patch
score = brightness(brightest_patch) - 1/8 * brightness(others)
return score
}
or anything else that incorporates relevant statistics of the patches as well as their size. This also allows you to incorporate "prior knowledge": if you expect the blob to appear in the middle of the image, then you can define a "regularization" term that penalizes F if the parameters x_i and y_i deviate too much from the expected position.
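For concreteness, here is a rough C++/OpenCV sketch of the plain grid-search variant (all names, the step size, the integral image for fast patch statistics, and the CV_8UC1 input are my own choices for illustration, not the answerer's code):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <algorithm>

// One candidate partition: two vertical lines (x1, x2) and two horizontal
// lines (y1, y2), plus the score of the resulting 3x3 patch layout.
struct Split { int x1, x2, y1, y2; double score; };

// Mean brightness over [x1,x2) x [y1,y2), read from an integral image.
static double meanBrightness(const cv::Mat& integ, int x1, int y1, int x2, int y2)
{
    double s = integ.at<double>(y2, x2) - integ.at<double>(y1, x2)
             - integ.at<double>(y2, x1) + integ.at<double>(y1, x1);
    return s / std::max(1, (x2 - x1) * (y2 - y1));
}

// Grid search over quantized partitions, maximizing the target function
// F = brightness(brightest patch) - 1/8 * brightness(other patches).
Split bestSplit(const cv::Mat& gray, int step = 16)   // gray: CV_8UC1
{
    cv::Mat integ;
    cv::integral(gray, integ, CV_64F);

    Split best{0, 0, 0, 0, -1e300};
    for (int x1 = step; x1 < gray.cols - step; x1 += step)
        for (int x2 = x1 + step; x2 < gray.cols; x2 += step)
            for (int y1 = step; y1 < gray.rows - step; y1 += step)
                for (int y2 = y1 + step; y2 < gray.rows; y2 += step) {
                    const int xs[4] = {0, x1, x2, gray.cols};
                    const int ys[4] = {0, y1, y2, gray.rows};
                    double m[9];
                    int k = 0;
                    for (int r = 0; r < 3; ++r)
                        for (int c = 0; c < 3; ++c)
                            m[k++] = meanBrightness(integ, xs[c], ys[r], xs[c + 1], ys[r + 1]);
                    double bright = *std::max_element(m, m + 9);
                    double others = 0.0;
                    for (int i = 0; i < 9; ++i) others += m[i];
                    others = (others - bright) / 8.0;       // mean of the other 8
                    double score = bright - others;         // the F above
                    if (score > best.score) best = {x1, x2, y1, y2, score};
                }
    return best;
}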
Thanks to all who answered and commented on my question. With your help I got my exact solution. I am posting my final code and result for others.
img = im2bw(imread('deer.png'));
[L, num] = bwlabel(img, 4);
%%// Get biggest blob or object
count_pixels_per_obj = sum(bsxfun(@eq, L(:), 1:num));
[~,ind] = max(count_pixels_per_obj);
biggest_blob = (L==ind);
%%// crop only deer
bound = regionprops(biggest_blob, 'BoundingBox');
%// Obtaining Bounding Box co-ordinates
bboxes = reshape([bound.BoundingBox], 4, []).';
%// Obtain this bounding box
%// Ensure all floating point is removed
finalBB = floor(bboxes);
out = biggest_blob(finalBB(2):finalBB(2)+finalBB(4),finalBB(1):finalBB(1)+finalBB(3));
%%// Show images
figure;
imshow(out);