I am trying to render an outline using Vulkan's stencil buffers. This technique involves rendering the object twice, with the second copy scaled up to account for the outline. Normally this is done in 3D space, where the normal vectors of each vertex can be used to scale the object correctly. However, I am trying to do the same in 2D space and without pre-calculated normals.
An example: given the coordinates I, H and J, I need to find L, K and M, with the condition that the distance between each pair of parallel vectors is the same.
I tried scaling up the object and then moving it to the correct location but that got me nowhere.
I am searching for a solution that is ideally applicable to arbitrary shapes in 2D space and also somewhat efficient. Also, I am unsure whether this should be calculated on the GPU or the CPU.
Let's draw an example of a single point of some 2D polygon.
The position of point M depends only on the position of A and its two adjacent lines; I have added the normals too (green and blue). Points P and O lie on the intersections of the shifted and non-shifted lines.
If we know the adjacent points of A (B and C) and the distances to P and O, then
M = A - d_p * normalize(B-A) - d_o * normalize(C-A)
this is true because P, O lie on the lines B-A and C-A.
The distances are easy to compute from the two-color right triangles:
d_p=s/sin(alfa)
d_o=s/sin(alfa)
where s is the desired stencil shift. They are, of course, the same.
So the whole computation, given the coordinates A, B, C of some polygon corner and the desired shift s, is:
b = normalize(B-A) # vector
c = normalize(C-A) # vector
alfa = arccos(b.c) # dot product
d = s/sin(alfa)
M = A - sign(b.c) * (b+c)*d
This also proves that M lies on the alfa angle bisector line.
Anyway, the formula is generic and holds for any 2D polygon, and it is easily parallelizable since each point is shifted independently of the others. But
for non-convex corners you need to use the opposite sign; the dot product can be used to generalize.
It is not numerically stable when sin(alfa) is close to zero, i.e. when the b, c lines are almost parallel. In that case I would recommend just shifting A by s*n_b, where n_b is the normalized normal of the B-A line; in 2D it is normalize((B.y - A.y, A.x - B.x)).
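For reference, here is a minimal C++ sketch of the convex-corner case of the formula above (the Vec2 struct and the function name are my own, not from any library); the reflex-corner sign flip is left out, and the near-parallel fallback is handled as just described:

#include <algorithm>
#include <cmath>

struct Vec2 { double x, y; };

static Vec2 sub(Vec2 a, Vec2 b)   { return {a.x - b.x, a.y - b.y}; }
static Vec2 add(Vec2 a, Vec2 b)   { return {a.x + b.x, a.y + b.y}; }
static Vec2 mul(Vec2 a, double s) { return {a.x * s, a.y * s}; }
static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
static Vec2 normalize(Vec2 v)     { double l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l}; }

// Shift corner A of a convex polygon outward by the stencil width s,
// given its two adjacent vertices B and C.
Vec2 shiftCorner(Vec2 A, Vec2 B, Vec2 C, double s)
{
    Vec2 b = normalize(sub(B, A));
    Vec2 c = normalize(sub(C, A));
    double cosAlfa = dot(b, c);                      // cos of the corner angle alfa
    double sinAlfa = std::sqrt(std::max(0.0, 1.0 - cosAlfa * cosAlfa));
    if (sinAlfa < 1e-6) {                            // edges nearly parallel: fall back
        Vec2 n = normalize({B.y - A.y, A.x - B.x});  // normal of the B-A edge
        return add(A, mul(n, s));                    // sign may need flipping depending on winding
    }
    double d = s / sinAlfa;                          // d_p == d_o from the right triangles above
    return sub(A, mul(add(b, c), d));                // M = A - d*(b + c)
}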
I have two images and know the position of a point in the first image. Now I want to get the corresponding position in the second image.
This is my idea:
I can use algorithms such as SIFT to match keypoints (as seen in the image)
I know the camera matrix using calibration with e.g. chessboards
Using the 8 point algorithm I calculate the fundamental matrix F
Can I now use F to calculate the corresponding point?
Using the fundamental matrix F alone is not enough. If you have a point in one image, you can't find its position in the second image, because it depends not only on the configuration of the cameras, but also on the distance from the camera to that point.
This can also be seen from the equation x2^T * F * x1 = 0. If you know x1 and F, then for x2 you get the equation x2^T * b = 0, where b = F * x1. This is the equation of a point x2 lying on the line b (points x1, x2 and line b are in homogeneous coordinates). Although you can't find the exact position of the point in the second image, you know that it must lie somewhere on that line.
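As an illustration, here is a minimal OpenCV sketch (assuming F has already been estimated, e.g. with cv::findFundamentalMat on the SIFT matches) that computes this epipolar line for a point x1 in the first image:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Placeholder: in practice F comes from cv::findFundamentalMat on your matches.
    cv::Matx33d F = cv::Matx33d::eye();

    cv::Point2f x1(120.f, 240.f);  // known point in the first image (example value)

    // lines[0] = F * x1 is the epipolar line a*x + b*y + c = 0 in the second image.
    std::vector<cv::Vec3f> lines;
    cv::computeCorrespondEpilines(std::vector<cv::Point2f>{x1}, 1, cv::Mat(F), lines);

    std::cout << "Epipolar line in image 2: "
              << lines[0][0] << "*x + " << lines[0][1] << "*y + " << lines[0][2] << " = 0\n";

    // The corresponding point x2 must satisfy this equation, but its exact position
    // on the line still depends on the depth of the 3D point.
    return 0;
}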
Hartley and Zisserman have a great explanation of these concepts in their book Multiple View Geometry in Computer Vision. Be sure to check it out for more details.
I'm trying to find the focal length, position and orientation of a camera in world space.
Because I need this to be resolution-independent, I normalized my image coordinates to be in the range [-1, 1] for x, and a somewhat smaller range for y (depending on aspect ratio). So (0, 0) is the center of the image. I've already corrected for lens distortion (using k1 and k2 coefficients), so this does not enter the picture, except sometimes throwing x or y slightly out of the [-1, 1] range.
As a given, I have a planar, fixed rectangle in world space of known dimensions (in millimeters). The four corners of the rectangle are guaranteed to be visible, and are manually marked in the image. For example:
std::vector<cv::Point3f> worldPoints = {
    cv::Point3f(0, 0, 0),
    cv::Point3f(2000, 0, 0),
    cv::Point3f(0, 3000, 0),
    cv::Point3f(2000, 3000, 0),
};
std::vector<cv::Point2f> imagePoints = {
    cv::Point2f(-0.958707, -0.219624),
    cv::Point2f(-1.22234, 0.577061),
    cv::Point2f(0.0837469, -0.1783),
    cv::Point2f(0.205473, 0.428184),
};
Effectively, the equation I think I'm trying to solve is (see the equivalent in the OpenCV documentation):
    / xi \     / fx  0  0 \   /        tx \   / Xi \
s * | yi |  =  |  0 fy  0 |   |  Rxyz  ty |   | Yi |
    \ 1  /     \  0  0  1 /   \        tz /   | Zi |
                                               \ 1 /
where:
i is 1, 2, 3, 4
xi, yi are the coordinates of point i in the image (between -1 and 1)
fx, fy are the focal lengths of the camera in x and y direction
Rxyz is the 3x3 rotation matrix of the camera (has only 3 degrees of freedom)
tx, ty, tz are the translation of the camera
Xi, Yi, Zi are the coordinates of point i in world space (in millimeters)
So I have 8 equations (4 points of 2 coordinates each), and I have 8 unknowns (fx, fy, Rxyz, tx, ty, tz). Therefore, I conclude (barring pathological cases) that a unique solution must exist.
However, I can't seem to figure out how to compute this solution using OpenCV.
I have looked at the imgproc module:
getPerspectiveTransform works, but gives me a 3x3 matrix only (from 2D points to 2D points). If I could somehow extract the needed parameters from this matrix, that would be great.
I have also looked at the calib3d module, which contains a few promising functions that do almost, but not quite, what I need:
initCameraMatrix2D sounds almost perfect, but when I pass it my four points like this:
cv::Mat cameraMatrix = cv::initCameraMatrix2D(
    std::vector<std::vector<cv::Point3f>>({worldPoints}),
    std::vector<std::vector<cv::Point2f>>({imagePoints}),
    cv::Size2f(2, 2), -1);
it returns a camera matrix that has fx, fy set to -inf, inf.
calibrateCamera seems to use a complicated solver to deal with overdetermined systems and outliers. I tried it anyway, but all I can get from it are assertion failures like this:
OpenCV(3.4.1) Error: Assertion failed (0 <= i && i < (int)vv.size()) in getMat_, file /build/opencv/src/opencv-3.4.1/modules/core/src/matrix_wrap.cpp, line 79
Is there a way to entice OpenCV to do what I need? And if not, how could I do it by hand?
3x3 rotation matrices have 9 elements but, as you said, only 3 degrees of freedom. One subtlety is that exploiting this property makes the equation non-linear in the angles you want to estimate, and non-linear equations are harder to solve than linear ones.
This kind of equation is usually solved by:
considering that the P=K.[R | t] matrix has 12 degrees of freedom and solving the resulting linear equation using the SVD decomposition (see Section 7.1 of 'Multiple View Geometry' by Hartley & Zisserman for more details)
decomposing this intermediate result into an initial approximate solution to your non-linear equation (see for example cv::decomposeProjectionMatrix)
refining the approximate solution using an iterative solver which is able to deal with non-linear equations and with the reduced degrees of freedom of the rotation matrix (e.g. the Levenberg-Marquardt algorithm). I am not sure if there is a generic implementation of this in OpenCV, however it is not too complicated to implement one yourself using the Ceres Solver library (see the sketch after this list).
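For illustration, here is a rough C++/OpenCV sketch of steps 1 and 2; it assumes you have at least 6 correspondences (so it does not apply to your 4-point case as-is), and the function and variable names are my own:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Direct Linear Transform: estimate the 3x4 projection matrix P from >= 6
// world/image correspondences, then decompose it into K, R and the camera position.
void estimatePose(const std::vector<cv::Point3f>& world,
                  const std::vector<cv::Point2f>& image,
                  cv::Mat& K, cv::Mat& R, cv::Mat& camCenter)
{
    CV_Assert(world.size() == image.size() && world.size() >= 6);

    // Each correspondence contributes two rows to A (see H&Z, section 7.1).
    cv::Mat A((int)world.size() * 2, 12, CV_64F, cv::Scalar(0));
    for (size_t i = 0; i < world.size(); ++i) {
        const double X = world[i].x, Y = world[i].y, Z = world[i].z;
        const double x = image[i].x, y = image[i].y;
        double* r0 = A.ptr<double>((int)i * 2);
        double* r1 = A.ptr<double>((int)i * 2 + 1);
        // Row for x: [X Y Z 1  0 0 0 0  -x*X -x*Y -x*Z -x]
        r0[0] = X; r0[1] = Y; r0[2] = Z; r0[3] = 1;
        r0[8] = -x * X; r0[9] = -x * Y; r0[10] = -x * Z; r0[11] = -x;
        // Row for y: [0 0 0 0  X Y Z 1  -y*X -y*Y -y*Z -y]
        r1[4] = X; r1[5] = Y; r1[6] = Z; r1[7] = 1;
        r1[8] = -y * X; r1[9] = -y * Y; r1[10] = -y * Z; r1[11] = -y;
    }

    // Step 1: unit-norm least-squares solution of A*p = 0 via SVD.
    cv::Mat p;
    cv::SVD::solveZ(A, p);
    cv::Mat P = p.reshape(1, 3);  // 3x4 projection matrix

    // Step 2: split P into intrinsics, rotation and the (homogeneous) camera position.
    cv::decomposeProjectionMatrix(P, K, R, camCenter);
}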
However, your case is a bit particular because you do not have enough point matches to solve the linear formulation (i.e. step 1) reliably. This means that, as you stated it, you have no way to initialize an iterative refining algorithm to get an accurate solution to your problem.
Here are a few work-arounds that you can try:
somehow get 2 additional point matches, leading to a total of 6 matches hence 12 constraints on your linear equation, allowing you to solve the problem using the steps 1, 2, 3 above.
somehow guess manually an initial estimate for your 8 parameters (2 focal lengths, 3 angles & 3 translations), and directly refine them using an iterative solver. Be aware that the iterative process might converge to a wrong solution if your initial estimate is too far off.
reduce the number of unknowns in your model. For instance, if you manage to fix two of the three angles (e.g. roll & pitch) the equations might simplify a lot. Also, the two focal lengths are probably related via the aspect ratio, so if you know it and if your pixels are square, then you actually have a single unknown there.
if all else fails, there might be a way to extract approximated values from the rectifying homography estimated by cv::getPerspectiveTransform.
Regarding the last bullet point, the opposite of what you want is clearly possible. Indeed, the rectifying homography can be expressed analytically knowing the parameters you want to estimate. See for instance this post and this post. There is also a full chapter on this in the Hartley & Zisserman book (chapter 13).
In your case, you want to go the other way around, i.e. to extract the intrinsic & extrinsic parameters from the homography. There is a somewhat related function in OpenCV (cv::decomposeHomographyMat), but it assumes the K matrix is known and it outputs 4 candidate solutions.
In the general case, this would be tricky. But maybe in your case you can guess a reasonable estimate for the focal length, hence for K, and use the point correspondences to select the good solution to your problem. You might also implement a custom optimization algorithm, testing many focal length values and keeping the solution leading to the lowest reprojection error.
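As a sketch of that last idea (the candidate range and all names below are made up for illustration), you could sweep focal length values, run cv::solvePnP for each guess, and keep the one with the lowest reprojection error:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <limits>
#include <vector>

// Brute-force search over candidate focal lengths: for each guess, solve the pose
// with cv::solvePnP and measure the reprojection error of the four corners.
double findBestFocal(const std::vector<cv::Point3f>& world,
                     const std::vector<cv::Point2f>& image,
                     cv::Mat& bestRvec, cv::Mat& bestTvec)
{
    double bestF = 0.0, bestErr = std::numeric_limits<double>::max();
    for (double f = 0.1; f <= 10.0; f += 0.01) {  // candidate range is an assumption
        cv::Mat K = (cv::Mat_<double>(3, 3) << f, 0, 0,
                                               0, f, 0,
                                               0, 0, 1);  // principal point at (0,0), square pixels
        cv::Mat rvec, tvec;
        if (!cv::solvePnP(world, image, K, cv::noArray(), rvec, tvec))
            continue;

        std::vector<cv::Point2f> reproj;
        cv::projectPoints(world, rvec, tvec, K, cv::noArray(), reproj);

        double err = 0.0;
        for (size_t i = 0; i < image.size(); ++i) {
            cv::Point2f d = image[i] - reproj[i];
            err += std::sqrt(d.x * d.x + d.y * d.y);
        }
        if (err < bestErr) { bestErr = err; bestF = f; bestRvec = rvec; bestTvec = tvec; }
    }
    return bestF;
}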
I need some help with an algorithm; I have a problem with a program.
I need to make a program where the user inputs the coordinates of 3 points and the coefficients
of a linear function that crosses the triangle made by those 3 points, and I need to compare the areas of the shapes created by the function crossing that triangle.
I would paste my code here, but it contains things in my native language and I just want to know your algorithms for this, because mine works only if the points are entered in an exact sequence and I can't get a handle on that.
http://pastebin.com/vNzGuqX4 - code
For example I use this: http://goo.gl/j18Ch0
The code is not finished. I just noticed that if I enter the points in a different sequence it does not work: entering " 1 1 2 5 4 4 0.5 1 5 " works, but " 4 4 1 1 2 5 0.5 1 5 " does not.
The line must cross at least 2 edges of the triangle. So you can find these 2 crossing points first; these 2 points, together with one of the 3 vertices, make a small triangle. Use Heron's formula to calculate the area of a triangle, S = sqrt(l * (l-a) * (l-b) * (l-c)) where l = (a+b+c)/2 and a, b, c are the lengths of the edges. It should be easy to get the length of an edge given the coordinates of the vertices. One area is that of the small triangle; the other one is the area of the big triangle minus the small one.
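A small C++ helper for that area formula, taking the three vertex coordinates (the struct and function names are my own):

#include <cmath>

struct Point { double x, y; };

static double dist(Point a, Point b)
{
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Area of a triangle from its vertices via Heron's formula:
// S = sqrt(l*(l-a)*(l-b)*(l-c)) with l = (a+b+c)/2.
double triangleArea(Point A, Point B, Point C)
{
    double a = dist(B, C), b = dist(A, C), c = dist(A, B);
    double l = (a + b + c) / 2.0;
    return std::sqrt(l * (l - a) * (l - b) * (l - c));
}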
If your triangle is ABC, a good approach would be the following:
Find lines that go through points A and B, B and C, and C and A.
Find the intersection of your line with these three lines.
Check which two intersections lie on the triangle sides.
Depending on the intersections, calculate the surface of the new small triangle (see the sketch below).
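Here is a possible sketch of steps 2 and 3, assuming the cutting line is given as y = k*x + m (the struct and function names are my own):

#include <cmath>
#include <optional>

struct Pt { double x, y; };

// Intersection of the infinite line y = k*x + m with the segment P-Q,
// or std::nullopt if the intersection falls outside the segment.
std::optional<Pt> intersectSegment(double k, double m, Pt P, Pt Q)
{
    double dx = Q.x - P.x, dy = Q.y - P.y;
    double denom = dy - k * dx;  // zero when the segment is parallel to the line
    if (std::fabs(denom) < 1e-12) return std::nullopt;
    // Solve for t in P + t*(Q - P) lying on the line, then require t in [0, 1].
    double t = (k * P.x + m - P.y) / denom;
    if (t < 0.0 || t > 1.0) return std::nullopt;
    return Pt{P.x + t * dx, P.y + t * dy};
}

The two segments that return a value give the crossing points; together with the vertex that lies alone on one side of the line they form the small triangle, whose area can be computed with Heron's formula as in the other answer, and the remaining piece is the original triangle's area minus that.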
Hello guys!
Well, I'm playing with random walks. Midpoint displacement gives some nice results, but I would like a random walk without walk loops, like the ones (in yellow) on this screenshot:
My first idea to deal with that problem is to check for each segment whether there is an intersection with all other segments, then delete the walk loop between the two segments and rejoin at the intersection point. But for some walks, it would give a strange result, like this one:
where the yellow part is a loop, and we can see that a big part of the walk would be deleted if I do what I said.
Maybe another method would be to check, when the displacement of the midpoint is made, whether the segments are intersecting. If there is an intersection, pick another displacement. But that looks like it would become very time-consuming as the number of subdivisions rises...
So I would like to know if there is a way to avoid these loops.
So... it seems playing with the amplitudes of the random numbers is a good way to avoid overlaps:
The path without displacement is drawn in cyan. I didn't get overlaps with these displacements:
do {
    // Random displacement components, each roughly in [-sqrt(D)/2, 0)
    dx = (D > 0) ? 0.5*sqrt((double)(rand()%D)) - sqrt((double)D)/2. : 0;
    dz = (D > 0) ? 0.5*sqrt((double)(rand()%D)) - sqrt((double)D)/2. : 0;
} while (dx*dx + dz*dz > D);  // retry if the squared displacement exceeds D
where D is the squared distance between the two neighbours of the point we want to displace. The (D>0)? check is needed to avoid a division by zero in rand()%D, which would raise a floating point exception.