I am trying to render an outline using Vulkan's stencil buffers. This technique involves rendering the object twice with the second one being scaled up in order to account for said outline. Normally this is done in 3D space in which the normal vectors for each vertex can be used to scale the object correctly. I however am trying the same in 2D space and without pre-calculated normals.
An example: given the coordinates I, H and J, I need to find L, K and M, with the condition that the distance between each pair of parallel edges is the same.
I tried scaling up the object and then moving it to the correct location but that got me nowhere.
I am searching for a solution that is ideally applicable to arbitrary shapes in 2D space and also somewhat efficient. Also I am unsure if this should be calculated on the GPU or the CPU.
Let's draw an example of a single point of some 2D polygon.
The position of point M depends only on the position of A and its two adjacent lines; I have added the normals too (green and blue). Points P and O lie on the intersections of the shifted and non-shifted lines.
If we know the points adjacent to A (B and C) and the distances to P and O, then
M = A - d_p * normalize(B-A) - d_o * normalize(C-A)
this is true because P, O lie on the lines B-A and C-A.
The distances are easy to compute from the two-color right triangles:
d_p=s/sin(alfa)
d_o=s/sin(alfa)
where s is the desired stencil shift. They are of course the same.
So the whole computation, given coordinates of A,B,C of some polygon corner and the desired shift s is:
b = normalize(B-A) # vector
c = normalize(C-A) # vector
alfa = arccos(b.c) # dot product
d = s/sin(alfa)
M = A - sign(b.c) * (b+c)*d
This also proves that M lies on the alfa angle bisector line.
Anyway, the formula is generic and holds for any 2D polygon, and it is easily parallelizable since each point is shifted independently of the others. But for non-convex corners you need to use the opposite sign; the sign of the dot product is used to generalize this, as in the formula above.
It is not numerically stable when sin(alfa) is close to zero, i.e. when the b, c lines are almost parallel. In that case I would recommend just shifting A by s*n_b, where n_b is the normalized normal of the B-A line; in 2D it is normalize((B.y - A.y, A.x - B.x)). See the sketch below.
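For what it is worth, here is a direct, untested transcription of the formula above into C++ (the tiny Vec2 struct, the helper functions and offsetCorner are my own names; substitute glm::vec2 or whatever vector type you already use). It includes the near-parallel fallback just mentioned. The same computation can also run on the GPU if you pass the two neighbour positions as extra vertex attributes.

#include <cmath>

struct Vec2 { float x, y; };
static Vec2  add(Vec2 a, Vec2 b)  { return {a.x + b.x, a.y + b.y}; }
static Vec2  sub(Vec2 a, Vec2 b)  { return {a.x - b.x, a.y - b.y}; }
static Vec2  mul(float s, Vec2 a) { return {s * a.x, s * a.y}; }
static float dot(Vec2 a, Vec2 b)  { return a.x * b.x + a.y * b.y; }
static Vec2  normalize(Vec2 a)    { float l = std::sqrt(dot(a, a)); return {a.x / l, a.y / l}; }

// A is the polygon corner, B and C its two neighbours, s the outline width.
Vec2 offsetCorner(Vec2 A, Vec2 B, Vec2 C, float s)
{
    const Vec2  b  = normalize(sub(B, A));
    const Vec2  c  = normalize(sub(C, A));
    const float bc = dot(b, c);

    if (std::fabs(bc) > 0.999f)               // edges almost parallel: d would blow up,
    {                                         // so shift along the B-A edge normal instead
        const Vec2 n = normalize(Vec2{B.y - A.y, A.x - B.x});
        return add(A, mul(s, n));             // flip the sign of s for the other side
    }

    const float alfa = std::acos(bc);
    const float d    = s / std::sin(alfa);
    const float sgn  = (bc >= 0.0f) ? 1.0f : -1.0f;
    return sub(A, mul(sgn * d, add(b, c)));   // M = A - sign(b.c) * (b+c) * d
}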
I have two images and know the position of a point in the first image. Now I want to get the corresponding position in the second image.
This is my idea:
I can use algorithms such as SIFT to match keypoints (as seen in the image)
I know the camera matrix using calibration with e.g. chessboards
Using the 8 point algorithm I calculate the fundamental matrix F
Can I now use F to calculate the corresponding point?
Using the fundamental matrix F alone is not enough. If you have a point in one image, you can't find its position in the second image, because it depends not only on the configuration of the cameras, but also on the distance from the camera to that point.
This can also be seen from the equation x2^T * F * x1 = 0. If you know x1 and F, then for x2 you get the equation x2^T * b = 0, where b = F * x1. This is the equation of a point x2 lying on the line b (the points x1, x2 and the line b are in homogeneous coordinates). Although you can't find the exact position of the point in the second image, you know that it must lie somewhere on that line.
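For illustration, here is a minimal OpenCV sketch of exactly this (the function and variable names are mine): given F and a pixel x1 in the first image, the epipolar line in the second image is just b = F * x1. OpenCV also provides cv::computeCorrespondEpilines to do the same for whole sets of points.

#include <opencv2/core.hpp>

// Returns the epipolar line a*x + b*y + c = 0 in image 2 on which the
// correspondence of x1 must lie (all quantities in pixel coordinates).
cv::Vec3d epipolarLineInImage2(const cv::Matx33d& F, const cv::Point2d& x1)
{
    const cv::Vec3d x1h(x1.x, x1.y, 1.0);   // x1 in homogeneous coordinates
    return F * x1h;                          // b = F * x1
}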
Hartley and Zisserman have a great explanation of these concepts in their book Multiple View Geometry in Computer Vision. Be sure to check it out for more details.
I'm matching a template to which I know my distance and my normal vector.
i.e. if my homography is the identity matrix then my camera is at Distance = 1.0m & my normal is at 0.
Now I have a second image in which I successfully aligned my template, giving a homography:
H = [0.82072, 0.05685, 66.75024;
     0.02006, 0.86092, 39.34907;
     0.00003, 0.00017,  1.00000]
I also have my camera matrix.
The OpenCV function
cv::decomposeHomographyMat()
gives me 4 solutions for the rotation (3x3 mat), translation (3x1 mat) & normal vector (3x1 mat).
cv::warpPerspective()
is able to map the current view of the camera onto my template nearly perfectly.
So it should be possible to get the actual scaling (template to alignment) & the normal vector.
But I can't figure out how to actually choose the correct solution of cv::decomposeHomographyMat(). Am I missing something?
EDIT: Posted "question" without the question...
I figured it out.
Step one:
I create a set of points in the ROI that I can map to my template (points in the area defined by the corners of the ROI).
Step two:
Warp the points in the ROI (from step one; 8 points are enough in all my tests & use cases) with all the solutions of cv::decomposeHomographyMat().
Exclude all solutions that give a point3D(x, y, z) with a z value < 0 (i.e. the point is behind the camera); see the sketch after these steps.
Step three:
At this point you should have one to two solutions left.
All rotation matrices should be the same; only the normal & translation matrices should differ.
The translation matrices should satisfy:
Translation_Solution1 = -1* Translation_Solution2
Then compare your ROI area to your template area.
If your ROI area is smaller than your template, it means that your template has been "scaled down", i.e. your camera did a translation along z in the negative direction.
Otherwise your camera did a translation along the positive z direction.
Choose the appropriate solution.
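For reference, the z-test from step two could be sketched roughly like this (untested; roiPoints, the function name and the n.m > 0 test on back-projected rays are my own choices, and K is assumed to be a 3x3 CV_64F camera matrix):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Keep only the decompositions for which every reference point of the ROI
// stays in front of the camera.
std::vector<int> plausibleDecompositions(const cv::Mat& H, const cv::Mat& K,
                                         const std::vector<cv::Point2f>& roiPoints)
{
    std::vector<cv::Mat> Rs, Ts, Ns;
    cv::decomposeHomographyMat(H, K, Rs, Ts, Ns);

    const cv::Mat Kinv = K.inv();
    std::vector<int> keep;
    for (size_t i = 0; i < Rs.size(); ++i)
    {
        bool inFront = true;
        for (const cv::Point2f& p : roiPoints)
        {
            // Back-project the pixel to a viewing ray m = K^-1 * (u, v, 1)^T.
            const cv::Mat m = Kinv * (cv::Mat_<double>(3, 1) << p.x, p.y, 1.0);
            // The plane normal of a valid solution must face the camera.
            if (Ns[i].dot(m) <= 0.0) { inFront = false; break; }
        }
        if (inFront) keep.push_back(static_cast<int>(i));
    }
    return keep;
}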
My error was to think that warpPerspective() was actually solving the homography decomposition, but it's not.
The decomposition is described in the paper: Faugeras O. D., Lustman F., "Motion and Structure from Motion in a Piecewise Planar Environment", 1988, page 9: https://www.researchgate.net/publication/243764888_Motion_and_Structure_from_Motion_in_a_Piecewise_Planar_Environment
I would like to find the intersection of 2 lines in 3D. How can I check whether they really intersect without calculating the real 3D coordinate first? And then, how can I calculate the 3D coordinates of that particular point?
Secondly, I understand it is not possible to find an intersection point if the two lines do not intersect. Thus, I would like to find a way to calculate the 3D point which has the minimum distance from both lines. I came up with two minimum-distance requirements:
1. take the shortest segment connecting the two lines and take its midpoint as my result
2. find the shortest perpendicular distance away from both lines
Could anyone give me some tips on this?
Let the first line be P+u.U and the second Q+v.V, where uppercase letters are 3D vectors.
You want to minimize the (squared) distance, i.e.
D²(u, v) = ((P+u.U) - (Q+v.V))²
Then, differentiating with respect to u and v, you get the system of equations
D²'u(u, v) = 2U.D(u, v) = 0
D²'v(u, v) = 2V.D(u, v) = 0
or
U.P + U².u - U.Q - U.V.v = 0
V.P + U.V.u - V.Q - V².v = 0
Solve this linear system of 2 equations in the 2 unknowns u and v, and from these, the closest points.
In the special case that the lines are parallel, the system determinant U²V²-(U.V)² is zero, as one expects (actually it is the square of the cross product (UxV)²). You can set u=0 arbitrarily and solve for v.
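A minimal, untested sketch of this in C++ (the Vec3 struct and all names are placeholders; use your own vector type):

#include <array>
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lines P + u.U and Q + v.V; returns the parameters (u, v) of the two closest
// points. Falls back to u = 0 when the lines are (nearly) parallel.
std::array<double, 2> closestParams(Vec3 P, Vec3 U, Vec3 Q, Vec3 V)
{
    const double a = dot(U, U), b = dot(U, V), c = dot(V, V);
    const Vec3   PQ = sub(Q, P);
    const double e = dot(U, PQ), f = dot(V, PQ);
    const double det = a * c - b * b;          // = (UxV)^2, zero for parallel lines

    if (det <= 1e-12 * a * c)                  // parallel (assumes non-zero U, V):
        return {0.0, -f / c};                  // fix u = 0 and solve V.D = 0 for v
    return {(c * e - b * f) / det, (b * e - a * f) / det};
}

The midpoint of the two closest points, 0.5 * ((P + u.U) + (Q + v.V)), then gives requirement 1 from the question.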
Assume that I took two panoramic images with a vertical offset of H, and each image is presented in an equirectangular projection with size Xm and Ym. To do this, I placed my panoramic camera at a position, say A, and took an image, then moved the camera H metres up and took another image.
I know that a point in image 1 with coordinates X1, Y1 is the same point as the one in image 2 with coordinates X2, Y2 (assuming that X1 = X2, as we have only a vertical offset).
My question is: how can I calculate the range of the selected point (the point whose coordinates X1, Y1 I know in image 1 and whose position in image 2 is X2, Y2) from the point A (where the camera was when image 1 was taken)?
Yes, you can do it - hold on!!!
Key thing y = focal length of your lens - now I can do it!!!
So, I think your question can be re-stated more simply by saying that if you move your camera (on the right in the diagram) up H metres, a point moves down p pixels in the image taken from the new location.
Like this, if you imagine looking from the side, across at you taking the picture.
If you know the micron spacing of the camera's CCD from its specification, you can convert p from pixels to metres to match the units of H.
Your range from the camera to the plane of the scene is given by x + y (both in red at the bottom), and
x=H/tan(alpha)
y=p/tan(alpha)
so your range is
R = x + y = H/tan(alpha) + p/tan(alpha)
and
alpha = tan inverse(p/y)
where y is the focal length of your lens. As y is likely to be something like 50mm, it is negligible, so, to a pretty reasonable approximation, your range is
H/tan(alpha)
and
alpha = tan inverse(p in metres/focal length)
Or, by similar triangles
Range = (H x focal length of lens) / ((Y2-Y1) x CCD photosite spacing)
being very careful to put everything in metres.
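If it helps, the whole thing fits in a tiny function (untested sketch; the names are mine and every input must already be converted to metres):

// Range from camera position A to the scene point, by similar triangles.
double rangeMetres(double H,              // vertical camera shift in metres
                   double focalLength,    // lens focal length in metres, e.g. 0.050
                   double y1, double y2,  // the point's row in image 1 and image 2 (pixels)
                   double photositePitch) // CCD photosite spacing in metres per pixel
{
    const double p = (y2 - y1) * photositePitch;  // image shift p in metres
    return H * focalLength / p;                   // ~ H / tan(alpha)
}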
Here is a shot in the dark. Given my understanding of the problem at hand, you want to do something similar to computer stereo vision; I point you to http://en.wikipedia.org/wiki/Computer_stereo_vision to start. I am not sure if this is still possible to do in the manner you are suggesting (it sounds like you may need some more physical constraints), but I do remember being able to correlate two 2D points in images after undergoing a strict translation. Think:
lambda * [x, y, 1]^T = W * [r1, tx; r2, ty; r3, tz] * [X, Y, Z, 1]^T
where lambda is a scale factor, W is a 3x3 matrix covering the intrinsic parameters of your camera, r1, r2, and r3 are the row vectors that make up the 3x3 rotation matrix (in your case you can assume the identity matrix since you have only applied a translation), and tx, ty, tz are your translation components.
Since your two 2D points are images of the same 3D point [X, Y, Z], that 3D point is shared by both. I cannot say whether you can recover the actual X, Y, and Z values, particularly for your depth calculation, but this is where I would start.
I'm trying to implement a 'raypicker' for selecting objects within my project. I do not fully understand how to implement this, but I understand conceptually how it should work. I've been trying to learn how to do this, but most tutorials I find go way over my head. My current code is based on one of the recent tutorials I found, here.
After several hours of revisions, I believe the problem I'm having with my raypicker is actually the creation of the ray in the first place. If I substitute/hardcode my near/far planes with a coordinate that would indisputably be located within the region of a triangle, the picker identifies it correctly.
My problem is this: my ray creation doesn't seem to fully take my current "camera" or perspective into account, so camera rotation won't affect where my mouse is.
I believe that to remedy this I need something like gluUnProject(), but whenever I used it, the x, y, z coordinates returned were incredibly small.
My current ray creation is a mess. I tried to use methods that others proposed initially, but it seemed like whatever method I tried it never worked with my picker/intersection function.
Here's the code for my ray creation:
void oglWidget::mousePressEvent(QMouseEvent *event)
{
    QVector3D nearP = QVector3D(event->x()+camX, -event->y()-camY, -1.0);
    QVector3D farP  = QVector3D(event->x()+camX, -event->y()-camY,  1.0);
    int i = -1;
    for (int x = 0; x < tileCount; x++)
    {
        bool rayInter = intersect(nearP, farP, tiles[x]->vertices);
        if (rayInter == true)
            i = x;
    }
    if (i != -1)
    {
        tiles[i]->showSelection();
    }
    else
    {
        for (int x = 0; x < tileCount; x++)
            tiles[x]->hideSelection();
    }
    //tiles[0]->showSelection();
}
To repeat: I used to load up the viewport, model & projection matrices and unproject the mouse coordinates, but within a 1920x1080 window all I got were values in the range of -2 to 2 for x, y & z for each mouse event. That is why I'm trying this method, but this method doesn't work with camera rotation and zoom.
I don't want to do pixel colour picking, because who knows, I may need this technique later on, and I'd rather not give up after the amount of effort I've put in so far.
As you seem to have problems constructing your rays, here's how I would do it. This has not been tested directly. You could do it like this, making sure that all vectors are in the same space. If you use multiple model matrices (or stacks thereof) the calculation needs to be repeated separately with each of them.
use pos = gluUnproject(winx, winy, near, ...) to get the position of the mouse coordinate on the near plane in model space; near being the value given to glFrustum() or gluPerspective()
origin of the ray is the camera position in model space: rayorig = inv(modelmat) * camera_in_worldspace
the direction of the ray is the normalized vector from the ray origin to the position from step 1: raydir = normalize(pos - rayorig) (see the sketch below)
On the website linked they use two points for the ray and they don't seem to normalize the ray direction vector, so this is optional.
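Since you are already using Qt, a rough, untested sketch of those three steps with Qt types might look like this; QVector3D::unproject() (available since Qt 5.5) plays the role of gluUnProject here, the view/projection matrices and viewport are assumed to be the same ones you render with, and there is no separate model matrix (world space == model space):

#include <QMatrix4x4>
#include <QRect>
#include <QVector3D>

struct Ray { QVector3D origin, dir; };

Ray makePickRay(float mouseX, float mouseY,
                const QMatrix4x4& view, const QMatrix4x4& projection,
                const QRect& viewport)
{
    // 1. Mouse position on the near plane (window y grows downwards, GL y upwards;
    //    window z = 0 corresponds to the near plane).
    const QVector3D win(mouseX, viewport.height() - mouseY, 0.0f);
    const QVector3D posOnNear = win.unproject(view, projection, viewport);

    // 2. Ray origin = camera position = inverse(view) applied to the origin.
    const QVector3D camPos = view.inverted() * QVector3D(0.0f, 0.0f, 0.0f);

    // 3. Ray direction from the origin towards the unprojected mouse position.
    return { camPos, (posOnNear - camPos).normalized() };
}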
Ok, so this is the beginning of my trail of breadcrumbs.
I was somehow having issues with the QT datatypes for the matrices, and the logic pertaining to matrix transformations.
This particular problem in this question resulted from not actually performing any transformations whatsoever.
Steps to solving this problem were:
Converting the mouse coordinates into NDC space (within the range of -1 to 1): x_ndc = x / screen width * 2 - 1, y_ndc = (height - y) / height * 2 - 1
grabbing the 4x4 matrix for my view matrix (it can be the one used when rendering, or recalculated)
In a new matrix, store the inverse view matrix multiplied by the inverse projection matrix.
In order to build the ray, I had to do the following:
Take the previously calculated matrix product and multiply it by a 4-component vector holding the previously calculated x and y coordinates, then -1, then 1 (i.e. [x_ndc, y_ndc, -1, 1]).
Then divide this vector by its last (w) component.
Create another 4-component vector which is just like the last, but with 1 instead of -1 (i.e. [x_ndc, y_ndc, 1, 1]).
Once again divide it by its last (w) component.
Now the coordinates for the ray have been created at the near and far planes, so it can intersect with anything along it in the scene (a compact version of these steps is sketched below).
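For completeness, here is a compact, untested version of those steps using Qt types (the function and variable names are placeholders only):

#include <QMatrix4x4>
#include <QVector3D>
#include <QVector4D>

// Builds the ray's end points on the near and far planes in world space.
void buildPickRay(float mouseX, float mouseY, float screenW, float screenH,
                  const QMatrix4x4& view, const QMatrix4x4& projection,
                  QVector3D& nearPoint, QVector3D& farPoint)
{
    // Window coordinates -> NDC in [-1, 1], with the y axis flipped.
    const float ndcX = 2.0f * mouseX / screenW - 1.0f;
    const float ndcY = (screenH - mouseY) / screenH * 2.0f - 1.0f;

    // inverse(view) * inverse(projection) == inverse(projection * view)
    const QMatrix4x4 invVP = (projection * view).inverted();

    QVector4D np = invVP * QVector4D(ndcX, ndcY, -1.0f, 1.0f);  // near plane
    QVector4D fp = invVP * QVector4D(ndcX, ndcY,  1.0f, 1.0f);  // far plane
    np /= np.w();                                                // perspective divide
    fp /= fp.w();

    nearPoint = np.toVector3D();
    farPoint  = fp.toVector3D();
}

The two resulting points can then be fed to the same intersect(nearP, farP, ...) test from the original code.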
I opened a series of questions (because of great uncertainty with my series of problems), so parts of my problem overlap in them too.
In here, I learned that I needed to take the screen height into consideration for switching the origin of the y axis for a Cartesian system, since windows has the y axis start at the top left. Additionally, retrieval of matrices was redundant, but also wrong since they were never declared "properly".
In here, I learned that unProject wasn't working because I was trying to pull the model and view matrices using OpenGL functions, but I never actually set them in the first place, because I built the matrices by hand. I solved that problem in two parts: I did the math manually, and I made all the matrices the same data type (they were mixed data types earlier, leading to issues as well).
And lastly, in here, I learned that my order of operations was slightly off (need to multiply matrices by a vector, not the reverse), that my near plane needs to be -1, not 0, and that the last value of the vector which would be multiplied with the matrices (value "w") needed to be 1.
Credits goes to those individuals who helped me solve these problems:
srobins of facepunch, in this thread
derhass from here, in this question, and this discussion
Take a look at
http://www.realtimerendering.com/intersections.html
Lots of help in determining intersections between various kinds of geometry.
http://geomalgorithms.com/code.html also has some C++ functions; one of them serves your purpose.