Union of circles in C++

I can't figure out how to merge circles in C++. I managed to union two polygons using Boost Geometry; however, the problem is that I don't know how to convert circles to polygons (if that is possible at all in Boost Geometry).
No visual representation of the geometry is necessary; in the end I would like to export the result in WKT format.
Is Boost Geometry the right approach or are there better libraries for that?

You can approximate a circle with center point C and radius R using a regular polygon with N vertices (choose N depending on the precision you need). The vertex coordinates are:
V[i].X = C.X + R * Cos(i * 2 * Pi / N)
V[i].Y = C.Y + R * Sin(i * 2 * Pi / N)
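A minimal sketch of this approach with Boost.Geometry (the helper name makeCircle and the 64-vertex count are illustrative choices, not part of the library):

#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/geometries/multi_polygon.hpp>
#include <cmath>
#include <iostream>

namespace bg = boost::geometry;
using Point = bg::model::d2::point_xy<double>;
using Polygon = bg::model::polygon<Point>;
using MultiPolygon = bg::model::multi_polygon<Polygon>;

// Approximate a circle (center cx, cy, radius r) with a regular n-gon.
Polygon makeCircle(double cx, double cy, double r, int n)
{
    const double pi = std::acos(-1.0);
    Polygon poly;
    for (int i = 0; i <= n; ++i) {   // i == n repeats the first vertex to close the ring
        double a = 2.0 * pi * i / n;
        bg::append(poly, Point(cx + r * std::cos(a), cy + r * std::sin(a)));
    }
    bg::correct(poly);               // fix ring orientation/closure for Boost.Geometry's conventions
    return poly;
}

int main()
{
    MultiPolygon result;
    bg::union_(makeCircle(0, 0, 1, 64), makeCircle(1, 0, 1, 64), result);
    std::cout << bg::wkt(result) << '\n';   // WKT output, as the question asks
}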

Related

Understanding GLSL function to draw polygon using distance field

Could someone help me understand the following function that draws a polygon of N sides (e.g. 3 being a triangle and 4 being a square):
float theta = atan(pos.x, pos.y);
float rotate_angle = 2 * PI / N;
float d = cos(floor(0.5 + theta / rotate_angle) * rotate_angle - theta) * length(pos);
What I understand from this illustration is that:
we're interested in finding the angle indicated by the red curve (call it alpha)
cos(alpha) * length will project the green line onto the blue line
by comparing the size of said projection with that of the blue line (radius of circle), we know whether a test point is inside or outside of the polygon we're trying to draw
Question
Why does alpha equal floor(0.5 + theta / rotate_angle) * rotate_angle - theta? Where does 0.5 come from? What's the significance of theta / rotate_angle?
What I have read:
[1] https://codepen.io/nik-lever/full/ZPKmmx
[2] https://thndl.com/square-shaped-shaders.html
[3] https://thebookofshaders.com/07
Simply, floor(0.5 + x) = round(x). However, because round(x) may not be available in some environments (e.g. in GLES2), floor(0.5 + x) is used instead.
Then, since n = round(theta / rotate_angle) gives the index of the edge sector that contains pos (e.g. n = -1, 0, or 1 for a triangle), n * rotate_angle is the angle of the edge center point (the blue line) nearest to theta.
Therefore, alpha = n * rotate_angle - theta is the relative angle from pos to that nearest edge center, with -rotate_angle/2 < alpha <= rotate_angle/2.
Comparing the length of pos's projection onto the edge-center direction with the polygon's inradius tells you inside from outside. The round() is what selects the correct one of the N discrete edge directions (their orthogonal vectors) seamlessly.
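A C++ transcription of the shader's computation may make the steps concrete (a sketch; the function name polygonDistance and the inradius parameter are illustrative):

#include <cmath>

// Signed result: negative inside the regular N-gon of inradius 'radius',
// positive outside. Mirrors the shader: theta is measured from the +y axis.
float polygonDistance(float x, float y, int N, float radius)
{
    const float PI = 3.14159265358979f;
    float theta = std::atan2(x, y);                       // like atan(pos.x, pos.y) in GLSL
    float rotate_angle = 2.0f * PI / N;
    float n = std::floor(0.5f + theta / rotate_angle);    // round(): index of the nearest edge sector
    float alpha = n * rotate_angle - theta;               // angle to that edge's center direction
    float d = std::cos(alpha) * std::sqrt(x * x + y * y); // projection of pos onto the edge normal
    return d - radius;
}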

Understanding legacy code: Algorithm to remove radial lens distortion

The method below to remove lens distortion from a camera was written more than ten years ago and I am trying to understand how the approximation works.
void Distortion::removeLensDistortion(const V2 &dist, V2 &ideal) const
{
    const int ITERATIONS = 10;
    // Normalize: shift by the principal point and divide by the focal length.
    V2 xn;
    xn.x = (dist.x - lensParam.centerX) / lensParam.fX;
    xn.y = (dist.y - lensParam.centerY) / lensParam.fY;
    V2 x = xn;
    for (int i = 0; i < ITERATIONS; i++) {
        double r2 = Utilities::square(x.x) + Utilities::square(x.y);
        double r4 = Utilities::square(r2);
        double rad = 1 + lensParam.kc1 * r2 + lensParam.kc2 * r4;
        x.x /= rad;
        x.y /= rad;
    }
    // Denormalize: back to pixel coordinates.
    ideal.x = x.x * lensParam.fX + lensParam.centerX;
    ideal.y = x.y * lensParam.fY + lensParam.centerY;
}
As a reminder:
lensParam.centerX and lensParam.centerY are the principal point
lensParam.fX and lensParam.fY are the focal lengths in pixels
lensParam.kc1 and lensParam.kc2 are the first two radial distortion coefficients (k_1 and k_2 in the formula below)
The formula to add lens distortion, given the first two radial distortion parameters, is as follows:
x_distorted = x_undistorted * (1 + k_1 * r^2 + k_2 * r^4)
y_distorted = y_undistorted * (1 + k_1 * r^2 + k_2 * r^4)
where r^2 = (x_undistorted)^2 + (y_undistorted)^2 and r^4 = (r^2)^2
In the code above, the term (1 + k_1 * r^2 + k_2 * r^4) is computed and saved in the variable rad, and the distorted point is divided by rad in each of the ten iterations.
All of the cameras we use have pincushion distortion (so k_1 < 0).
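For reference, the forward model above transcribes directly to code; a minimal sketch (the helper name addLensDistortion is illustrative, and the coordinates are assumed already normalized):

// Forward model: add radial distortion to normalized, undistorted
// coordinates (xu, yu). k1 and k2 correspond to lensParam.kc1 and kc2.
void addLensDistortion(double xu, double yu, double k1, double k2,
                       double &xd, double &yd)
{
    double r2 = xu * xu + yu * yu;
    double r4 = r2 * r2;
    double rad = 1 + k1 * r2 + k2 * r4;
    xd = xu * rad;
    yd = yu * rad;
}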
The question is: how does this algorithm approximate the undistorted image points?
Do you know if there is any paper in which this algorithm is proposed?
The OpenCV undistortion may be a bit similar, so that link may be useful, but it is not quite the same.

Drawing tangents for a bezier curve in OpenGL using c++

So I have a program that has 4 control points
std::vector<ControlPoint> cam_pos_points;
cam_pos_points.push_back(ControlPoint(-0.79, 0.09, 0.2, 0));
cam_pos_points.push_back(ControlPoint(-0.88, -0.71, 0.2, 1));
cam_pos_points.push_back(ControlPoint(1.3, -0.8, 0.2, 2));
cam_pos_points.push_back(ControlPoint(0.71, 0.76, 0.2, 3));
Basically, I've set up a way to move my control points, and when a control point is moved, the new position is saved and the curve is re-calculated based on this new position. The way I'm drawing the curve is with these equations:
for (double t = 0; t < 1; t += 0.1){
float Px =(pow((1 - t), 3) * cam_pos_points[0].positionx()) +
((pow((1 - t), 2) * t) * cam_pos_points[1].positionx()) +
(((1 - t) * pow(t, 2)) * cam_pos_points[2].positionx()) +
(pow(t, 3) * cam_pos_points[3].positionx());
float Py =(pow((1 - t), 3) * cam_pos_points[0].positiony()) +
((pow((1 - t), 2) * t) * cam_pos_points[1].positiony()) +
(((1 - t) * pow(t, 2)) * cam_pos_points[2].positiony()) +
(pow(t, 3) * cam_pos_points[3].positiony());
}
Then, using these two float values, I put them into vec3s and make a series of points. I draw the curve by adding those points to a multiline class and drawing a straight line between each consecutive pair. The end result is a Bezier curve.
The problem I'm having right now is drawing the tangents for the Bezier curve. My idea was that, for the first control point, the tangent lies on the line P1-P2. So after drawing the tangent, when I move the tangent point, what equations am I supposed to use to re-draw the shape of the curve? I've already found the derivatives of the Bezier curve equation, but I don't know what to do with them:
float dx = (-3*(pow((1 - t), 2)) * cam_pos_points[0].positionx()) +
(((-2*(1 - t)) * t) * cam_pos_points[1].positionx()) +
(((1 - t) * (2*t)) * cam_pos_points[2].positionx()) +
((3*pow(t, 2)) * cam_pos_points[3].positionx());
float dy = (-3*(pow((1 - t), 2)) * cam_pos_points[0].positiony()) +
(((-2*(1 - t)) * t) * cam_pos_points[1].positiony()) +
(((1 - t) * (2*t)) * cam_pos_points[2].positiony()) +
((3*pow(t, 2)) * cam_pos_points[3].positiony());
Your starting tangent will be from the first control point to the second, and the ending tangent will be from the fourth control point to the third. I'd suggest you start fresh each time you redraw it; i.e., whenever a control point moves, treat it as an all-new equation.
In the case that one (or both) of your tangents is zero-length, it isn't actually a tangent per se, but the curve will still head toward the opposite endpoint.
That's why you can use a Bezier section with no tangents to represent a straight line.
The equations are wrong.
The correct equation is
B(t) = (1-t)^3 * p0 + 3*(1-t)^2*t * p1 + 3*(1-t)*t^2 * p2 + t^3 * p3.
Expand and differentiate to get the tangents.
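Concretely, the corrected point and tangent evaluations look like this (a sketch; Vec2 is a stand-in for whatever 2D vector type the program uses):

struct Vec2 { float x, y; };   // minimal stand-in for the question's point type

// B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
Vec2 bezierPoint(const Vec2 p[4], float t)
{
    float u = 1.0f - t;
    return { u*u*u * p[0].x + 3*u*u*t * p[1].x + 3*u*t*t * p[2].x + t*t*t * p[3].x,
             u*u*u * p[0].y + 3*u*u*t * p[1].y + 3*u*t*t * p[2].y + t*t*t * p[3].y };
}

// B'(t) = 3(1-t)^2 (P1-P0) + 6(1-t) t (P2-P1) + 3 t^2 (P3-P2)
Vec2 bezierTangent(const Vec2 p[4], float t)
{
    float u = 1.0f - t;
    return { 3*u*u * (p[1].x - p[0].x) + 6*u*t * (p[2].x - p[1].x) + 3*t*t * (p[3].x - p[2].x),
             3*u*u * (p[1].y - p[0].y) + 6*u*t * (p[2].y - p[1].y) + 3*t*t * (p[3].y - p[2].y) };
}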
It looks like you want to draw tangent points for the tangent vectors at the start and end of the Bezier curve, so that users can adjust the curve's shape by moving the tangent points. If this is the case, be aware that moving a tangent point will also move the 2nd or the 3rd control point. So, the correct procedure is to re-calculate the 2nd or the 3rd control point from the moved tangent point, then redraw the curve.
For a cubic Bezier curve, the derivative C'(t) at t = 0 and t = 1 is
C'(0) = 3*(P1 - P0), C'(1) = 3*(P3 - P2)
Let's assume your tangent point for the starting tangent is T0 and is located at
T0= P0+s0*C'(0)=P0+3*s0*(P1-P0)
where s0 is a constant scale factor that keeps the tangent point from being located too far away from the control points. When T0 is changed to T0*, you can update the control point P1 as
P1* = (T0*-P0)/(3*s0)+P0.
Do a similar update for control point P2 when the tangent point of the end tangent is moved. Then you can redraw your curve.
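A direct transcription of that update, reusing the Vec2 stand-in from the sketch above:

// Recompute P1 after the start-tangent handle moves from T0 to T0star,
// using T0 = P0 + 3*s0*(P1 - P0), so P1* = P0 + (T0* - P0)/(3*s0).
Vec2 updateP1(const Vec2 &P0, const Vec2 &T0star, float s0)
{
    return { P0.x + (T0star.x - P0.x) / (3.0f * s0),
             P0.y + (T0star.y - P0.y) / (3.0f * s0) };
}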

Best way to interpolate triangle surface using 3 positions and normals for ray tracing

I am working on conventional Whitted ray tracing and trying to interpolate the surface of the hit triangle as if it were convex instead of flat.
The idea is to treat triangle as a parametric surface s(u,v) once the barycentric coordinates (u,v) of hit point p are known.
This surface equation should be calculated using triangle's positions p0, p1, p2 and normals n0, n1, n2.
The hit point itself is calculated as
p = (1-u-v)*p0 + u*p1 + v*p2;
I have found three different solutions so far.
Solution 1. Projection
The first solution I came to: project the hit point onto the planes that pass through each of the vertices p0, p1, p2 perpendicular to the corresponding normals, and then interpolate the result.
vec3 r0 = p - dot( p - p0, n0 ) * n0;  // project p onto the plane through p0 with normal n0
vec3 r1 = p - dot( p - p1, n1 ) * n1;
vec3 r2 = p - dot( p - p2, n2 ) * n2;
p = (1-u-v)*r0 + u*r1 + v*r2;
Solution 2. Curvature
Suggested in Takashi Nagata's paper "Simple local interpolation of surfaces using normal vectors" and discussed in the question "Local interpolation of surfaces using normal vectors", but it seems to be overcomplicated and not very fast for real-time ray tracing (unless you precompute all the necessary coefficients). The triangle here is treated as a surface of the second order.
Solution 3. Bezier curves
This solution is inspired by Brett Hale's answer. It uses interpolation of a higher order, cubic Bezier curves in my case.
E.g., for an edge p0p1 the Bezier curve should look like
B(t) = (1-t)^3*p0 + 3(1-t)^2*t*(p0+n0*adj) + 3*(1-t)*t^2*(p1+n1*adj) + t^3*p1,
where adj is some adjustment parameter.
Computing Bezier curves for edges p0p1 and p0p2 and interpolating between them gives the final code:
float u1 = 1 - u;
float v1 = 1 - v;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*(u1*n0 + u*n1)*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*(v1*n0 + v*n2)*adj;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
p = (1-w)*b1 + w*b2;
Alternatively, one can interpolate between three edges:
float u1 = 1.0 - u;
float v1 = 1.0 - v;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
float w1 = 1.0 - w;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*( u1*n0 + u*n1 )*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*( v1*n0 + v*n2 )*adj;
vec3 b0 = w1*w1*(3-2*w1)*p1 + w*w*(3-2*w)*p2 + 3*w*w1*( w1*n1 + w*n2 )*adj;
p = (1-u-v)*b0 + u*b1 + v*b2;
Maybe I messed something up in the code above, but this option does not seem very robust inside a shader.
P.S. The intention is to get more correct origins for shadow rays when they are cast from low-poly models. Here you can find the resulting images from the test scene. The big white numbers indicate the solution number (zero for the original image).
P.P.S. I still wonder if there is another efficient solution that can give a better result.
Keeping triangles 'flat' has many benefits and simplifies several stages required during rendering. Approximating a higher-order surface, on the other hand, introduces quite significant tracing overhead and requires adjustments to your BVH structure.
When the geometry is treated as a collection of facets, the shading information can still be interpolated to achieve smooth shading while remaining very efficient to process.
There are adaptive tessellation techniques which approximate the limit surface (OpenSubdiv is a great example). Pixar's Photorealistic RenderMan has a long history of using subdivision surfaces. When they switched their rendering algorithm to path tracing, they also introduced a pretessellation step for their subdivision surfaces. This stage is executed right before rendering begins and builds an adaptive triangulated approximation of the limit surface. This seems to be more efficient to trace and tends to use fewer resources, especially for the high-quality assets used in this industry.
So, to answer your question. I think the most efficient way to achieve what you're after is to use an adaptive subdivision scheme which spits out triangles instead of tracing against a higher order surface.
Dan Sunday describes an algorithm that calculates the barycentric coordinates on the triangle once the ray-plane intersection has been calculated. The point lies inside the triangle if:
(s >= 0) && (t >= 0) && (s + t <= 1)
You can then use, say, n(s, t) = nu * s + nv * t + nw * (1 - s - t) to interpolate a normal, as well as the point of intersection, though n(s, t) will not, in general, be normalized, even if (nu, nv, nw) are. You might find higher order interpolation necessary. PN-triangles were a similar hack for visual appeal rather than mathematical precision. For example, true rational quadratic Bezier triangles can describe conic sections.
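For example, a minimal C++ sketch of that interpolation, including the renormalization step (Vec3 and the function name are illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };

// Blend the vertex normals at barycentric coordinates (s, t) and
// renormalize, since the linear combination is generally not unit length.
Vec3 interpolateNormal(const Vec3 &nu, const Vec3 &nv, const Vec3 &nw,
                       float s, float t)
{
    float w = 1.0f - s - t;
    Vec3 n = { nu.x * s + nv.x * t + nw.x * w,
               nu.y * s + nv.y * t + nw.y * w,
               nu.z * s + nv.z * t + nw.z * w };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}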

How to get camera up vector from roll, pitch, and yaw?

I need to get an up vector for a camera (to get the right look) from a roll, pitch, and yaw angles (in degrees). I've been trying different things for a couple hours and have had no luck :(. Any help here would be appreciated!
Roll, pitch and yaw define a rotation about 3 axes. From these angles you can construct a 3x3 transformation matrix which expresses this rotation (see here how).
After you have this matrix, you take your regular up vector, say (0,1,0) if 'up' is the Y axis, and multiply it with the matrix. What you'll get is the transformed up vector.
Edit-
Applying the transformation to (0,1,0) is the same thing as taking the middle column of the matrix (or the middle row, if your convention multiplies row vectors). The 3 rows of the matrix make up an orthogonal basis of the rotated system. Mind you, 3D graphics APIs use 4x4 matrices, so to make a 4x4 matrix out of the 3x3 rotation matrix you need to add a '1' at M[3][3] (the corner) and zeros at the rest, like so:
r r r 0
r r r 0
r r r 0
0 0 0 1
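As a concrete illustration, a minimal C++ sketch (assuming angles in degrees, a Y-up system, and the rotation order R = Ry(yaw) * Rx(pitch) * Rz(roll); conventions differ between engines, so adjust to yours):

#include <cmath>

struct Vec3 { float x, y, z; };

// Up vector = R * (0,1,0): only the middle column of the combined
// rotation matrix is needed, expanded here by hand.
Vec3 cameraUp(float yawDeg, float pitchDeg, float rollDeg)
{
    const float d2r = 3.14159265358979f / 180.0f;
    float cy = std::cos(yawDeg * d2r),   sy = std::sin(yawDeg * d2r);
    float cp = std::cos(pitchDeg * d2r), sp = std::sin(pitchDeg * d2r);
    float cr = std::cos(rollDeg * d2r),  sr = std::sin(rollDeg * d2r);
    return { sy * sp * cr - cy * sr,     // x
             cp * cr,                    // y
             cy * sp * cr + sy * sr };   // z
}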
This may not directly answer your question, although it may still help. I have a free open-source project for XNA that creates a debug terminal that overlays your game while it is running. You can use this for looking up values, invoking methods, or whatever. So if you have a transformation matrix and you wanted to extract various parts of it while the game is running, you can do that. The project can be found here:
http://www.protohacks.net/xna_debug_terminal
I don't have much expertise in the kind of math you are using, but hopefully Shoosh's post helps with that. Maybe the debug terminal can help you when trying out his idea or with any other future problems you encounter.
12 years later...
In case anyone is still interested in the answer to this question, here is the solution (even though it's in Java, it should be pretty easy to translate into other languages):
private Vector3f getRayFromCamera() {
    double yaw = Math.toRadians(getYaw());
    double pitch = Math.toRadians(getPitch()) - Math.PI / 2.0;
    float ry = (float) -Math.cos(pitch);          // vertical component
    float horizontal = 1.0f - Math.abs(ry);       // shrink x/z as the camera pitches away from the horizon
    float rx = (float) -Math.sin(yaw) * horizontal;
    float rz = (float) -Math.cos(yaw) * horizontal;
    return new Vector3f(rx, ry, rz);
}
Note: this calculates the front vector; multiplying the rotation with the vector (0,1,0) instead gives the up vector.