Drawing tangents for a Bezier curve in OpenGL using C++

So I have a program that has 4 control points:
std::vector<ControlPoint> cam_pos_points;
cam_pos_points.push_back(ControlPoint(-0.79, 0.09, 0.2, 0));
cam_pos_points.push_back(ControlPoint(-0.88, -0.71, 0.2, 1));
cam_pos_points.push_back(ControlPoint(1.3, -0.8, 0.2, 2));
cam_pos_points.push_back(ControlPoint(0.71, 0.76, 0.2, 3));
Basically, I've set up a way to move my control points; when a control point is moved, the new position is saved and the curve is recalculated based on this new position. The way I'm drawing the curve is with these equations:
for (double t = 0; t < 1; t += 0.1) {
    float Px = (pow((1 - t), 3) * cam_pos_points[0].positionx()) +
               ((pow((1 - t), 2) * t) * cam_pos_points[1].positionx()) +
               (((1 - t) * pow(t, 2)) * cam_pos_points[2].positionx()) +
               (pow(t, 3) * cam_pos_points[3].positionx());
    float Py = (pow((1 - t), 3) * cam_pos_points[0].positiony()) +
               ((pow((1 - t), 2) * t) * cam_pos_points[1].positiony()) +
               (((1 - t) * pow(t, 2)) * cam_pos_points[2].positiony()) +
               (pow(t, 3) * cam_pos_points[3].positiony());
}
Using these two float values, I build vec3s to get a set of points on the curve. I then hand those points to a multiline class, which draws a straight line segment between each consecutive pair of points. The end result is the Bezier curve.
The problem I'm having right now is drawing the tangents for the Bezier curve. My idea was that, for the first control point, the tangent lies along the line P1 - P2. So after drawing the tangent, when I move the tangent point, what equations am I supposed to use to redraw the shape of the curve? I've already found the derivatives of the Bezier curve equation, but I don't know what to do with them:
float dx = (-3*(pow((1 - t), 2)) * cam_pos_points[0].positionx()) +
           (((-2*(1 - t)) * t) * cam_pos_points[1].positionx()) +
           (((1 - t) * (2*t)) * cam_pos_points[2].positionx()) +
           ((3*pow(t, 2)) * cam_pos_points[3].positionx());
float dy = (-3*(pow((1 - t), 2)) * cam_pos_points[0].positiony()) +
           (((-2*(1 - t)) * t) * cam_pos_points[1].positiony()) +
           (((1 - t) * (2*t)) * cam_pos_points[2].positiony()) +
           ((3*pow(t, 2)) * cam_pos_points[3].positiony());

Your starting tangent will be from the first control point to the second, and the ending tangent will be from the fourth control point to the third. I'd suggest you start fresh each time you redraw it; i.e., whenever a control point moves, treat it as an all-new equation.
If one (or both) of your tangents is zero-length, then it isn't actually a tangent per se, but the curve will still head toward the opposite endpoint.
That's why you can use a Bezier section with no tangents to represent a straight line.

The equations are wrong.
The correct equation is
(1-t)^3 * p0 + 3*(1-t)^2*t * p1 + 3*(1-t)*t^2 * p2 + t^3 * p3.
Expand and differentiate to get the tangents.
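For illustration, a corrected evaluation loop and its derivative might look like this sketch, assuming the same cam_pos_points vector and positionx()/positiony() accessors as in the question:
// Corrected cubic Bezier evaluation; note the factors of 3 on the two
// middle Bernstein terms, which the original loop is missing.
for (double t = 0; t <= 1.0; t += 0.1) {
    double u = 1.0 - t;
    // B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
    float Px = u*u*u     * cam_pos_points[0].positionx() +
               3.0*u*u*t * cam_pos_points[1].positionx() +
               3.0*u*t*t * cam_pos_points[2].positionx() +
               t*t*t     * cam_pos_points[3].positionx();
    float Py = u*u*u     * cam_pos_points[0].positiony() +
               3.0*u*u*t * cam_pos_points[1].positiony() +
               3.0*u*t*t * cam_pos_points[2].positiony() +
               t*t*t     * cam_pos_points[3].positiony();
    // B'(t) = 3(1-t)^2 (P1-P0) + 6(1-t)t (P2-P1) + 3t^2 (P3-P2)
    float dx = 3.0*u*u * (cam_pos_points[1].positionx() - cam_pos_points[0].positionx()) +
               6.0*u*t * (cam_pos_points[2].positionx() - cam_pos_points[1].positionx()) +
               3.0*t*t * (cam_pos_points[3].positionx() - cam_pos_points[2].positionx());
    float dy = 3.0*u*u * (cam_pos_points[1].positiony() - cam_pos_points[0].positiony()) +
               6.0*u*t * (cam_pos_points[2].positiony() - cam_pos_points[1].positiony()) +
               3.0*t*t * (cam_pos_points[3].positiony() - cam_pos_points[2].positiony());
    // (Px, Py) is the point on the curve; (dx, dy) is the tangent direction there.
}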

It looks like you want to draw tangent points for the tangent vectors at the start and end of the Bezier curve, so that users can adjust the curve's shape by moving the tangent points. If this is the case, you need to be aware that moving a tangent point must also move the 2nd or the 3rd control point. So the correct procedure is to re-calculate the 2nd or the 3rd control point from the moved tangent point, then redraw the curve.
For a cubic Bezier curve, the derivative C'(t) at t = 0 and t = 1 is
C'(0) = 3*(P1 - P0)
C'(1) = 3*(P3 - P2)
Let's assume your tangent point for the starting tangent is T0, located at
T0 = P0 + s0*C'(0) = P0 + 3*s0*(P1 - P0)
where s0 is a constant scale factor that keeps the tangent point from being drawn too far away from the control points. When T0 is moved to T0*, you can update the control point P1 as
P1* = (T0* - P0)/(3*s0) + P0.
Do a similar update for control point P2 when the tangent point of the end tangent is moved. Then you can redraw your curve.
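A minimal sketch of that update step, assuming a simple 2D vector type (the names here are illustrative, not from the question's code):
struct Vec2 { float x, y; };

// Recompute P1 after the starting tangent point moves to T0_new.
// Derived from T0 = P0 + 3*s0*(P1 - P0), solved for P1; s0 is the same
// scale factor that was used when the tangent point was drawn.
Vec2 updateP1(Vec2 P0, Vec2 T0_new, float s0)
{
    Vec2 P1;
    P1.x = (T0_new.x - P0.x) / (3.0f * s0) + P0.x;
    P1.y = (T0_new.y - P0.y) / (3.0f * s0) + P0.y;
    return P1;
}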

Related

Understanding GLSL function to draw polygon using distance field

Could someone help me understand the following function that draws a polygon of N sides (e.g. 3 being a triangle and 4 being a square):
float theta = atan(pos.x, pos.y);
float rotate_angle = 2 * PI / N;
float d = cos(floor(0.5 + theta / rotate_angle) * rotate_angle - theta) * length(pos);
What I understand from this illustration is that:
we're interested in finding the angle indicated by the red curve (call it alpha)
cos(alpha) * length will project the green line onto the blue line
by comparing the size of said projection with that of the blue line (radius of circle), we know whether a test point is inside or outside of the polygon we're trying to draw
Question
Why does alpha equal floor(0.5 + theta / rotate_angle) * rotate_angle - theta? Where does 0.5 come from? What's the significance of theta / rotate_angle?
What I have read:
[1] https://codepen.io/nik-lever/full/ZPKmmx
[2] https://thndl.com/square-shaped-shaders.html
[3] https://thebookofshaders.com/07
Simply, floor(0.5 + x) = round(x). However, because round(x) may not be available in some environments (e.g. in GLES2), floor(0.5 + x) is used instead.
Then, n = round(theta / rotate_angle) gives the index of the edge sector that contains pos (e.g. n = -1, 0 or 1 for a triangle), so n * rotate_angle is the angle of the edge centre point (the blue line) nearest to theta.
Therefore, alpha = n * rotate_angle - theta is the relative angle from pos to that nearest edge centre, with -rotate_angle/2 < alpha <= rotate_angle/2.
Checking the length of pos's projection onto the edge-centre direction then tells you inside from outside. The round() function is what makes the selection of the nearest edge direction (i.e. of the edge's orthogonal vector) seamless.
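For reference, a C++ port of the distance computation might look like this sketch; note that the GLSL atan(pos.x, pos.y) argument order maps to std::atan2(x, y):
#include <cmath>

// Distance measure for a regular N-gon, port of the GLSL snippet above.
// floor(0.5 + x) stands in for round(x), as discussed.
float polygonDistance(float x, float y, int N)
{
    const float PI = 3.14159265358979f;
    float theta        = std::atan2(x, y);          // angle measured from +y, as in the shader
    float rotate_angle = 2.0f * PI / N;
    float n     = std::floor(0.5f + theta / rotate_angle); // nearest edge index
    float alpha = n * rotate_angle - theta;         // angle to nearest edge centre
    float len   = std::sqrt(x * x + y * y);
    return std::cos(alpha) * len;                   // projection onto the edge normal
}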

Color image boundary based on local curvature

I am searching for an algorithm (using OpenCV C or C++) which does this:
Given the boundary image, I want to find the local curvature at all points and color map it, which is what is done in the image displayed above. I got this image from Wikipedia but haven't been able to find out a way to color the boundary in this way. Kindly let me know how it can be done.
If you observe the boundary, red denotes that the boundary bends sharply (high curvature), while yellow shows that the boundary is almost linear.
How can this be done?
Edit
Just to give you an idea of how I have been trying to do this for the last two days:
I used the openCV functions convexHull and convexityDefects but realized that I am going in the wrong direction. I have to work only on the contours/boundaries of the binary image.
You can solve the problem by fitting a path of cubic Bezier curves to the boundary, then taking the curvature analytically.
The boundary consists of a list of points in x, y at pixel centres, each point 1 px or sqrt(2) px from the next in the list. You need to fit a smooth cubic Bezier path to this, using the technique by Schneider in Graphics Gems ("An Algorithm for Automatically Fitting Digitized Curves", Graphics Gems, p. 612).
Then step along the curve, taking tiny steps which are always sub-pixel, and take the curvature using:
double BezierCurve::Curvature(double t) const
{
    // Nice mathematically perfect formula
    //Vector2 d1 = Tangent(t);
    //Vector2 d2 = Deriv2(t);
    //return (d1.x * d2.y - d1.y * d2.x) / pow(d1.x * d1.x + d1.y * d1.y, 1.5);

    // Get the cubic coefficients like this, I store them in the Bezier
    // class
    /*
        a = p3 + 3.0 * p1 - 3.0 * p2 - p0;
        b = 3.0 * p0 - 6.0 * p1 + 3.0 * p2;
        c = 3.0 * p1 - 3.0 * p0;
        d = p0;
    */
    double dx, dy, ddx, ddy;
    dx  = 3 * this->ax * t * t + 2 * this->bx * t + this->cx;
    ddx = 6 * this->ax * t + 2 * this->bx;
    dy  = 3 * this->ay * t * t + 2 * this->by * t + this->cy;
    ddy = 6 * this->ay * t + 2 * this->by;
    if (dx == 0 && dy == 0)
        return 0;
    return (dx*ddy - ddx*dy) / ((dx*dx + dy*dy) * sqrt(dx*dx + dy*dy));
}
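The colouring pass over the fitted path could then look something like this sketch (not from the answer above: Point(), setPixelColour() and rampYellowToRed() are assumed helpers the rest of the program would provide):
#include <cmath>
#include <vector>

// Sample each fitted Bezier segment at sub-pixel steps and map the
// curvature magnitude to a yellow-to-red colour ramp.
void colourBoundary(const std::vector<BezierCurve>& fittedPath)
{
    for (const BezierCurve& seg : fittedPath) {
        for (double t = 0.0; t <= 1.0; t += 0.001) { // small enough to stay sub-pixel
            double k = seg.Curvature(t);             // signed curvature at parameter t
            setPixelColour(seg.Point(t), rampYellowToRed(std::fabs(k)));
        }
    }
}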
OpenCV's findContours, used with mode = CV_RETR_EXTERNAL and method = CV_CHAIN_APPROX_NONE, will give you all the boundary pixels, ordered such that two subsequent points are neighbors.
To get the radius of the circle through three points, there is a lot of information on the Web. Because you only need the radius, not the center, this stackexchange answer is fast.
In pseudo code:
vector_of_points = OpenCV::findContours(...)
p1 = vector start
p2, p3 = the next points in the vector
// the boundary is circular, so in the first loop pass we must adjust:
p2 = next point
p3 = last point
// use p1 as our iterator
while (p1 <= vector.end)
{
    // curvature
    radius = calculateRadius(p1, p2, p3)
    // set color for pixel p2
    setColor(p2, radius)
    increment p1, p2, p3
    adjust for start point = end point
}
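A possible calculateRadius, sketched here with the circumradius formula R = a*b*c / (4*K); the cv::Point signature and the collinearity cutoff are assumptions:
#include <cmath>
#include <opencv2/core.hpp>

// Circumradius of the circle through three boundary points: a, b, c are
// the side lengths and K is the triangle area from the cross product.
// Nearly collinear points get a huge radius, i.e. near-zero curvature.
double calculateRadius(cv::Point p1, cv::Point p2, cv::Point p3)
{
    double a = std::hypot((double)p2.x - p1.x, (double)p2.y - p1.y);
    double b = std::hypot((double)p3.x - p2.x, (double)p3.y - p2.y);
    double c = std::hypot((double)p1.x - p3.x, (double)p1.y - p3.y);
    double cross = ((double)p2.x - p1.x) * ((double)p3.y - p1.y) -
                   ((double)p2.y - p1.y) * ((double)p3.x - p1.x);
    double area = std::fabs(cross) / 2.0;
    if (area < 1e-9)
        return 1e9;   // collinear: an (almost) straight stretch of boundary
    return (a * b * c) / (4.0 * area);
}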

Best way to interpolate triangle surface using 3 positions and normals for ray tracing

I am working on conventional Whitted ray tracing, and trying to interpolate the surface of the hit triangle as if it were convex instead of flat.
The idea is to treat triangle as a parametric surface s(u,v) once the barycentric coordinates (u,v) of hit point p are known.
This surface equation should be calculated using triangle's positions p0, p1, p2 and normals n0, n1, n2.
The hit point itself is calculated as
p = (1-u-v)*p0 + u*p1 + v*p2;
I have found three different solutions so far.
Solution 1. Projection
The first solution I came to: project the hit point onto the planes that pass through each of the vertices p0, p1, p2 perpendicular to the corresponding normals, and then interpolate the results.
// project p onto the plane through each vertex, perpendicular to its normal
vec3 r0 = p - dot( p - p0, n0 ) * n0;
vec3 r1 = p - dot( p - p1, n1 ) * n1;
vec3 r2 = p - dot( p - p2, n2 ) * n2;
p = (1-u-v)*r0 + u*r1 + v*r2;
Solution 2. Curvature
Suggested in the paper by Takashi Nagata, "Simple local interpolation of surfaces using normal vectors", and discussed in the question "Local interpolation of surfaces using normal vectors", but it seems overcomplicated and not very fast for real-time ray tracing (unless you precompute all the necessary coefficients). The triangle here is treated as a surface of the second order.
Solution 3. Bezier curves
This solution is inspired by Brett Hale's answer. It is about using some interpolation of the higher order, cubic Bezier curves in my case.
E.g., for the edge p0p1, the Bezier curve should look like
B(t) = (1-t)^3*p0 + 3(1-t)^2*t*(p0+n0*adj) + 3*(1-t)*t^2*(p1+n1*adj) + t^3*p1,
where adj is some adjustment parameter.
Computing Bezier curves for edges p0p1 and p0p2 and interpolating them gives the final code:
float u1 = 1 - u;
float v1 = 1 - v;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*(u1*n0 + u*n1)*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*(v1*n0 + v*n2)*adj;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
p = (1-w)*b1 + w*b2;
Alternatively, one can interpolate between three edges:
float u1 = 1.0 - u;
float v1 = 1.0 - v;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
float w1 = 1.0 - w;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*( u1*n0 + u*n1 )*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*( v1*n0 + v*n2 )*adj;
vec3 b0 = w1*w1*(3-2*w1)*p1 + w*w*(3-2*w)*p2 + 3*w*w1*( w1*n1 + w*n2 )*adj;
p = (1-u-v)*b0 + u*b1 + v*b2;
Maybe I messed something up in the code above, but this option does not seem very robust inside a shader.
P.S. The intention is to get more correct origins for shadow rays when they are cast from low-poly models. Here you can find the resulting images from the test scene. The big white numbers indicate the number of the solution (zero for the original image).
P.P.S. I still wonder if there is another efficient solution which can give better result.
Keeping triangles 'flat' has many benefits and simplifies several stages required during rendering. Approximating a higher order surface on the other hand introduces quite significant tracing overhead and requires adjustments to your BVH structure.
When the geometry is being treated as a collection of facets on the other hand, the shading information can still be interpolated to achieve smooth shading while still being very efficient to process.
There are adaptive tessellation techniques which approximate the limit surface (OpenSubdiv is a great example). Pixar's Photorealistic RenderMan has a long history of using subdivision surfaces. When they switched their rendering algorithm to path tracing, they also introduced a pre-tessellation step for their subdivision surfaces. This stage is executed right before rendering begins and builds an adaptive, triangulated approximation of the limit surface. This seems to be more efficient to trace and tends to use fewer resources, especially for the high-quality assets used in this industry.
So, to answer your question. I think the most efficient way to achieve what you're after is to use an adaptive subdivision scheme which spits out triangles instead of tracing against a higher order surface.
Dan Sunday describes an algorithm that calculates the barycentric coordinates on the triangle once the ray-plane intersection has been calculated. The point lies inside the triangle if:
(s >= 0) && (t >= 0) && (s + t <= 1)
You can then use, say, n(s, t) = nu * s + nv * t + nw * (1 - s - t) to interpolate a normal, as well as the point of intersection, though n(s, t) will not, in general, be normalized, even if (nu, nv, nw) are. You might find higher order interpolation necessary. PN-triangles were a similar hack for visual appeal rather than mathematical precision. For example, true rational quadratic Bezier triangles can describe conic sections.
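A sketch of that barycentric normal interpolation, assuming GLM for the vector math:
#include <glm/glm.hpp>

// n(s, t) = nu*s + nv*t + nw*(1 - s - t), re-normalized afterwards,
// because a blend of unit normals is in general not unit length.
glm::vec3 interpolateNormal(float s, float t,
                            const glm::vec3& nu,  // normal at vertex u
                            const glm::vec3& nv,  // normal at vertex v
                            const glm::vec3& nw)  // normal at vertex w
{
    glm::vec3 n = nu * s + nv * t + nw * (1.0f - s - t);
    return glm::normalize(n);
}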

Need rotation matrix for opengl 3D transformation

The problem is I have two points in 3D space where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them that is the length of the distance between the two points, so that the centers of both its ends touch the two points. I got the cylinder to translate to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last, that way I can get it to the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);
    if (point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x) / 2.0,
                       (point.y + parent.y) / 2.0,
                       (point.z + parent.z) / 2.0);
    mat4 translation = Translate(center);
    return translation * RotateX(xRotation) * RotateZ(zRotation) *
           Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried a solution given below, but it did not seem to work at all:
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) *
                       Scale(radius / 2.0, 1 / 2.0, radius / 2.0) *
                       cylinderModel;
    float length = sqrtf(pow((point.x - parent.x), 2) +
                         pow((point.y - parent.y), 2) +
                         pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return Translate(parent) * Scale(length, length, length) *
           RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point whose parent is the point that spawned it (the one before it). I want the branches to fit the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I have most of it working, but I want to map the branches to it so it looks good. I could use GL_LINES just to make a generic connection, but if I get this working it will look much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there's an arbitrary number of rotation matrices satisfying your constraints, but any of them will do. Instead of trying to figure out a specific rotation, we're just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. Then you have to transform the local-space Z axis toward the direction between those two points, i.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to make this a full 3-dimensional coordinate basis. We start with an arbitrary vector that's not parallel to z_t: say, one of (1,0,0), (0,1,0) or (0,0,1). Take the scalar product (also called the inner or dot product) of each with z_t and use the vector for which the absolute value is smallest; let's call this vector u.
In pseudocode:
# start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector, yielding the local Y transformation y_t = normalize(u - (z_t · u) * z_t).
Finally, create the X transformation by taking the cross product x_t = z_t × y_t.
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from" written down as if seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t side by side as a matrix. OpenGL uses so-called homogeneous matrices, so you have to pad it to a 4×4 form using a (0,0,0,1) bottommost row and rightmost column.
You can then load that into OpenGL: if using the fixed-function pipeline, apply the rotation with glMultMatrix; if using shaders, multiply it onto the matrix you eventually pass to glUniform.
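Put together, the construction might look like this sketch; using GLM here is an assumption (any vector/matrix library with the same operations works):
#include <cmath>
#include <glm/glm.hpp>

// z_t along p1 - p2, the cardinal axis least parallel to z_t as helper u,
// Gram-Schmidt for y_t, cross product for x_t. GLM is column-major, so
// each basis vector becomes one column of the homogeneous 4x4 matrix.
glm::mat4 cylinderRotation(const glm::vec3& p1, const glm::vec3& p2)
{
    glm::vec3 z_t = glm::normalize(p1 - p2);

    // pick the cardinal axis with the smallest |dot| against z_t
    const glm::vec3 axes[3] = { {1,0,0}, {0,1,0}, {0,0,1} };
    glm::vec3 u = axes[0];
    float best = std::fabs(glm::dot(z_t, axes[0]));
    for (int i = 1; i < 3; ++i) {
        float d = std::fabs(glm::dot(z_t, axes[i]));
        if (d < best) { best = d; u = axes[i]; }
    }

    glm::vec3 y_t = glm::normalize(u - glm::dot(z_t, u) * z_t); // orthogonalize
    glm::vec3 x_t = glm::cross(z_t, y_t); // as above; swap operands for a right-handed frame

    glm::mat4 m(1.0f);               // identity, so the 0,0,0,1 padding is in place
    m[0] = glm::vec4(x_t, 0.0f);
    m[1] = glm::vec4(y_t, 0.0f);
    m[2] = glm::vec4(z_t, 0.0f);
    return m;
}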
Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'd like to call your two points in world coordinates P1 and P2, and we want to locate C1 at P1 and C2 at P2.
Start with translating the cylinder by P1, which successfully locates C1 to P1.
Then scale the cylinder by distance(P1, P2), since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates. There are two angles: one around the pole axis (in your case the world's Y axis), which we typically call yaw, and a pitch angle (in your case around the X axis in model space). These two angles can be computed by converting P2 - P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object with the pitch angle around X, then with yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return translate(P1) * scaleY(length) * rotateX(pitch) * rotateY(yaw);
}
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(BX/BY), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(BZ/BY), and that's the angle of the second rotation.

Rotate tetris blocks at runtime

I have a class tetronimo (a Tetris block) that has four QRect members (named first, second, third, fourth respectively). I draw each tetronimo using build_tetronimo_L-style functions.
These build the tetronimo in a certain orientation, but since in Tetris you're supposed to be able to rotate the tetronimos, I'm trying to rotate a tetronimo by rotating each of its individual squares.
I have found the following formula to apply to each (x, y) coordinate of a particular square.
newx = cos(angle) * oldx - sin(angle) * oldy
newy = sin(angle) * oldx + cos(angle) * oldy
Now, Qt's QRect type only seems to have a setCoords function, which takes the (x, y) coordinates of the top-left and bottom-right points of the respective square.
I have here an example (which doesn't seem to produce the correct result) of rotating the first two squares in my tetronimo.
Can anyone tell me how I'm supposed to rotate these squares correctly, using runtime rotation calculation?
void tetromino::rotate(double angle) // angle in degrees
{
    std::map<std::string, rect_coords> coords = get_coordinates();

    // FIRST SQUARE
    rect_coords first_coords = coords["first"];
    // top left x and y
    int newx_first_tl = (cos(to_radians(angle)) * first_coords.top_left_x) -
                        (sin(to_radians(angle)) * first_coords.top_left_y);
    int newy_first_tl = (sin(to_radians(angle)) * first_coords.top_left_x) +
                        (cos(to_radians(angle)) * first_coords.top_left_y);
    // bottom right x and y
    int newx_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) -
                        (sin(to_radians(angle)) * first_coords.bottom_right_y);
    int newy_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) +
                        (sin(to_radians(angle)) * first_coords.bottom_right_y);
    // CHANGE COORDINATES
    first->setCoords(newx_first_tl, newy_first_tl,
                     newx_first_tl + tetro_size, newy_first_tl - tetro_size);

    // SECOND SQUARE
    rect_coords second_coords = coords["second"];
    int newx_second_tl = (cos(to_radians(angle)) * second_coords.top_left_x) -
                         (sin(to_radians(angle)) * second_coords.top_left_y);
    int newy_second_tl = (sin(to_radians(angle)) * second_coords.top_left_x) +
                         (cos(to_radians(angle)) * second_coords.top_left_y);
    // CHANGE COORDINATES
    second->setCoords(newx_second_tl, newy_second_tl,
                      newx_second_tl - tetro_size, newy_second_tl + tetro_size);
}
first and second are QRect types. rect_coords is just a struct with four ints in it, that store the coordinates of the squares.
The first square and second square calculations are different, as I was playing around trying to figure it out.
I hope someone can help me figure this out.
(Yes, I can do this much simpler, but I'm trying to learn from this)
It seems more like a math question than a programming question. Just plug in values like 90 degrees for the angle to figure this out. For 90 degrees, a point (x, y) is mapped to (-y, x). You probably don't want to rotate around the origin, but around a certain pivot point (c.x, c.y). For that you need to translate first, then rotate, then translate back:
(x,y) := (x-c.x, y-c.y) // translate into coo system w/ origin at c
(x,y) := (-y, x) // rotate
(x,y) := (x+c.x, y+c.y) // translate into original coo system
Before rotating you have to translate so that the piece is centered in the origin:
Translate your block centering it to 0, 0
Rotate the block
Translate again the center of the block to x, y
If you rotate without translating, you will always rotate around (0, 0); since the block is not centered there, it will not be rotated around its own center. Centering your block is quite simple:
Compute the midpoint of the X and Y coordinates over all points; let's call it m
Subtract m.X and m.Y from the coordinates of all points
Rotate
Add m.X and m.Y back to the points.
Of course you can use linear algebra and vector * matrix multiplication but maybe it is too much :)
Translation
Let's say we have a segment with coordinates A(3,5) B(10,15).
If you want to rotate it around its center, we first translate it to the origin. Let's compute the midpoint mx, my:
mx = (3 + 10) / 2 = 6.5
my = (5 + 15) / 2 = 10
Now we compute points A1 and B1 by translating the segment so that it is centered on the origin:
A1(A.X - mx, A.Y - my)
B1(B.X - mx, B.Y - my)
Now we can perform our rotation of A1 and B1 (you know how).
Then we have to translate again to the original position:
A = (rotatedA1.X + mx, rotatedA1.Y + my)
B = (rotatedB1.X + mx, rotatedB1.Y + my)
If instead of two points you have n points, you of course do the same for all n points.
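For illustration, the whole translate-rotate-translate round trip might look like this minimal C++ sketch (Point and rotateAround are illustrative names, not from the question's code):
#include <cmath>

struct Point { double x, y; };   // illustrative type

// Rotate p around pivot m by `angle` radians: translate so m becomes the
// origin, apply the rotation formula from the question, translate back.
Point rotateAround(Point p, Point m, double angle)
{
    double dx = p.x - m.x, dy = p.y - m.y;                    // to the origin
    double rx = std::cos(angle) * dx - std::sin(angle) * dy;  // rotate
    double ry = std::sin(angle) * dx + std::cos(angle) * dy;
    return { rx + m.x, ry + m.y };                            // and back again
}

// e.g. for the segment above: rotateAround({3, 5}, {6.5, 10}, 3.14159265 / 2)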
You could use Qt Graphics View which does all the geometric calculations for you.
Or do you just want to learn basic linear geometric transformations? Then reading a math textbook would probably be more appropriate than coding.