In my OpenGL application I have a Bézier curve in 3D space and I want to move an object along it.
Everything is fine apart from the rotations: I have some problems calculating them. In my mind the pipeline should be this:
find point on the Bézier (position vector)
find tangent, normal, binormal (frenet frame)
find the angle between the tangent vector and the x axis
(and the same for the normal and the y axis, and the binormal and the z axis)
push matrix
translate to the position, rotate by the angles, draw the object
pop matrix
But it does not go as I expected: the rotations seem to be random and do not follow the curve.
Any suggestions?
You're going to have problems with the Frenet frame, because, unfortunately, it is undefined when the curve is even momentarily straight
(has vanishing curvature), and it exhibits wild swings in orientation around points
where the osculating plane’s normal has major changes in direction, especially at inflection points, where the normal flips.
I'd recommend using something called a Bishop frame (you can Google it and find out how to compute it in a discrete setting). It is also referred to as a parallel transport frame or a minimum rotation frame. It has the advantage that the frame is always defined, and it changes orientation in a controlled way.
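A minimal sketch of the discrete parallel transport idea, in C++ with GLM (the curve sampling is assumed to come from your own Bézier evaluation; this is an illustration, not a definitive implementation):

#include <cmath>
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Frame { glm::vec3 tangent, normal, binormal; };

// Discrete parallel transport: carry the previous normal along the curve by
// rotating it around the axis that takes the old tangent to the new one.
// 'pts' are samples of your Bezier curve (at least 3 points assumed).
std::vector<Frame> parallelTransportFrames(const std::vector<glm::vec3>& pts)
{
    std::vector<Frame> frames(pts.size());
    glm::vec3 t0 = glm::normalize(pts[1] - pts[0]);
    // seed the first normal with any direction not parallel to the tangent
    glm::vec3 seed = std::fabs(t0.x) < 0.9f ? glm::vec3(1, 0, 0) : glm::vec3(0, 1, 0);
    glm::vec3 n0 = glm::normalize(glm::cross(glm::cross(t0, seed), t0));
    frames[0] = { t0, n0, glm::cross(t0, n0) };
    for (size_t i = 1; i + 1 < pts.size(); ++i) {
        glm::vec3 t1 = glm::normalize(pts[i + 1] - pts[i]);
        glm::vec3 axis = glm::cross(frames[i - 1].tangent, t1);
        float len = glm::length(axis);
        glm::vec3 n1 = frames[i - 1].normal;
        if (len > 1e-6f) { // straight stretches need no rotation at all
            float angle = std::acos(glm::clamp(glm::dot(frames[i - 1].tangent, t1), -1.0f, 1.0f));
            n1 = glm::mat3(glm::rotate(glm::mat4(1.0f), angle, axis / len)) * n1;
        }
        frames[i] = { t1, n1, glm::cross(t1, n1) };
    }
    frames.back() = frames[frames.size() - 2]; // simple end condition for the sketch
    return frames;
}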
I don't think the problems with Frenet frames necessarily explain the problems you are having. You should start with some easy test cases - Bezier curves that are confined to the XY plane, for example, and step through your calculations until you find what's wrong.
Instead of computing angles, just push the frame into the modelview matrix. Normal, binormal and tangent go in the upper-left 3x3 of the matrix, the translation goes in the 4th column, and element (4,4) is 1. Instead of the Frenet frame, use the already mentioned Bishop frame. So in code:
// assuming you manage your curve in an (opaque) struct BezierCurve
struct BezierCurve;
typedef float vec3[3];
void bezierEvaluate(BezierCurve *bezier, float t, vec3 normal, vec3 binormal, vec3 tangent, vec3 pos);
void apply_bezier_transform(BezierCurve *bezier, float t)
{
    float M[16]; // OpenGL uses column-major ordering
    // and this code is an excellent example of why it does so:
    bezierEvaluate(bezier, t, &M[0], &M[4], &M[8], &M[12]);
    M[3] = M[7] = M[11] = 0.f;
    M[15] = 1.f;
    glMultMatrixf(M);
}
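At draw time this slots straight into the push/translate/draw/pop pipeline from the question. A minimal usage sketch (drawObject is a hypothetical stand-in for your own draw call):

glPushMatrix();
apply_bezier_transform(bezier, t); // t in [0,1] along the curve
drawObject();                      // hypothetical draw call
glPopMatrix();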
I am trying to rotate a "cube" full of little cubes using the keyboard, which works, but not so well.
I am struggling with setting the pivot point of rotation to the very center of the big "cube" / world. As you can see on this video, the center of the front (initial) face of the big cube is the pivot point for my rotation right now, which is a bit confusing when I rotate the world a little bit.
To explain it better, it looks like I am moving the initial face of the cube when using keys to rotate the cube. So the pivot point might be okay from that point of view, but what is wrong in my code? I don't understand why it rotates around the front face and not around the very center of the entire cube.
To generate all the little cubes, I call a function in 3 for loops (x, y, z); the function returns cubeMat, so all the cubes are generated as you can see in the video.
cubeMat = scale(cubeMat, {0.1f, 0.1f, 0.1f});
cubeMat = translate(cubeMat, {positioning...});
For rotation itself, a short example of rotation to left looks like this:
mat4 total_rotation; // global variable - never resets
mat4 rotation;       // local variable
if(keysPressed[GLFW_KEY_LEFT]){
    timer -= delta;
    rotation = rotate(mat4{}, -delta, {0, 1, 0});
}
... // rest of key controls
total_rotation *= rotation;
And inside those 3 for loops there is also this:
program.setUniform("ModelMatrix", total_rotation * cubeMat);
cube.render();
I have read that I should use a transformation to set the pivot point to the middle, but in this case, how can I set the pivot point inside the little cube which is at the center of the world? That cube is obviously at x=2, y=2, z=2, since in the for loops I generate cubes starting at x=0.
You are accumulating the rotation matrices by right-multiplication. This way, all rotations are performed in the local coordinate systems that result from all previous transformations. And this is why your right-rotation results in a turn after an up-rotation (because it is a right-rotation in the local coordinate system).
But you want your rotations to be in the global coordinate system. Thus, simply reverse the multiplication order:
total_rotation = rotation * total_rotation;
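To see the difference side by side, a minimal GLM sketch (same names as in the question):

glm::mat4 total_rotation(1.0f);
// per frame, e.g. for the left key:
glm::mat4 rotation = glm::rotate(glm::mat4(1.0f), -delta, glm::vec3(0, 1, 0));
// total_rotation = total_rotation * rotation; // local axes: drifts after an up-rotation
total_rotation = rotation * total_rotation;    // global axes: keys always mean world directions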
I'm currently in the process of finishing the implementation of a camera that functions in the same way as the camera in Maya. The part I'm stuck on is the tumble functionality.
The problem is the following: the tumble feature works fine so long as the position of the camera is not parallel with the up vector (currently defined to be (0, 1, 0)). As soon as the camera becomes parallel with this vector (so it is looking straight up or down), the camera locks in place and will only rotate around the up vector instead of continuing to roll.
This question has already been asked here, unfortunately there is no actual solution to the problem. For reference, I also tried updating the up vector as I rotated the camera, but the resulting behaviour is not what I require (the view rolls as a result of the new orientation).
Here's the code for my camera:
using namespace glm;
// point is the position of the cursor in screen coordinates from GLFW
float deltaX = point.x - mImpl->lastPos.x;
float deltaY = point.y - mImpl->lastPos.y;
// Transform from screen coordinates into camera coordinates
Vector4 tumbleVector = Vector4(-deltaX, deltaY, 0, 0);
Matrix4 cameraMatrix = lookAt(mImpl->eye, mImpl->centre, mImpl->up);
Vector4 transformedTumble = inverse(cameraMatrix) * tumbleVector;
// Now compute the two vectors to determine the angle and axis of rotation.
Vector p1 = normalize(mImpl->eye - mImpl->centre);
Vector p2 = normalize((mImpl->eye + Vector(transformedTumble)) - mImpl->centre);
// Get the angle and axis
float theta = 0.1f * acos(dot(p1, p2));
Vector axis = cross(p1, p2);
// Rotate the eye.
mImpl->eye = Vector(rotate(Matrix4(1.0f), theta, axis) * Vector4(mImpl->eye, 0));
The vector library I'm using is GLM. Here's a quick reference on the custom types used here:
typedef glm::vec3 Vector;
typedef glm::vec4 Vector4;
typedef glm::mat4 Matrix4;
typedef glm::vec2 Point2;
mImpl is a PIMPL that contains the following members:
Vector eye, centre, up;
Point2 lastPos;
Here is what I think: it has something to do with gimbal lock, which occurs with Euler angles (and thus spherical coordinates).
If you exceed your minimum (0, -zoom, 0) or maximum (0, zoom, 0) you have to toggle a boolean. This boolean tells you whether to treat deltaY as positive or not.
It could also just be caused by a singularity, so simply limit your polar angle values to between -89.99° and 89.99°.
Your problem could be solved like this.
So if your camera is exactly above (0, zoom, 0) or beneath (0, -zoom, 0) your object, then the camera only rolls.
(I am also assuming your object is at (0,0,0) and the up-vector is set to (0,1,0).)
There might be some mathematical trick to resolve this, I would do it with linear algebra though.
You need to introduce a new right-vector. You get it with a cross product: right-vector = up-vector x camera-vector. Imagine these vectors start at (0,0,0); then, to get your camera position, you simply take (0,0,0) - camera-vector.
So if you get some deltaX, you rotate towards the right-vector(around the up-vector) and update it.
Any influence of deltaX should not change your up-vector.
If you get some deltaY you rotate towards the up-vector(around the right-vector) and update it. (This has no influence on the right-vector).
On https://en.wikipedia.org/wiki/Rotation_matrix, under "Rotation matrix from axis and angle", you can find an important formula.
Say u is the vector you want to rotate around and theta is the amount you want to pivot. The size of theta is proportional to deltaX/deltaY.
For example: We got an input from deltaX, so we rotate around the up-vector.
up-vector:= (0,1,0)
right-vector:= (0,0,-1)
cam-vector:= (1,0,0)
theta:=-1*30° // -1 due to the positive mathematical direction of rotation
R={[cos(-30°),0,-sin(-30°)],[0,1,0],[sin(-30°),0,cos(-30°)]}
new-cam-vector=R*cam-vector // normal matrix multiplication
One thing is left to be done: update the right-vector.
right-vector = up-vector x camera-vector
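A minimal GLM sketch of this scheme (eye, centre, up as in the question; 'speed' is a hypothetical sensitivity factor, and this is an illustration under those assumptions):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: horizontal drag rotates around the up-vector (and updates the
// right-vector); vertical drag rotates around the right-vector (and
// updates the up-vector), so the camera keeps rolling over the poles.
void tumble(glm::vec3& eye, const glm::vec3& centre, glm::vec3& up,
            float deltaX, float deltaY)
{
    const float speed = 0.005f; // hypothetical sensitivity factor
    glm::vec3 view = centre - eye; // camera-vector
    glm::vec3 right = glm::normalize(glm::cross(up, view)); // right-vector
    glm::mat4 yaw = glm::rotate(glm::mat4(1.0f), -deltaX * speed, up);
    view = glm::vec3(yaw * glm::vec4(view, 0.0f));
    right = glm::normalize(glm::cross(up, view)); // update the right-vector
    glm::mat4 pitch = glm::rotate(glm::mat4(1.0f), -deltaY * speed, right);
    view = glm::vec3(pitch * glm::vec4(view, 0.0f));
    up = glm::normalize(glm::vec3(pitch * glm::vec4(up, 0.0f))); // update up
    eye = centre - view;
}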
To give some background to this question, I'm creating a game that needs to know whether the 'Orbit' of an object is within tolerance of another Orbit. To show this, I plot a torus shape with a given radius (the tolerance) using the Target Orbit, and now I need to check if the ellipse is within that torus.
I'm getting lost in the equations on Math Stack Exchange, so I'm asking for a more specific solution. For clarification, here's an image of the game with the torus and an Orbit (the red line). Quite simply, I want to check if that red orbit is within that torus shape.
What I believe I need to do is plot four points in world space on one of those orbits (easy enough to do). I then need to calculate the shortest distance between each point and the other orbit's ellipse. This is the difficult part. There are several examples out there of finding the shortest distance from a point to an ellipse, but all are 2D and quite difficult to follow.
If that distance is less than the tolerance for all four points, then I think that equates to the orbit being inside the target torus.
For simplicity, the origin of all of these orbits is always at the world Origin (0, 0, 0) - and my coordinate system is Z-Up. Each orbit has a series of parameters that defines it (Orbital Elements).
Here is a simple approach:
1) Sample each orbit to a set of N points. Let the points from the first orbit be A and those from the second orbit be B:
const int N=36;
float A[N][3],B[N][3];
2) Find the 2 closest points, i.e. indices i and j so that d=|A[i]-B[j]| is minimal.
3) Test d: if d is less than or equal to your margin/threshold, then the orbits are too close to each other.
speed vs. accuracy
Unless you are using some advanced method for step 2), its computation will be O(N^2), which is a bit scary. The bigger the N, the better the accuracy of the result, but a lot more time to compute. There are ways to remedy both. For example:
first sample with a small N
when you have found the closest points, sample both orbits again, but only near those points in question (with a higher N)
you can recursively increase accuracy by looping step 2) until you have the desired precision
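A brute-force sketch of steps 1)-3) in C++ (filling A and B from your orbital elements is assumed to be your own code):

#include <cfloat>
#include <cmath>

// Brute-force closest distance between two sampled orbits.
float closestSampleDistance(const float A[][3], const float B[][3], int N)
{
    float best = FLT_MAX;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) { // the O(N^2) pair test from step 2)
            float dx = A[i][0] - B[j][0];
            float dy = A[i][1] - B[j][1];
            float dz = A[i][2] - B[j][2];
            float d = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (d < best) best = d;
        }
    return best; // step 3): compare against your margin/threshold
}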
I think I may have a new solution.
1) Plot the four points on the current orbit (the ellipse).
2) Project those points onto the plane of the target orbit (the torus).
3) Using the Target Orbit inclination as the normal of a plane, calculate the angle between each (normalized) point and the argument of periapse on the target orbit.
4) Use this angle as the mean anomaly, and compute the equivalent eccentric anomaly.
5) Use those eccentric anomalies to plot the four points on the target orbit - which should be the nearest points to the other orbit.
6) Check the distance between those points.
The difficulty here comes from computing the angle and converting it to the anomaly on the other orbit. This should be more accurate and faster than a recursive function though. Will update when I've tried this.
EDIT:
Yep, this works!
// The Four Locations we will use for the checks
TArray<FVector> CurrentOrbit_CheckPositions;
TArray<FVector> TargetOrbit_ProjectedPositions;
CurrentOrbit_CheckPositions.SetNum(4);
TargetOrbit_ProjectedPositions.SetNum(4);
// We first work out the plane of the target orbit.
const FVector Target_LANVector = FVector::ForwardVector.RotateAngleAxis(TargetOrbit.LongitudeAscendingNode, FVector::UpVector); // Vector pointing to Longitude of Ascending Node
const FVector Target_INCVector = FVector::UpVector.RotateAngleAxis(TargetOrbit.Inclination, Target_LANVector); // Vector pointing up the inclination axis (orbit normal)
const FVector Target_AOPVector = Target_LANVector.RotateAngleAxis(TargetOrbit.ArgumentOfPeriapsis, Target_INCVector); // Vector pointing towards the periapse (closest approach)
// Geometric plane of the orbit, using the inclination vector as the normal.
const FPlane ProjectionPlane = FPlane(Target_INCVector, 0.f); // Plane of the orbit. We only need the 'normal', and the plane origin is the Earths core (periapse focal point)
// Plot four points on the current orbit, using an equally-divided eccentric anomaly.
const float ECCAngle = PI / 2.f;
for (int32 i = 0; i < 4; i++)
{
    // Plot the point, then project it onto the plane
    CurrentOrbit_CheckPositions[i] = PosFromEccAnomaly(i * ECCAngle, CurrentOrbit);
    CurrentOrbit_CheckPositions[i] = FVector::PointPlaneProject(CurrentOrbit_CheckPositions[i], ProjectionPlane);
    // TODO: Distance from the plane is the 'Depth'. If the Depth is > Acceptance Radius, we are outside the torus and can early-out here
    // Normalize the point to find its direction in world-space (origin in our case is always 0,0,0)
    const FVector PositionDirectionWS = CurrentOrbit_CheckPositions[i].GetSafeNormal();
    // Using the Inclination as the comparison plane - find the angle between the direction of this vector, and the Argument of Periapse vector of the Target orbit
    // TODO: we can probably compute this angle once, using the Periapse vectors from each orbit, and just multiply it by the index 'i'
    float Angle = FMath::Acos(FVector::DotProduct(PositionDirectionWS, Target_AOPVector));
    // Compute the 'Sign' of the Angle (-180° to 180°), using the Cross Product
    const FVector Cross = FVector::CrossProduct(PositionDirectionWS, Target_AOPVector);
    if (FVector::DotProduct(Cross, Target_INCVector) > 0)
    {
        Angle = -Angle;
    }
    // Using the angle directly would give us the position at the eccentric anomaly. We want to take advantage of the Mean Anomaly, and use it as the ecc anomaly
    // We can use this to plot a point on the target orbit, as if it was the eccentric anomaly.
    Angle = Angle - TargetOrbit.Eccentricity * FMathD::Sin(Angle);
    TargetOrbit_ProjectedPositions[i] = PosFromEccAnomaly(Angle, TargetOrbit);
}
I hope the comments describe how this works. Finally solved after several months of head-scratching. Thanks all!
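For reference, a hedged sketch of what a PosFromEccAnomaly-style helper can look like, in plain GLM rather than Unreal types (Z-up, angles in radians; the struct and field names here are assumptions, not the asker's actual types):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical orbital elements; NOT the asker's actual struct.
struct OrbitElements {
    float SemiMajorAxis, Eccentricity;
    float Inclination, LongitudeAscendingNode, ArgumentOfPeriapsis; // radians
};

glm::vec3 posFromEccAnomaly(float E, const OrbitElements& o)
{
    float a = o.SemiMajorAxis, e = o.Eccentricity;
    // position in the orbital plane, with the periapsis on +X
    glm::vec3 p(a * (std::cos(E) - e),
                a * std::sqrt(1.0f - e * e) * std::sin(E),
                0.0f);
    // classic 3-1-3 rotation: ascending node, inclination, argument of periapsis
    glm::mat4 R(1.0f);
    R = glm::rotate(R, o.LongitudeAscendingNode, glm::vec3(0, 0, 1));
    R = glm::rotate(R, o.Inclination, glm::vec3(1, 0, 0));
    R = glm::rotate(R, o.ArgumentOfPeriapsis, glm::vec3(0, 0, 1));
    return glm::vec3(R * glm::vec4(p, 1.0f));
}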
I am trying to follow this course about computer graphics, but I'm stuck on homework 1. I don't understand the role of the eye and up vectors. The description of the homework can be found at this link; there's also the skeleton of the first assignment.
So far I have the following code:
// Transform.cpp: implementation of the Transform class.
#include "Transform.h"
//Please implement the following functions:
// Helper rotation function.
mat3 Transform::rotate(const float degrees, const vec3& axis) {
    // Please implement this.
    // Rodrigues' formula: R = cos(a)*I + sin(a)*[axis]x + (1 - cos(a))*(axis*axis^T)
    float radians = degrees * M_PI / 180.0f;
    mat3 r1(cos(radians)); // cos(a) * identity
    mat3 r2(0, -axis.z, axis.y,
            axis.z, 0, -axis.x,
            -axis.y, axis.x, 0); // cross-product (skew-symmetric) matrix, scaled below
    mat3 r3(axis.x*axis.x, axis.x*axis.y, axis.x*axis.z,
            axis.x*axis.y, axis.y*axis.y, axis.y*axis.z,
            axis.x*axis.z, axis.y*axis.z, axis.z*axis.z); // outer product, scaled below
    for(int i = 0; i < 3; i++){
        for(int j = 0; j < 3; j++){
            r2[i][j] = r2[i][j]*sin(radians);
            r3[i][j] = r3[i][j]*(1-cos(radians));
        }
    }
    return r1 + r2 + r3;
}
// Transforms the camera left around the "crystal ball" interface
void Transform::left(float degrees, vec3& eye, vec3& up) {
    eye = eye * rotate(degrees, up);
}

// Transforms the camera up around the "crystal ball" interface
void Transform::up(float degrees, vec3& eye, vec3& up) {
    vec3 newAxis = glm::cross(eye, up);
}

// Your implementation of the glm::lookAt matrix
mat4 Transform::lookAt(vec3 eye, vec3 up) {
    return lookAtMatrix;
}

Transform::Transform()
{
}

Transform::~Transform()
{
}
For the left method, it appears to be doing the right thing, which is rotating the object around the y axis (actually I'm not sure if the object is moving, or if what I'm moving is the camera - can someone clarify?).
For the up method, I cannot make it work; it should rotate the object (or camera?) around the x axis (at least that's what I think).
Finally, I don't understand what the lookAt method should do.
Can someone help me understand the actions to be performed?
Can someone explain the roles of the eye and up vectors?
View transforms are often implemented using a "look-at" function. The idea being that you specify where the camera is, what direction it is looking in, and what direction represents "up" in your particular space, and you get a matrix back which represents that transform.
It looks like you're trying to implement some kind "rotating ball" navigation control. That's fairly simple - horizontal movement should rotate around some "Y" axis, and vertical movement should rotate around the "right" (or X) axis. Generally those rotations work around the current view axes, rather than globally, so that the movement is intuitive. I'm not sure exactly what you're looking for there.
A look-at function works as follows.
A 3x3 matrix representing a rotation can be viewed as being composed of the 3 perpendicular unit axes of the space you are transforming into. So if you can supply those vectors, you can build the matrix.
The first axis is easy. A camera is typically oriented to look along "Z", so if you take the vector representing the direction of the thing being looked at from the camera's position, then normalise it, this is the Z axis.
Then you need to define a distinct 'up' vector - (0,1,0) is typical, but you will need to choose a different one in cases where the Z axis points in the same direction as 'up'.
The cross product of this 'up' vector and the 'Z' axis gives the 'X' axis - this is because the cross product gives a perpendicular vector, and what constitutes horizontal will be perpendicular to both the 'forward' direction, and 'up'.
Then the cross product of the 'X' and 'Z' axes gives the 'Y' axis (which is not necessarily the same as the 'up' vector you supplied - consider looking towards the ceiling or towards the floor).
These three axes, normalised, (x,y,z) directly form a rotation matrix.
The translation portion of the matrix is generally the position of the camera, transformed by the rotation's inverse (such that when transforming the camera position by the lookat matrix itself, it should end up back at the origin).
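Putting those steps together, a hedged lookAt sketch in C++ with GLM; the assignment's version fixes the centre at the origin, so 'centre' is an explicit parameter here only for clarity:

#include <glm/glm.hpp>

glm::mat4 myLookAt(glm::vec3 eye, glm::vec3 centre, glm::vec3 up)
{
    glm::vec3 z = glm::normalize(centre - eye);      // forward
    glm::vec3 x = glm::normalize(glm::cross(z, up)); // right
    glm::vec3 y = glm::cross(x, z);                  // true up
    glm::mat4 M(1.0f); // glm is column-major: M[column][row]
    // the rows of the upper 3x3 are the camera axes (the rotation's inverse)
    M[0][0] = x.x; M[1][0] = x.y; M[2][0] = x.z;
    M[0][1] = y.x; M[1][1] = y.y; M[2][1] = y.z;
    M[0][2] = -z.x; M[1][2] = -z.y; M[2][2] = -z.z;
    // translation: camera position through the inverse rotation, negated
    M[3][0] = -glm::dot(x, eye);
    M[3][1] = -glm::dot(y, eye);
    M[3][2] = glm::dot(z, eye);
    return M;
}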
1) Your course is using the OpenGL library, and your homework assignment is to fill in the skeleton module "Transform.cpp".
2) The method you're asking about is "mat4 Transform::lookAt(vec3 eye, vec3 up)":
lookAt: Finally, you need to code in the transformation matrix, given
the eye and up vectors. You will likely need to refer to the class
notes to do this. It is likely to help to define a uvw coordinate frame
(as 3 vectors), and to build up an auxiliary 4x4 matrix M which is
returned as the result of this function. Consult class notes and
lectures for this part.
3) A hint for what these two arguments "eye" and "up" mean should be in your class notes and lectures.
4) Another hint is to "define a uvw coordinate frame (as three vectors), and build up an auxiliary 4x4 matrix ... which is returned as a result...".
5) A final hint:
Q: What's the difference between an OpenGL mat3 and mat4?
A:
What exactly does the mat3(a mat4 matrix) statement do in GLSL?
mat3(MVI) * normal
Returns the upper 3x3 matrix from the 4x4 matrix and multiplies the
normal by that. This matrix is called the 'normal matrix'. You use
this to bring your normals from world space to eye space.
The reason why the original matrix is 4x4 and not 3x3 is because 4x4
matrices let you do affine transformations and contain useful
information for perspective rendering. But to take a normal from world
space to eye space, you just need the 3x3 model view matrix.
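As a concrete illustration of that last paragraph, here is one common formulation in C++ with GLM (a sketch; the inverse transpose matters once non-uniform scaling is involved):

#include <glm/glm.hpp>

// Sketch: take a normal from model space to eye space; the inverse
// transpose keeps normals perpendicular under non-uniform scaling.
glm::vec3 normalToEyeSpace(const glm::mat4& modelView, const glm::vec3& n)
{
    glm::mat3 upper(modelView); // upper-left 3x3; drops the translation
    glm::mat3 normalMatrix = glm::transpose(glm::inverse(upper));
    return glm::normalize(normalMatrix * n);
}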
I am developing a small tool for 3D visualization of molecules.
For my project I chose to do something in the way of what Brad Larson did with his Apple software "Molecules". A link where you can find a small presentation of the technique used: Brad Larson software presentation.
To do my job I must compute sphere impostors and cylinder impostors.
For the moment I have succeeded in doing the "Sphere Impostor" with the help of another tutorial, Lies and Impostors.
To summarize the computation of the sphere impostor: first we send a "sphere position" and a "sphere radius" to the vertex shader, which creates, in camera space, a square that always faces the camera; then we send our square to the fragment shader, where we use simple ray tracing to find which fragments of the square are included in the sphere; and finally we compute the normal and the position of each fragment to compute lighting. (We also write gl_FragDepth to give a correct depth to our impostor sphere!)
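To make that summary concrete, here is the core ray-sphere step written as plain C++ math (a hedged sketch; in the real pipeline this runs in the fragment shader, and the names are assumptions):

#include <cmath>
#include <glm/glm.hpp>

// Ray-sphere test for one fragment: the eye is at the camera-space origin
// and 'rayDir' is the normalized direction through the fragment.
bool raySphere(const glm::vec3& rayDir, const glm::vec3& center, float radius,
               glm::vec3& pos, glm::vec3& normal)
{
    float b = glm::dot(rayDir, center);
    float c = glm::dot(center, center) - radius * radius;
    float det = b * b - c;        // discriminant of |t*rayDir - center|^2 = r^2
    if (det < 0.0f) return false; // this fragment misses the sphere
    float t = b - std::sqrt(det); // nearer of the two hits
    pos = rayDir * t;             // camera-space position (also yields gl_FragDepth)
    normal = glm::normalize(pos - center); // normal for lighting
    return true;
}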
But now I am blocked on the computation of the cylinder impostor. I have tried to draw a parallel between the sphere impostor and the cylinder impostor, but I haven't found anything. The sphere was easy because it looks the same no matter how we view it - we always see "a circle" - and the sphere is perfectly defined mathematically, so we can easily find the position and the normal to compute lighting and create our impostor.
For the cylinder it's not the same thing, and I have failed to find a hint for modelling a shape which can be used as a "cylinder impostor", because the cylinder shows many different forms depending on the angle we see it from!
So my request is for a solution or an indication for my problem of the "cylinder impostor".
In addition to pygabriel's answer, I want to share a standalone implementation using the mentioned shader code from Blaine Bell (PyMOL, Schrödinger, Inc.).
The approach explained by pygabriel can also be improved. The bounding box can be aligned in such a way that it always faces the viewer, so at most two faces are visible. Hence, only 6 vertices (i.e. two faces made up of 4 triangles) are needed.
See the picture here; the box (its direction vector) always faces the viewer:
Image: Aligned bounding box
For source code, download: cylinder impostor source code
The code does not cover round caps or orthographic projections. It uses a geometry shader for vertex generation. You can use the shader code under the PyMOL license agreement.
I know this question is more than one-year old, but I'd still like to give my 2 cents.
I was able to produce cylinder impostors with another technique; I took inspiration from PyMOL's code. Here's the basic strategy:
1) You want to draw a bounding box (a cuboid) for the cylinder. To do that you need 6 faces, which translates into 12 triangles and thus 36 triangle vertices. Assuming that you don't have access to geometry shaders, you pass to the vertex shader the starting point of the cylinder 36 times, the direction of the cylinder 36 times, and for each of those vertices you pass the corresponding point of the bounding box. For example, a vertex associated with point (0, 0, 0) means that it will be transformed into the lower-left-back corner of the bounding box, (1, 1, 1) means the diagonally opposite point, etc.
2) In the vertex shader, you can construct the points of the cylinder by displacing each vertex (you passed 36 equal vertices) according to the corresponding point you passed in.
At the end of this step you should have a bounding box for the cylinder.
3) Here you have to reconstruct the points on the visible surface of the bounding box. From the point you obtain, you have to perform a ray-cylinder intersection.
4) From the intersection point you can reconstruct the depth and the normal. You also have to discard intersection points that are found outside of the bounding box (this can happen when you view the cylinder along its axis; the intersection point can go infinitely far).
By the way, it's a very hard task; if somebody is interested, here's the source code:
https://github.com/chemlab/chemlab/blob/master/chemlab/graphics/renderers/shaders/cylinderimp.frag
https://github.com/chemlab/chemlab/blob/master/chemlab/graphics/renderers/shaders/cylinderimp.vert
A cylinder impostor can actually be done just the same way as a sphere, like Nicol Bolas did it in his tutorial. You can make a square facing the camera and colour it so that it will look like a cylinder, just the same way Nicol did it for spheres. And it's not that hard.
The way it is done is ray tracing, of course. Notice that a cylinder facing upwards in camera space is fairly easy to implement. For example, the intersection with the side can be projected onto the xz plane; it's a 2D problem of a line intersecting with a circle. Getting the top and bottom isn't harder either: the z coordinate of the intersection is given, so you actually know the intersection point of the ray and the circle's plane, and all you have to do is check whether it is inside the circle. And basically that's it: you get two points and return the closer one (the normals are pretty trivial too).
And when it comes to an arbitrary axis, it turns out to be almost the same problem. When you solve the equations for the fixed-axis cylinder, you are solving them for a parameter that describes how far you have to go from a given point in a given direction to reach the cylinder. From the "definition" of it, you should notice that this parameter doesn't change if you rotate the world. So you can rotate the arbitrary axis to become the y axis, solve the problem in a space where the equations are easier, get the parameter for the line equation in that space, and return the result in camera space.
You can download the shader files from here. Just an image of it in action:
The code where the magic happens (it's only this long because it's full of comments, but the code itself is at most 50 lines):
void CylinderImpostor(out vec3 cameraPos, out vec3 cameraNormal)
{
// First get the camera space direction of the ray.
vec3 cameraPlanePos = vec3(mapping * max(cylRadius, cylHeight), 0.0) + cameraCylCenter;
vec3 cameraRayDirection = normalize(cameraPlanePos);
// Now transform data into cylinder space, where the cylinder's symmetry axis is up.
vec3 cylCenter = cameraToCylinder * cameraCylCenter;
vec3 rayDirection = normalize(cameraToCylinder * cameraPlanePos);
// We will have to return the one from the intersection of the ray and circles,
// and the ray and the side, that is closer to the camera. For that, we need to
// store the results of the computations.
vec3 circlePos, sidePos;
vec3 circleNormal, sideNormal;
bool circleIntersection = false, sideIntersection = false;
// First check if the ray intersects with the top or bottom circle
// Note that if the ray is parallel with the circles then we
// definitely won't get any intersection (but we would divide by 0).
if(rayDirection.y != 0.0){
// What we know here is that the distance of the point's y coord
// and the cylCenter is cylHeight, and the distance from the
// y axis is less than cylRadius. So we have to find a point
// which is on the line, and match these conditions.
// The equation for the y axis distances:
// rayDirection.y * t - cylCenter.y = +- cylHeight
// So t = (+-cylHeight + cylCenter.y) / rayDirection.y
// About selecting the one we need:
// - Both have to be positive, or no intersection is visible.
// - If both are positive, we need the smaller one.
float topT = (+cylHeight + cylCenter.y) / rayDirection.y;
float bottomT = (-cylHeight + cylCenter.y) / rayDirection.y;
if(topT > 0.0 && bottomT > 0.0){
float t = min(topT,bottomT);
// Now check for the x and z axis:
// If the intersection is inside the circle (so the distance on the xz plane of the point,
// and the center of circle is less than the radius), then it's a point of the cylinder.
// But we can't yet return because we might get a point from the cylinder side
// intersection that is closer to the camera.
vec3 intersection = rayDirection * t;
if( length(intersection.xz - cylCenter.xz) <= cylRadius ) {
// The value we will (optionally) return is in camera space.
circlePos = cameraRayDirection * t;
// This one is ugly, but I didn't have a better idea.
circleNormal = length(circlePos - cameraCylCenter) <
length((circlePos - cameraCylCenter) + cylAxis) ? cylAxis : -cylAxis;
circleIntersection = true;
}
}
}
// Find the intersection of the ray and the cylinder's side
// The distance of the point and the y axis is sqrt(x^2 + z^2), which has to be equal to cylRadius
// (rayDirection.x*t - cylCenter.x)^2 + (rayDirection.z*t - cylCenter.z)^2 = cylRadius^2
// So its a quadratic for t (A*t^2 + B*t + C = 0) where:
// A = rayDirection.x^2 + rayDirection.z^2 - if this is 0, we won't get any intersection
// B = -2*rayDirection.x*cylCenter.x - 2*rayDirection.z*cylCenter.z
// C = cylCenter.x^2 + cylCenter.z^2 - cylRadius^2
// It will give two results, we need the smaller one
float A = rayDirection.x*rayDirection.x + rayDirection.z*rayDirection.z;
if(A != 0.0) {
float B = -2*(rayDirection.x*cylCenter.x + rayDirection.z*cylCenter.z);
float C = cylCenter.x*cylCenter.x + cylCenter.z*cylCenter.z - cylRadius*cylRadius;
float det = (B * B) - (4 * A * C);
if(det >= 0.0){
float sqrtDet = sqrt(det);
float posT = (-B + sqrtDet)/(2*A);
float negT = (-B - sqrtDet)/(2*A);
float IntersectionT = min(posT, negT);
vec3 Intersect = rayDirection * IntersectionT;
if(abs(Intersect.y - cylCenter.y) < cylHeight){
// Again it's in camera space
sidePos = cameraRayDirection * IntersectionT;
sideNormal = normalize(sidePos - cameraCylCenter);
sideIntersection = true;
}
}
}
// Now get the results together:
if(sideIntersection && circleIntersection){
bool circle = length(circlePos) < length(sidePos);
cameraPos = circle ? circlePos : sidePos;
cameraNormal = circle ? circleNormal : sideNormal;
} else if(sideIntersection){
cameraPos = sidePos;
cameraNormal = sideNormal;
} else if(circleIntersection){
cameraPos = circlePos;
cameraNormal = circleNormal;
} else
discard;
}
From what I can understand of the paper, I would interpret it as follows.
An impostor cylinder, viewed from any angle, has the following characteristics.
From the top, it is a circle. So, assuming you'll never need to view a cylinder top-down, you don't need to render anything.
From the side, it is a rectangle. The pixel shader only needs to compute illumination as normal.
From any other angle, it is a rectangle (the same one computed in step 2) that curves. Its curvature can be modeled inside the pixel shader as the curvature of the top ellipse. This curvature can be considered as simply an offset of each "column" in texture space, depending on viewing angle. The minor axis of this ellipse can be computed by multiplying the major axis (thickness of the cylinder) with a factor of the current viewing angle (angle / 90), assuming that 0 means you're viewing the cylinder side-on.
Viewing angles. I have only taken the 0-90 case into account in the math below, but the other cases are trivially different.
Given the viewing angle (phi) and the diameter of the cylinder (a), here is how the shader needs to warp the Y axis in texture space: Y = b' * sin(phi), where b' = a * (phi / 90). The cases phi = 0 and phi = 90 should never be rendered.
Of course, I haven't taken the length of this cylinder into account - which would depend on your particular projection and is not an image-space problem.