I'm trying to code up a fast and minimalist sphere impostor; the sphere doesn't need to be shaded or do anything except write a correct gl_FragDepth.
I have some code that's very close to working... cutting out everything but the key code:
vertex shader:
out_XYZ = vec4(sphereCenter.xyz, 1.) * sceneComboMatrix;
switch (gl_VertexID)
{
    case 0:
        out_XYZ.xy += vec2(radius, -radius);
        out_UV = vec2(1., -1.);
        break;
    case 1:
        out_XYZ.xy += vec2(-radius, -radius);
        out_UV = vec2(-1., -1.);
        break;
    case 2:
        out_XYZ.xy += vec2(radius, radius);
        out_UV = vec2(1., 1.);
        break;
    case 3:
        out_XYZ.xy += vec2(-radius, radius);
        out_UV = vec2(-1., 1.);
        break;
}
fragment shader:
float aLen2 = dot(in_UV, in_UV);
float aL = inversesqrt(aLen2);           // 1 / length(in_UV)
if (aL < 1.) discard;                    // length > 1: outside the sphere's silhouette
out_Color = vec4(1., 0., 0., 1.);
float magicNumber = .0031f;              // hand-tuned; should really be the sphere radius in depth units
gl_FragDepth = gl_FragCoord.z - (sin(1. - (1. / aL)) * (3.14 / 2.) * magicNumber);
This produces the following:
Not TOO bad... it works as a proof of concept. The issue is that my variable magicNumber above, which I simply fiddled around with, should be replaced with the radius of the sphere in z-depth.
How can I compute the size of the sphere's radius in terms of z-depth?
(A note: I don't need this to be hyperaccurate-- it's being used for some particles that can clip slightly wrong... but it does need to be more accurate than my screenshot above)
(EDIT----------)
Using resources from Nicol Bolas, I wrote some code to attempt to get the difference in z-depth between the sphere's point closest to the camera and the sphere's center. However, the result pushes the sphere VERY FAR toward the camera. Why doesn't this code give me the length of the sphere's radius in z-space (i.e. the difference in Z between the sphere's center and another point closer to the camera)?
vec4 aSPos=vec4(spherePos.xyz,1.)*sceneComboMatrix;
float aD1=aSPos.z/aSPos.w;
aSPos=vec4(spherePos.xyz+(normalTowardCamera.xyz*sphereRadius),1.)*sceneComboMatrix;
float aD2=aSPos.z/aSPos.w;
radiusLengthInZ=((gl_DepthRange.diff*(aD1-aD2))+gl_DepthRange.near+gl_DepthRange.far)/2.0;
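For reference, the standard mapping from NDC z to window-space depth is depth = (gl_DepthRange.diff * z_ndc + gl_DepthRange.near + gl_DepthRange.far) / 2.0. When you take the difference of two depths, the constant near/far terms cancel, so a plausible form of the snippet above (just a sketch, not a verified fix) would be:
vec4 aSPos = vec4(spherePos.xyz, 1.) * sceneComboMatrix;
float aD1 = aSPos.z / aSPos.w;                        // NDC z of the sphere center
aSPos = vec4(spherePos.xyz + (normalTowardCamera.xyz * sphereRadius), 1.) * sceneComboMatrix;
float aD2 = aSPos.z / aSPos.w;                        // NDC z of the point nearest the camera
// Only the scale term survives in a difference; the near/far offsets cancel out.
radiusLengthInZ = gl_DepthRange.diff * (aD1 - aD2) / 2.0;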
The situation is as follows. I am trying to implement a linear voxel search in a GLSL shader for efficient voxel ray tracing. In other words, I have a 3D texture and I am ray tracing on it, but I am trying to ray trace such that I only ever check voxels intersected by the ray once.
To this effect I have written a program with the following results:
Not efficient but correct:
The above image was obtained by advancing the ray by a small epsilon multiple times and sampling from the texture on each iteration. This produces the correct results, but it's very inefficient.
That would look like:
loop {
    start += direction * 0.01;
    sample(start);
}
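Concretely, that naive march might look like the sketch below (a guess at the surrounding code: the step count bound and the non-empty test are assumptions, and grabVoxel is the sampling function shown further down):
vec3 p = start;
vec3 stepVec = normalize(direction) * 0.01;   // tiny fixed epsilon step
for (int i = 0; i < 10000; ++i)               // bounded so the shader always terminates
{
    p += stepVec;
    vec4 v = grabVoxel(p);                    // sampling function shown later in the question
    if (v.a > 0.0) break;                     // stop at the first non-empty voxel (assumed convention)
}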
To make it efficient I decided to instead implement the following lookup function:
float bound(float val)
{
    // Pick the voxel boundary ahead of the ray along this axis:
    // the far side of the cell if moving in +, the near side (0) otherwise.
    if (val >= 0.0)
        return voxel_size;
    return 0.0;
}

float planeIntersection(vec3 ray, vec3 origin, vec3 n, vec3 q)
{
    // Distance t along 'ray' from 'origin' to the plane through q with normal n.
    n = normalize(n);
    if (dot(ray, n) != 0.0)
        return (dot(q, n) - dot(n, origin)) / dot(ray, n);
    return -1.0;
}

vec3 get_voxel(vec3 start, vec3 direction)
{
    direction = normalize(direction);

    // Snap the start position to the corner of its voxel.
    // Note: ivec3() truncates toward zero, which is the truncation discussed below.
    vec3 discretized_pos = ivec3(start * 1.f / voxel_size) * voxel_size;

    vec3 n_x = vec3(sign(direction.x), 0, 0);
    vec3 n_y = vec3(0, sign(direction.y), 0);
    vec3 n_z = vec3(0, 0, sign(direction.z));

    float bound_x, bound_y, bound_z;
    bound_x = bound(direction.x);
    bound_y = bound(direction.y);
    bound_z = bound(direction.z);

    // Distance to the next x, y and z voxel boundary plane.
    float t_x, t_y, t_z;
    t_x = planeIntersection(direction, start, n_x, discretized_pos + vec3(bound_x, 0, 0));
    t_y = planeIntersection(direction, start, n_y, discretized_pos + vec3(0, bound_y, 0));
    t_z = planeIntersection(direction, start, n_z, discretized_pos + vec3(0, 0, bound_z));

    // Invalid (behind the ray) intersections must never win the min() below.
    if (t_x < 0.0)
        t_x = 1.f / 0.f;
    if (t_y < 0.0)
        t_y = 1.f / 0.f;
    if (t_z < 0.0)
        t_z = 1.f / 0.f;

    float t = min(t_x, t_y);
    t = min(t, t_z);

    return start + direction * t;
}
Which produces the following result:
Notice the triangle aliasing on the left side of some surfaces.
It seems this aliasing occurs because some coordinates are not being set to their correct voxel.
For example modifying the truncation part as follows:
vec3 discretized_pos = ivec3((start*1.f/(voxel_size)) - vec3(0.1)) * voxel_size;
Creates:
So it has fixed the issue for some surfaces and caused it for others.
I wanted to know if there is a way in which I can correct this truncation so that this error does not happen.
Update:
I have narrowed down the issue a bit. Observe the following image:
The numbers represent the order in which I expect the boxes to be visited.
As you can see, for some of the points the sampling of the fifth box seems to be omitted.
The following is the sampling code:
vec4 grabVoxel(vec3 pos)
{
    // Convert from world units to voxel indices, then to [0,1] texture coordinates.
    pos *= 1.f / base_voxel_size;
    pos.x /= (width - 1);
    pos.y /= (depth - 1);
    pos.z /= (height - 1);

    vec4 voxelVal = texture(voxel_map, pos);
    return voxelVal;
}
Yep, that was the +/- rounding I was talking about in my comments somewhere in your previous questions related to this. What you need to do is make the step equal to the grid size along one of the axes (and test 3 times: once for |dx|=1, then for |dy|=1 and lastly for |dz|=1).
Also, you should create a debug draw of a 2D slice through your map to actually see where the hits for a single specific test ray occurred. Then, based on the direction of the ray along each axis, you set the rounding rules separately. Without this you are just blindly patching one case and corrupting the other two ...
Now actually look at this (I linked it for you before, but you clearly did not look at it):
Wolf and Doom ray casting techniques
especially pay attention to:
On the right it shows you how to compute the ray step (your epsilon). You simply scale the ray direction so that one of the coordinates is +/-1. For simplicity start with a 2D slice through your map. The red dot is the ray start position. Green is the ray step vector for vertical grid line hits and red is for horizontal grid line hits (z will be analogous).
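A minimal sketch of that scaling (GLSL, reusing the direction and voxel_size names from the question; real code should guard against zero components before dividing):
vec3 d = normalize(direction);
// Scale the direction so each step advances exactly one voxel along its own axis.
vec3 step_x = d * (voxel_size / abs(d.x));    // crosses one x grid plane per step
vec3 step_y = d * (voxel_size / abs(d.y));    // crosses one y grid plane per step
vec3 step_z = d * (voxel_size / abs(d.z));    // crosses one z grid plane per step
// Walk each family of planes separately from its first crossing and keep the nearest valid hit.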
Now you should add a 2D overview of your map through some height slice that is visible (like in the image on the left), and add a dot or marker for each intersection detected, but distinguish between x, y and z hits by color. Do this for a single ray only (I use the center-of-view ray). First handle the view when you look in the X+ direction, then X-, and when done move on to Y and Z ...
In my GLSL volumetric 3D back-raytracer (which I also linked for you before), look at these lines:
if (dir.x<0.0) { p+=dir*(((floor(p.x*n)-_zero)*_n)-ray_pos.x)/dir.x; nnor=vec3(+1.0,0.0,0.0); }
if (dir.x>0.0) { p+=dir*((( ceil(p.x*n)+_zero)*_n)-ray_pos.x)/dir.x; nnor=vec3(-1.0,0.0,0.0); }
if (dir.y<0.0) { p+=dir*(((floor(p.y*n)-_zero)*_n)-ray_pos.y)/dir.y; nnor=vec3(0.0,+1.0,0.0); }
if (dir.y>0.0) { p+=dir*((( ceil(p.y*n)+_zero)*_n)-ray_pos.y)/dir.y; nnor=vec3(0.0,-1.0,0.0); }
if (dir.z<0.0) { p+=dir*(((floor(p.z*n)-_zero)*_n)-ray_pos.z)/dir.z; nnor=vec3(0.0,0.0,+1.0); }
if (dir.z>0.0) { p+=dir*((( ceil(p.z*n)+_zero)*_n)-ray_pos.z)/dir.z; nnor=vec3(0.0,0.0,-1.0); }
This is how I did it. As you can see, I use a different rounding/flooring rule for each of the 6 cases. This way you handle each case without corrupting the others. The rounding rule depends on a lot of things, such as how your coordinate system is offset relative to (0,0,0) and more, so it might be different in your code, but the if conditions should be the same. Also, as you can see, I am handling this by offsetting the ray start position a bit instead of having these conditions inside the ray traversal loop castray.
That macro casts the ray and looks for intersections with the grid, and on top of that it actually z-sorts the intersections and uses the first valid one (that is what l, ll are for; no other conditions or combination of ray results are needed). So my way of dealing with this is to cast a ray for each type of intersection (x, y, z), starting at the first intersection with the grid for the same axis. You need to take the starting offset into account so that l, ll represent the intersection distance to the real start of the ray, not to the offset one ...
It is also a good idea to do this on the CPU side first and only when it is 100% working port it to GLSL, as things like this are very hard to debug in GLSL.
I am looking for a way to do a wormhole effect like this:
- https://www.youtube.com/watch?v=WeOBXmLeJWo&feature=youtu.be&t=43s
I have already found nice tunnels in the examples, but here it is a little bit more involved. The space appears to be warped somehow and the movement is at high velocity, so it is not just entering a simple tunnel. Any idea how to do the space-warping part of it?
I decided to add more info because this was too broad:
I have a galaxy and each star has a 3D coordinate, size, etc. in this galaxy. I can visit these stars with a spaceship. There are very distant stars and it would take a lot of time to get to them; that's why I need warp (faster than light) speed. This does not necessarily require a wormhole according to physics, but this app does not have to be overly realistic. I don't want to solve this with pure OpenGL, so of course we can use shaders. I'd like to warp the space in the middle of the screen when accelerating to warp speeds. After that a tunnel effect can come, because I think it would consume a lot of resources to update every star at very high speeds, so I'd like to update only the close stars. This can't be a prerendered animation, because the destination is not always certain, so it sometimes serves exploration purposes and sometimes traveling purposes. I don't think warping only the skybox is enough, but I am not sure about this.
There are 2 things going on there:
space curvature around the hole
You need to construct an equation that describes the curvature of space around the hole, parameterized by the hole parameters (mass, position, orientation) and time, so you can animate it. Then from this curvature you can compute the relative displacement of each pixel/voxel around it. I would start with cylindrical cones whose radius is modulated by the sine of the distance from the hole +/- some animation parameters (this needs experimentation).
Something like this:
and for example start with (in wormhole local coordinates LCS):
r = R * sin(z*0.5*M_PI/wormhole_max_depth)
Then modulate it by additional terms. The wormhole_max_depth and R should be functions of time, even just linear or with some periodic term, so it is pulsating a bit.
The displacement can be done by simply computing the distance of the point in question to the cone surface and pushing it towards the surface, the more strongly the closer the point is to the cone (voxels inside the cone are considered below the surface, so apply the maximum displacement strength there).
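A rough sketch of that displacement in GLSL (all names here, falloff, maxDisplacement, R, wormhole_max_depth, are assumptions to be tuned, in the spirit of the "needs experimentation" note above; p is a point in the wormhole's local coordinates with z along the hole axis):
float r = R * sin(p.z * 0.5 * 3.14159265 / wormhole_max_depth);        // cone radius at this depth
float d = length(p.xy) - r;                                            // signed distance to the cone wall
// Full-strength pull inside the cone, fading out with distance outside it.
float w = (d <= 0.0) ? 1.0 : clamp(1.0 - d / falloff, 0.0, 1.0);
vec3 displaced = p - vec3(normalize(p.xy) * w * maxDisplacement, 0.0); // pull the point toward the wall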
particle/light/matter bursting out of the hole
I would go for this only when #1 is already done. It should be a simple particle effect with some nice circular, alpha-blended texture animated on the surface of the cone from #1. I see it as a few for loops with pseudo-random displacement in position and speed ...
Techniques
This topic depends on how you want to do this. I see these possibilities:
Distort geometry during rendering (3D vector)
So you can apply the cone displacement directly on the rendered stuff. This would be best applicable in GLSL, but the rendered geometry must have small enough primitives to make this work at the vertex level ...
Distort skybox/stars only (3D vector or 2D raster but objects stay unaffected)
So you apply the displacement on texture coordinates of skybox or directly on the star positions.
Distort whole rendered scene in second pass (2D raster)
This needs to use 2-pass rendering, and in the second pass just warp the texture coordinates near the hole.
As you have different local stars in each sector, I would use a star background generated from a star catalogue (a list of all your stars) and apply the distortion to them directly in 3D vector space (so no skybox, i.e. option #2), and also because my engines already use such a representation and rendering for the same reasons.
[Edit1] cone geometry
I haven't had much time for this recently until today, so I did not make much progress. I decided to start with the cone geometry, so here it is:
class wormhole
{
public:
    reper rep;          // coordinate system transform matrix
    double R0,R1,H,t;   // radiuses, depth, animation time

    wormhole(){ R0=10.0; R1=100.0; H=50.0; t=0.0; };
    wormhole(wormhole& a){ *this=a; };
    ~wormhole(){};
    wormhole* operator = (const wormhole *a) { *this=*a; return this; };
    /*wormhole* operator = (const wormhole &a) { ...copy... return this; };*/

    void ah2xyz(double *xyz,double a,double h) // compute cone position from parameters a=<0,2pi>, h=<0,1>
    {
        double r,tt;
        tt=t; if (t>0.5) tt=0.5; r=2.0*R0*tt;       // inner radius R0
        tt=t; if (t>1.0) tt=1.0; r+=(R1-r)*h*h*tt;  // outer radius R1
        xyz[0]=r*cos(a);
        xyz[1]=r*sin(a);
        xyz[2]=H*h*tt;
        rep.l2g(xyz,xyz);
    }

    void draw_cone()
    {
        int e;
        double a,h,da=pi2*0.04,p[3];
        glColor3f(0.2,0.2,0.2);
        // circles at constant h
        for (h=0.0;h<=1.0;h+=0.1){ glBegin(GL_LINE_STRIP); for (e=1,a=0.0;e;a+=da) { if (a>=pi2) { e=0; a=0.0; } ah2xyz(p,a,h); glVertex3dv(p); } glEnd(); }
        // lines along the cone at constant a
        for (e=1,a=0.0;e;a+=da){ glBegin(GL_LINE_STRIP); for (h=0.0;h<=1.0;h+=0.1) { if (a>=pi2) { e=0; a=0.0; } ah2xyz(p,a,h); glVertex3dv(p); } glEnd(); }
    }
} hole;
Where rep is my class for a homogeneous 4x4 transform matrix (remembering both the direct and inverse matrices at the same time); the function l2g just transforms from local coordinates to global. The cone parameters are:
R0 - inner cone radius when fully grown
R1 - outer cone radius when fully grown
H - the height/depth of the cone when fully grown
t - animation parameter; values in <0.0,1.0> are the growth phase and values above 1.0 are reserved for the fully grown wormhole animation
Here is how it looks:
What I would do is simply calculate a vector from the texture coordinate of the screen center to the texture coordinate of the pixel you're shading.
Then modify that vector in any way you want (time based for example) and apply it to the texture coordinate of the pixel you're shading and then use the resulting coordinate to sample your texture.
In pseudocode this would be something like this:
vec2 vector_to_screen_center = vec2(0.5) - texture_coordinate;
texture_coordinate += vector_to_screen_center * sin(time) * 0.1; // Time based modulation of the vector.
gl_FragColor = texture2D(screen_texture, texture_coordinate);
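For completeness, a self-contained fragment shader built from that pseudocode might look like this (the uniform names screen_texture and time are assumptions):
#version 330 core
uniform sampler2D screen_texture;   // the scene rendered in the first pass
uniform float time;
in vec2 texture_coordinate;         // [0,1] across the full-screen quad
out vec4 frag_color;
void main()
{
    // Vector from this pixel toward the screen center, in texture space.
    vec2 to_center = vec2(0.5) - texture_coordinate;
    // Time-based modulation of the vector; replace with any warp function you like.
    vec2 uv = texture_coordinate + to_center * sin(time) * 0.1;
    frag_color = texture(screen_texture, uv);
}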
Your question does not have a GLSL tag. If you plan to do this without shaders, this is going to be hard and/or inefficient.
What I am trying to do is implement soft shadows in my simple ray tracer, developed in C++. The idea behind this, if I understood correctly, is to shoot multiple rays towards the light, instead of a single ray towards the center of the light, and average the results. The rays are therefore shot towards different positions on the light. So far I am using random points, and I don't know whether that is correct or whether I should use points regularly distributed on the light surface. Assuming that I am doing it right, I choose a random point on the light, which in my framework is implemented as a sphere. This is given by:
Vec3<T> randomPoint() const
{
    T x;
    T y;
    T z;

    // Random engine; made static so it is seeded once instead of on every call.
    static std::random_device rd; // used for the new <random> library
    static std::mt19937 gen(rd());
    static std::uniform_real_distribution<> dis(-1, 1);

    // Random vector in the unit sphere via simple rejection sampling.
    do
    {
        x = dis(gen);
        y = dis(gen);
        z = dis(gen);
    } while (x * x + y * y + z * z > 1);

    return center + Vec3<T>(x, y, z) * radius;
}
After this, I don't know exactly how I should proceed, since my rendering equation (in my simple ray tracer) is defined as follows:
Vec3<float> surfaceColor = 0;
for (int i = 0; i < lightsInTheScene.size(); i++) {
    surfaceColor += obj->surfaceColor * transmission *
                    std::max(float(0), nHit.dot(lightDirection)) * g_lights[i]->emissionColor;
}
return surfaceColor + obj->emissionColor;
where transmission is a simple float which is set to 0 when the ray that goes from my hitPoint to the lightCenter finds an object in between.
So, what I tried to do was:
creating multiple rays towards random points on the light
counting how many of them hit an object on their path and memorizing this number
For simplicity, let's imagine that I shoot 3 shadow rays from my point towards random points on the light. Only 2 of the 3 rays reach the light. Therefore the final color of my pixel will be color * shadowFactor, where shadowFactor = 2/3. In my equation I then remove the transmission factor (which is now wrong) and use the shadowFactor instead. The problem is that in my equation I have:
std::max(float(0), nHit.dot(lightDirection))
which I don't know how to change, since I no longer have a single lightDirection that points towards the center of the light. Can you please help me understand what I should do and what's wrong so far? Thanks in advance!
You should evaluate the entire BRDF for the picked light samples. Then, you will also have the light direction (vector from object position to picked light sample). And you can average these results. Note that most area lights have a non-isotropic light emission characteristic (i.e. the amount of light emitted from a point varies by the outgoing direction).
Averaging the visibility does not produce correct results (although they are usually visually plausible).
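A sketch of what that looks like in practice (written in GLSL-style syntax for brevity, although the question's tracer is C++; randomPointOnLight and occluded are hypothetical helpers, and hitPoint, nHit, surfaceColor and emissionColor are assumed to come from the surrounding shading code): evaluate the shading term once per light sample, each with its own light direction, and average those full results rather than averaging the visibility alone.
const int SAMPLES = 16;
vec3 direct = vec3(0.0);
for (int i = 0; i < SAMPLES; ++i)
{
    vec3 lightPoint = randomPointOnLight();           // one sample on the area light (hypothetical helper)
    vec3 toLight    = lightPoint - hitPoint;
    float dist      = length(toLight);
    vec3 lightDir   = toLight / dist;                 // per-sample light direction
    // Visibility is tested for THIS sample's direction, not for the light center.
    if (!occluded(hitPoint + nHit * 1e-4, lightDir, dist))
        direct += surfaceColor * emissionColor * max(0.0, dot(nHit, lightDir));
}
direct /= float(SAMPLES);                             // average the shaded samples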
I am developing a small tool for 3D visualization of molecules.
For my project I chose to do something along the lines of what Brad Larson did with his Apple software "Molecules". A link where you can find a small presentation of the technique used: Brad Larsson software presentation
To do this I must compute sphere impostors and cylinder impostors.
For the moment I have succeeded in doing the "Sphere Impostor" with the help of another tutorial, Lies and Impostors.
To summarize the computing of the sphere impostor: first we send a "sphere position" and the "sphere radius" to the vertex shader, which creates, in camera space, a square that always faces the camera. After that we send our square to the fragment shader, where we use simple ray tracing to find which fragments of the square are included in the sphere, and finally we compute the normal and the position of each fragment to compute lighting. (Another thing: we also write gl_FragDepth to give a good depth to our impostor sphere!)
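As a rough illustration of that fragment-shader stage (a sketch in the spirit of the Lies and Impostors tutorial, not its exact code; it assumes in_UV spans [-1,1] over the quad and that cameraSpherePos, sphereRadius and projectionMatrix are available):
float r2 = dot(in_UV, in_UV);
if (r2 > 1.0) discard;                                     // outside the sphere's silhouette
vec3 normal    = vec3(in_UV, sqrt(1.0 - r2));              // camera-space normal on the sphere
vec3 cameraPos = cameraSpherePos + normal * sphereRadius;  // surface point, used for lighting
// Write a depth value that matches the curved surface instead of the flat quad.
vec4 clipPos = projectionMatrix * vec4(cameraPos, 1.0);
float ndcZ   = clipPos.z / clipPos.w;
gl_FragDepth = (gl_DepthRange.diff * ndcZ + gl_DepthRange.near + gl_DepthRange.far) / 2.0;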
But now I am blocked on the computing of the cylinder impostor. I tried to draw a parallel between the sphere impostor and the cylinder impostor, but I didn't find anything. My problem is that the sphere was easy, because the sphere is always the same no matter how we see it; we will always see the same thing: a circle. Another thing is that the sphere is perfectly defined mathematically, so we can easily find the position and the normal to compute lighting and create our impostor.
For the cylinder it's not the same thing, and I failed to find a hint for modeling a shape which can be used as a "cylinder impostor", because the cylinder shows many different shapes depending on the angle we see it from!
So my request is to ask you for a solution or an indication for my "cylinder impostor" problem.
In addition to pygabriel's answer I want to share a standalone implementation using the mentioned shader code from Blaine Bell (PyMOL, Schrödinger, Inc.).
The approach explained by pygabriel can also be improved. The bounding box can be aligned in such a way that it always faces the viewer. Only two faces are visible at most. Hence, only 6 vertices (i.e. two faces made up of 4 triangles) are needed.
See the picture here; the box (its direction vector) always faces the viewer:
Image: Aligned bounding box
For source code, download: cylinder impostor source code
The code does not cover round caps and orthographic projections. It uses a geometry shader for vertex generation. You can use the shader code under the PyMOL license agreement.
I know this question is more than a year old, but I'd still like to give my 2 cents.
I was able to produce cylinder impostors with another technique, I took inspiration from pymol's code. Here's the basic strategy:
1) You want to draw a bounding box (a cuboid) for the cylinder. To do that you need 6 faces, which translates into 12 triangles and therefore 36 triangle vertices. Assuming that you don't have access to geometry shaders, you pass to the vertex shader 36 times the starting point of the cylinder, 36 times the direction of the cylinder, and for each of those vertices you pass the corresponding corner of the bounding box. For example a vertex associated with point (0, 0, 0) means that it will be transformed into the lower-left-back corner of the bounding box, (1,1,1) means the diagonally opposite point, etc.
2) In the vertex shader, you can construct the points of the cylinder, by displacing each vertex (you passed 36 equal vertices) according to the corresponding points you passed in.
At the end of this step you should have a bounding box for the cylinder.
3) Here you have to reconstruct the points on the visible surface of the bounding box. From the point you obtain, you have to perform a ray-cylinder intersection.
4) From the intersection point you can reconstruct the depth and the normal. You also have to discard intersection points that are found outside of the bounding box (this can happen when you view the cylinder along its axis, the intersection point will go infinitely far).
By the way, it's quite a hard task; if somebody is interested, here's the source code:
https://github.com/chemlab/chemlab/blob/master/chemlab/graphics/renderers/shaders/cylinderimp.frag
https://github.com/chemlab/chemlab/blob/master/chemlab/graphics/renderers/shaders/cylinderimp.vert
A cylinder impostor can actually be done in just the same way as a sphere, like Nicol Bolas did it in his tutorial. You can make a square facing the camera and colour it so that it will look like a cylinder, just the same way Nicol did it for spheres. And it's not that hard.
The way it is done is ray tracing, of course. Notice that a cylinder facing upwards in camera space is fairly easy to implement. For example, intersection with the side can be projected onto the xz plane; it's a 2D problem of a line intersecting a circle. Getting the top and bottom isn't harder either: the y coordinate of the intersection is given, so you actually know the intersection point of the ray and the cap's plane, and all you have to do is check whether it's inside the circle. And basically that's it: you get two points and return the closer one (the normals are pretty trivial too).
And when it comes to an arbitrary axis, it turns out to be almost the same problem. When you solve the equations for the fixed-axis cylinder, you are solving them for a parameter that describes how far you have to go from a given point in a given direction to reach the cylinder. From the "definition" of it, you should notice that this parameter doesn't change if you rotate the world. So you can rotate the arbitrary axis to become the y axis, solve the problem in a space where the equations are easier, get the parameter for the line equation in that space, but return the result in camera space.
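For reference, one way to build such a "rotate the axis onto Y" change of basis (a sketch; the code below simply assumes a cameraToCylinder matrix, and the original shaders may construct it differently):
mat3 buildCameraToCylinder(vec3 cylAxis)       // cylAxis: unit-length cylinder axis in camera space
{
    vec3 y = normalize(cylAxis);
    // Any vector not parallel to the axis works as a helper for the cross products.
    vec3 helper = (abs(y.y) < 0.99) ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    vec3 x = normalize(cross(helper, y));
    vec3 z = cross(x, y);
    // mat3(x, y, z) maps cylinder-local coordinates to camera space (its columns are the basis);
    // its transpose (the inverse, since the basis is orthonormal) maps camera space to cylinder space.
    return transpose(mat3(x, y, z));
}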
You can download the shaderfiles from here. Just an image of it in action:
The code where the magic happens (It's only long 'cos it's full of comments, but the code itself is max 50 lines):
void CylinderImpostor(out vec3 cameraPos, out vec3 cameraNormal)
{
// First get the camera space direction of the ray.
vec3 cameraPlanePos = vec3(mapping * max(cylRadius, cylHeight), 0.0) + cameraCylCenter;
vec3 cameraRayDirection = normalize(cameraPlanePos);
// Now transform data into cylinder space, where the cylinder's symmetry axis is up.
vec3 cylCenter = cameraToCylinder * cameraCylCenter;
vec3 rayDirection = normalize(cameraToCylinder * cameraPlanePos);
// We will have to return the one from the intersection of the ray and circles,
// and the ray and the side, that is closer to the camera. For that, we need to
// store the results of the computations.
vec3 circlePos, sidePos;
vec3 circleNormal, sideNormal;
bool circleIntersection = false, sideIntersection = false;
// First check if the ray intersects with the top or bottom circle
// Note that if the ray is parallel with the circles then we
// definitely won't get any intersection (but we would divide with 0).
if(rayDirection.y != 0.0){
// What we know here is that the distance of the point's y coord
// and the cylCenter is cylHeight, and the distance from the
// y axis is less than cylRadius. So we have to find a point
// which is on the line, and match these conditions.
// The equation for the y axis distances:
// rayDirection.y * t - cylCenter.y = +- cylHeight
// So t = (+-cylHeight + cylCenter.y) / rayDirection.y
// About selecting the one we need:
// - Both have to be positive, or no intersection is visible.
// - If both are positive, we need the smaller one.
float topT = (+cylHeight + cylCenter.y) / rayDirection.y;
float bottomT = (-cylHeight + cylCenter.y) / rayDirection.y;
if(topT > 0.0 && bottomT > 0.0){
float t = min(topT,bottomT);
// Now check for the x and z axis:
// If the intersection is inside the circle (so the distance on the xz plane between the point
// and the center of the circle is less than the radius), then it's a point of the cylinder.
// But we can't return yet, because we might get a point from the cylinder side
// intersection that is closer to the camera.
vec3 intersection = rayDirection * t;
if( length(intersection.xz - cylCenter.xz) <= cylRadius ) {
// The value we will (optionally) return is in camera space.
circlePos = cameraRayDirection * t;
// This one is ugly, but I didn't have a better idea.
circleNormal = length(circlePos - cameraCylCenter) <
length((circlePos - cameraCylCenter) + cylAxis) ? cylAxis : -cylAxis;
circleIntersection = true;
}
}
}
// Find the intersection of the ray and the cylinder's side
// The distance of the point from the y axis is sqrt(x^2 + z^2), which has to be equal to cylRadius
// (rayDirection.x*t - cylCenter.x)^2 + (rayDirection.z*t - cylCenter.z)^2 = cylRadius^2
// So it's a quadratic for t (A*t^2 + B*t + C = 0) where:
// A = rayDirection.x^2 + rayDirection.z^2 - if this is 0, we won't get any intersection
// B = -2*rayDirection.x*cylCenter.x - 2*rayDirection.z*cylCenter.z
// C = cylCenter.x^2 + cylCenter.z^2 - cylRadius^2
// It will give two results, we need the smaller one
float A = rayDirection.x*rayDirection.x + rayDirection.z*rayDirection.z;
if(A != 0.0) {
float B = -2*(rayDirection.x*cylCenter.x + rayDirection.z*cylCenter.z);
float C = cylCenter.x*cylCenter.x + cylCenter.z*cylCenter.z - cylRadius*cylRadius;
float det = (B * B) - (4 * A * C);
if(det >= 0.0){
float sqrtDet = sqrt(det);
float posT = (-B + sqrtDet)/(2*A);
float negT = (-B - sqrtDet)/(2*A);
float IntersectionT = min(posT, negT);
vec3 Intersect = rayDirection * IntersectionT;
if(abs(Intersect.y - cylCenter.y) < cylHeight){
// Again it's in camera space
sidePos = cameraRayDirection * IntersectionT;
sideNormal = normalize(sidePos - cameraCylCenter);
sideIntersection = true;
}
}
}
// Now get the results together:
if(sideIntersection && circleIntersection){
bool circle = length(circlePos) < length(sidePos);
cameraPos = circle ? circlePos : sidePos;
cameraNormal = circle ? circleNormal : sideNormal;
} else if(sideIntersection){
cameraPos = sidePos;
cameraNormal = sideNormal;
} else if(circleIntersection){
cameraPos = circlePos;
cameraNormal = circleNormal;
} else
discard;
}
From what I can understand of the paper, I would interpret it as follows.
An impostor cylinder, viewed from any angle, has the following characteristics.
From the top, it is a circle. So considering you'll never need to view a cylinder top down, you don't need to render anything.
From the side, it is a rectangle. The pixel shader only needs to compute illumination as normal.
From any other angle, it is a rectangle (the same one computed in step 2) that curves. Its curvature can be modeled inside the pixel shader as the curvature of the top ellipse. This curvature can be considered as simply an offset of each "column" in texture space, depending on viewing angle. The minor axis of this ellipse can be computed by multiplying the major axis (thickness of the cylinder) with a factor of the current viewing angle (angle / 90), assuming that 0 means you're viewing the cylinder side-on.
Viewing angles. I have only taken the 0-90 case into account in the math below, but the other cases are trivially different.
Given the viewing angle (phi) and the diameter of the cylinder (a), here's how the shader needs to warp the Y axis in texture space: Y = b' * sin(phi), where b' = a * (phi / 90). The cases phi = 0 and phi = 90 should never be rendered.
Of course, I haven't taken the length of this cylinder into account - which would depend on your particular projection and is not an image-space problem.
In my OpenGL application I have a Bézier curve in 3D space and I want to move an object along it.
Everything is OK apart from the rotations: I have some problems calculating them. In my mind the pipeline should be this:
find point on the Bézier (position vector)
find tangent, normal, binormal (frenet frame)
find the angle between tangent vector and x axis
(the same for normal and y axis and binormal and z axis)
push matrix
translate in position, rotate in angles, draw object
pop matrix
But it does not go as I expected: the rotations seem to be random and do not follow the curve.
Any suggestions?
You're going to have problems with the Frenet frame, because, unfortunately, it is undefined when the curve is even momentarily straight
(has vanishing curvature), and it exhibits wild swings in orientation around points
where the osculating plane’s normal has major changes in direction, especially at inflection points, where the normal flips.
I'd recommend using something called a Bishop frame (you can Google it, and find out how to compute it in a discrete setting). It is also referred to as a parallel transport frame or a minimum rotation frame - it has the advantage that the frame is always defined, and it changes orientation in a controlled way.
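As a rough illustration of the parallel-transport idea (a sketch only, written in GLSL-style vector syntax for brevity even though you would typically do this on the CPU): carry the previous sample's normal to the new tangent by rotating it about the axis perpendicular to both tangents.
// Rodrigues rotation of v about a unit axis by angle (radians).
vec3 rotateAxisAngle(vec3 v, vec3 axis, float angle)
{
    return v * cos(angle)
         + cross(axis, v) * sin(angle)
         + axis * dot(axis, v) * (1.0 - cos(angle));
}
// One parallel-transport step: rotate the previous normal so it stays
// perpendicular to the new tangent with minimal twist.
vec3 transportNormal(vec3 prevNormal, vec3 prevTangent, vec3 curTangent)
{
    vec3 axis = cross(prevTangent, curTangent);
    float len = length(axis);
    if (len < 1e-6) return prevNormal;            // tangents nearly parallel: keep the old normal
    float angle = acos(clamp(dot(prevTangent, curTangent), -1.0, 1.0));
    return normalize(rotateAxisAngle(prevNormal, axis / len, angle));
}
The binormal is then just cross(tangent, normal).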
I don't think the problems with Frenet frames necessarily explain the problems you are having. You should start with some easy test cases - Bezier curves that are confined to the XY plane, for example, and step through your calculations until you find what's wrong.
Instead of computing angles just push the frame into the modelview matrix. Normal, Binormal and Tangent go in the upper left 3x3 of the matrix, translation in the 4th column and element 4,4 is 1. Instead of Frenet frame use the already mentioned Bishop frame. So in code:
// assuming you manage your curve in an (opaque) struct BezierCurve
struct BezierCurve;
typedef float vec3[3];

void bezierEvaluate(BezierCurve *bezier, float t, vec3 normal, vec3 binormal, vec3 tangent, vec3 pos);

void apply_bezier_transform(BezierCurve *bezier, float t)
{
    float M[16]; // OpenGL uses column major ordering
                 // and this code is an excellent example of why it does so:
    bezierEvaluate(bezier, t, &M[0], &M[4], &M[8], &M[12]);
    M[3] = M[7] = M[11] = 0.;
    M[15] = 1.;
    glMultMatrixf(M);
}