I have the following problem, as shown in the figure: I have a point cloud and a mesh generated by a tetrahedralization algorithm. How would I carve the mesh using that algorithm? Are the landmarks the point cloud?
Pseudocode of the algorithm:
for every 3D feature point
    convert it to 2D projected coordinates
for every 2D feature point
    cast a ray toward the polygons of the mesh
    get the intersection point
    if z_intersection < z of the 3D feature point
        cull the intersected triangle (all of its vertex indices)
Here is a follow-up implementation of the algorithm mentioned by Guru Spektre :)
Updated code for the algorithm:
for (int i = 0; i < out.numberofpoints; i++)
{
    Ogre::Vector3 ray_pos = pos; // camera position
    Ogre::Vector3 ray_dir = (Ogre::Vector3(out.pointlist[(i*3)+0], out.pointlist[(i*3)+1], out.pointlist[(i*3)+2]) - pos).normalisedCopy(); // vertex - camera position
    Ogre::Ray ray;
    ray.setOrigin(ray_pos);
    ray.setDirection(ray_dir);
    Ogre::Vector3 result;
    unsigned int u1, u2, u3;
    bool rayCastResult = RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);
    if (rayCastResult)
    {
        Ogre::Vector3 targetVertex(out.pointlist[(i*3)+0], out.pointlist[(i*3)+1], out.pointlist[(i*3)+2]);
        float distanceTargetFocus = targetVertex.squaredDistance(pos);
        float distanceIntersectionFocus = result.squaredDistance(pos);
        // squared distances are never negative, so no abs() is needed
        if (distanceTargetFocus >= distanceIntersectionFocus)
        {
            if (u1 != (unsigned int)-1 && u2 != (unsigned int)-1 && u3 != (unsigned int)-1)
            {
                std::cout << "Remove index u1 ==> " << u1 << " u2 ==> " << u2 << " u3 ==> " << u3 << std::endl;
                // erase the largest position first so the earlier erases
                // don't shift the two remaining positions
                unsigned int idx[3] = { u1, u2, u3 };
                std::sort(idx, idx + 3); // requires <algorithm>
                updatedIndices.erase(updatedIndices.begin() + idx[2]);
                updatedIndices.erase(updatedIndices.begin() + idx[1]);
                updatedIndices.erase(updatedIndices.begin() + idx[0]);
            }
        }
    }
}
if (updatedIndices.size() <= out.numberoftrifaces)
{
    std::cout << "current face list ===> " << out.numberoftrifaces << std::endl;
    std::cout << "deleted face list ===> " << updatedIndices.size() << std::endl;
    manual->begin("Pointcloud", Ogre::RenderOperation::OT_TRIANGLE_LIST);
    for (int n = 0; n < out.numberofpoints; n++)
    {
        Ogre::Vector3 vertexTransformed = Ogre::Vector3(out.pointlist[3*n+0], out.pointlist[3*n+1], out.pointlist[3*n+2]) - mReferencePoint;
        vertexTransformed *= 1000.0;
        vertexTransformed = mDeltaYaw * vertexTransformed;
        manual->position(vertexTransformed);
    }
    // advance by 3: each triangle consumes three consecutive indices
    for (size_t n = 0; n + 2 < updatedIndices.size(); n += 3)
    {
        int n0 = updatedIndices[n+0];
        int n1 = updatedIndices[n+1];
        int n2 = updatedIndices[n+2];
        if (n0 < 0 || n1 < 0 || n2 < 0)
        {
            std::cout << "negative indices" << std::endl;
            break;
        }
        manual->triangle(n0, n1, n2);
    }
    manual->end();
}
Follow-up on the algorithm:
I now have two versions, the triangulated one and the carved one.
It's not a surface mesh.
Here are the two files
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_non_triangulated.obj
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_triangulated.obj
I see it like this:
So you got an image from a camera with known matrix, FOV and focal length.
From that you know exactly where the focal point is and where the image is projected onto the camera chip (Z_near plane). So any vertex, its corresponding pixel and the focal point lie on the same line.
So for each view cast a ray from the focal point to each visible vertex of the point cloud and test if any face of the mesh is hit before the face containing the target vertex is hit. If yes, remove it, as it would block the visibility.
A landmark in this context is just a feature point corresponding to a vertex of the point cloud. It can be anything detectable (change of intensity, color, pattern, whatever); usually SIFT/SURF is used for this. You should have them located already, as they are the input for the point cloud generation. If not, you can peek at the pixel corresponding to each vertex and test for background color.
Not sure how you want to do this without the input images. For that you need to decide which vertex is visible from which side/view. Maybe it is doable from nearby vertices somehow (like using vertex density or correspondence to a planar face...) or the algorithm could be changed to find unused vertices inside the mesh instead.
To cast a ray do this:
ray_pos=tm_eye*vec4(imgx/aspect,imgy,0.0,1.0);
ray_dir=ray_pos-tm_eye*vec4(0.0,0.0,-focal_length,1.0);
where tm_eye is the camera's direct transform matrix, and imgx,imgy is the 2D pixel position in the image, normalized to <-1,+1>, where (0,0) is the middle of the image. The focal_length determines the FOV of the camera, and aspect is the ratio of the image resolution image_ys/image_xs.
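As a hedged C++ sketch of that construction using GLM (the wrapper function is mine; tm_eye and the parameters are as described above):

#include <glm/glm.hpp>

// Build a view ray: tm_eye is the camera's direct (camera-to-world) matrix,
// imgx/imgy are the normalized <-1,+1> pixel coordinates.
void castRay(const glm::mat4& tm_eye, float imgx, float imgy,
             float aspect, float focal_length,
             glm::vec3& ray_pos, glm::vec3& ray_dir)
{
    ray_pos = glm::vec3(tm_eye * glm::vec4(imgx / aspect, imgy, 0.0f, 1.0f));
    glm::vec3 focal = glm::vec3(tm_eye * glm::vec4(0.0f, 0.0f, -focal_length, 1.0f));
    ray_dir = glm::normalize(ray_pos - focal);
}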
Ray triangle intersection equation can be found here:
Reflection and refraction impossible without recursive ray tracing?
If I extract it:
vec3 v0,v1,v2; // input triangle vertices
vec3 e1,e2,p,q,r;
float t,u,v,det,idet;
// compute ray/triangle intersection (Moller-Trumbore)
e1=v1-v0;
e2=v2-v0;
// determinant from the ray direction and triangle edges
p=cross(ray[i0].dir,e2);
det=dot(e1,p);
// ray is parallel to the triangle plane
if (abs(det)<1e-8) no intersection;
idet=1.0/det;
r=ray[i0].pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) no intersection;
q=cross(r,e1);
v=dot(ray[i0].dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) no intersection;
t=dot(e2,q)*idet;
if ((t>_zero)&&(t<=tt)) // _zero is a small epsilon; tt is the distance to the target vertex
{
// intersection at ray[i0].pos + t*ray[i0].dir
}
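For reference, here is a minimal self-contained C++ version of the same test using GLM (a sketch; the function name and signature are my own):

#include <cmath>
#include <glm/glm.hpp>

// Moller-Trumbore ray/triangle intersection; returns true on a hit and writes
// the ray parameter t (distance along a unit-length dir).
bool rayTriangle(const glm::vec3& pos, const glm::vec3& dir,
                 const glm::vec3& v0, const glm::vec3& v1, const glm::vec3& v2,
                 float& t)
{
    glm::vec3 e1 = v1 - v0, e2 = v2 - v0;
    glm::vec3 p = glm::cross(dir, e2);
    float det = glm::dot(e1, p);
    if (std::abs(det) < 1e-8f) return false;  // ray parallel to triangle plane
    float idet = 1.0f / det;
    glm::vec3 r = pos - v0;
    float u = glm::dot(r, p) * idet;
    if (u < 0.0f || u > 1.0f) return false;
    glm::vec3 q = glm::cross(r, e1);
    float v = glm::dot(dir, q) * idet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = glm::dot(e2, q) * idet;
    return t > 0.0f;                           // the caller then checks t <= tt
}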
Follow-ups:
To move between normalized image coordinates (imgx,imgy) and raw image coordinates (rawx,rawy) for an image of size (imgxs,imgys), where (0,0) is the top-left corner and (imgxs-1,imgys-1) is the bottom-right corner, you need:
imgx = (2.0*rawx / (imgxs-1)) - 1.0
imgy = 1.0 - (2.0*rawy / (imgys-1))
rawx = (imgx + 1.0)*(imgxs-1)/2.0
rawy = (1.0 - imgy)*(imgys-1)/2.0
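A tiny sketch of the same conversions as C++ helpers (the names are mine):

// Normalized <-1,+1> image coords <-> raw pixel coords for an imgxs x imgys image.
inline float rawToImgX(float rawx, int imgxs) { return 2.0f * rawx / (imgxs - 1) - 1.0f; }
inline float rawToImgY(float rawy, int imgys) { return 1.0f - 2.0f * rawy / (imgys - 1); }
inline float imgToRawX(float imgx, int imgxs) { return (imgx + 1.0f) * (imgxs - 1) / 2.0f; }
inline float imgToRawY(float imgy, int imgys) { return (1.0f - imgy) * (imgys - 1) / 2.0f; }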
[progress update 1]
I finally got to the point where I can compile sample test input data for this to even get started (as you are unable to share valid data at all):
I created a small app with a hard-coded table mesh (gray) and point cloud (aqua) and simple camera control, where I can save any number of views (screenshot + camera direct matrix). When loaded back, the views align with the mesh itself (the yellow ray goes through the aqua dot in the image and through the table mesh too). The blue lines are cast from the camera focal point to its corners. This will emulate the input you got. The second part of the app will use only these images and matrices together with the point cloud (no mesh surface anymore): tetrahedralize it (already finished), then cast a ray through each landmark in each view (aqua dot) and remove all tetrahedra hit before the target vertex in the point cloud (this part is not started yet, maybe on the weekend)... And lastly store only the surface triangles (easy: just use all triangles which are used only once; also already finished except the save part, but writing a Wavefront OBJ from that is easy...).
[Progress update 2]
I added landmark detection and matching with the point cloud.
As you can see, only valid rays are cast (those that are visible in the image), so some points of the point cloud do not cast rays (singular aqua dots). So now just the ray/triangle intersection and tetrahedron removal from the list is what is missing...
Related
So I've implemented frustum culling in my game engine and I'm experiencing a strange bug. I am rendering a building that is segmented into chunks, and I'm only rendering the chunks which are in the frustum. My camera starts at around (-.033, 11.65, 2.2) and everything looks fine. I start moving around and there is no flickering. When I set a breakpoint in the frustum culling code I can see that it is indeed culling some of the meshes. Everything seems great. Then when I reach the center of the building, around (3.9, 4.17, 2.23), meshes that are in view start to disappear. The same is true on the other side as well. I can't figure out why this bug exists.
I implement frustum culling using the extraction method listed here: Extracting View Frustum Planes (Gribb & Hartmann method). I had to use glm::inverse() rather than transpose as it suggested, and I think the matrix math was given for row-major matrices, so I flipped that. All in all my frustum plane calculation looks like
std::vector<Mesh*> render_meshes;
auto comboMatrix = proj * glm::inverse(view * model);
glm::vec4 p_planes[6];
p_planes[0] = comboMatrix[3] + comboMatrix[0]; //left
p_planes[1] = comboMatrix[3] - comboMatrix[0]; //right
p_planes[2] = comboMatrix[3] + comboMatrix[1]; //bottom
p_planes[3] = comboMatrix[3] - comboMatrix[1]; //top
p_planes[4] = comboMatrix[3] + comboMatrix[2]; //near
p_planes[5] = comboMatrix[3] - comboMatrix[2]; //far
for (int i = 0; i < 6; i++){
p_planes[i] = glm::normalize(p_planes[i]);
}
for (auto mesh : meshes) {
if (!frustum_cull(mesh, p_planes)) {
render_meshes.emplace_back(mesh);
}
}
I then decide to cull each mesh based on its bounding box (as calculated by ASSIMP with the aiProcess_GenBoundingBoxes flag) as follows (returning true means culled)
glm::vec3 vmin, vmax;
for (int i = 0; i < 6; i++) {
// X axis
if (p_planes[i].x > 0) {
vmin.x = m->getBBoxMin().x;
vmax.x = m->getBBoxMax().x;
}
else {
vmin.x = m->getBBoxMax().x;
vmax.x = m->getBBoxMin().x;
}
// Y axis
if (p_planes[i].y > 0) {
vmin.y = m->getBBoxMin().y;
vmax.y = m->getBBoxMax().y;
}
else {
vmin.y = m->getBBoxMax().y;
vmax.y = m->getBBoxMin().y;
}
// Z axis
if (p_planes[i].z > 0) {
vmin.z = m->getBBoxMin().z;
vmax.z = m->getBBoxMax().z;
}
else {
vmin.z = m->getBBoxMax().z;
vmax.z = m->getBBoxMin().z;
}
if (glm::dot(glm::vec3(p_planes[i]), vmin) + p_planes[i][3] > 0)
return true;
}
return false;
Any guidance?
Update 1: Normalizing the full vec4 representing the plane is incorrect, as only the vec3 part represents the normal of the plane. Further, normalization is not necessary in this instance, as we only care about the sign of the distance (not the magnitude).
It is also important to note that I should be using the rows of the matrix not the columns. I am achieving this by replacing
p_planes[0] = comboMatrix[3] + comboMatrix[0];
with
p_planes[0] = glm::row(comboMatrix, 3) + glm::row(comboMatrix, 0);
in all instances.
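Spelled out for all six planes, the corrected extraction then becomes (glm::row lives in <glm/gtc/matrix_access.hpp>):

#include <glm/gtc/matrix_access.hpp>

glm::vec4 p_planes[6];
p_planes[0] = glm::row(comboMatrix, 3) + glm::row(comboMatrix, 0); // left
p_planes[1] = glm::row(comboMatrix, 3) - glm::row(comboMatrix, 0); // right
p_planes[2] = glm::row(comboMatrix, 3) + glm::row(comboMatrix, 1); // bottom
p_planes[3] = glm::row(comboMatrix, 3) - glm::row(comboMatrix, 1); // top
p_planes[4] = glm::row(comboMatrix, 3) + glm::row(comboMatrix, 2); // near
p_planes[5] = glm::row(comboMatrix, 3) - glm::row(comboMatrix, 2); // far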
You are using GLM incorrectly. As per the paper of Gribb and Hartmann, you can extract the plane equations as a sum or difference of different rows of the matrix, but in glm, mat4 foo; foo[n] will yield the n-th column (similar to how GLSL is designed).
This here
for (int i = 0; i < 6; i++){
p_planes[i] = glm::normalize(p_planes[i]);
}
also doesn't make sense, since glm::normalize(vec4) will simply normalize a 4D vector. This will result in the plane being shifted along its normal direction. Only the xyz components must be brought to unit length, and w must be scaled accordingly. It is even explained in detail in the paper itself. However, since you only need to know on which half-space a point lies, normalizing the plane equation is a waste of cycles; you only care about the sign, not the magnitude of the value anyway.
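If you do need true distances (e.g. for a sphere-radius test), a minimal sketch of the correct normalization would be:

#include <glm/glm.hpp>

// Divide the whole plane vector by the length of its xyz part: the normal
// becomes unit length and w is scaled accordingly, so dot(plane, point)
// yields a true signed distance.
glm::vec4 normalizePlane(const glm::vec4& plane)
{
    return plane / glm::length(glm::vec3(plane));
}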
After following derhass's solution for normalizing the planes correctly, for intersection tests you would proceed as follows.
For bounding-box/plane intersection: after projecting your box onto that plane (call the projected radius p), and after calculating the midpoint of the box (say m) and the distance of that midpoint from the plane (say d), to check for intersection we do
d <= p
But for frustum culling we don't just want our box to NOT intersect with our frustum plane; we want it to be at distance -p from our plane, and only then do we know for sure that NO PART of our box is intersecting our plane, that is
if (d <= -p) // then our box is fully not intersecting our plane, so we don't draw it, i.e. cull it (d will be negative if the midpoint lies on the other side of our plane)
Similarly for triangles we check if the distances of ALL 3 points of the triangle from the plane are negative.
To project a box onto a plane, take the 3 axes (x,y,z UNIT VECTORS) of the box, scale them by the box's respective HALF width, height, and depth, and sum their dot products with the plane's normal (take only the positive magnitude of each dot product, NO SIGNED DISTANCE); that sum is your 'p'.
Note: with the above approach for an AABB, you can also cull against OBBs with the same approach, because only the axes change.
EDIT:
How do you project a bounding box onto a plane?
Let's consider an AABB for our example.
It has the following parameters:
Lower extent Min(x,y,z)
Upper extent Max(x,y,z)
Up vector U = (0,1,0)
Left vector L = (1,0,0)
Front vector F = (0,0,1)
Step 1: calculate the half dimensions
half_width=(Max.x-Min.x)/2;
half_height=(Max.y-Min.y)/2;
half_depth=(Max.z-Min.z)/2;
Step 2: Project each individual axis of the box onto the plane normal, take only the positive magnitude of each dot product scaled by each half dimension, and find the total sum. Make sure both the box axes and the plane normal are unit vectors.
float p=(abs(dot(L,N))*half_width)+
(abs(dot(U,N))*half_height)+
(abs(dot(F,N))*half_depth);
abs() returns the absolute magnitude; we want it positive because we are dealing with distances.
Here N is the plane's unit normal vector.
Step 3: compute the midpoint of the box
M=(Min+Max)/2;
Step 4: compute the distance of the midpoint from the plane
d=dot(M,N)+plane.w
Step 5: do the check
d <= -p // return true, i.e. don't render / cull
You can see how to use this for an OBB, where the U, F, L vectors are the axes of the OBB and the centre (midpoint) and half dimensions are parameters you pass in manually.
For a sphere you would likewise calculate the distance of the sphere's center from the plane (call it d) but do the check
d <= -r // r is the radius of the sphere
Put this in a function called outside(Plane, Bounds) which returns true if the bounds are fully outside the plane; then for each of the 6 planes:
bool is_inside_frustum()
{
for(Plane plane:frustum_planes)
{
if(outside(plane,AABB))
{
return false;
}
}
return true;
}
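A minimal C++ sketch of the outside() test described above (my own assembly of the steps, assuming GLM, a plane stored as glm::vec4 with xyz = unit normal and w = d, and an AABB given by its extents):

#include <glm/glm.hpp>

struct AABB { glm::vec3 min, max; };

bool outside(const glm::vec4& plane, const AABB& box)
{
    glm::vec3 n(plane);                             // plane normal (unit length)
    glm::vec3 center = (box.min + box.max) * 0.5f;  // Step 3: midpoint M
    glm::vec3 half   = (box.max - box.min) * 0.5f;  // Step 1: half dimensions
    // Step 2: projected radius p; for an AABB the box axes are the world axes,
    // so abs(dot(axis, N)) reduces to |n.x|, |n.y|, |n.z|.
    float p = glm::dot(glm::abs(n), half);
    float d = glm::dot(center, n) + plane.w;        // Step 4: signed distance
    return d <= -p;                                 // Step 5: fully outside
}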
I am using cocos2d-x 3.8.
I'm trying to create two polygon sprites with the following code.
I know we can detect intersection with BoundingBox, but that is too rough.
Also, I know we can use the Cocos2d-x C++ physics engine to detect collisions, but doesn't that waste a lot of the mobile device's resources? The game I am developing does not need a physics engine.
Is there a way to detect the intersection of polygon sprites?
Thank you.
auto pinfoTree = AutoPolygon::generatePolygon("Tree.png");
auto treeSprite = Sprite::create(pinfoTree);
treeSprite->setPosition(width / 4 * 3 - 30, height / 2 - 200);
this->addChild(treeSprite);
auto pinfoBird = AutoPolygon::generatePolygon("Bird.png");
auto Bird = Sprite::create(pinfoBird); // was pinfoTree, a copy-paste slip
Bird->setPosition(width / 4 * 3, height / 2);
this->addChild(Bird);
This is a bit more complicated: AutoPolygon gives you a bunch of triangles, while PhysicsBody::createPolygon requires a convex polygon with clockwise winding… so these are 2 different things. The vertex count might even be limited; I think Box2D's maximum count for 1 polygon is 8.
If you want to try this you'll have to merge the triangles to form polygons. An option would be to start with one triangle and add more as long as the whole thing stays convex. If you can't add any more triangles, start a new polygon. Add all the polygons as PhysicsShapes to your physics body to form a compound object.
I would propose that you don't follow this path, because:
- AutoPolygon is optimized for rendering, not for best-fitting physics; that is a difference. A polygon traced with AutoPolygon will always be bigger than the original sprite, otherwise you would see rendering artifacts.
- You have close to no control over the generated polygons.
- Tracing the shape in the app will increase your startup time.
- Triangle meshes and physics outlines are 2 different things.
I would try a different approach: generate the collision shapes offline. This gives you a bunch of advantages:
- You can generate and tweak the polygons in a visual editor, e.g. by using PhysicsEditor.
- Loading the prepared polygons is way faster.
- You can set additional parameters like mass etc.
- The solution is battle proven and works out of the box.
But if you want to know how polygon intersection works, you can look at this code.
// Calculate the projection of a polygon on an axis
// and returns it as a [min, max] interval
public void ProjectPolygon(Vector axis, Polygon polygon, ref float min, ref float max) {
// To project a point on an axis use the dot product
float dotProduct = axis.DotProduct(polygon.Points[0]);
min = dotProduct;
max = dotProduct;
for (int i = 0; i < polygon.Points.Count; i++) {
    float d = polygon.Points[i].DotProduct(axis);
    if (d < min) {
        min = d;
    } else if (d > max) {
        max = d;
    }
}
}
// Calculate the distance between [minA, maxA] and [minB, maxB]
// The distance will be negative if the intervals overlap
public float IntervalDistance(float minA, float maxA, float minB, float maxB) {
if (minA < minB) {
return minB - maxA;
} else {
return minA - maxB;
}
}
// Check if polygon A is going to collide with polygon B.
public bool PolygonCollision(Polygon polygonA, Polygon polygonB) {
    int edgeCountA = polygonA.Edges.Count;
    int edgeCountB = polygonB.Edges.Count;
    Vector edge;
    // Loop through all the edges of both polygons
    for (int edgeIndex = 0; edgeIndex < edgeCountA + edgeCountB; edgeIndex++) {
        if (edgeIndex < edgeCountA) {
            edge = polygonA.Edges[edgeIndex];
        } else {
            edge = polygonB.Edges[edgeIndex - edgeCountA];
        }
        // ===== Find if the polygons are currently intersecting =====
        // Find the axis perpendicular to the current edge
        Vector axis = new Vector(-edge.Y, edge.X);
        axis.Normalize();
        // Find the projection of the polygon on the current axis
        float minA = 0; float minB = 0; float maxA = 0; float maxB = 0;
        ProjectPolygon(axis, polygonA, ref minA, ref maxA);
        ProjectPolygon(axis, polygonB, ref minB, ref maxB);
        // If the projections do not overlap we found a separating axis,
        // so the polygons cannot intersect
        if (IntervalDistance(minA, maxA, minB, maxB) > 0)
            return false;
    }
    return true;
}
The function can be used this way
bool result = PolygonCollision(polygonA, polygonB);
I once had to program a collision detection algorithm where a ball was to collide with a rotating polygon obstacle. In my case the obstacles were arcs with a certain thickness, moving around an origin. Basically each was rotating in an orbit. The ball was also rotating around an orbit about the same origin and could move between orbits. To check the collision I just had to check whether the ball's angle with respect to the origin was between the lower and upper bound angles of the arc obstacle, and whether the ball and the obstacle were in the same orbit.
In other words, I used the various constraints and properties of the objects involved in the collision to make it more efficient. So use the properties of your objects to handle the collision; try a similar approach depending on your objects.
So I am trying to write a raytracer as a personal project, and I have the basic recursion, mesh geometry, and ray-triangle intersection down.
I am trying to get a plausible image out of it but encounter the problem that all pixel rows are the same, giving me straight vertical lines.
I found that all pixel positions generated by the camera function have the same y component, but I cannot find the problem with my vector math here (I use my Vertex structure as a vector too; it's lazy, I know):
void Renderer::CameraShader()
{
//compute the width and height of the screen based on angle and distance of the near clip plane
double widthRad = tan(0.5*m_Cam.angle)*m_Cam.nearClipPlane;
double heightRad = ((double)m_Cam.pixelRows / (double)m_Cam.pixelCols)*widthRad;
//get the horizontal vector of the camera by crossing the direction vector with an up vector (0,1,0)
Vertex cross = ((m_Cam.direction - m_Cam.origin).CrossProduct(Vertex(0, 1, 0)).Normalized(0.0001))*widthRad;
//get the up/down vector of the camera by crossing the horizontal vector with the direction vector
Vertex crossDown = m_Cam.direction.CrossProduct(cross).Normalized(0.0001)*heightRad;
//generate rays per pixel row and column
for (int i = 0; i < m_Cam.pixelCols;i++)
{
for (int j = 0; j < m_Cam.pixelRows; j++)
{
Vertex pixelPos = m_Cam.origin + (m_Cam.direction - m_Cam.origin).Normalized(0.0001)*m_Cam.nearClipPlane //vector of the screen center
- cross + (cross*((i / (double)m_Cam.pixelCols)*widthRad*2)) //horizontal vector based on i
+ crossDown - (crossDown*((j / (double)m_Cam.pixelRows)*heightRad*2)); //vertical vector based on j
//cast a ray through according screen pixel to get color
m_Image[i][j] = raycast(m_Cam.origin, pixelPos - m_Cam.origin, p_MaxBounces);
}
}
}
I hope the comments in the code make clear what is happening.
If anyone sees the problem, help would be appreciated.
The problem was that I had to subtract the camera origin from the direction point. It now actually renders silhouettes, so I guess I can say it's fixed :)
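Concretely, the stated fix corresponds to changing the basis-vector computation along these lines (a sketch using the question's Vertex type and members):

//treat m_Cam.direction as a point: subtract the origin to get a direction vector
Vertex dir = (m_Cam.direction - m_Cam.origin).Normalized(0.0001);
Vertex cross = dir.CrossProduct(Vertex(0, 1, 0)).Normalized(0.0001) * widthRad;
Vertex crossDown = dir.CrossProduct(cross).Normalized(0.0001) * heightRad; //origin subtracted here too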
I am doing a program to test sphere-frustum intersection and being able to determine the sphere's visibility. I am extracting the frustum's clipping planes into camera space and checking for intersection. It works perfectly for all planes except the far plane and I cannot figure out why. I keep pulling the camera back but my program still claims the sphere is visible, despite it having been clipped long ago. If I go far enough it eventually determines that it is not visible, but this is some distance after it has exited the frustum.
I am using a unit sphere at the origin for the test. I am using the OpenGL Mathematics (GLM) library for vector and matrix data structures and for its built in math functions. Here is my code for the visibility function:
void visibilityTest(const struct MVP *mvp) {
static bool visLastTime = true;
bool visThisTime;
const glm::vec4 modelCenter_worldSpace = glm::vec4(0,0,0,1); //at origin
const int negRadius = -1; //unit sphere
//Get cam space model center
glm::vec4 modelCenter_cameraSpace = mvp->view * mvp->model * modelCenter_worldSpace;
//---------Get Frustum Planes--------
//extract projection matrix row vectors
//NOTE: since glm stores their mats in column-major order, we extract columns
glm::vec4 rowVec[4];
for(int i = 0; i < 4; i++) {
rowVec[i] = glm::vec4( mvp->projection[0][i], mvp->projection[1][i], mvp->projection[2][i], mvp->projection[3][i] );
}
//determine frustum clipping planes (in camera space)
glm::vec4 plane[6];
//NOTE: recall that indices start at zero. So M4 + M3 will be rowVec[3] + rowVec[2]
plane[0] = rowVec[3] + rowVec[2]; //near
plane[1] = rowVec[3] - rowVec[2]; //far
plane[2] = rowVec[3] + rowVec[0]; //left
plane[3] = rowVec[3] - rowVec[0]; //right
plane[4] = rowVec[3] + rowVec[1]; //bottom
plane[5] = rowVec[3] - rowVec[1]; //top
//extend view frustum by 1 all directions; near/far along local z, left/right among local x, bottom/top along local y
// -Ax' -By' -Cz' + D = D'
plane[0][3] -= plane[0][2]; // <x',y',z'> = <0,0,1>
plane[1][3] += plane[1][2]; // <0,0,-1>
plane[2][3] += plane[2][0]; // <-1,0,0>
plane[3][3] -= plane[3][0]; // <1,0,0>
plane[4][3] += plane[4][1]; // <0,-1,0>
plane[5][3] -= plane[5][1]; // <0,1,0>
//----------Determine Frustum-Sphere intersection--------
//if any of the dot products between model center and frustum plane is less than -r, then the object falls outside the view frustum
visThisTime = true;
for(int i = 0; i < 6; i++) {
if( glm::dot(plane[i], modelCenter_cameraSpace) < static_cast<float>(negRadius) ) {
visThisTime = false;
}
}
if(visThisTime != visLastTime) {
printf("Sphere is %s visible\n", (visThisTime) ? "" : "NOT " );
visLastTime = visThisTime;
}
}
The polygons appear to be clipped by the far plane properly so it seems that the projection matrix is set up properly, but the calculations make it seem like the plane is way far out. Perhaps I am not calculating something correctly or have a fundamental misunderstanding of the calculations that are required?
The calculations that deal specifically with the far clipping plane are:
plane[1] = rowVec[3] - rowVec[2]; //far
and
plane[1][3] += plane[1][2]; // <0,0,-1>
I'm setting the plane to be equal to the 4th row (or in this case column) of the projection matrix - the 3rd row of the projection matrix. Then I'm extending the far plane one unit further (due to the sphere's radius of one; D' = D - C(-1) )
I've looked over this code many times and I can't see why it shouldn't work. Any help is appreciated.
EDIT:
I can't answer my own question as I don't have the rep, so I will post it here.
The problem was that I wasn't normalizing the plane equations. This didn't seem to make much of a difference for any of the clip planes besides the far one, so I hadn't even considered it (but that didn't make it any less wrong). After normalization everything works properly.
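For reference, that normalization amounts to a loop like this over the extracted planes:

// Divide each plane by the length of its xyz normal; w scales along with it,
// so dot(plane, point) becomes a true signed distance for the sphere test.
for (int i = 0; i < 6; i++) {
    plane[i] /= glm::length(glm::vec3(plane[i]));
}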
I am developing a small tool for 3D visualization of molecules.
For my project I chose to do something along the lines of what Brad Larson did with his Apple software "Molecules". A link where you can find a small presentation of the technique used: Brad Larsson software presentation
To do this I must compute sphere impostors and cylinder impostors.
For the moment I have succeeded in doing the "Sphere Impostor" with the help of another tutorial, Lies and Impostors.
To summarize the computation of the sphere impostor: first we send a "sphere position" and the "sphere radius" to the vertex shader, which creates, in camera space, a square that always faces the camera; after that we send our square to the fragment shader, where we use simple ray tracing to find which fragments of the square are included in the sphere; and finally we compute the normal and the position of each fragment to compute lighting. (We also write gl_FragDepth to give a correct depth to our impostor sphere!)
But now I am stuck on the computation of the cylinder impostor. I tried to draw a parallel between the sphere impostor and the cylinder impostor, but I didn't find anything. My problem is that the sphere case was easy: the sphere is always the same no matter how we see it, we always see the same thing, "a circle", and the sphere is perfectly defined mathematically, so we can easily find the position and the normal to compute lighting and create our impostor.
For the cylinder it's not the same thing, and I have failed to find a hint for modeling a shape which can be used as a "cylinder impostor", because the cylinder shows many different shapes depending on the angle we see it from!
So my request is to ask you for a solution or an indication for my problem of the "cylinder impostor".
In addition to pygabriel's answer I want to share a standalone implementation using the mentioned shader code from Blaine Bell (PyMOL, Schrödinger, Inc.).
The approach explained by pygabriel can also be improved. The bounding box can be aligned in such a way that it always faces the viewer, so at most two faces are visible. Hence, only 6 vertices (i.e. two faces made up of 4 triangles) are needed.
See the picture here; the box (its direction vector) always faces the viewer:
Image: Aligned bounding box
For source code, download: cylinder impostor source code
The code does not cover round caps and orthographic projections. It uses a geometry shader for vertex generation. You can use the shader code under the PyMOL license agreement.
I know this question is more than a year old, but I'd still like to give my 2 cents.
I was able to produce cylinder impostors with another technique; I took inspiration from PyMOL's code. Here's the basic strategy:
1) You want to draw a bounding box (a cuboid) for the cylinder. To do that you need 6 faces, which translates into 12 triangles and thus 36 triangle vertices. Assuming that you don't have access to geometry shaders, you pass to the vertex shader 36 times the starting point of the cylinder, 36 times the direction of the cylinder, and for each of those vertices you pass the corresponding corner of the bounding box. For example a vertex associated with point (0, 0, 0) means that it will be transformed into the lower-left-back corner of the bounding box, (1,1,1) means the diagonally opposite corner, etc.
2) In the vertex shader you can construct the points of the cylinder by displacing each vertex (you passed 36 equal vertices) according to the corresponding point you passed in.
At the end of this step you should have a bounding box for the cylinder.
3) Here you have to reconstruct the points on the visible surface of the bounding box. From the point you obtain, you have to perform a ray-cylinder intersection.
4) From the intersection point you can reconstruct the depth and the normal. You also have to discard intersection points that are found outside of the bounding box (this can happen when you view the cylinder along its axis; the intersection point will go infinitely far).
By the way, it's a very hard task; if somebody is interested, here's the source code:
https://github.com/chemlab/chemlab/blob/master/chemlab/graphics/renderers/shaders/cylinderimp.frag
https://github.com/chemlab/chemlab/blob/master/chemlab/graphics/renderers/shaders/cylinderimp.vert
A cylinder impostor can actually be done just the same way as a sphere, like Nicol Bolas did it in his tutorial. You can make a square facing the camera and colour it so that it looks like a cylinder, just the same way Nicol did it for spheres. And it's not that hard.
The way it is done is ray tracing, of course. Notice that a cylinder facing upwards in camera space is fairly easy to implement. For example, intersection with the side can be projected onto the xz plane; it's a 2D problem of a line intersecting with a circle. Getting the top and bottom isn't harder either: the y coordinate of the intersection is given, so you actually know the intersection point of the ray and the circle's plane, and all you have to do is check if it's inside the circle. And basically that's it: you get two points and return the closer one (the normals are pretty trivial too).
And when it comes to an arbitrary axis, it turns out to be almost the same problem. When you solve the equations for the fixed-axis cylinder, you are solving them for a parameter that describes how far you have to go from a given point in a given direction to reach the cylinder. From the "definition" of it, you should notice that this parameter doesn't change if you rotate the world. So you can rotate the arbitrary axis to become the y axis, solve the problem in a space where the equations are easier, get the parameter for the line equation in that space, but return the result in camera space.
You can download the shader files from here. Just an image of it in action:
The code where the magic happens (It's only long 'cos it's full of comments, but the code itself is max 50 lines):
void CylinderImpostor(out vec3 cameraPos, out vec3 cameraNormal)
{
// First get the camera space direction of the ray.
vec3 cameraPlanePos = vec3(mapping * max(cylRadius, cylHeight), 0.0) + cameraCylCenter;
vec3 cameraRayDirection = normalize(cameraPlanePos);
// Now transform data into cylinder space where the cylinder's symmetry axis is up.
vec3 cylCenter = cameraToCylinder * cameraCylCenter;
vec3 rayDirection = normalize(cameraToCylinder * cameraPlanePos);
// We will have to return the one from the intersection of the ray and circles,
// and the ray and the side, that is closer to the camera. For that, we need to
// store the results of the computations.
vec3 circlePos, sidePos;
vec3 circleNormal, sideNormal;
bool circleIntersection = false, sideIntersection = false;
// First check if the ray intersects with the top or bottom circle
// Note that if the ray is parallel with the circles then we
// definitely won't get any intersection (but we would divide by 0).
if(rayDirection.y != 0.0){
// What we know here is that the distance of the point's y coord
// and the cylCenter is cylHeight, and the distance from the
// y axis is less than cylRadius. So we have to find a point
// which is on the line, and match these conditions.
// The equation for the y axis distances:
// rayDirection.y * t - cylCenter.y = +- cylHeight
// So t = (+-cylHeight + cylCenter.y) / rayDirection.y
// About selecting the one we need:
// - Both have to be positive, or no intersection is visible.
// - If both are positive, we need the smaller one.
float topT = (+cylHeight + cylCenter.y) / rayDirection.y;
float bottomT = (-cylHeight + cylCenter.y) / rayDirection.y;
if(topT > 0.0 && bottomT > 0.0){
float t = min(topT,bottomT);
// Now check for the x and z axis:
// If the intersection is inside the circle (so the distance on the xz plane between
// the point and the center of the circle is less than the radius), then it's a point of the cylinder.
// But we can't return yet, because we might get a point from the cylinder side
// intersection that is closer to the camera.
vec3 intersection = rayDirection * t;
if( length(intersection.xz - cylCenter.xz) <= cylRadius ) {
// The value we will (optionally) return is in camera space.
circlePos = cameraRayDirection * t;
// This one is ugly, but I didn't have a better idea.
circleNormal = length(circlePos - cameraCylCenter) <
length((circlePos - cameraCylCenter) + cylAxis) ? cylAxis : -cylAxis;
circleIntersection = true;
}
}
}
// Find the intersection of the ray and the cylinder's side
// The distance of the point from the y axis is sqrt(x^2 + z^2), which has to be equal to cylRadius
// (rayDirection.x*t - cylCenter.x)^2 + (rayDirection.z*t - cylCenter.z)^2 = cylRadius^2
// So it's a quadratic for t (A*t^2 + B*t + C = 0) where:
// A = rayDirection.x^2 + rayDirection.z^2 - if this is 0, we won't get any intersection
// B = -2*rayDirection.x*cylCenter.x - 2*rayDirection.z*cylCenter.z
// C = cylCenter.x^2 + cylCenter.z^2 - cylRadius^2
// It will give two results, we need the smaller one
float A = rayDirection.x*rayDirection.x + rayDirection.z*rayDirection.z;
if(A != 0.0) {
float B = -2*(rayDirection.x*cylCenter.x + rayDirection.z*cylCenter.z);
float C = cylCenter.x*cylCenter.x + cylCenter.z*cylCenter.z - cylRadius*cylRadius;
float det = (B * B) - (4 * A * C);
if(det >= 0.0){
float sqrtDet = sqrt(det);
float posT = (-B + sqrtDet)/(2*A);
float negT = (-B - sqrtDet)/(2*A);
float IntersectionT = min(posT, negT);
vec3 Intersect = rayDirection * IntersectionT;
if(abs(Intersect.y - cylCenter.y) < cylHeight){
// Again it's in camera space
sidePos = cameraRayDirection * IntersectionT;
sideNormal = normalize(sidePos - cameraCylCenter);
sideIntersection = true;
}
}
}
// Now get the results together:
if(sideIntersection && circleIntersection){
bool circle = length(circlePos) < length(sidePos);
cameraPos = circle ? circlePos : sidePos;
cameraNormal = circle ? circleNormal : sideNormal;
} else if(sideIntersection){
cameraPos = sidePos;
cameraNormal = sideNormal;
} else if(circleIntersection){
cameraPos = circlePos;
cameraNormal = circleNormal;
} else
discard;
}
From what I can understand of the paper, I would interpret it as follows.
An impostor cylinder viewed from any angle has the following characteristics:
From the top, it is a circle. So, assuming you'll never need to view a cylinder top-down, you don't need to render anything.
From the side, it is a rectangle. The pixel shader only needs to compute illumination as normal.
From any other angle, it is a rectangle (the same one computed in step 2) that curves. Its curvature can be modeled inside the pixel shader as the curvature of the top ellipse. This curvature can be treated simply as an offset of each "column" in texture space, depending on the viewing angle. The minor axis of this ellipse can be computed by multiplying the major axis (the thickness of the cylinder) by a factor of the current viewing angle (angle / 90), assuming that 0 means you're viewing the cylinder side-on.
Viewing angles: I have only taken the 0-90 case into account in the math below, but the other cases are trivially different.
Given the viewing angle (phi) and the diameter of the cylinder (a), here's how the shader needs to warp the Y axis in texture space: Y = b' * sin(phi), with b' = a * (phi / 90). The cases phi = 0 and phi = 90 should never be rendered.
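As a small numeric sketch of that relation (my reading of it; the helper and its name are hypothetical, phi in degrees, strictly between 0 and 90):

#include <cmath>

// Hypothetical helper: minor axis b' = a*(phi/90) of the top ellipse and the
// resulting texture-space Y warp Y = b'*sin(phi).
float textureWarpY(float a /* cylinder diameter */, float phiDeg /* viewing angle */)
{
    float bPrime = a * (phiDeg / 90.0f);
    return bPrime * std::sin(phiDeg * 3.14159265f / 180.0f);
}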
Of course, I haven't taken the length of this cylinder into account - which would depend on your particular projection and is not an image-space problem.