Dragging 3-Dimensional Objects with C++ and OpenGL

I have been developing a 3D chessboard, and I have been stuck trying to drag the pieces for a few days now.
Once I select an object with my ray-caster, I start my dragging function, which calculates the difference between the current location of the mouse (in world coordinates) and its previous location; I then translate my object by that difference.
I debug my ray-caster by drawing lines, so I am confident those coordinates are accurate.
Translating my object based on the ray-caster coordinates only moves the object a fraction of the distance it should actually go.
Am I missing a step?
-Calvin
I believe my issue is in this line of code:
glm::vec3 World_Delta = Current_World_Location - World_Start_Location;
If I multiply this delta by 20, the object starts to move more like I would expect it to, but it is never completely accurate.
Below is some relevant code.
RAY-CASTING:
void CastRay(int mouse_x, int mouse_y) {
    int Object_Selected = -1;
    float Closest_Object = -1;
    //getWorldCoordinates calls glm::unProject
    nearPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight()-mouse_y, 0.0f));
    farPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight()-mouse_y, 1.0f));
    glm::vec3 direction = Input_Math.normalize(farPoint - nearPoint);
    //getObjectStack() retrieves all objects in the current scene
    std::vector<LoadOBJ> objectList = Object_Input_Info.getObjectStack();
    for (int i = 0; i < objectList.size(); i++) {
        std::vector<glm::vec3> Vertices = objectList[i].getVertices();
        for (int j = 0; j < Vertices.size(); j++) {
            if ( ( j + 1 ) % 3 == 0 ) {
                glm::vec3 face_normal = Input_Math.normalize(Input_Math.CrossProduct(Vertices[j-1] - Vertices[j-2], Vertices[j] - Vertices[j-2]));
                float nDotL = glm::dot(direction, face_normal);
                if (nDotL <= 0.0f) { //nDotL == 0: perpendicular; nDotL < 0: same direction; otherwise: opposite direction
                    float distance = glm::dot(face_normal, (Vertices[j-2] - nearPoint)) / nDotL;
                    glm::vec3 p = nearPoint + distance * direction;
                    glm::vec3 n1 = Input_Math.CrossProduct(Vertices[j-1] - Vertices[j-2], p - Vertices[j-2]);
                    glm::vec3 n2 = Input_Math.CrossProduct(Vertices[j] - Vertices[j-1], p - Vertices[j-1]);
                    glm::vec3 n3 = Input_Math.CrossProduct(Vertices[j-2] - Vertices[j], p - Vertices[j]);
                    if( glm::dot(face_normal, n1) >= 0.0f && glm::dot(face_normal, n2) >= 0.0f && glm::dot(face_normal, n3) >= 0.0f ) {
                        if(p.z > Closest_Object) {
                            //I create this "drag plane" to be used by my dragging function.
                            Drag_Plane[0] = (glm::vec3(Vertices[j-2].x, Vertices[j-2].y, p.z ));
                            Drag_Plane[1] = (glm::vec3(Vertices[j-1].x, Vertices[j-1].y, p.z ));
                            Drag_Plane[2] = (glm::vec3(Vertices[j].x  , Vertices[j].y  , p.z ));
                            //This is the object we selected in the scene
                            Object_Selected = i;
                            //These are the coordinates where the ray intersected the object
                            World_Start_Location = p;
                        }
                    }
                }
            }
        }
    }
    if(Object_Selected >= 0) { //If an object was intersected by the ray
        //selectObject -> simply sets the boolean "dragging" to true
        selectObject(Object_Selected, mouse_x, mouse_y);
    }
}
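For reference, a getWorldCoordinates wrapper built on glm::unProject might look roughly like this (a sketch only; the view/projection matrices and viewport are illustrative placeholders, since the question does not show how Input_Math stores them):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Sketch: unproject a window-space point (x, y, depth in [0,1]) into world space.
// 'view', 'projection' and 'viewport' must match the matrices used for rendering.
glm::vec3 getWorldCoordinates(const glm::vec3& windowPos,
                              const glm::mat4& view,
                              const glm::mat4& projection,
                              const glm::vec4& viewport)
{
    return glm::unProject(windowPos, view, projection, viewport);
}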
DRAGGING:
void DragObject(int mouse_x, int mouse_y) {
    if(dragging) {
        //Finds the coordinates where the ray intersects the "DragPlane" set by the original object intersection
        farPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight()-mouse_y, 1.0f));
        nearPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight()-mouse_y, 0.0f));
        glm::vec3 direction = Input_Math.normalize(farPoint - nearPoint);
        glm::vec3 face_normal = Input_Math.normalize(Input_Math.CrossProduct(Drag_Plane[1] - Drag_Plane[0], Drag_Plane[2] - Drag_Plane[0]));
        float nDotL = glm::dot(direction, face_normal);
        float distance = glm::dot(face_normal, (Drag_Plane[0] - nearPoint)) / nDotL;
        glm::vec3 Current_World_Location = nearPoint + distance * direction;
        //Calculate the difference between the current mouse location and its previous location
        glm::vec3 World_Delta = Current_World_Location - World_Start_Location;
        //Set the "start location" to the current location for the next loop
        World_Start_Location = Current_World_Location;
        //get the current object
        Object_Input_Info = Object_Input_Info.getObject(currentObject);
        //adds a translation matrix to the stack
        Object_Input_Info.TranslateVertices(World_Delta.x, World_Delta.y, World_Delta.z);
        //calculates the new vertices
        Object_Input_Info.Load_Data();
        //puts the new object back
        Object_Input_Info.Update_Object_Stack(currentObject);
    }
}

I have already faced problems similar to what you're reporting.
Instead of keeping track of the translation during mouse movement, you can do the following:
In your mouse-button callback, store a 'Delta' vector from the mouse position in world coordinates (P_mouse) to your object position (P_object). It would be something like:
Delta = P_object - P_mouse;
On every call of your mouse-motion callback, you just need to update the object position:
P_object = P_mouse + Delta;
Notice that Delta is constant during the whole dragging process.
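A minimal sketch of those two callbacks, assuming P_mouse comes from the same unproject-onto-drag-plane step used in DragObject above (the callback names and signatures are illustrative, not taken from the question's code):
#include <glm/glm.hpp>

glm::vec3 Delta;       // world-space offset from the mouse hit point to the object
bool dragging = false;

void onMouseDown(const glm::vec3& P_mouse, const glm::vec3& P_object)
{
    Delta = P_object - P_mouse;      // constant for the whole drag
    dragging = true;
}

void onMouseMove(const glm::vec3& P_mouse, glm::vec3& P_object)
{
    if (dragging)
        P_object = P_mouse + Delta;  // absolute position, no accumulated deltas
}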

Related

Centering the object into the 3D space with Direct3D

The idea is to present a drawn 3D object "centered" on the screen. After loading the object with WaveFrontReader, I got an array of vertices:
float bmin[3], bmax[3];
bmin[0] = bmin[1] = bmin[2] = std::numeric_limits<float>::max();
bmax[0] = bmax[1] = bmax[2] = -std::numeric_limits<float>::max();
for (int k = 0; k < 3; k++)
{
    for (auto& v : objx->wfr.vertices)
    {
        if (k == 0)
        {
            bmin[k] = std::min(v.position.x, bmin[k]);
            bmax[k] = std::max(v.position.x, bmax[k]);
        }
        if (k == 1)
        {
            bmin[k] = std::min(v.position.y, bmin[k]);
            bmax[k] = std::max(v.position.y, bmax[k]);
        }
        if (k == 2)
        {
            bmin[k] = std::min(v.position.z, bmin[k]);
            bmax[k] = std::max(v.position.z, bmax[k]);
        }
    }
}
I got the idea from the Viewer in TinyObjLoader (which uses OpenGL though), and then:
float maxExtent = 0.5f * (bmax[0] - bmin[0]);
if (maxExtent < 0.5f * (bmax[1] - bmin[1])) {
    maxExtent = 0.5f * (bmax[1] - bmin[1]);
}
if (maxExtent < 0.5f * (bmax[2] - bmin[2])) {
    maxExtent = 0.5f * (bmax[2] - bmin[2]);
}
_3dp.scale[0] = maxExtent;
_3dp.scale[1] = maxExtent;
_3dp.scale[2] = maxExtent;
_3dp.translation[0] = -0.5 * (bmax[0] + bmin[0]);
_3dp.translation[1] = -0.5 * (bmax[1] + bmin[1]);
_3dp.translation[2] = -0.5 * (bmax[2] + bmin[2]);
However, this doesn't work. With an object like this spider, whose vertex coordinates do not extend beyond +/-100, the formula above gives a scale of about 100x, and yet with the current view at (0,0,0) the object is too close: I have to set the Z translation manually to something like 50000 to see it in a full box with a D3D11_VIEWPORT viewport = { 0.0f, 0.0f, w, h, 0.0f, 1.0f };. Not to mention that the Y is not centered either.
Is there a proper algorithm to center the object into view?
Thanks a lot
You can change the position of the camera itself rather than moving the objects.
Editing the camera position is the approach recommended in OpenGL tutorials.
In games, the camera (which captures the viewpoint the rendered objects are viewed from) is not placed in the middle of the scene but further away, so you can see everything going on in the view/scene.
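One common way to place the camera (a sketch, not the poster's code; the names are illustrative) is to back it away from the bounding-box center by a distance derived from the vertical field of view, so a bounding sphere of the object just fits the frustum:
#include <cmath>

// center = 0.5 * (bmin + bmax), radius = 0.5 * length(bmax - bmin), as computed above.
// fovY is the vertical field of view of the projection, in radians.
float frameDistance(float radius, float fovY)
{
    // Distance at which a sphere of the given radius fills the vertical FOV.
    return radius / std::tan(fovY * 0.5f);
}

// Look at 'center' from center + (0, 0, frameDistance(radius, fovY)), and choose
// near/far planes that bracket [distance - radius, distance + radius].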

Ray casting in rotating fan configuration produces point cloud with curvature, how to eliminate curvature?

I'm attempting to perform an intersection test using ray casting (not sure if that is the correct term, so please forgive me if not) and am outputting the intersections as a point cloud. The point cloud shows curvature on the Z axis only; it is completely flat on the Y axis, and the horizontal axis in this image is the X axis:
I borrowed concepts from the Scratchapixel site, specifically http://scratchapixel.com/lessons/3d-basic-rendering/minimal-ray-tracer-rendering-simple-shapes/ray-box-intersection.
Essentially, I am generating 16 rays, all with the same origin vector. The direction vectors start at +15 degrees on the YZ plane, and continue in increments of -2 degrees down to -15. I have an axis aligned bounding box that I am testing intersection with. I use a rotation transform to rotate the 16 rays CCW around the Z axis. I am performing the intersection test for all 16 rays each 0.1 degrees, and if it returns true, I add the point to the point cloud.
Here's my intersection code:
bool test_intersect(Box b, Ray r, Vec3f& intersect_point)
{
    float txmin = 0.0f, txmax = 0.0f, tymin = 0.0f, tymax = 0.0f, tzmin = 0.0f, tzmax = 0.0f;
    float t_min = 0.0f, t_max = 0.0f, t = 0.0f;
    // Determine inverse direction of ray to alleviate 0 = -0 issues
    Vec3f inverse_direction(1 / r.direction.x, 1 / r.direction.y, 1 / r.direction.z);
    // Solving box_min/box_max = O + D*t
    txmin = (b.box_min.x - r.origin.x) * inverse_direction.x;
    txmax = (b.box_max.x - r.origin.x) * inverse_direction.x;
    tymin = (b.box_min.y - r.origin.y) * inverse_direction.y;
    tymax = (b.box_max.y - r.origin.y) * inverse_direction.y;
    tzmin = (b.box_min.z - r.origin.z) * inverse_direction.z;
    tzmax = (b.box_max.z - r.origin.z) * inverse_direction.z;
    // Depending on direction of ray tmin may be > tmax, so we may need to swap
    if (txmin > txmax) std::swap(txmin, txmax);
    if (tymin > tymax) std::swap(tymin, tymax);
    if (tzmin > tzmax) std::swap(tzmin, tzmax);
    t_min = txmin;
    t_max = txmax;
    // If the t-value of a min is greater than the t-value of a max,
    // we missed the object in that plane.
    if ((t_min > tymax) || (tymin > t_max))
        return false;
    if (tymin > t_min)
        t_min = tymin;
    if (tymax < t_max)
        t_max = tymax;
    if ((t_min > tzmax) || (tzmin > t_max))
        return false;
    if (tzmin > t_min)
        t_min = tzmin;
    if (tzmax < t_max)
        t_max = tzmax;
    if (t_min > 0)
        t = t_min;
    else if (t_max > 0)
        t = t_max;
    else
        return false;
    intersect_point.x = r.origin.x + r.direction.x * t;
    intersect_point.y = r.origin.y + r.direction.y * t;
    intersect_point.z = r.origin.z + r.direction.z * t;
    return true;
}
And my rotation:
// Rotation around z axis, for rotating array and checking beam intersections
void transform_rotate_z(Vec3f& in_vector, float angle)
{
    float radians = angle * (M_PI / 180);
    float result_x = cos(radians) * in_vector.x + -sin(radians) * in_vector.y;
    float result_y = sin(radians) * in_vector.x + cos(radians) * in_vector.y;
    in_vector.x = result_x;
    in_vector.y = result_y;
}
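For reference, the fan described above could be generated and swept roughly like this (a sketch of the driver loop only, not a fix; the Ray and Box constructors and the base direction are assumptions made to match the types used in test_intersect):
#include <cmath>
#include <vector>

std::vector<Vec3f> point_cloud;
std::vector<Ray> fan;
Vec3f origin(0.0f, 0.0f, 0.0f);                       // shared ray origin

// 16 directions from +15 down to -15 degrees, tilted in the YZ plane.
for (int i = 0; i < 16; ++i)
{
    float elevation = 15.0f - 2.0f * i;               // +15, +13, ..., -15 degrees
    float rad = elevation * (M_PI / 180.0f);
    fan.push_back(Ray(origin, Vec3f(0.0f, cos(rad), sin(rad))));
}

// Sweep the whole fan around the Z axis in 0.1-degree steps.
for (float sweep = 0.0f; sweep < 360.0f; sweep += 0.1f)
{
    for (Ray& r : fan)
    {
        transform_rotate_z(r.direction, 0.1f);        // rotate this ray CCW about Z
        Vec3f hit;
        if (test_intersect(box, r, hit))              // 'box' is the AABB under test
            point_cloud.push_back(hit);
    }
}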
I have racked my brain for quite a while, but I can't seem to determine how to prevent this curvature; I'm sure I'm overlooking something simple. I'd be grateful for any help you can provide.

Refraction in Raytracing?

I've been working on my raytracer again. I added reflection and multithreading support. Currently I am working on adding refraction, but it's only half working.
As you can see, there is a center sphere (without specular highlight), a reflecting sphere (to the right) and a refracting sphere (to the left). I'm pretty happy with the reflections; they look very good. The refraction is only partly working: the light is refracted and all the shadows of the spheres are visible in the sphere (refraction index 1.4), but there is an outer black ring.
EDIT: Apparently the black ring gets bigger, and therefore the sphere smaller, when I increase the refraction index of the sphere. Conversely, when decreasing the index of refraction, the sphere gets larger and the black ring smaller... until, with the index of refraction set to one, the ring disappears entirely.
IOR = 1.9
IOR = 1.1
IOR = 1.00001
And interestingly enough at IOR = 1 the sphere loses its transparency and becomes white.
I think I covered total internal reflection and it is not the issue here.
Now the code:
I'm using the operator | for dot products, so (vec|vec) is a dot product, and the operator ~ to invert vectors. The objects, both lights and spheres, are stored in Object **objects;.
Raytrace function
Colour raytrace(const Ray &r, const int &depth)
{
//first find the nearest intersection of a ray with an object
Colour finalColour = skyBlue *(r.getDirection()|Vector(0,0,-1)) * SKY_FACTOR;
double t, t_min = INFINITY;
int index_nearObj = -1;
for(int i = 0; i < objSize; i++)
{
if(!dynamic_cast<Light *>(objects[i]))//skip light src
{
t = objects[i]->findParam(r);
if(t > 0 && t < t_min)
{
t_min = t;
index_nearObj = i;
}
}
}
//no intersection
if(index_nearObj < 0)
return finalColour;
Vector intersect = r.getOrigin() + r.getDirection()*t_min;
Vector normal = objects[index_nearObj]->NormalAtIntersect(intersect);
Colour objectColor = objects[index_nearObj]->getColor();
Ray rRefl, rRefr; //reflected and refracted Ray
Colour refl = finalColour, refr = finalColour; //reflected and refracted colours
double reflectance = 0, transmittance = 0;
if(objects[index_nearObj]->isReflective() && depth < MAX_TRACE_DEPTH)
{
//handle reflection
rRefl = objects[index_nearObj]->calcReflectingRay(r, intersect, normal);
refl = raytrace(rRefl, depth + 1);
reflectance = 1;
}
if(objects[index_nearObj]->isRefractive() && depth < MAX_TRACE_DEPTH)
{
//handle transmission
rRefr = objects[index_nearObj]->calcRefractingRay(r, intersect, normal, reflectance, transmittance);
refr = raytrace(rRefr, depth + 1);
}
Ray rShadow; //shadow ray
bool shadowed;
double t_light = -1;
Colour localColour;
Vector tmpv;
//get material properties
double ka = 0.2; //ambient coefficient
double kd; //diffuse coefficient
double ks; //specular coefficient
Colour ambient = ka * objectColor; //ambient component
Colour diffuse, specular;
double brightness;
localColour = ambient;
//look if the object is in shadow or light
//do this by casting a ray from the obj and
// check if there is an intersection with another obj
for(int i = 0; i < objSize; i++)
{
if(dynamic_cast<Light *>(objects[i])) //if object is a light
{
//for each light
shadowed = false;
//create Ray to light
tmpv = objects[i]->getPosition() - intersect;
rShadow = Ray(intersect + (!tmpv) * BIAS, tmpv);
t_light = objects[i]->findParam(rShadow);
if(t_light < 0) //no intersect, which is quite impossible
continue;
//then we check if that Ray intersects one object that is not a light
for(int j = 0; j < objSize; j++)
{
if(!dynamic_cast<Light *>(objects[j]) && j != index_nearObj)//if obj is not a light
{
t = objects[j]->findParam(rShadow);
//if it is smaller we know the light is behind the object
//--> shadowed by this light
if (t >= 0 && t < t_light)
{
// Set the flag and stop the cycle
shadowed = true;
break;
}
}
}
if(!shadowed)
{
rRefl = objects[index_nearObj]->calcReflectingRay(rShadow, intersect, normal);
//reflected ray from light src, for ks
kd = maximum(0.0, (normal|rShadow.getDirection()));
if(objects[index_nearObj]->getShiny() <= 0)
ks = 0;
else
ks = pow(maximum(0.0, (r.getDirection()|rRefl.getDirection())), objects[index_nearObj]->getShiny());
diffuse = kd * objectColor;// * objects[i]->getColour();
specular = ks * objects[i]->getColor();
brightness = 1 /(1 + t_light * DISTANCE_DEPENDENCY_LIGHT);
localColour += brightness * (diffuse + specular);
}
}
}
finalColour = localColour + (transmittance * refr + reflectance * refl);
return finalColour;
}
Now the function that calculates the refracted ray. I used several different sites as resources, and each had similar algorithms. This is the best I could do so far. It may just be a tiny detail I'm not seeing...
Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection, Vector &normal, double &refl, double &trans) const
{
    double n1, n2, n;
    double cosI = (r.getDirection()|normal);
    if(cosI > 0.0)
    {
        n1 = 1.0;
        n2 = getRefrIndex();
        normal = ~normal;//invert
    }
    else
    {
        n1 = getRefrIndex();
        n2 = 1.0;
        cosI = -cosI;
    }
    n = n1/n2;
    double sinT2 = n*n * (1.0 - cosI * cosI);
    double cosT = sqrt(1.0 - sinT2);
    //fresnel equations
    double rn = (n1 * cosI - n2 * cosT)/(n1 * cosI + n2 * cosT);
    double rt = (n2 * cosI - n1 * cosT)/(n2 * cosI + n2 * cosT);
    rn *= rn;
    rt *= rt;
    refl = (rn + rt)*0.5;
    trans = 1.0 - refl;
    if(n == 1.0)
        return r;
    if(cosT*cosT < 0.0)//tot inner refl
    {
        refl = 1;
        trans = 0;
        return calcReflectingRay(r, intersection, normal);
    }
    Vector dir = n * r.getDirection() + (n * cosI - cosT)*normal;
    return Ray(intersection + dir * BIAS, dir);
}
EDIT: I also changed the refraction indices around. From
if(cosI > 0.0)
{
    n1 = 1.0;
    n2 = getRefrIndex();
    normal = ~normal;
}
else
{
    n1 = getRefrIndex();
    n2 = 1.0;
    cosI = -cosI;
}
to
if(cosI > 0.0)
{
    n1 = getRefrIndex();
    n2 = 1.0;
    normal = ~normal;
}
else
{
    n1 = 1.0;
    n2 = getRefrIndex();
    cosI = -cosI;
}
Then I get this, and almost the same (still upside down) with an index of refraction of 1!
And the reflection calculation:
Ray Sphere::calcReflectingRay(const Ray &r, const Vector &intersection, const Vector &normal) const
{
    Vector rdir = r.getDirection();
    Vector dir = rdir - 2 * (rdir|normal) * normal;
    return Ray(intersection + dir*BIAS, dir);
    //the Ray constructor automatically normalizes directions
}
So my question is: How do I fix the outer black circle? Which version is correct?
Help is greatly appreciated :)
This is compiled on Linux using g++ 4.8.2.
Warning: the following is a guess, not a certainty. I'd have to look at the code in more detail to be sure what's happening and why.
That said, it looks to me like your original code is basically simulating a concave lens instead of convex.
A convex lens is basically a magnifying lens, bringing light rays from a relatively small area into focus on a plane:
This also shows why the corrected code shows an upside-down image. The rays of light coming from the top on one side get projected to the bottom on the other (and vice versa).
Getting back to the concave lens though: a concave lens is a reducing lens that shows a wide angle of picture from in front of the lens:
If you look at the bottom right corner here, it shows what I suspect is the problem: especially with a high index of refraction, the rays of light trying to come into the lens intersect the edge of the lens itself. For all the angles wider than that, you're typically going to see a black ring, because the front edge of the lens is acting as a shade to prevent light from entering.
Increasing the index of refraction increases the width of that black ring, because the light is bent more, so a larger portion at the edges is intersecting the outer edge of the lens.
In case you care about how they avoid this with things like wide-angle camera lenses, the usual route is to use a meniscus lens, at least for the front element:
This isn't a panacea, but does at least prevent incoming light rays from intersecting the outer edge of the front lens element. Depending on exactly how wide an angle the lens needs to cover, it'll often be quite a bit less radical of a meniscus than this (and in some cases it'll be a plano-concave) but you get the general idea.
Final warning: of course, all of these are hand-drawn and intended only to give the general idea, not (for example) to reflect the design of any particular lens, an element with any particular index of refraction, etc.
I stumbled across this exact issue as well when working on a ray tracer. #lightxbulb's comment about normalizing the ray direction vector fixed this problem for me.
Firstly, keep your code that computes the refraction indices prior to your edit. In other words, you should be seeing those black rings in your renderings.
Then, in your calcRefractingRay function where you compute cosI, use the dot product of normalize(r.getDirection()) and normal. Currently you're taking the dot product of r.getDirection() and normal.
Secondly, when you compute the refracted ray direction dir, use normalize(r.getDirection()) instead of r.getDirection(). Again, you're currently using r.getDirection() in that calculation.
Also, there is an issue with the way you're checking for total internal reflection. You should check that the term you're taking the square root of (1.0 - sinT2) is non-negative before actually computing the square root.
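Putting those points together, the core of the refraction computation might look like this (a sketch using glm types purely for illustration, since the poster's Vector/Ray classes are not shown here; it is not a drop-in replacement for calcRefractingRay):
#include <glm/glm.hpp>
#include <cmath>

// I: incoming ray direction; N: surface normal, assumed to face the incoming ray;
// n1/n2: refractive indices on the incident/transmitted sides.
bool refractDirection(glm::dvec3 I, const glm::dvec3& N, double n1, double n2, glm::dvec3& out)
{
    I = glm::normalize(I);               // fix 1: use a unit direction throughout
    double cosI = -glm::dot(I, N);
    double n = n1 / n2;
    double sinT2 = n * n * (1.0 - cosI * cosI);
    if (sinT2 > 1.0)                     // fix 2: test BEFORE taking the sqrt
        return false;                    // total internal reflection
    double cosT = std::sqrt(1.0 - sinT2);
    out = n * I + (n * cosI - cosT) * N; // refracted direction, built from the normalized I
    return true;
}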
Hope that helps!

Realtime object painting

I am trying to paint onto an object's texture in real time. I am using Irrlicht for now, but that does not really matter.
So far, I've got the right UV coordinates using this algorithm:
1. find out which object's triangle the user selected (ray casting, nothing really difficult)
2. find out the UV (barycentric) coordinates of the intersection point on that triangle
3. find out the UV (texture) coordinates of each triangle vertex
4. find out the UV (texture) coordinates of the intersection point
5. calculate the texture image coordinates for the intersection point
But somehow, when I draw at the point I got in the 5th step on the texture image, I get totally wrong results: when drawing a rectangle at the cursor point, its X (or Z) coordinate is inverted:
Here's the code I am using to fetch the texture coordinates:
core::vector2df getPointUV(core::triangle3df tri, core::vector3df p)
{
    core::vector3df
        v0 = tri.pointC - tri.pointA,
        v1 = tri.pointB - tri.pointA,
        v2 = p - tri.pointA;
    float dot00 = v0.dotProduct(v0),
          dot01 = v0.dotProduct(v1),
          dot02 = v0.dotProduct(v2),
          dot11 = v1.dotProduct(v1),
          dot12 = v1.dotProduct(v2);
    float invDenom = 1.f / ((dot00 * dot11) - (dot01 * dot01)),
          u = (dot11 * dot02 - dot01 * dot12) * invDenom,
          v = (dot00 * dot12 - dot01 * dot02) * invDenom;
    scene::IMesh* m = Mesh->getMesh(((scene::IAnimatedMeshSceneNode*)Model)->getFrameNr());
    core::array<video::S3DVertex> VA, VB, VC;
    video::SMaterial Material;
    for (unsigned int i = 0; i < m->getMeshBufferCount(); i++)
    {
        scene::IMeshBuffer* mb = m->getMeshBuffer(i);
        video::S3DVertex* vertices = (video::S3DVertex*) mb->getVertices();
        for (unsigned long long v = 0; v < mb->getVertexCount(); v++)
        {
            if (vertices[v].Pos == tri.pointA)
                VA.push_back(vertices[v]);
            else if (vertices[v].Pos == tri.pointB)
                VB.push_back(vertices[v]);
            else if (vertices[v].Pos == tri.pointC)
                VC.push_back(vertices[v]);
            if (vertices[v].Pos == tri.pointA || vertices[v].Pos == tri.pointB || vertices[v].Pos == tri.pointC)
                Material = mb->getMaterial();
            if (VA.size() > 0 && VB.size() > 0 && VC.size() > 0)
                break;
        }
        if (VA.size() > 0 && VB.size() > 0 && VC.size() > 0)
            break;
    }
    core::vector2df
        A = VA[0].TCoords,
        B = VB[0].TCoords,
        C = VC[0].TCoords;
    core::vector2df P(A + (u * (C - A)) + (v * (B - A)));
    core::dimension2du Size = Material.getTexture(0)->getSize();
    CursorOnModel = core::vector2di(Size.Width * P.X, Size.Height * P.Y);
    int X = Size.Width * P.X, Y = Size.Height * P.Y;
    // DRAWING SOME RECTANGLE
    Material.getTexture(0)->lock(true);
    Device->getVideoDriver()->setRenderTarget(Material.getTexture(0), true, true, 0);
    Device->getVideoDriver()->draw2DRectangle(video::SColor(255, 0, 100, 75),
        core::rect<s32>((X - 10), (Y - 10), (X + 10), (Y + 10)));
    Device->getVideoDriver()->setRenderTarget(0, true, true, 0);
    Material.getTexture(0)->unlock();
    return core::vector2df(X, Y);
}
I just want to make my object paintable in real time. My current problems are: wrong texture coordinate calculation, and non-unique vertex UV coordinates (so drawing something on one side of the dwarf's axe would draw the same thing on the other side of that axe).
How should I do this?
I was able to use your codebase and get it to work for me.
Re your second problem "non-unique vertex UV coordinates":
Well, you are absolutely right: you need unique vertex UVs to get this working, which means you have to unwrap your models and not make use of shared UV space for, e.g., mirrored elements and the like (e.g. the left/right boot: if they use the same UV space, you'll automatically paint on both, where you want one to be red and the other to be green). You can check out "uvlayout" (a tool) or the UV-unwrap modifier in 3ds Max.
Re the first and more important problem, "wrong texture coordinate calculation":
The calculation of your barycentric coordinates is correct, but I suppose your input data is wrong. I assume you get the triangle and the collision point by using Irrlicht's CollisionManager and TriangleSelector. The problem is that the positions of the triangle's vertices (which you get as a return value from the collision test) are in world coordinates, but you need them in model coordinates for the calculation. Here's what you need to do:
pseudocode:
1. add the node that contains the mesh of the hit triangle as a parameter to getPointUV()
2. get the inverse absoluteTransformation matrix by calling node->getAbsoluteTransformation() and inverting it
3. transform the vertices of the triangle by this inverse matrix and use those values for the rest of the method.
Below you'll find my optimized method, which does it for a very simple mesh (one mesh, only one meshbuffer).
Code:
irr::core::vector2df getPointUV(irr::core::triangle3df tri, irr::core::vector3df p, irr::scene::IMeshSceneNode* pMeshNode, irr::video::IVideoDriver* pDriver)
{
    irr::core::matrix4 inverseTransform(
        pMeshNode->getAbsoluteTransformation(),
        irr::core::matrix4::EM4CONST_INVERSE);
    inverseTransform.transformVect(tri.pointA);
    inverseTransform.transformVect(tri.pointB);
    inverseTransform.transformVect(tri.pointC);
    irr::core::vector3df
        v0 = tri.pointC - tri.pointA,
        v1 = tri.pointB - tri.pointA,
        v2 = p - tri.pointA;
    float dot00 = v0.dotProduct(v0),
          dot01 = v0.dotProduct(v1),
          dot02 = v0.dotProduct(v2),
          dot11 = v1.dotProduct(v1),
          dot12 = v1.dotProduct(v2);
    float invDenom = 1.f / ((dot00 * dot11) - (dot01 * dot01)),
          u = (dot11 * dot02 - dot01 * dot12) * invDenom,
          v = (dot00 * dot12 - dot01 * dot02) * invDenom;
    irr::video::S3DVertex A, B, C;
    irr::video::S3DVertex* vertices = static_cast<irr::video::S3DVertex*>(
        pMeshNode->getMesh()->getMeshBuffer(0)->getVertices());
    for(unsigned int i = 0; i < pMeshNode->getMesh()->getMeshBuffer(0)->getVertexCount(); ++i)
    {
        if( vertices[i].Pos == tri.pointA)
        {
            A = vertices[i];
        }
        else if( vertices[i].Pos == tri.pointB)
        {
            B = vertices[i];
        }
        else if( vertices[i].Pos == tri.pointC)
        {
            C = vertices[i];
        }
    }
    irr::core::vector2df t2 = B.TCoords - A.TCoords;
    irr::core::vector2df t1 = C.TCoords - A.TCoords;
    irr::core::vector2df uvCoords = A.TCoords + t1*u + t2*v;
    return uvCoords;
}
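An illustrative call site for this version (the node/driver/texture variable names below are assumptions, not from the answer) converts the returned UV back to texel coordinates the same way the original code did:
// hitTriangle/hitPoint come from the collision test, meshNode/driver from the scene setup.
irr::core::vector2df uv = getPointUV(hitTriangle, hitPoint, meshNode, driver);
irr::core::dimension2du size = texture->getSize();
int x = static_cast<int>(size.Width  * uv.X);
int y = static_cast<int>(size.Height * uv.Y);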

My shadow volumes don't move with my light

I'm currently trying to implement shadow volumes in my OpenGL world. Right now I'm just focusing on getting the volumes calculated correctly.
Right now I have a teapot that's rendered, and I can get it to generate some shadow volumes; however, they always point directly to the left of the teapot. No matter where I move my light (and I can tell that I'm actually moving the light, because the teapot is lit with diffuse lighting), the shadow volumes always go straight left.
The method I'm using to create the volumes is:
1. Find silhouette edges by looking at every triangle in the object. If the triangle isn't lit up (tested with the dot product), then skip it. If it is lit, then check all of its edges. If an edge is already in the list of silhouette edges, remove it; otherwise add it.
2. Once I have all the silhouette edges, I go through each edge creating a quad with one vertex at each vertex of the edge, and the other two just extended away from the light.
Here is my code that does it all:
void getSilhoueteEdges(Model model, vector<Edge> &edges, Vector3f lightPos) {
    //for every triangle
    //    if triangle is not facing the light then skip
    //    for every edge
    //        if edge is already in the list
    //            remove
    //        else
    //            add
    vector<Face> faces = model.faces;
    //for every triangle
    for ( unsigned int i = 0; i < faces.size(); i++ ) {
        Face currentFace = faces.at(i);
        //if triangle is not facing the light
        //for this I'll just use the normal of any vertex, it should be the same for all of them
        Vector3f v1 = model.vertices[currentFace.vertices[0] - 1];
        Vector3f n1 = model.normals[currentFace.normals[0] - 1];
        Vector3f dirToLight = lightPos - v1;
        dirToLight.normalize();
        float dot = n1.dot(dirToLight);
        if ( dot <= 0.0f )
            continue; //then skip
        //let's get the edges
        //v1,v2; v2,v3; v3,v1
        Vector3f v2 = model.vertices[currentFace.vertices[1] - 1];
        Vector3f v3 = model.vertices[currentFace.vertices[2] - 1];
        Edge e[3];
        e[0] = Edge(v1, v2);
        e[1] = Edge(v2, v3);
        e[2] = Edge(v3, v1);
        //for every edge
        //triangles only have 3 edges so loop 3 times
        for ( int j = 0; j < 3; j++ ) {
            if ( edges.size() == 0 ) {
                edges.push_back(e[j]);
                continue;
            }
            bool wasRemoved = false;
            //if edge is in the list
            for ( unsigned int k = 0; k < edges.size(); k++ ) {
                Edge tempEdge = edges.at(k);
                if ( tempEdge == e[j] ) {
                    edges.erase(edges.begin() + k);
                    wasRemoved = true;
                    break;
                }
            }
            if ( ! wasRemoved )
                edges.push_back(e[j]);
        }
    }
}

void extendEdges(vector<Edge> edges, Vector3f lightPos, GLBatch &batch) {
    float extrudeSize = 100.0f;
    batch.Begin(GL_QUADS, edges.size() * 4);
    for ( unsigned int i = 0; i < edges.size(); i++ ) {
        Edge edge = edges.at(i);
        batch.Vertex3f(edge.v1.x, edge.v1.y, edge.v1.z);
        batch.Vertex3f(edge.v2.x, edge.v2.y, edge.v2.z);
        Vector3f temp = edge.v2 + (( edge.v2 - lightPos ) * extrudeSize);
        batch.Vertex3f(temp.x, temp.y, temp.z);
        temp = edge.v1 + ((edge.v1 - lightPos) * extrudeSize);
        batch.Vertex3f(temp.x, temp.y, temp.z);
    }
    batch.End();
}

void createShadowVolumesLM(Vector3f lightPos, Model model) {
    getSilhoueteEdges(model, silhoueteEdges, lightPos);
    extendEdges(silhoueteEdges, lightPos, boxShadow);
}
My light is defined, and the main shadow-volume generation method is called, as follows:
Vector3f vLightPos = Vector3f(-5.0f,0.0f,2.0f);
createShadowVolumesLM(vLightPos, boxModel);
All of my code seems self documented in places I don't have any comments, but if there are any confusing parts, let me know.
I have a feeling it's just a simple mistake I overlooked. Here is what it looks like with and without the shadow volumes being rendered.
It would seem you aren't transforming the shadow volumes. Either you need to set the model-view matrix on them so they get transformed the same as the rest of the geometry, or you need to transform all the vertices (by hand) into view space and then do the silhouetting and transformation in view space.
Obviously the first method will use less CPU time and would be, IMO, preferable.
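To make that concrete, here is a sketch of the two options (using glm purely for illustration; modelMatrix stands for whatever transform is applied to the teapot when it is drawn, which is not shown in the question):
#include <glm/glm.hpp>

// Option 1: keep the volume in model space and draw it with the same
// model-view matrix as the teapot, but compute the silhouette against the
// light expressed in model space.
glm::vec3 lightInModelSpace(const glm::mat4& modelMatrix, const glm::vec3& lightWorld)
{
    return glm::vec3(glm::inverse(modelMatrix) * glm::vec4(lightWorld, 1.0f));
}

// Option 2: transform each vertex into world/view space first, do the
// silhouetting there, and draw the resulting volume with no further transform.
glm::vec3 vertexInWorldSpace(const glm::mat4& modelMatrix, const glm::vec3& vLocal)
{
    return glm::vec3(modelMatrix * glm::vec4(vLocal, 1.0f));
}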