Compute mesh vertices from a Plane - C++

I would like to draw my Plane with OpenGL to debug my program, but I don't know how to do that (I'm not very good at math).
I've got a Plane with 2 attributes:
A constant
A normal
Here is what I've got:
////////////////////////////////////////////////////////////
Plane::Plane( const glm::vec3& a, const glm::vec3& b, const glm::vec3& c )
{
    glm::vec3 edge1 = b - a;
    glm::vec3 edge2 = c - a;
    this->normal = glm::cross(edge1, edge2);
    this->constant = -glm::dot( this->normal, a );
    this->normalize();
}
////////////////////////////////////////////////////////////
Plane::Plane( const glm::vec4& values )
{
    this->normal = glm::vec3( values.x, values.y, values.z );
    this->constant = values.w;
}
////////////////////////////////////////////////////////////
Plane::Plane( const glm::vec3& normal, const float constant ) :
    constant (constant),
    normal (normal)
{
}
////////////////////////////////////////////////////////////
Plane::Plane( const glm::vec3& normal, const glm::vec3& point )
{
    this->normal = normal;
    this->constant = -glm::dot(normal, point);
    this->normalize();
}
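(normalize() is not shown in the post; presumably it divides both members by the normal's length, something like this hypothetical sketch:)
void Plane::normalize()
{
    float length = glm::length(this->normal);
    this->normal /= length;
    this->constant /= length;
}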
I would like to draw it to see if everything is OK. How can I do that?
(I need to compute vertices and indices to draw it.)

When you want to draw, you need two vectors that are perpendicular to normal and a point on the plane. That's not so hard. First, let's get a vector that is not parallel to normal. Call it some_vect. For example:
if normal is (nearly) parallel to [0, 0, 1]
    some_vect = [0, 1, 0]
else
    some_vect = [0, 0, 1]
Then, calculating vect1 = cross(normal, some_vect) gives you a vector perpendicular to normal. Calculating vect2 = cross(normal, vect1) gives you another vector that is perpendicular to both normal and vect1.
Having two perpendicular vectors vect1 and vect2 and one point on the plane, drawing the plane becomes trivial. For example, the square with the following four corner points (remember to normalize the vectors):
point + vect1 * SIZE
point + vect2 * SIZE
point - vect1 * SIZE
point - vect2 * SIZE
where point is a point on the plane. If your constant is the distance from the origin, then one point would be constant * normal (note that with the question's convention, constant = -dot(normal, point), a point on the plane is -constant * normal).
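A minimal C++/GLM sketch of the above, assuming the plane stores a unit normal and constant = -dot(normal, point) as in the question's constructors (the function name is mine):

#include <glm/glm.hpp>
#include <array>
#include <cmath>

// Sketch: build the four corners of a square lying on the plane.
std::array<glm::vec3, 4> planeQuad(const glm::vec3& normal, float constant, float size)
{
    glm::vec3 n = glm::normalize(normal);
    // pick any vector that is not (nearly) parallel to n
    glm::vec3 someVect = (std::fabs(n.z) < 0.9f) ? glm::vec3(0, 0, 1) : glm::vec3(0, 1, 0);
    glm::vec3 vect1 = glm::normalize(glm::cross(n, someVect));
    glm::vec3 vect2 = glm::normalize(glm::cross(n, vect1));
    glm::vec3 point = -constant * n; // a point on the plane, since constant = -dot(n, point)
    return { point + vect1 * size,
             point + vect2 * size,
             point - vect1 * size,
             point - vect2 * size };
}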

The difficulty with drawing a plane is that it's an infinite surface; i.e. by definition it has no edges or vertices. If you want to show the plane in a typical polygonal fashion then you'll have to crop it to a particular area, such as a square.
A fairly easy approach is this:
Pick an arbitrary unit vector which is perpendicular to the normal. Store it as v1.
Use the cross product of v1 and the plane normal to get v2.
Negate v1 to get v3.
Negate v2 to get v4.
The vectors v1-v4 now give the four corners of a square with the same orientation as your plane. All you need to do is scale them up to whatever size you want and draw them relative to any point on your plane. A sketch of the index layout follows below.
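Since the question also asks for indices: with the four corners ordered around the square as above, two triangles cover the quad (a sketch; vertex buffer setup omitted):

// vertices: the four corners c0..c3, in order around the square
unsigned int indices[6] = { 0, 1, 2,   0, 2, 3 };
// e.g. glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, indices);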

Related

OpenGL: how to get the matrix that mirrors the scene?

Context:
I'm trying to make a mirror in OpenGL.
What I need to define a mirror is a plane, given by a point p and two unit vectors n1 and n2 lying in it. n3 = n1 ^ n2 defines a vector oriented towards the viewer of the mirror (the sign is important, to discard fragments that are behind the mirror and would otherwise be reflected in front of it). n1, n2 and n3 define a basis in the coordinates of the mirror (with origin p, an arbitrary point located on the mirror plane).
Now, to draw the reflection of the scene I need the matrix reflectionMatrix that transforms any point into its mirror image. I don't even know if that's possible, but what I tried looks OK except that there are problems when the mirror is very close to the point.
I tried to use the change-of-basis (transition) matrices from linear algebra, but I don't even know if I can use them with transformation matrices. I am also a bit confused about the transformation order.
Here is the code:
/// @brief Compute change-of-basis matrix.
/// From basis {e1, e2, e3} to canonical basis of R^3.
mat4 transition(const glm::vec3& e1, const glm::vec3& e2, const glm::vec3& e3)
{
    // GLM is column major: mat4(col1, col2, col3, col4)
    return {
        vec4(e1, 0),
        vec4(e2, 0),
        vec4(e3, 0),
        vec4(vec3(0), 1)
    };
}

glm::mat4 getReflectionMatrix() {
    const glm::vec3 mirrorPos{scene.mirrorPos};
    const glm::vec3 n1{1, 0, 0};
    const glm::vec3 n2{0, 1, 0};
    const glm::vec3 n3{glm::cross(n1, n2)};
    // n1, n2 are orthogonal vectors of the mirror
    // n3 completes the basis
    // the vertices should be in counter-clockwise (trigonometric) order for this to work
    glm::mat4 reflectionMatrix{1};
    // Let p be the point we mirror
    // Compute p1 = p relative to the mirror [translation in the global coordinate system ~= premultiply]
    reflectionMatrix = glm::translate(reflectionMatrix, -mirrorPos);
    // Compute p2 = same as p1 but in the coordinate system {n1, n2, n3}
    reflectionMatrix *= glm::inverse(transition(n1, n2, n3));
    // Compute p3 = (p2.x, p2.y, -p2.z) to go into the mirror (z is the orthogonal distance from the mirror)
    reflectionMatrix = glm::scale(reflectionMatrix, {1, 1, -1});
    // Compute p4 = same as p3 but in the canonical coordinate system (again relative to the mirror)
    reflectionMatrix *= transition(n1, n2, n3);
    // Compute p5 = p4 taken from mirror-relative back to global space [translation in the global coordinate system]
    reflectionMatrix = glm::translate(reflectionMatrix, mirrorPos);
    return reflectionMatrix;
}
What is wrong with the reflection matrix? Or is there a simpler way?
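As a point of comparison (a sketch, not from the post): a reflection about the plane through p with unit normal n can also be written in closed form with the Householder matrix I - 2nn^T, conjugated by translations, i.e. M = T(p) * (I - 2nn^T) * T(-p):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: closed-form reflection about the plane through p with unit normal n.
glm::mat4 reflectionAboutPlane(const glm::vec3& p, const glm::vec3& n)
{
    glm::mat3 householder = glm::mat3(1.0f) - 2.0f * glm::outerProduct(n, n);
    glm::mat4 reflect(householder); // upper-left 3x3; translation part stays zero
    return glm::translate(glm::mat4(1.0f), p) * reflect * glm::translate(glm::mat4(1.0f), -p);
}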

Determining rotation matrix about an axis for a given angle

I've been trying to understand matrices and vectors, and I implemented Rodrigues' rotation formula to determine the rotation matrix about an axis for a given angle. I've got a function Transform which calls out to a function rotate.
// initial values of eye = {0,0,7}
// initial values of up = {0,1,0}
void Transform(float degrees, vec3& eye, vec3& up) {
    vec3 axis = glm::cross(glm::normalize(eye), glm::normalize(up));
    glm::normalize(axis);
    mat3 resultRotate = rotate(degrees, axis);
    eye = eye * resultRotate;
    glm::normalize(eye);
    up = up * resultRotate;
    glm::normalize(up);
}

mat3 rotate(const float degrees, const vec3& axis) {
    // Implement Rodrigues' axis-angle rotation formula
    float radDegree = glm::radians(degrees);
    float cosValue = cosf(radDegree);
    float minusCos = 1 - cosValue;
    float sinValue = sinf(radDegree);
    float cartesianX = axis.x;
    float cartesianY = axis.y;
    float cartesianZ = axis.z;
    mat3 myFinalResult = mat3(
        cosValue + (cartesianX*cartesianX*minusCos), ((cartesianX*cartesianY*minusCos) - (cartesianZ*sinValue)), ((cartesianX*cartesianZ*minusCos) + (cartesianY*sinValue)),
        ((cartesianX*cartesianY*minusCos) + (cartesianZ*sinValue)), (cosValue + (cartesianY*cartesianY*minusCos)), ((cartesianY*cartesianZ*minusCos) - (cartesianX*sinValue)),
        ((cartesianX*cartesianZ*minusCos) - (cartesianY*sinValue)), ((cartesianY*cartesianZ*minusCos) + (cartesianX*sinValue)), ((cartesianZ*cartesianZ*minusCos) + cosValue));
    return myFinalResult;
}
All the values, the resulting rotation matrix, and the changed vectors are as expected for a positive angle of rotation, but wrong for negative angles, and from then on the error cascades until all the vectors are re-initialised. Can someone please help me figure out the problem? I cannot use any built-in functions like glm::rotate.
I do not use the Rodrigues rotation formula because it requires solving a system of equations at runtime and gets very complicated in higher dimensions.
Instead I use axis-aligned incremental rotations along with 4x4 homogeneous transform matrices, which port easily to higher dimensions (like 4D rotors).
Now there are local and global rotations. Local rotations rotate around the local axes of your matrix's coordinate system; global ones rotate around the world (main) coordinate system axes.
What you want is to create a transform matrix around some point, axis and angle. To do that just:
create a transform matrix A that has one axis aligned to the axis of rotation and whose origin is the center of rotation. To construct such a matrix you need 2 perpendicular vectors, which are easily obtainable from cross products.
rotate A around its local axis aligned to the axis of rotation by angle, by simple multiplication of A by the axis-aligned incremental rotation R, so:
A*R
revert the original transform of A before the rotation, by simply multiplying the result by the inverse of A, so:
A*R*Inverse(A)
apply this to the matrix M you want to rotate, also by simple multiplication, so:
M*=A*R*Inverse(A);
And that is it... In 3D OBB approximation you can find this function:
template <class T> _mat4<T> rotate(_mat4<T> &m, T ang, _vec3<T> p0, _vec3<T> dp)
{
    int i;
    T c = cos(ang), s = sin(ang);
    _vec3<T> x, y, z;
    _mat4<T> a, _a, r = _mat4<T>(
        1, 0, 0, 0,
        0, c, s, 0,
        0,-s, c, 0,
        0, 0, 0, 1);
    // basis vectors
    x = normalize(dp);      // axis of rotation
    y = _vec3<T>(1, 0, 0);  // any vector non-parallel to x
    if (fabs(dot(x, y)) > 0.75) y = _vec3<T>(0, 1, 0);
    z = cross(x, y);        // z is perpendicular to x,y
    y = cross(z, x);        // y is perpendicular to x,z
    y = normalize(y);
    z = normalize(z);
    // feed the matrix
    for (i = 0; i < 3; i++)
    {
        a[0][i] =  x[i];
        a[1][i] =  y[i];
        a[2][i] =  z[i];
        a[3][i] = p0[i];
        a[i][3] = 0;
    }
    a[3][3] = 1;
    _a = inverse(a);
    r = m*a*r*_a;
    return r;
}
That does exactly that. Here m is the original matrix to transform (the function returns the rotated one), ang is the signed angle in [rad], p0 is the center of rotation, and dp is the direction vector of the axis of rotation.
This approach does not have any singularities, nor any problems rotating by negative angles...
If you want to use this with glm or any other GLSL-like math, just change the template types to the ones you use, so float, vec3, mat4 instead of T, _vec3<T>, _mat4<T>.
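For example, a glm version of the same function might look like this (a sketch under those substitutions; the name is mine and ang is in radians):

#include <cmath>
#include <glm/glm.hpp>

// Sketch: rotate matrix m by angle ang around the axis through p0 with direction dp.
glm::mat4 rotateAroundAxis(const glm::mat4& m, float ang, glm::vec3 p0, glm::vec3 dp)
{
    float c = std::cos(ang), s = std::sin(ang);
    // axis-aligned incremental rotation around the local x axis (columns listed first)
    glm::mat4 r(1, 0, 0, 0,
                0, c, s, 0,
                0,-s, c, 0,
                0, 0, 0, 1);
    glm::vec3 x = glm::normalize(dp);                      // axis of rotation
    glm::vec3 y(1, 0, 0);                                  // any vector non-parallel to x
    if (std::fabs(glm::dot(x, y)) > 0.75f) y = glm::vec3(0, 1, 0);
    glm::vec3 z = glm::normalize(glm::cross(x, y));
    y = glm::normalize(glm::cross(z, x));
    glm::mat4 a(glm::vec4(x, 0), glm::vec4(y, 0), glm::vec4(z, 0), glm::vec4(p0, 1));
    return m * a * r * glm::inverse(a);
}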

calling glm::unproject() correctly, confused

I'm trying to use glm::unProject() to convert my SDL mouse coordinates into a world position vector on the x/z-plane. Basically I want to figure out which "x/z" coordinate the user clicked on with the mouse.
From other Stack Overflow answers I gathered that I need to call glm::unProject(). I think I'm passing it the wrong arguments, because the values I'm getting back for the world position (printed to std::cerr) aren't the world-position values I would expect.
Am I constructing the arguments to glm::unProject() correctly below? Specifically, should I be combining the camera's world position and the view matrix (computed using glm::lookAt) to compute the modelview matrix passed into glm::unProject?
struct Dimensions {
    int x, y, w, h;
};

glm::mat4
Camera::view_matrix() const
{
    // VIEW matrix is created by looking at some target member
    auto const& target = target_->translation;
    auto const position_xyz = world_position();
    glm::vec3 const UP{0, 1, 0};
    return glm::lookAt(position_xyz, target, UP);
}

glm::mat4
Camera::projection_matrix() const
{
    auto const fov = glm::radians(90.0f);
    return glm::perspective(fov, 4.0f/3.0f, 0.1f, 200.0f);
}

glm::vec3
calculate_worldpos(Camera const& camera, int const mouse_x, int const mouse_y)
{
    float const width = 1024.0f, height = 768.0f;
    glm::vec4 const viewport = glm::vec4(0.0f, 0.0f, width, height);
    glm::mat4 const modelview = camera.view_matrix();
    glm::mat4 const projection = camera.projection_matrix();
    float z = 0.0;
    glm::vec3 screenPos = glm::vec3(mouse_x, height - mouse_y - 1, z);
    std::cerr << "screenpos: xyz: '" << glm::to_string(screenPos) << "'\n";
    glm::vec3 worldPos = glm::unProject(screenPos, modelview, projection, viewport);
    std::cerr << "worldpos: xyz: '" << glm::to_string(worldPos) << "'\n";
    return worldPos;
}
In the image below, I have the following setup.
camera lookAt target = (0, 0, 0)
camera world position = (-0.009, 5.107, -0.368)
(mouse_x, mouse_y, mouse_z) = (286, 393, 0)
If you look at the image below, you can see that my mouse is hovering over the world position (3, 0, 0), as shown by the grid. I would expect calculating the world position of my mouse (as shown in the picture) to return the vector (3, 0, 0). It does not; instead I get the vector (0.049, 5.007, -0.360).
Does anyone see where I might be going wrong? I'm assuming I'm making an incorrect assumption somewhere.
Your assumption is wrong: glm::unProject returns the world-space coordinates of the input given by an xy-position in pixel coordinates and a z-coordinate storing the depth value. For every pixel on the screen there are infinitely many points in world space that project to that pixel (all the points on the ray going from the projection center through the pixel). Which one you get is determined by the depth coordinate, which then selects one specific point on this ray. Choosing z = 0 means the result will always be a point on the near plane of the camera.
What you are actually looking for is the intersection of this ray (going through the camera position and the calculated point) with the xz-plane (where y = 0).
The ray is given by the two points on it (camera position C, near plane point P) as follows:
                ( -0.009 )       (  0.058 )
C + l * (P-C) = (  5.107 ) + l * ( -0.100 )
                ( -0.368 )       (  0.008 )
, where l is a free variable.
As already said, we are looking for the intersection point (a,b) with the y=0 plane, thus we can formulate the following equation:
( -0.009 )       (  0.058 )   ( a )
(  5.107 ) + l * ( -0.100 ) = ( 0 )
( -0.368 )       (  0.008 )   ( b )
Solving the y-equation (5.107 + l * -0.1 = 0) for l results in l = 51.07. Substituting back into the equations for x and z yields:
a = -0.009 + 51.07 * 0.058 = 2.95306
b = -0.368 + 51.07 * 0.008 = 0.04056
which is close to the expected world-space position. The difference is most probably due to the fact that only rounded numbers are shown in the question. For accuracy reasons, I would also not calculate a point on the near plane but one on the far plane (z = 1), since the near-plane distance is usually quite small and can lead to numerical issues.
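A minimal sketch of that calculation in code (the helper name is mine; it assumes the view/projection matrices and viewport from the question):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Sketch: cast a ray through the mouse pixel and intersect it with the y = 0 plane.
glm::vec3 mouse_on_xz_plane(glm::vec3 const& camera_pos, int const mouse_x, int const mouse_y,
                            glm::mat4 const& view, glm::mat4 const& projection,
                            glm::vec4 const& viewport)
{
    float const height = viewport.w;
    // unproject a point on the far plane (z = 1) for better accuracy
    glm::vec3 const far_point = glm::unProject(
        glm::vec3(mouse_x, height - mouse_y - 1, 1.0f), view, projection, viewport);
    glm::vec3 const dir = far_point - camera_pos;   // P - C
    float const l = -camera_pos.y / dir.y;          // solve C.y + l * dir.y = 0
    return camera_pos + l * dir;                    // the point with y == 0
}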
Conclusion: All values supplied are correct, but you were just not calculating what you expected.

Raytracing Reflection distortion

I've started coding a raytracer, but today I encountered a problem when dealing with reflections.
First, here is an image of the problem:
I only computed the object's reflected color (so no light effect is applied on the reflected object)
The problem is that distortion that I really don't understand.
I looked at the angle between my rayVector and the normalVector and it looks OK; the reflected vector also looks fine.
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyNormal = normal;
    Vector copyView = ray;
    copyNormal.makeUnit();
    copyView.makeUnit();
    cosAngle = copyView.scale(copyNormal);
    return (-2.0 * cosAngle * normal + ray);
}
So for example when my ray is hitting the bottom of my sphere I have the following values:
cos: 1
ViewVector: [185.869,-2.44308,-26.3504]
NormalVector: [185.869,-2.44308,-26.3504]
ReflectedVector: [-185.869,2.44308,26.3504]
Below is the code that handles the reflection:
Color Rt::getReflectedColor(std::shared_ptr<SceneObj> obj, Camera camera,
                            Vector rayVec, double k, unsigned int pass) {
    if (pass > 10)
        return obj->getColor();
    if (obj->getReflectionIndex() == 0) {
        // apply effects
        return obj->getColor();
    }
    Color cuColor(obj->getColor());
    Color newColor(0);
    Math math;
    Vector view;
    Vector normal;
    Vector reflected;
    Position impact;
    std::pair<std::shared_ptr<SceneObj>, double> reflectedObj;
    normal = math.calcNormalVector(camera.pos, obj, rayVec, k, impact);
    view = Vector(impact.x, impact.y, impact.z) -
           Vector(camera.pos.x, camera.pos.y, camera.pos.z);
    reflected = math.calcReflectedVector(view, normal);
    reflectedObj = this->getClosestObj(reflected, Camera(impact));
    if (reflectedObj.second <= 0) {
        cuColor.mix(0x000000, obj->getReflectionIndex());
        return cuColor;
    }
    newColor = this->getReflectedColor(reflectedObj.first, Camera(impact),
                                       reflected, reflectedObj.second, pass + 1);
    // apply effects
    cuColor.mix(newColor, obj->getReflectionIndex());
    return newColor;
}
To calculate the normal and the reflected Vector:
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyRay = ray;
    copyRay.makeUnit();
    cosAngle = copyRay.scale(normal);
    return (-2.0 * cosAngle * normal + copyRay);
}

Vector Math::calcNormalVector(Position pos, std::shared_ptr<SceneObj> obj,
                              Vector rayVec, double k, Position& impact) const {
    const Position &objPos = obj->getPosition();
    Vector normal;
    impact.x = pos.x + k * rayVec.x;
    impact.y = pos.y + k * rayVec.y;
    impact.z = pos.z + k * rayVec.z;
    obj->calcNormal(normal, impact);
    return normal;
}
[EDIT1]
I have a new image; I removed the plane to keep only the spheres:
As you can see, there is blue and yellow on the border of the sphere.
Thanks to neam I colored the sphere by applying the following formula:
newColor.r = reflected.x * 127.0 + 127.0;
newColor.g = reflected.y * 127.0 + 127.0;
newColor.b = reflected.z * 127.0 + 127.0;
Below is the visual result:
Ask me if you need any information.
Thanks in advance
There are many little things with the example you provided. This may -- or may not -- answer your question, but as I suppose you're doing a raytracer for learning purposes (either at school or in your free time) I'll give you some hints.
you have two classes, Vector and Position. It may well seem like a good idea, but why not see a position as the translation vector from the origin? This would avoid some code duplication, I think (unless you've done something like using Position = Vector;). You may also want to look at some libraries that do all the mathematical things for you (like glm could do). (And this way, you'll avoid some errors like naming your dot function scale().)
you create a camera from the position (that is a really strange thing). Reflections don't involve any camera. In a typical raytracer, you have one camera {position + direction + fov + ...} and for each pixel of your image/reflections/refractions/... you cast rays {origin + direction} (thus the name raytracer, which isn't cameratracer). The Camera class is usually tied to the concept of a physical camera, with things like focal length, depth of field, aperture, chromatic aberration, ... whereas the ray is simply... a ray. (It could be a ray from the plane where the output image is mapped to the first object, or a ray created by reflection, diffraction, scattering, ...)
and for the final point, I think your error may come from the Math::calcNormalVector(...) function. For a sphere at a position P and an intersection point I, the normal N is: N = normalize(I - P);.
EDIT: it seems like your problem comes from Rt::getClosestObj. Everything else looks fine.
There's a ton of websites/blogs/educational content online about creating a simple raytracer, so for the first two points I'll let them teach you. Take a look at glm.
If you can't figure out what is wrong with calcNormalVector(...), please post its code :)
Did that work?
I assume that your ray and normal vector are already normalized.
Vector Math::reflect(const Vector &ray, const Vector &normal) const
{
    return ray - 2.0 * Math::dot(normal, ray) * normal;
}
Moreover, with the code you provided I can't understand this call:
this->getClosestObj(reflected, Camera(obj->getPosition()));
Shouldn't it be something like this?
this->getClosestObj(reflected, Camera(impact));

Ray tracing vectors

So I decided to write a ray tracer the other day, but I got stuck because I forgot all my vector math.
I've got a point behind the screen (the eye/camera, 400,300,-1000) and then a point on the screen (a plane, from 0,0,0 to 800,600,0), which I'm getting just by using the x and y values of the current pixel I'm looking at (using SFML for rendering, so it's something like 267,409,0).
Problem is, I have no idea how to cast the ray correctly. I'm using this to test sphere intersection (C++):
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{
    // operator * between 2 vec3s is a dot product
    Vec3 dist = ray.start - sphere.pos; // both vec3s
    float B = -1 * (ray.dir * dist);
    float D = B*B - dist * dist + sphere.radius * sphere.radius; // radius is float
    if (D < 0.0f)
        return false;
    float t0 = B - sqrtf(D);
    float t1 = B + sqrtf(D);
    bool ret = false;
    if ((t0 > 0.1f) && (t0 < t))
    {
        t = t0;
        ret = true;
    }
    if ((t1 > 0.1f) && (t1 < t))
    {
        t = t1;
        ret = true;
    }
    return ret;
}
So I get that the start of the ray would be the eye position, but what is the direction?
Or, failing that, is there a better way of doing this? I've heard of some people using the ray start as (x, y, -1000) and the direction as (0,0,1) but I don't know how that would work.
On a side note, how would you do transformations? I'm assuming that to change the camera angle you just adjust the x and y of the camera (or the screen if you need a drastic change)
The parameter "ray" in the function,
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{
...
}
should already contain the direction information, and with this direction you need to check whether the ray intersects the sphere or not. (The incoming "ray" parameter is the vector between the camera point and the pixel the ray is sent through.)
Therefore the local "dist" variable seems obsolete.
One thing I can see is that when you create your rays you are not using the center of each pixel in the screen as the point for building the direction vector. You do not want to use just the (x, y) coordinates on the grid for building those vectors.
I've taken a look at your sample code and the calculation is indeed incorrect. This is what you want.
http://www.csee.umbc.edu/~olano/435f02/ray-sphere.html (I took this course in college, this guy knows his stuff)
Essentially it means you have this ray, which has an origin and a direction. You have a sphere with a center point and a radius. You take the ray equation, plug it into the sphere equation, and solve for t. That t is the distance between the ray origin and the intersection point on the sphere's surface. I do not think your code does this.
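For reference, a minimal sketch of that substitution, using the question's Vec3 type (where * between two Vec3s is a dot product) and assuming ray.dir is normalized: plugging p(t) = o + t*d into |p - c|^2 = r^2 gives t^2 + 2t(d . (o - c)) + |o - c|^2 - r^2 = 0, whose roots are:

// Sketch: solve |o + t*d - c|^2 = r^2 for t; d must be normalized.
bool raySphere(const Vec3& o, const Vec3& d, const Vec3& c, float r, float& t)
{
    Vec3 oc = o - c;
    float b = d * oc;                   // dot(d, o - c)
    float disc = b*b - (oc * oc) + r*r; // quarter discriminant, since a = 1
    if (disc < 0.0f)
        return false;                   // ray misses the sphere
    float s = sqrtf(disc);
    float t0 = -b - s, t1 = -b + s;     // t = -b +- sqrt(b^2 - |oc|^2 + r^2)
    t = (t0 > 0.0f) ? t0 : t1;          // prefer the nearer positive root
    return t > 0.0f;
}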
So I get that the start of the ray would be the eye position, but what is the direction?
You have a camera defined by the vectors front, up, and right (perpendicular to each other and normalized) and by "position" (the eye position).
You also have width and height of viewport (pixels), vertical field of view (vfov) and horizontal field of view (hfov) in degrees or radians.
There are also 2D x and y coordinates of pixel. X axis (2D) points to the right, Y axis (2D) points down.
For a flat screen, the ray can be calculated like this:
startVector = eyePos;
endVector = startVector
          + front
          + right * tan(hfov/2) * (((x + 0.5)/width)*2.0 - 1.0)
          + up    * tan(vfov/2) * (1.0 - ((y + 0.5)/height)*2.0);
rayStart = startVector;
rayDir = normalize(endVector - startVector);
That assumes the screen plane is flat. For extreme fields of view (fov >= 180 degrees) you might want to make the screen plane spherical and use different formulas.
how would you do transformations
Matrices.
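For example, a minimal sketch (glm assumed; the names are mine) that changes the camera angle by rotating the camera's basis vectors with a matrix:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: yaw the camera by building a rotation matrix around the up axis
// and applying it to the basis vectors.
void yawCamera(float degrees, glm::vec3& front, glm::vec3& right, const glm::vec3& up)
{
    glm::mat3 R = glm::mat3(glm::rotate(glm::mat4(1.0f), glm::radians(degrees), up));
    front = glm::normalize(R * front);
    right = glm::normalize(R * right);
    // up stays fixed when yawing around it
}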