OpenGL perspective distortion

I'm having this weird issue and I'm hoping someone could clear this up for me so I can understand what's wrong and act accordingly. In OpenGL (fixed-function) I'm rendering a tube with inner faces in an orthographic projection.
The image below shows the result. The tube consists of 4 rings of vertices which form triangles using the index pattern shown at the left. I numbered the vertices on the tube for your convenience. At the right is the texture being used:
As you can see, the texture is being heavily distorted. Since I initially created the tube with only 2 rings of vertices, I thought raising the number of rings would fix the distortion, but no joy. Also glHint doesn't seem to affect this specific problem.
The texture coordinates seem to be alright. Also, the checker pattern seems to render correctly, but I guess the distortion is just not visible in that very specific pattern.
Please ignore the crossed lines as one of those is a non-existent edge; I rendered the wireframe through GL_LINE_LOOP.

This particular effect is caused by the way texture coordinates are interpolated across a triangle: one direction becomes the major component, while the other gets skewed. Your specimen happens to be very prone to this. The same thing is a problem with perspective projections and textures on floors or walls; what you need is so-called "perspective correct texturing" (PCT). There's a glHint for this, but I guess you already tried that.
Frankly, the only way to avoid this is to subdivide and apply the perspective correction as well. Fortunately this is easy enough for quadrilateral-based geometry (like yours). When subdividing the edges, interpolate the texture coordinates at the subdivision centers along all 4 edges and use the mean value of all 4 of them. Interpolating the texture coordinate along only one edge is exactly what you want to avoid.
If you want to keep your geometry data untouched you can implement PCT in the fragment shader.

Try some subdivision:
template< typename Vec >
void glVec2d( const Vec& vec )
{
    glVertex2f( static_cast<float>( vec.x() ), static_cast<float>( vec.y() ) );
}

template< typename Vec >
void glTex2d( const Vec& vec )
{
    glTexCoord2f( static_cast<float>( vec.x() ), static_cast<float>( vec.y() ) );
}

template< typename Vec >
void glQuad
(
    const Vec& A,
    const Vec& B,
    const Vec& C,
    const Vec& D,
    unsigned int divs = 2,
    const Vec& At = Vec(0,0),
    const Vec& Bt = Vec(1,0),
    const Vec& Ct = Vec(1,1),
    const Vec& Dt = Vec(0,1)
)
{
    // base case
    if( divs == 0 )
    {
        glTex2d( At );
        glVec2d( A );
        glTex2d( Bt );
        glVec2d( B );
        glTex2d( Ct );
        glVec2d( C );
        glTex2d( Dt );
        glVec2d( D );
        return;
    }
    // edge and center midpoints, in both position and texture space
    Vec AB = (A+B) * 0.5;
    Vec BC = (B+C) * 0.5;
    Vec CD = (C+D) * 0.5;
    Vec AD = (A+D) * 0.5;
    Vec ABCD = (AB+CD) * 0.5;
    Vec ABt = (At+Bt) * 0.5;
    Vec BCt = (Bt+Ct) * 0.5;
    Vec CDt = (Ct+Dt) * 0.5;
    Vec ADt = (At+Dt) * 0.5;
    Vec ABCDt = (ABt+CDt) * 0.5;
    // subdivided point layout
    // D    CD    C
    //
    // AD  ABCD   BC
    //
    // A    AB    B
    // subdivide
    glQuad( A,    AB,   ABCD, AD,   divs - 1, At,    ABt,   ABCDt, ADt   );
    glQuad( AB,   B,    BC,   ABCD, divs - 1, ABt,   Bt,    BCt,   ABCDt );
    glQuad( ABCD, BC,   C,    CD,   divs - 1, ABCDt, BCt,   Ct,    CDt   );
    glQuad( AD,   ABCD, CD,   D,    divs - 1, ADt,   ABCDt, CDt,   Dt    );
}
I usually use Eigen::Vector2f for Vec.
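A hypothetical call site, assuming the template above is in scope and the usual GL and Eigen headers are available (the coordinates and subdivision depth here are arbitrary):
#include <GL/gl.h>
#include <Eigen/Core>

void drawSubdividedQuad()
{
    glBegin(GL_QUADS); // each base-case call above emits 4 vertices
    glQuad( Eigen::Vector2f(-1.0f, -1.0f),   // A, lower left
            Eigen::Vector2f( 1.0f, -1.0f),   // B, lower right
            Eigen::Vector2f( 1.0f,  1.0f),   // C, upper right
            Eigen::Vector2f(-1.0f,  1.0f),   // D, upper left
            4 );                             // 4 levels of subdivision -> 256 quads
    glEnd();
}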

Why are you using an orthographic projection for this? If you were using a perspective projection, OpenGL would be correcting the texture mapping for you.

Related

Why do we choose the "bounding box" method to fill a triangle?

I'm learning a short course "How OpenGL works: software rendering in 500 lines of code" on GitHub. In lesson 2, the author is teaching us how to fill a triangle with color. He comes up with two methods:
Enumerate all the horizontal segments within the triangle, and draw these segments. The author's code is as follows.
void triangle(Vec2i t0, Vec2i t1, Vec2i t2, TGAImage &image, TGAColor color) {
    if (t0.y==t1.y && t0.y==t2.y) return; // I dont care about degenerate triangles
    // sort the vertices, t0, t1, t2 lower-to-upper (bubblesort yay!)
    if (t0.y>t1.y) std::swap(t0, t1);
    if (t0.y>t2.y) std::swap(t0, t2);
    if (t1.y>t2.y) std::swap(t1, t2);
    int total_height = t2.y-t0.y;
    for (int i=0; i<total_height; i++) {
        bool second_half = i>t1.y-t0.y || t1.y==t0.y;
        int segment_height = second_half ? t2.y-t1.y : t1.y-t0.y;
        float alpha = (float)i/total_height;
        float beta  = (float)(i-(second_half ? t1.y-t0.y : 0))/segment_height; // be careful: with above conditions no division by zero here
        Vec2i A = t0 + (t2-t0)*alpha;
        Vec2i B = second_half ? t1 + (t2-t1)*beta : t0 + (t1-t0)*beta;
        if (A.x>B.x) std::swap(A, B);
        for (int j=A.x; j<=B.x; j++) {
            image.set(j, t0.y+i, color); // attention, due to int casts t0.y+i != A.y
        }
    }
}
Find the bounding box of the triangle. Enumerate all the points in the bounding box, and use barycentric coordinates to check if the point is within the triangle. If the point is in the triangle, then fill the point with color. The author's code is as follows.
Vec3f barycentric(Vec2i *pts, Vec2i P) {
    Vec3f u = cross(Vec3f(pts[2][0]-pts[0][0], pts[1][0]-pts[0][0], pts[0][0]-P[0]),
                    Vec3f(pts[2][1]-pts[0][1], pts[1][1]-pts[0][1], pts[0][1]-P[1]));
    if (std::abs(u[2])<1) return Vec3f(-1,1,1); // triangle is degenerate, in this case return smth with negative coordinates
    return Vec3f(1.f-(u.x+u.y)/u.z, u.y/u.z, u.x/u.z);
}

void triangle(Vec2i *pts, TGAImage &image, TGAColor color) {
    Vec2i bboxmin(image.get_width()-1, image.get_height()-1);
    Vec2i bboxmax(0, 0);
    Vec2i clamp(image.get_width()-1, image.get_height()-1);
    for (int i=0; i<3; i++) {
        for (int j=0; j<2; j++) {
            bboxmin[j] = std::max(0, std::min(bboxmin[j], pts[i][j]));
            bboxmax[j] = std::min(clamp[j], std::max(bboxmax[j], pts[i][j]));
        }
    }
    Vec2i P;
    for (P.x=bboxmin.x; P.x<=bboxmax.x; P.x++) {
        for (P.y=bboxmin.y; P.y<=bboxmax.y; P.y++) {
            Vec3f bc_screen = barycentric(pts, P);
            if (bc_screen.x<0 || bc_screen.y<0 || bc_screen.z<0) continue;
            image.set(P.x, P.y, color);
        }
    }
}
The author chooses the second method at the end of lesson 2, but I can't understand why. Is the reason something about efficiency, or is it just because the second method is easier to understand?
Barycentric coordinates are used to interpolate or "smear" values defined at the vertices of a triangle across the triangle. For example: if I define a triangle ABC, I can give each vertex a color, Red, Green, and Blue respectively. Then, as I fill out the triangle, I can use the barycentric coordinates (alpha, beta, gamma) to form the linear combination P = alpha * Red + beta * Green + gamma * Blue to determine the color at any point inside the triangle.
This process is highly optimized and built into GPU hardware. You can smear any values you'd like, including normal vectors (which is often used in per-pixel lighting computations), so it is a very useful operation.
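As a small, self-contained illustration of that "smearing" (plain stand-in structs here, not the course's Vec2i/TGAColor types), blending three vertex colors with barycentric weights could look like this:
#include <cstdio>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

// Twice the signed area of triangle abc; ratios of these give barycentric weights.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

Color shadePoint(Vec2 A, Vec2 B, Vec2 C, Color cA, Color cB, Color cC, Vec2 P) {
    float area  = edge(A, B, C);
    float alpha = edge(B, C, P) / area;  // weight of vertex A
    float beta  = edge(C, A, P) / area;  // weight of vertex B
    float gamma = edge(A, B, P) / area;  // weight of vertex C
    return { alpha * cA.r + beta * cB.r + gamma * cC.r,
             alpha * cA.g + beta * cB.g + gamma * cC.g,
             alpha * cA.b + beta * cB.b + gamma * cC.b };
}

int main() {
    Vec2 A{0, 0}, B{10, 0}, C{0, 10};
    Color red{1, 0, 0}, green{0, 1, 0}, blue{0, 0, 1};
    Color p = shadePoint(A, B, C, red, green, blue, Vec2{3, 3});
    std::printf("%.2f %.2f %.2f\n", p.r, p.g, p.b); // prints 0.40 0.30 0.30
}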
Of course, I have no idea what your teacher is thinking, but I'd hazard a guess that a future lesson will build on this, so the second algorithm naturally leads into that discussion.
Source: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates

Optimizing a Ray Tracer

I'm tasked with optimizing the following ray tracer:
void Scene::RayTrace()
{
    for (int v = 0; v < fb->h; v++) // all vertical pixels in framebuffer
    {
        calculateFPS(); // calculates the current fps and prints it
        for (int u = 0; u < fb->w; u++) // all horizontal pixels in framebuffer
        {
            fb->Set(u, v, 0xFFAAAAAA); // background color
            fb->SetZ(u, v, FLT_MAX); // sets the Z values to all be maximum at beginning
            V3 ray = (ppc->c + ppc->a*((float)u + .5f) + ppc->b*((float)v + .5f)).UnitVector(); // gets the camera ray
            for (int tmi = 0; tmi < tmeshesN; tmi++) // iterates over all triangle meshes
            {
                if (!tmeshes[tmi]->enabled) // doesn't render a tmesh if it's not set to be enabled
                    continue;
                for (int tri = 0; tri < tmeshes[tmi]->trisN; tri++) // iterates over all triangles in the mesh
                {
                    V3 Vs[3]; // triangle vertices
                    Vs[0] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 0]];
                    Vs[1] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 1]];
                    Vs[2] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 bgt = ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); // I don't entirely understand what this does
                    if (bgt[2] < 0.0f || bgt[0] < 0.0f || bgt[1] < 0.0f || bgt[0] + bgt[1] > 1.0f)
                        continue;
                    if (fb->zb[(fb->h - 1 - v)*fb->w + u] < bgt[2])
                        continue;
                    fb->SetZ(u, v, bgt[2]);
                    float alpha = 1.0f - bgt[0] - bgt[1];
                    float beta = bgt[0];
                    float gamma = bgt[1];
                    V3 Cs[3]; // triangle vertex colors
                    Cs[0] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 0]];
                    Cs[1] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 1]];
                    Cs[2] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 color = Cs[0] * alpha + Cs[1] * beta + Cs[2] * gamma;
                    fb->Set(u, v, color.GetColor()); // sets this pixel accordingly
                }
            }
        }
        fb->redraw();
        Fl::check();
    }
}
Two things:
I don't entirely understand what ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); does. Can anyone explain this, in terms of ray-tracing, to me? Here is the function inside my "Planar Pinhole Camera" class (this function was given to me):
V3 V3::IntersectRayWithTriangleWithThisOrigin(V3 r, V3 Vs[3])
{
    M33 m; // 3X3 matrix class
    m.SetColumn(0, Vs[1] - Vs[0]);
    m.SetColumn(1, Vs[2] - Vs[0]);
    m.SetColumn(2, r*-1.0f);
    V3 ret; // Vector3 class
    V3 &C = *this;
    ret = m.Inverse() * (C - Vs[0]);
    return ret;
}
The basic steps of this are apparent, I just don't see what it's actually doing.
How would I go about optimizing this ray-tracer from here? I've found something online about "kd trees," but I'm unsure how complex they are. Does anyone have some good resources on simple solutions for optimizing this? I've had some difficulty deciphering what's out there.
Thanks!
Probably the largest optimisation by far would be to use some sort of bounding volume hierarchy (BVH). Right now the code intersects every ray with every triangle of every object. With a BVH you instead ask, "given this ray, which triangles could it possibly intersect?", which means each ray generally only needs to be tested against a handful of triangles rather than every single triangle in the scene.
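Purely as an illustration (nothing here comes from the question's code base, and the types are stand-ins), the core query a BVH node performs is a ray vs. axis-aligned bounding box "slab" test; when it fails, the whole subtree and all of its triangles are skipped:
#include <algorithm>
#include <cfloat>

struct Vec3 { float x, y, z; };

// Slab test: intersect the ray with the three pairs of axis-aligned planes and
// check that the resulting parameter intervals overlap. invDir holds 1/dir per
// component (assumed non-zero here for brevity).
bool rayHitsAABB(const Vec3& orig, const Vec3& invDir,
                 const Vec3& boxMin, const Vec3& boxMax)
{
    float tmin = 0.0f, tmax = FLT_MAX;
    const float o[3]  = { orig.x, orig.y, orig.z };
    const float id[3] = { invDir.x, invDir.y, invDir.z };
    const float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    const float hi[3] = { boxMax.x, boxMax.y, boxMax.z };
    for (int i = 0; i < 3; ++i) {
        float t1 = (lo[i] - o[i]) * id[i];
        float t2 = (hi[i] - o[i]) * id[i];
        tmin = std::max(tmin, std::min(t1, t2));
        tmax = std::min(tmax, std::max(t1, t2));
    }
    return tmin <= tmax; // the three intervals overlap => the box is hit
}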
IntersectRayWithTriangleWithThisOrigin: from the look of it, it builds an inverse transform matrix from the triangle edges (the triangle's basis vectors are X and Y). I don't quite get the Z axis; I would expect the ray direction there rather than the pixel position (ray origin), but I may be misinterpreting something.
Anyway, the inverse matrix computation is the biggest problem: you are computing it for each triangle, per pixel, and that is a lot. It would be faster to compute the inverse transform matrix of each triangle once, before ray tracing, where X and Y are the basis vectors and Z is perpendicular to both of them, always facing the same direction towards the camera. Then you just transform your ray into that space and check the intersection limits, which is just a matrix*vector product and a few ifs instead of an inverse matrix computation.
Another way would be to solve the ray vs. plane intersection algebraically, which should lead to a much simpler equation than matrix inversion; after that it is just a matter of checking the bounds on the basis-vector coordinates.
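To make that last suggestion concrete, here is a minimal, self-contained sketch of the algebraic ray/plane route followed by a bound check (the V3f type and helpers are stand-ins, not the question's V3/M33 classes):
#include <cmath>

struct V3f { float x, y, z; };
static V3f   sub(V3f a, V3f b)   { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static V3f   cross(V3f a, V3f b) { return { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; }
static float dot(V3f a, V3f b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns t >= 0 if the ray origin + t*dir hits triangle (v0, v1, v2), else -1.
// The plane normal and the edge dot products could all be precomputed per triangle.
float rayTriangle(V3f origin, V3f dir, V3f v0, V3f v1, V3f v2)
{
    V3f e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3f n  = cross(e1, e2);                        // plane normal
    float denom = dot(n, dir);
    if (std::fabs(denom) < 1e-8f) return -1.0f;    // ray parallel to the plane
    float t = dot(n, sub(v0, origin)) / denom;     // ray/plane intersection
    if (t < 0.0f) return -1.0f;
    V3f p  = { origin.x + t*dir.x, origin.y + t*dir.y, origin.z + t*dir.z };
    V3f vp = sub(p, v0);
    // Solve vp = u*e1 + v*e2 in the triangle's own basis, then bound-check u, v.
    float d11 = dot(e1, e1), d12 = dot(e1, e2), d22 = dot(e2, e2);
    float dp1 = dot(vp, e1), dp2 = dot(vp, e2);
    float inv = 1.0f / (d11 * d22 - d12 * d12);
    float u = (d22 * dp1 - d12 * dp2) * inv;
    float v = (d11 * dp2 - d12 * dp1) * inv;
    if (u < 0.0f || v < 0.0f || u + v > 1.0f) return -1.0f;
    return t;
}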

Ray tracing - refraction bug

I am writing a ray tracer. So far I have diffuse, Blinn lighting and reflections. Something has gone wrong with my refractions and I have no idea what. I'm hoping someone can help me out.
I have a big red diffuse + Blinn sphere and a small refractive one with refraction index n = 1.5.
The small one is just really screwed up.
Relevant code:
ReflectiveSurface::ReflectiveSurface(const Color& _n, const Color& _k) :
    F0(Color(((_n - 1)*(_n - 1) + _k * _k) / ((_n + 1)*(_n + 1) + _k * _k))) {}

Color ReflectiveSurface::F(const Point& N, const Point& V) const {
    float cosa = fabs(N * V);
    return F0 + (F0 * (-1) + 1) * pow(1 - cosa, 5);
}

Color ReflectiveSurface::getColor(const Incidence& incidence, const Scene& scene, int traceDepth) const {
    Point reflectedDir = reflect(incidence.normal, incidence.direction);
    Ray ray = Ray(incidence.point + reflectedDir * epsilon, reflectedDir);
    return F(incidence.normal, incidence.direction) * scene.rayTrace(ray, traceDepth + 1);
}

Point ReflectiveSurface::reflect(const Point& N, const Point& V) const {
    return V - N * (2 * (N * V));
}

bool RefractiveSurface::refractionDir(Point& T, Point& N, const Point& V) const {
    float cosa = -(N * V), cn = n;
    if (cosa < 0) { cosa = -cosa; N = N * (-1); cn = 1 / n; }
    float disc = 1 - (1 - cosa * cosa) / cn / cn;
    if (disc < 0) return false;
    T = V / cn + N * (cosa / cn - sqrt(disc));
    return true;
}

RefractiveSurface::RefractiveSurface(float _n, const Color& _k) : ReflectiveSurface(Color(1, 1, 1) * _n, _k) {}

Surface* RefractiveSurface::copy() { return new RefractiveSurface(*this); }

Color RefractiveSurface::getColor(const Incidence& incidence, const Scene& scene, int traceDepth) const {
    Incidence I = Incidence(incidence);
    Color reflectedColor, refractedColor;
    Point direction = reflect(I.normal, I.direction);
    Ray reflectedRay = Ray(I.point + direction * epsilon, direction);
    if (refractionDir(direction, I.normal, I.direction)) {
        Ray refractedRay = Ray(I.point + direction * epsilon, direction);
        Color colorF = F(I.normal, I.direction);
        reflectedColor = colorF * scene.rayTrace(reflectedRay, traceDepth + 1);
        refractedColor = (Color(1, 1, 1) - colorF) * scene.rayTrace(refractedRay, traceDepth + 1);
    }
    else {
        reflectedColor = scene.rayTrace(reflectedRay, traceDepth + 1);
    }
    return reflectedColor + refractedColor;
}
The code is all over the place, since this is homework: I'm not allowed to include additional headers and I have to submit it as a single .cpp file, so I had to split every class into forward declaration, declaration and implementation within that one file. It makes me vomit but I tried to keep it as clean as possible. There is tons of code, so I only included what I thought was most relevant. ReflectiveSurface is RefractiveSurface's parent class. N is the surface normal, V is the direction vector of the ray hitting the surface, and n is the refraction index. The Incidence structure holds a point, a normal and a direction vector.
Formulas for the Fresnel approximation and the refraction vector respectively:
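(For reference, reading the formulas off the code above: the Fresnel term is the Schlick approximation F = F0 + (1 - F0)(1 - cos a)^5 with F0 = ((n - 1)^2 + k^2) / ((n + 1)^2 + k^2), and the refraction direction is T = V/n + N (cos a / n - sqrt(1 - (1 - cos^2 a)/n^2)), where cos a = -N·V, with N negated and n replaced by 1/n when the ray exits the medium.)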
You can see in the code that I use an epsilon * ray direction value to avoid shadow acne caused by float imprecision. Something similar seems to be happening to the small sphere, though.
Another screenshot:
As you can see, the sphere doesn't appear transparent, but it does inherit the diffuse sphere's color. It also usually has some white pixels.
Without refraction:
RefractiveSurface::refractionDir takes the normal N by (non-const) reference, and it may invert it. This seems dangerous. It's not clear the caller wants I.normal to be flipped, as it's used in color calculations further down.
Also, refractedColor is not always initialized (unless the Color default constructor makes it black).
Try (temporarily) simplifying and just see if the refracted rays hit where you expect. Remove the Fresnel computation and the reflection component and just set refractedColor to the result of tracing the refracted ray. That will help determine whether the bug is in the Fresnel calculation or in the geometry of bending the ray.
A debugging tip: Color the pixels that don't hit anything with something other than black. That makes it easy to distinguish the misses from the shadows (surface acne).
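For example, a throwaway debug version of RefractiveSurface::getColor along those lines might look roughly like this (it reuses the question's own types, and the marker color is an arbitrary choice):
// Stripped-down sketch: ignore Fresnel and reflection and just follow the
// refracted ray, so geometry errors show up on their own.
Color RefractiveSurface::getColor(const Incidence& incidence, const Scene& scene, int traceDepth) const {
    Incidence I = Incidence(incidence);
    Point direction = reflect(I.normal, I.direction); // initial value is unused if refraction succeeds
    if (refractionDir(direction, I.normal, I.direction)) {
        Ray refractedRay = Ray(I.point + direction * epsilon, direction);
        return scene.rayTrace(refractedRay, traceDepth + 1);
    }
    return Color(1, 0, 1); // magenta marks total internal reflection
}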
The answer turned out to be pretty simple, but it took me about 3 days of staring at the code to catch the bug. I have a Surface class, and I derive two classes from it: RoughSurface (diffuse + Blinn) and ReflectiveSurface. Then RefractiveSurface is derived from ReflectiveSurface. ReflectiveSurface's constructor takes the refractive index (n) and the extinction value (k) as parameters, but doesn't store them; F0 is computed from them during construction, and then they are lost. RefractiveSurface, on the other hand, uses (n) in the refraction angle calculation.
Old constructor:
RefractiveSurface::RefractiveSurface(float _n, const Color& _k) :
    ReflectiveSurface(Color(1, 1, 1) * _n, _k) {}
New constructor:
RefractiveSurface::RefractiveSurface(float _n, const Color& _k) :
    ReflectiveSurface(Color(1, 1, 1) * _n, _k), n(_n) {}
As you can see, I forgot to save the (n) value for RefractiveSurface in the constructor.
Small red sphere behind big glass sphere lit from the two sides of the camera:
It looks awesome in motion! :D
Thank you for your time, guys. Gotta finish this homework, then I'll rewrite the whole thing and optimize the hell out of it.

Creating arrays using for loops in OpenGL

What I have to do is create a square made up of 8 triangles, all the same size, using arrays. The coordinates of the four corners of the square are (-10, -10, 10), (-10, -10, -10), (10, -10, -10), (10, -10, 10), starting with the upper left and going counterclockwise.
I have already created it before by just entering the values into the array, but now I have to figure out how to do it using for loops in C++. I know that for each array (I need to create a vertex, an index and a color array) I need a for loop, and that this for loop has to have another for loop nested inside it.
I like to use Eigen::Vector2f for Vec but anything with a similar interface should work:
template< typename Vec >
void glVec2d( const Vec& vec )
{
    glVertex2d( vec.x(), vec.y() );
}

template< typename Vec >
void glTex2d( const Vec& vec )
{
    glTexCoord2d( vec.x(), vec.y() );
}

template< typename Vec >
void glQuad2d
(
    const Vec& A, // lower left coord
    const Vec& B, // lower right coord
    const Vec& C, // upper right coord
    const Vec& D, // upper left coord
    unsigned int divs = 2,
    const Vec& At = Vec(0,0),
    const Vec& Bt = Vec(1,0),
    const Vec& Ct = Vec(1,1),
    const Vec& Dt = Vec(0,1)
)
{
    // base case
    if( divs == 0 )
    {
        glTex2d( At );
        glVec2d( A );
        glTex2d( Bt );
        glVec2d( B );
        glTex2d( Ct );
        glVec2d( C );
        glTex2d( Dt );
        glVec2d( D );
        return;
    }
    // edge and center midpoints, in both position and texture space
    Vec AB = (A+B) * 0.5;
    Vec BC = (B+C) * 0.5;
    Vec CD = (C+D) * 0.5;
    Vec AD = (A+D) * 0.5;
    Vec ABCD = (AB+CD) * 0.5;
    Vec ABt = (At+Bt) * 0.5;
    Vec BCt = (Bt+Ct) * 0.5;
    Vec CDt = (Ct+Dt) * 0.5;
    Vec ADt = (At+Dt) * 0.5;
    Vec ABCDt = (ABt+CDt) * 0.5;
    // subdivided point layout
    // D    CD    C
    //
    // AD  ABCD   BC
    //
    // A    AB    B
    // subdivide
    glQuad2d( A,    AB,   ABCD, AD,   divs - 1, At,    ABt,   ABCDt, ADt   );
    glQuad2d( AB,   B,    BC,   ABCD, divs - 1, ABt,   Bt,    BCt,   ABCDt );
    glQuad2d( ABCD, BC,   C,    CD,   divs - 1, ABCDt, BCt,   Ct,    CDt   );
    glQuad2d( AD,   ABCD, CD,   D,    divs - 1, ADt,   ABCDt, CDt,   Dt    );
}
It's currently recursive but you could always add an explicit stack for some for-loop action.
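For the array-building part of the question itself, a minimal self-contained sketch with nested for loops might look like this (the grid layout, color and container choices below are assumptions, not taken from the assignment):
#include <vector>

// A flat square on the y = -10 plane from (-10,-10,-10) to (10,-10,10),
// split into a 2x2 grid of quads, each quad split into 2 triangles (8 total).
int main()
{
    const int   divs = 2;            // 2x2 quads -> 8 triangles
    const float lo = -10.0f, hi = 10.0f;
    std::vector<float>    vertices;  // x, y, z per vertex
    std::vector<unsigned> indices;   // 3 indices per triangle
    std::vector<float>    colors;    // r, g, b per vertex

    for (int i = 0; i <= divs; ++i) {          // rows of the vertex grid
        for (int j = 0; j <= divs; ++j) {      // columns of the vertex grid
            float x = lo + (hi - lo) * j / divs;
            float z = lo + (hi - lo) * i / divs;
            vertices.insert(vertices.end(), { x, -10.0f, z });
            colors.insert(colors.end(), { 1.0f, 0.0f, 0.0f }); // arbitrary red
        }
    }
    for (int i = 0; i < divs; ++i) {           // one quad per grid cell
        for (int j = 0; j < divs; ++j) {
            unsigned a = i * (divs + 1) + j, b = a + 1;
            unsigned c = a + (divs + 1),    d = c + 1;
            indices.insert(indices.end(), { a, b, d,   a, d, c }); // two triangles
        }
    }
    // vertices/indices/colors can now be handed to glVertexPointer/glDrawElements etc.
}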

Vector rotation problem

I'm working on a program with IK and have run into what I at first thought was a trivial problem, but I have since had trouble solving it.
Background:
Everything is in 3d space.
I'm using 3d Vectors and Quaternions to represent transforms.
I have a limb which we will call V1.
I want to rotate it onto V2.
I was getting the angle between V1 and V2.
Then the axis of rotation from V1 cross V2.
Then making a quaternion from that axis and angle.
I then take the limb's current orientation and multiply it by the axis-angle quaternion.
This I believe is my desired local space for the limb.
This limb is attached to a series of other links. To get the world space I traverse up to the root combining the parents local space with the child's local space until I reach the root.
This seems to work fine if the vector I am rotating to is contained within the XY plane, or if the body the limb is attached to hasn't been modified. If anything has been modified, for example rotating the root node, then on the first iteration the vector will rotate very close to the desired vector. After that point, though, it will begin to spin all over the place and never reach the goal.
I've gone through all the math line by line and it appears to be correct. I'm not sure if there is something that I don't know about or am simply overlooking. Is my logic sound? Or am I unaware of something? Any help is greatly appreciated!
Quaternion::Quaternion(const Vector& axis, const float angle)
{
    float sin_half_angle = sinf( angle / 2 );
    v.set_x( axis.get_x() * sin_half_angle );
    v.set_y( axis.get_y() * sin_half_angle );
    v.set_z( axis.get_z() * sin_half_angle );
    w = cosf( angle / 2 );
}

Quaternion Quaternion::operator* (const Quaternion& quat) const
{
    Quaternion result;
    Vector v1( this->v );
    Vector v2( quat.v );
    float s1 = this->w;
    float s2 = quat.w;
    result.w = s1 * s2 - v1.Dot(v2);
    result.v = v2 * s1 + v1 * s2 + v1.Cross(v2);
    result.Normalize();
    return result;
}

Vector Quaternion::operator* (const Vector& vec) const
{
    Quaternion quat_vec(vec.get_x(), vec.get_y(), vec.get_z(), 0.0f);
    Quaternion rotation( *this );
    Quaternion rotated_vec = rotation * ( quat_vec * rotation.Conjugate() );
    return rotated_vec.v;
}

Quaternion Quaternion::Conjugate()
{
    Quaternion result( *this );
    result.v = result.v * -1.0f;
    return result;
}

Transform Transform::operator*(const Transform tran)
{
    return Transform( mOrient * tran.getOrient(), mTrans + ( mOrient * tran.getTrans() ) );
}

Transform Joint::GetWorldSpace()
{
    Transform world = local_space;
    Joint* par = GetParent();
    while ( par )
    {
        world = par->GetLocalSpace() * world;
        par = par->GetParent();
    }
    return world;
}

void RotLimb()
{
    Vector end_effector_worldspace_pos = end_effector->GetWorldSpace().get_Translation();
    Vector parent_worldspace_pos = parent->GetWorldSpace().get_Translation();
    Vector parent_To_end_effector = ( end_effector_worldspace_pos - parent_worldspace_pos ).Normalize();
    Vector parent_To_goal = ( goal_pos - parent_worldspace_pos ).Normalize();
    float dot = parent_To_end_effector.Dot( parent_To_goal );
    Vector rot_axis(0.0f, 0.0f, 1.0f);
    float angle = 0.0f;
    if (1.0f - fabs(dot) > EPSILON)
    {
        //angle = parent_To_end_effector.Angle( parent_To_goal );
        rot_axis = parent_To_end_effector.Cross( parent_To_goal ).Normalize();
        parent->RotateJoint( rot_axis, acos(dot) );
    }
}

void Joint::Rotate( const Vector& axis, const float rotation )
{
    mLocalSpace = mLocalSpace * Quaternion( axis, rotation );
}
You are correct when you write in a comment that the axis should be computed in the local coordinate frame of the joint:
I'm wondering if this issue is occurring because I'm doing the calculations to get the axis and angle in the world space for the joint, but then applying it to the local space.
The rotation of the axis from the world frame to the joint frame will look something like this:
rot_axis = parent->GetWorldSpace().Inverse().get_Rotation() * rot_axis
There can be other issues to debug, but it's the only logical error I can see in the code that you have posted.
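Applied inside RotLimb() from the question, that could look roughly like this (Inverse() and get_Rotation() are assumed to exist on Transform, as in the one-liner above):
// Sketch only: bring the world-space rotation axis into the joint's local
// frame before building the quaternion, as suggested above.
if (1.0f - fabs(dot) > EPSILON)
{
    Vector world_axis = parent_To_end_effector.Cross( parent_To_goal ).Normalize();
    Vector local_axis = parent->GetWorldSpace().Inverse().get_Rotation() * world_axis;
    parent->RotateJoint( local_axis, acos(dot) );
}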