What could be the source of this aliasing? - c++

I am manually raytracing a 3D image. I have noticed that the farther the camera is from the image, the worse the aliasing becomes.
This 3D image is basically a voxelized representation of the Stanford dragon. I have placed the volume centered at the origin (its diagonals cross at (0,0,0)), meaning that one corner is at (-cube_dim, -cube_dim, -cube_dim) and the opposite corner is at (cube_dim, cube_dim, cube_dim).
At close range the image is fine:
(The minor "aliasing" you see here is due to me doing a ray marching algorithm, this is not the aliasing I am worried about, this was expected and acceptabel)
However if we get far away enough some aliasing starts to be seen:
(This is a completely different kind of aliasing)
The fragment shader used to generate the image is this:
#version 430
in vec2 f_coord;
out vec4 fragment_color;
uniform layout(binding=0, rgba8) image3D volume_data;
uniform vec3 camera_pos;
uniform float aspect_ratio;
uniform float cube_dim;
uniform int voxel_resolution;
#define EPSILON 0.01
// Check whether the position is inside of the specified box
bool inBoxBounds(vec3 corner, float size, vec3 position)
{
bool inside = true;
//Put the position in the coordinate frame of the box
position-=corner;
//The point is inside only if all of its components are inside
for(int i=0; i<3; i++)
{
inside = inside && (position[i] > -EPSILON);
inside = inside && (position[i] < size+EPSILON);
}
return inside;
}
//Calculate the distance to the intersection with a box, or infinity if the box cannot be hit
float boxIntersection(vec3 origin, vec3 dir, vec3 corner0, float size)
{
//dir = normalize(dir);
//calculate opposite corner
vec3 corner1 = corner0 + vec3(size,size,size);
//Set the ray plane intersections
float coeffs[6];
coeffs[0] = (corner0.x - origin.x)/(dir.x);
coeffs[1] = (corner0.y - origin.y)/(dir.y);
coeffs[2] = (corner0.z - origin.z)/(dir.z);
coeffs[3] = (corner1.x - origin.x)/(dir.x);
coeffs[4] = (corner1.y - origin.y)/(dir.y);
coeffs[5] = (corner1.z - origin.z)/(dir.z);
float t = 1.f/0.f;
//Check for the smallest valid intersection distance
//We allow negative values up to -size to create correct sorting if the origin is
//inside the box
for(uint i=0; i<6; i++)
t = (coeffs[i]>=0) && inBoxBounds(corner0,size,origin+dir*coeffs[i])?
min(coeffs[i],t) : t;
return t;
}
void main()
{
float v_size = cube_dim/voxel_resolution;
vec3 r = (vec3(f_coord.xy,1.f/tan(radians(40))));
r.y /= aspect_ratio;
vec3 dir = normalize(r);//;*v_size*0.5;
r+= camera_pos;
float t = boxIntersection(r, dir, -vec3(cube_dim), cube_dim*2);
if(isinf(t))
discard;
if(!((r.x>=-cube_dim) && (r.x<=cube_dim) && (r.y>=-cube_dim) &&
(r.y<=cube_dim) && (r.z>=-cube_dim) && (r.z<=cube_dim)))
r += dir*t;
vec4 color = vec4(0);
int c=0;
while((r.x>=-cube_dim) && (r.x<=cube_dim) && (r.y>=-cube_dim) &&
(r.y<=cube_dim) && (r.z>=-cube_dim) && (r.z<=cube_dim))
{
r += dir*v_size*0.5;
vec4 val = imageLoad(volume_data, ivec3(((r)*0.5/cube_dim+vec3(0.5))*(voxel_resolution-1)));
if(val.w > 0)
{
color = val;
break;
}
c++;
}
fragment_color = color;
}
Understanding the algorithm
First, we create a ray based on the screen coordinates (we use the standard raytracing technique, where the focal length is 1/tan(angle)).
We then start the ray at the camera's current position
We check intersection of the ray with the box containing our object (we basically assume that our 3D texture is a big cube in the scene and we check whether we hit it).
If we don't hit it, we discard the fragment. If we do hit it and we're outside, we move along the ray until we are at the surface of the box. If we hit it and are already inside, we stay where we are.
At this point we are guaranteed that the position of our ray is inside the box.
Now we move in small steps along the ray until we either find a non-zero value or we exit the box.
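For clarity, here is the same procedure as a minimal CPU-side C++ sketch (volume_sample is a hypothetical stand-in for the imageLoad lookup, and the box-intersection step is elided):
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 normalize(Vec3 a)
{
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return scale(a, 1.0f / len);
}

// Placeholder voxel lookup: returns the opacity stored at a world-space position.
static float volume_sample(Vec3 /*p*/) { return 0.0f; }

bool marchRay(Vec3 camera_pos, float fx, float fy, float cube_dim, int voxel_resolution)
{
    // Ray through screen coordinates (fx, fy); focal length = 1 / tan(fov).
    Vec3 dir = normalize({fx, fy, 1.0f / std::tan(40.0f * 3.14159265f / 180.0f)});
    Vec3 r = camera_pos;                                // start at the camera
    // ...advance r to the box surface here (boxIntersection in the shader)...
    float step = (cube_dim / voxel_resolution) * 0.5f;  // half a voxel per step
    while (std::fabs(r.x) <= cube_dim && std::fabs(r.y) <= cube_dim && std::fabs(r.z) <= cube_dim)
    {
        r = add(r, scale(dir, step));
        if (volume_sample(r) > 0.0f)
            return true;                                // found a non-zero voxel
    }
    return false;                                       // left the box without a hit
}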

Related

Weird Raytracing Artifacts

I am trying to create a ray tracer using Qt, but I have some really weird artifacts going on.
Before I implemented shading, I just had 4 spheres, 3 triangles and 2 bounded planes in my scene. They all showed up as expected and in the expected colors; however, on my planes I would see dots the same color as the background. These dots stayed static relative to my view position, so if I moved the camera around the dots would move around as well. However, they only affected the planes and triangles and never appeared on the spheres.
Once I implemented shading the issue got worse. The dots now also appear on spheres facing the light source, i.e. any part affected by the diffuse term.
Also, my one plane of pure blue (RGB 0,0,255) has gone straight black. Since I have two planes, I switched their colors and again the blue one went black, so it's a color issue and not a plane issue.
If anyone has any suggestions as to what the problem could be or wants to see any particular code let me know.
#include "plane.h"
#include "intersection.h"
#include <math.h>
#include <iostream>
Plane::Plane(QVector3D bottomLeftVertex, QVector3D topRightVertex, QVector3D normal, QVector3D point, Material *material)
{
minCoords_.setX(qMin(bottomLeftVertex.x(),topRightVertex.x()));
minCoords_.setY(qMin(bottomLeftVertex.y(),topRightVertex.y()));
minCoords_.setZ(qMin(bottomLeftVertex.z(),topRightVertex.z()));
maxCoords_.setX(qMax(bottomLeftVertex.x(),topRightVertex.x()));
maxCoords_.setY(qMax(bottomLeftVertex.y(),topRightVertex.y()));
maxCoords_.setZ(qMax(bottomLeftVertex.z(),topRightVertex.z()));
normal_ = normal;
normal_.normalize();
point_ = point;
material_ = material;
}
Plane::~Plane()
{
}
void Plane::intersect(QVector3D rayOrigin, QVector3D rayDirection, Intersection* result)
{
if(normal_ == QVector3D(0,0,0)) //plane is degenerate
{
cout << "degenerate plane" << endl;
return;
}
float minT;
//t = -Normal*(Origin-Point) / Normal*direction
float numerator = (-1)*QVector3D::dotProduct(normal_, (rayOrigin - point_));
float denominator = QVector3D::dotProduct(normal_, rayDirection);
if (fabs(denominator) < 0.0000001) //plane orthogonal to view
{
return;
}
minT = numerator / denominator;
if (minT < 0.0)
{
return;
}
QVector3D intersectPoint = rayOrigin + (rayDirection * minT);
//check inside plane dimensions
if(intersectPoint.x() < minCoords_.x() || intersectPoint.x() > maxCoords_.x() ||
intersectPoint.y() < minCoords_.y() || intersectPoint.y() > maxCoords_.y() ||
intersectPoint.z() < minCoords_.z() || intersectPoint.z() > maxCoords_.z())
{
return;
}
//only update if closest object
if(result->distance_ > minT)
{
result->hit_ = true;
result->intersectPoint_ = intersectPoint;
result->normalAtIntersect_ = normal_;
result->distance_ = minT;
result->material_ = material_;
}
}
QVector3D MainWindow::traceRay(QVector3D rayOrigin, QVector3D rayDirection, int depth)
{
if(depth > maxDepth)
{
return backgroundColour;
}
Intersection* rayResult = new Intersection();
foreach (Shape* shape, shapeList)
{
shape->intersect(rayOrigin, rayDirection, rayResult);
}
if(rayResult->hit_ == false)
{
return backgroundColour;
}
else
{
QVector3D intensity = QVector3D(0,0,0);
QVector3D shadowRay = pointLight - rayResult->intersectPoint_;
shadowRay.normalize();
Intersection* shadowResult = new Intersection();
foreach (Shape* shape, shapeList)
{
shape->intersect(rayResult->intersectPoint_, shadowRay, shadowResult);
}
if(shadowResult->hit_ == true)
{
intensity += shadowResult->material_->diffuse_ * intensityAmbient;
}
else
{
intensity += rayResult->material_->ambient_ * intensityAmbient;
// Diffuse
intensity += rayResult->material_->diffuse_ * intensityLight * qMax(QVector3D::dotProduct(rayResult->normalAtIntersect_,shadowRay), 0.0f);
// Specular
QVector3D R = ((2*(QVector3D::dotProduct(rayResult->normalAtIntersect_,shadowRay))* rayResult->normalAtIntersect_) - shadowRay);
R.normalize();
QVector3D V = rayOrigin - rayResult->intersectPoint_;
V.normalize();
intensity += rayResult->material_->specular_ * intensityLight * pow(qMax(QVector3D::dotProduct(R,V), 0.0f), rayResult->material_->specularExponent_);
}
return intensity;
}
}
So I figured out my issue. It is due to limited float precision: any check for < 0.0 would intermittently fail because of floating-point error. I had to add an offset to all my checks so that I was checking for < 0.001 instead.
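In other words, intersection and shadow tests need a small epsilon (bias) instead of an exact comparison against zero. A minimal sketch of the idea:
#include <cmath>

// Comparing a computed intersection distance against an exact 0.0 is fragile:
// floating-point round-off can make a point that lies exactly on a surface
// report a tiny negative or positive t, which shows up as speckling.
constexpr float kEpsilon = 0.001f;

bool isValidHit(float t)
{
    // Reject hits behind the ray origin or so close that they are really just
    // self-intersection noise from the surface the ray started on.
    return t > kEpsilon;
}

// The same idea applies to the shadow ray: start it slightly off the surface,
// e.g. (using the Qt types from the question):
//   QVector3D shadowOrigin = intersectPoint + normal * kEpsilon;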

Refraction in Raytracing?

I've been working on my raytracer again. I added reflection and multithreading support. Currently I am working on adding refraction, but it's only half working.
As you can see, there is a center sphere (without specular highlight), a reflecting sphere (to the right) and a refracting sphere (left). I'm pretty happy with the reflections; they look very good. The refraction is only partly working... the light is refracted and all shadows of the spheres are visible in the sphere (refraction index 1.4), but there is an outer black ring.
EDIT: Apparently the black ring gets bigger, and therefore the sphere smaller, when I increase the refraction index of the sphere. Conversely, when decreasing the index of refraction, the sphere gets larger and the black ring smaller... until, with the index of refraction set to one, the ring totally disappears.
IOR = 1.9
IOR = 1.1
IOR = 1.00001
And interestingly enough at IOR = 1 the sphere loses its transparency and becomes white.
I think I covered total internal reflection and it is not the issue here.
Now the code:
I'm using the operator | for the dot product, so (vec|vec) is a dot product, and the operator ~ to invert vectors. The objects, both lights and spheres, are stored in Object **objects;.
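(For reference, these operators behave roughly like the following sketch; the actual Vector class is not shown in the question, so the details here are assumptions.)
#include <cmath>

struct Vector
{
    double x, y, z;

    // operator| : dot product, so (a | b) reads as "a dot b"
    double operator|(const Vector &o) const { return x * o.x + y * o.y + z * o.z; }

    // operator~ : inverted (negated) vector
    Vector operator~() const { return {-x, -y, -z}; }

    // operator! : presumably normalization, judging from the "(!tmpv) * BIAS" usage below
    Vector operator!() const
    {
        double len = std::sqrt(x * x + y * y + z * z);
        return {x / len, y / len, z / len};
    }
};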
Raytrace function
Colour raytrace(const Ray &r, const int &depth)
{
//first find the nearest intersection of a ray with an object
Colour finalColour = skyBlue *(r.getDirection()|Vector(0,0,-1)) * SKY_FACTOR;
double t, t_min = INFINITY;
int index_nearObj = -1;
for(int i = 0; i < objSize; i++)
{
if(!dynamic_cast<Light *>(objects[i]))//skip light src
{
t = objects[i]->findParam(r);
if(t > 0 && t < t_min)
{
t_min = t;
index_nearObj = i;
}
}
}
//no intersection
if(index_nearObj < 0)
return finalColour;
Vector intersect = r.getOrigin() + r.getDirection()*t_min;
Vector normal = objects[index_nearObj]->NormalAtIntersect(intersect);
Colour objectColor = objects[index_nearObj]->getColor();
Ray rRefl, rRefr; //reflected and refracted Ray
Colour refl = finalColour, refr = finalColour; //reflected and refracted colours
double reflectance = 0, transmittance = 0;
if(objects[index_nearObj]->isReflective() && depth < MAX_TRACE_DEPTH)
{
//handle reflection
rRefl = objects[index_nearObj]->calcReflectingRay(r, intersect, normal);
refl = raytrace(rRefl, depth + 1);
reflectance = 1;
}
if(objects[index_nearObj]->isRefractive() && depth < MAX_TRACE_DEPTH)
{
//handle transmission
rRefr = objects[index_nearObj]->calcRefractingRay(r, intersect, normal, reflectance, transmittance);
refr = raytrace(rRefr, depth + 1);
}
Ray rShadow; //shadow ray
bool shadowed;
double t_light = -1;
Colour localColour;
Vector tmpv;
//get material properties
double ka = 0.2; //ambient coefficient
double kd; //diffuse coefficient
double ks; //specular coefficient
Colour ambient = ka * objectColor; //ambient component
Colour diffuse, specular;
double brightness;
localColour = ambient;
//look if the object is in shadow or light
//do this by casting a ray from the obj and
// check if there is an intersection with another obj
for(int i = 0; i < objSize; i++)
{
if(dynamic_cast<Light *>(objects[i])) //if object is a light
{
//for each light
shadowed = false;
//create Ray to light
tmpv = objects[i]->getPosition() - intersect;
rShadow = Ray(intersect + (!tmpv) * BIAS, tmpv);
t_light = objects[i]->findParam(rShadow);
if(t_light < 0) //no intersect, which is quite impossible
continue;
//then we check if that Ray intersects one object that is not a light
for(int j = 0; j < objSize; j++)
{
if(!dynamic_cast<Light *>(objects[j]) && j != index_nearObj)//if obj is not a light
{
t = objects[j]->findParam(rShadow);
//if it is smaller we know the light is behind the object
//--> shadowed by this light
if (t >= 0 && t < t_light)
{
// Set the flag and stop the cycle
shadowed = true;
break;
}
}
}
if(!shadowed)
{
rRefl = objects[index_nearObj]->calcReflectingRay(rShadow, intersect, normal);
//reflected ray from light src, for ks
kd = maximum(0.0, (normal|rShadow.getDirection()));
if(objects[index_nearObj]->getShiny() <= 0)
ks = 0;
else
ks = pow(maximum(0.0, (r.getDirection()|rRefl.getDirection())), objects[index_nearObj]->getShiny());
diffuse = kd * objectColor;// * objects[i]->getColour();
specular = ks * objects[i]->getColor();
brightness = 1 /(1 + t_light * DISTANCE_DEPENDENCY_LIGHT);
localColour += brightness * (diffuse + specular);
}
}
}
finalColour = localColour + (transmittance * refr + reflectance * refl);
return finalColour;
}
Now the function that calculates the refracted ray. I used several different sites as resources, and each had similar algorithms. This is the best I could do so far. It may just be a tiny detail I'm not seeing...
Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection,Vector &normal, double & refl, double &trans)const
{
double n1, n2, n;
double cosI = (r.getDirection()|normal);
if(cosI > 0.0)
{
n1 = 1.0;
n2 = getRefrIndex();
normal = ~normal;//invert
}
else
{
n1 = getRefrIndex();
n2 = 1.0;
cosI = -cosI;
}
n = n1/n2;
double sinT2 = n*n * (1.0 - cosI * cosI);
double cosT = sqrt(1.0 - sinT2);
//fresnel equations
double rn = (n1 * cosI - n2 * cosT)/(n1 * cosI + n2 * cosT);
double rt = (n2 * cosI - n1 * cosT)/(n2 * cosI + n2 * cosT);
rn *= rn;
rt *= rt;
refl = (rn + rt)*0.5;
trans = 1.0 - refl;
if(n == 1.0)
return r;
if(cosT*cosT < 0.0)//tot inner refl
{
refl = 1;
trans = 0;
return calcReflectingRay(r, intersection, normal);
}
Vector dir = n * r.getDirection() + (n * cosI - cosT)*normal;
return Ray(intersection + dir * BIAS, dir);
}
EDIT: I also swapped the refraction indices around. From
if(cosI > 0.0)
{
n1 = 1.0;
n2 = getRefrIndex();
normal = ~normal;
}
else
{
n1 = getRefrIndex();
n2 = 1.0;
cosI = -cosI;
}
to
if(cosI > 0.0)
{
n1 = getRefrIndex();
n2 = 1.0;
normal = ~normal;
}
else
{
n1 = 1.0;
n2 = getRefrIndex();
cosI = -cosI;
}
Then I get this, and almost the same (still upside down) with an index of refraction of 1!
And the reflection calculation:
Ray Sphere::calcReflectingRay(const Ray &r, const Vector &intersection, const Vector &normal)const
{
Vector rdir = r.getDirection();
Vector dir = rdir - 2 * (rdir|normal) * normal;
return Ray(intersection + dir*BIAS, dir);
//the Ray constructor automatically normalizes directions
}
So my question is: How do I fix the outer black circle? Which version is correct?
Help is greatly appreciated :)
This is compiled on Linux using g++ 4.8.2.
Warning: the following is a guess, not a certainty. I'd have to look at the code in more detail to be sure what's happening and why.
That said, it looks to me like your original code is basically simulating a concave lens instead of convex.
A convex lens is basically a magnifying lens, bringing light rays from a relatively small area into focus on a plane:
This also shows why the corrected code shows an upside-down image. The rays of light coming from the top on one side get projected to the bottom on the other (and vice versa).
Getting back to the concave lens though: a concave lens is a reducing lens that shows a wide angle of picture from in front of the lens:
If you look at the bottom right corner here, it shows what I suspect is the problem: especially with a high index of refraction, the rays of light trying to come into the lens intersect the edge of the lens itself. For all the angles wider than that, you're typically going to see a black ring, because the front edge of the lens is acting as a shade to prevent light from entering.
Increasing the index of refraction increases the width of that black ring, because the light is bent more, so a larger portion at the edges is intersecting the outer edge of the lens.
In case you care about how they avoid this with things like wide-angle camera lenses, the usual route is to use a meniscus lens, at least for the front element:
This isn't a panacea, but does at least prevent incoming light rays from intersecting the outer edge of the front lens element. Depending on exactly how wide an angle the lens needs to cover, it'll often be quite a bit less radical of a meniscus than this (and in some cases it'll be a plano-concave) but you get the general idea.
Final warning: of course, all of these are hand-drawn, and intended only to give general idea, not (for example) reflect the design of any particular lens, an element with any particular index of refraction, etc.
I stumbled across this exact issue as well when working on a ray tracer. #lightxbulb's comment about normalizing the ray direction vector fixed this problem for me.
Firstly, keep your code that computes the refraction indices prior to your edit. In other words, you should be seeing those black rings in your renderings.
Then, in your calcRefractingRay function where you compute cosI, use the dot product of normalize(r.getDirection()) and normal. Currently you're taking the dot product of r.getDirection() and normal.
Secondly, when you compute the refracted ray direction dir, use normalize(r.getDirection()) instead of r.getDirection(). Again, you're currently using r.getDirection() in your calculation.
Also, there is an issue with the way you're checking for total internal reflection. You should check that the term you're taking the square root of (1.0 - sinT2) is non-negative before actually computing the square root.
Hope that helps!
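Putting those suggestions together, calcRefractingRay might look roughly like this (a sketch only, assuming the Ray/Vector interfaces from the question and a free normalize() function; the index handling is kept as in the pre-edit version, per the first point above):
// Sketch of calcRefractingRay with the ray direction normalized before use and
// total internal reflection tested on (1 - sinT2) before taking the square root.
Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection,
                              Vector &normal, double &refl, double &trans) const
{
    Vector d = normalize(r.getDirection());          // key change: normalize first
    double n1, n2;
    double cosI = (d | normal);
    if (cosI > 0.0)                                  // pre-edit branch, kept as-is
    {
        n1 = 1.0;
        n2 = getRefrIndex();
        normal = ~normal;                            // invert
    }
    else
    {
        n1 = getRefrIndex();
        n2 = 1.0;
        cosI = -cosI;
    }
    double n = n1 / n2;
    double sinT2 = n * n * (1.0 - cosI * cosI);
    if (sinT2 > 1.0)                                 // total internal reflection
    {
        refl = 1.0;
        trans = 0.0;
        return calcReflectingRay(r, intersection, normal);
    }
    double cosT = sqrt(1.0 - sinT2);
    // Fresnel equations (note the n1 in the second denominator, standard form)
    double rn = (n1 * cosI - n2 * cosT) / (n1 * cosI + n2 * cosT);
    double rt = (n2 * cosI - n1 * cosT) / (n2 * cosI + n1 * cosT);
    refl = (rn * rn + rt * rt) * 0.5;
    trans = 1.0 - refl;
    Vector dir = n * d + (n * cosI - cosT) * normal; // normalized direction here too
    return Ray(intersection + dir * BIAS, dir);
}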

vtk6.1 shaders in/attribute variable

I have a vtkPolyData filled with points and cells that I want to draw on the screen. My polydata represents brain fibers (a list of lines in 3D); a cell is a fiber. It's working, but I need to add colors between all the points. We decided to color the polydata using a shader because there will be a lot of coloring methods. My vertex shader is:
vtkShader2 *shader = vtkShader2::New();
shader->SetType(VTK_SHADER_TYPE_VERTEX);
shader->SetSourceCode(R"VertexShader(
#version 120
attribute vec3 next_point;
varying vec3 vColor; // Pass to fragment shader
void main() {
float r = gl_Vertex.x - next_point.x;
float g = gl_Vertex.y - next_point.y;
float b = gl_Vertex.z - next_point.z;
if (r < 0.0) { r *= -1.0; }
if (g < 0.0) { g *= -1.0; }
if (b < 0.0) { b *= -1.0; }
const float norm = 1.0 / sqrt(r*r + g*g + b*b);
vColor = vec3(r * norm, g * norm, b * norm);
gl_Position = ftransform();
}
)VertexShader");
shader->SetContext(shader_program->GetContext());
shader_program->GetShaders()->AddItem(shader);
The goal here is, for each point, to get the next point in order to calculate the color of the line between them. The problem is that I can't find a way to set the value of "next_point". I'm pretty sure it's always filled with 0.0, because the output image is red, blue and green on the sides.
I tried using vtkProperty::AddShaderVariable(), but I never saw any change, and the method's documentation hints at a "uniform variable", so it's probably not the right way.
// Split into 3 because I'm not sure how to pass a vtkPoints object to AddShaderVariable
fibersActor->GetProperty()->AddShaderVariable("next_x", nb_points, next_x);
fibersActor->GetProperty()->AddShaderVariable("next_y", nb_points, next_y);
fibersActor->GetProperty()->AddShaderVariable("next_z", nb_points, next_z);
I also tried using a vtkFloatArray filled with my points, then setting it as a data array.
vtkFloatArray *next_point = vtkFloatArray::New();
next_point->SetName("next_point");
next_point->SetNumberOfComponents(3);
next_point->Resize(nb_points);
// Fill next_point ...
polydata->GetPointData()->AddArray(next_point);
// Tried the vtkAssignAttribute class. Did nothing.
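(For reference, a simplified sketch of how such a per-point next_point array could be filled, one 3-component tuple per point. The real code would follow each fiber's cell connectivity rather than the global point order, and this still leaves open the actual question of binding the array to the GLSL attribute.)
#include <vtkFloatArray.h>
#include <vtkPointData.h>
#include <vtkPolyData.h>
#include <vtkSmartPointer.h>

// Build a "next_point" array with one 3-component tuple per point of the polydata.
void addNextPointArray(vtkPolyData *polydata)
{
    vtkIdType nbPoints = polydata->GetNumberOfPoints();

    vtkSmartPointer<vtkFloatArray> nextPoint = vtkSmartPointer<vtkFloatArray>::New();
    nextPoint->SetName("next_point");
    nextPoint->SetNumberOfComponents(3);
    nextPoint->SetNumberOfTuples(nbPoints);   // one tuple per point

    double p[3];
    for (vtkIdType i = 0; i < nbPoints; ++i)
    {
        // Last point has no successor; reuse the point itself so the difference is zero.
        vtkIdType next = (i + 1 < nbPoints) ? i + 1 : i;
        polydata->GetPoint(next, p);
        nextPoint->SetTuple3(i, p[0], p[1], p[2]);
    }

    polydata->GetPointData()->AddArray(nextPoint);
}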
tl;dr Can you please tell me how to pass a list of points into a GLSL attribute variable? Thanks for your time.

Deriving uncertainty values from a noise texture?

I'm trying to implement Sketchy Drawings. I'm at the part of the process which calls for the use of the noise texture to derive uncertainty values that will provide an offset into the edge map.
Here is a picture of my edge map for a torus:
And here is the noise texture I've gotten using the Perlin function as suggested:
I have these saved as textures in edgeTexture and noiseTexture respectively.
Now I'm stuck on the section where you have to offset the texture coordinates of the edge map by uncertainty values derived from the noise texture. This image is from the book:
offs = turbulence(s, t);
offt = turbulence(1 - s, 1 - t);
I'm ignoring the 2x2 matrix for the time being. Here is my current fragment shader attempt and the result it produces:
#version 330
out vec4 vFragColor;
uniform sampler2D edgeTexture;
uniform sampler2D noiseTexture;
smooth in vec2 vTexCoords;
float turbulence(float s, float t)
{
float sum = 0;
float scale = 1;
float s1 = 1;
vec2 coords = vec2(s,t);
for (int i=0; i < 10; i++)
{
vec4 noise = texture(noiseTexture, 0.25 * s1 * coords);
sum += scale * noise.x;
scale = scale / 2;
s1 = s1 * 2;
}
return sum;
}
void main( void )
{
float off_s = turbulence(vTexCoords.s, vTexCoords.t);
float off_t = turbulence(1 - vTexCoords.s, 1 - vTexCoords.t);
vFragColor = texture(edgeTexture, vTexCoords + vec2(off_s, off_t));
}
Clearly my addition to the vTexCoords is way off, but I can't see why. I have tried several other turbulence function definitions, but none were close to the desired output, so I'm thinking my overall approach is flawed somewhere. Any help here is greatly appreciated, and please comment if I haven't been clear. The desired output for a torus would, I imagine, just look like a roughly drawn circle.
Your turbulence function will return values in the range (0,1). First, you need to change this to get values centered on 0. This should be done inside the loop in the function, or you'll end up with a strange distribution. So first, I think you should change the line:
vec4 noise = texture(noiseTexture, 0.25 * s1 * coords);
to
vec4 noise = texture(noiseTexture, 0.25 * s1 * coords) * 2.0 - 1.0;
You then need to scale the offset so that you're not sampling the edge texture too far away from the fragment being drawn. Change:
vFragColor = texture(edgeTexture, vTexCoords + vec2(off_s, off_t));
to
vFragColor = texture(edgeTexture, vTexCoords + vec2(off_s, off_t) * off_scale);
where off_scale is some small value (perhaps around 0.05) chosen by experimentation.
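To make both changes concrete, here is a small CPU-side C++ sketch of the same idea (sampleNoise is a placeholder for the noise-texture lookup):
#include <cmath>

// Placeholder for sampling the noise texture at (s, t); returns a value in [0, 1].
static float sampleNoise(float s, float t)
{
    return 0.5f + 0.5f * std::sin(12.9898f * s + 78.233f * t);
}

// Fractal turbulence with each octave remapped from [0,1] to [-1,1] *inside*
// the loop, so the final sum is centered on zero.
float turbulence(float s, float t)
{
    float sum = 0.0f;
    float scale = 1.0f;
    float freq = 1.0f;
    for (int i = 0; i < 10; ++i)
    {
        float noise = sampleNoise(0.25f * freq * s, 0.25f * freq * t) * 2.0f - 1.0f;
        sum += scale * noise;
        scale *= 0.5f;
        freq *= 2.0f;
    }
    return sum;
}

// The offset is then shrunk before it perturbs the edge-map lookup:
//   float off_scale = 0.05f;  // tuned by experimentation
//   s_lookup = s + turbulence(s, t)         * off_scale;
//   t_lookup = t + turbulence(1 - s, 1 - t) * off_scale;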

Cell-Shading Outlines: edge mesh writer does not define all desired edges

The program that I am writing takes in the vertex data of a 3D mesh, performs a series of calculations (forgive the vagueness, I'll try to explain in better detail later), and outputs a binary file that defines where the edges are on the mesh. My program then draws a colored line where each edge is. Without the appropriate vertex shader, this would look like a regular triangulated mesh, but once the appropriate vertex shader is applied, only the edges that are "sharp" (the dot product of their face normals is greater than something close to zero) have lines drawn on them, along with the edges on the outside of the figure.
My implementation of the outline is not correct, as I made the assumption that if an edge wasn't behind the object and didn't define a sharp edge, it would be an outline edge. I haven't found a satisfactory answer to this elsewhere, and I didn't want to rely on the old trick of re-drawing the mesh in a solid color, rendered slightly larger than the original mesh. This approach was to be entirely math-based, relying only on the vertex data of the mesh.
I am writing a program that uses the following vertex shader:
uniform mat4 worldMatrix;
uniform mat4 projMatrix;
uniform mat4 viewProjMatrix;
uniform vec4 eyepos;
attribute vec3 a;
attribute vec3 b;
attribute vec3 n1;
attribute vec3 n2;
attribute float w;
void main()
{
float a_vertex = dot(eyepos.xyz - a, n1);
float b_vertex = dot(eyepos.xyz - a, n2);
if (a_vertex * b_vertex > 0.0) // signs are different, edge is behind the object
{
gl_Position = vec4(2.0,2.0,2.0,1.0);
}
else // the outline of the figure
{
if(w == 0.0)
{
vec4 p = vec4(a.x, a.y, a.z, 1.0);
p = p * worldMatrix * viewProjMatrix;
gl_Position = p;
}
else
{
vec4 p = vec4(b.x, b.y, b.z, 1.0);
p = p * worldMatrix * viewProjMatrix;
gl_Position = p;
}
}
if(dot(n1, n2) <= 0.2) // there is a sharp edge
{
if(w == 0.0)
{
vec4 p = vec4(a.x, a.y, a.z, 1.0);
p = p * worldMatrix * viewProjMatrix;
gl_Position = p;
}
else
{
vec4 p = vec4(b.x, b.y, b.z, 1.0);
p = p * worldMatrix * viewProjMatrix;
gl_Position = p;
}
}
}
... to take information from a binary file that is written using this program in C++:
#include <iostream>
#include "llgl.h"
#include <fstream>
#include <vector>
#include "SuperMesh.h"
using namespace std;
using namespace llgl;
struct Vertex
{
float x,y,z,w;
float s,t,p,q;
float nx,ny,nz,nw;
};
bool isFileAlright(string fName)
{
ifstream in(fName.c_str());
if(!in.good())
return false;
return true;
}
int main(int argc, char* argv[])
{
// INPUT FILE NAME //
string fName;
cout << "Enter the path to your spec.mesh file here: ";
cin >> fName;
while(!isFileAlright(fName))
{
cout << "Enter the path to your spec.mesh file here: ";
cin >> fName;
}
SuperMesh* Model = new SuperMesh(fName.c_str());
// END INPUT //
Model->load();
Model->draw();
string fname = Model->fname;
string FileName = fname.substr(0, fname.size() - 10); // supposed to slash the last 10 characters off of the string, removing ".spec.mesh"...
FileName = FileName + ".bin"; //... and then we make it a .bin file*/
cout << FileName << endl;
ofstream out(FileName.c_str(), ios::binary);
for (unsigned w = 0; w < Model->m.size(); w++)
{
vector<float> &vdata = Model->m[w]->vdata;
vector<char> &idata = Model->m[w]->idata;
//Create a vertex and index variable, a map for Edge Mesh, perform two loops to analyze all triangles on a mesh and write out their vertex values to a file.//
Vertex* V = (Vertex*)(&vdata[0]);
unsigned short* I16 = (unsigned short*)(&idata[0]);
unsigned char* I8 = (unsigned char*)(&idata[0]);
unsigned int* I32 = (unsigned int*)(&idata[0]);
map<set<int>, vector<vec3> > EM;
for(unsigned i = 0; i < Model->m[w]->ic; i += 3) // 3 because we're looking at triangles //
{
Mesh* foo = Model->m[w];
int i1;
int i2;
int i3;
if( Model->m[w]->ise == GL_UNSIGNED_BYTE)
{
i1 = I8[i];
i2 = I8[i + 1];
i3 = I8[i + 2];
}
else if( Model->m[w]->ise == GL_UNSIGNED_SHORT)
{
i1 = I16[i];
i2 = I16[i + 1];
i3 = I16[i + 2];
}
else
{
i1 = I32[i];
i2 = I32[i + 1];
i3 = I32[i + 2];
}
vec3 p = vec3(V[i1].x, V[i1].y, V[i1].z); // to represent the point in 3D space of each vertex on every triangle on the mesh
vec3 q = vec3(V[i2].x, V[i2].y, V[i2].z);
vec3 r = vec3(V[i3].x, V[i3].y, V[i3].z);
vec3 v1 = p - q;
vec3 v2 = r - q;
vec3 n = cross(v2,v1); //important to make sure the order is correct here: VERTEX TWO cross VERTEX ONE//
set<int> tmp;
tmp.insert(i1); tmp.insert(i2);
EM[tmp].push_back(n);
set<int> tmp2;
tmp2.insert(i2); tmp2.insert(i3);
EM[tmp2].push_back(n);
set<int> tmp3;
tmp3.insert(i3); tmp3.insert(i1);
EM[tmp3].push_back(n);
//we have now pushed every needed point into our edge map
}
int edgeNumber = 0;
cout << "There should be 12 edges on a lousy cube." << endl;
for(map<set<int>, vector<vec3> >::iterator it = EM.begin(); it != EM.end(); ++it)
{
//Now we will take our edge map and write its data to the file!//
/* Information is written to the file in this form:
Vertex One, Vertex Two, Normal One, Normal Two, r (where r, depending on its value, determines whether one edge is on top of the other in the case
where two edges are aligned with one another)
*/
set<int>::iterator tmp = it->first.begin();
int pi = *tmp;
tmp++;
int qi = *tmp;
Vertex One = V[pi];
Vertex Two = V[qi];
vec3 norm1 = it->second[0];
vec3 norm2;
if(it->second.size() == 1)
norm2 = -1 * norm1;
else
norm2 = it->second[1];
out.write((char*) &One, 12);
out.write((char*) &Two, 12);
out.write((char*) &norm1, 12);
out.write((char*) &norm2, 12);
float r = 0;
out.write((char*) &r, 4);
out.write((char*) &One, 12);
out.write((char*) &Two, 12);
out.write((char*) &norm1, 12);
out.write((char*) &norm2, 12);
r = 1;
out.write((char*) &r, 4);
edgeNumber++;
cout << "Wrote edge #" << edgeNumber << endl;
}
}
return 0;
}
The problem is that, in the test case where I use it to draw a simple box with outlines, the program fails to do two essential things:
It does not draw outlines. The vertex shader is not sufficient to determine anything more than where the edges of the object are. The binary file that makes this happen is pre-computed in a separate program using code from the second snippet posted above, and then it is saved as a .bin file along with the mesh assets to which it belongs. However, raw vertex data would only take me so far, and I seek a way to draw a line around the outside of the mesh without using more traditional methods.
It does not draw ALL of the edges that I need. In my test case, two of the edges are missing, and I cannot figure out for the life of me why. I figure I must have done something wrong in writing the edge map.
A couple notes about the above code:
llgl is an OpenGL wrapper that I have used to simplify many elements of OpenGL. It is not used extensively here, but rather in the creation of meshes, done elsewhere.
Things like Mesh and SuperMesh (a collection of meshes into one rigid body) are meant to be 3D objects in my scene. In my test case, there is only one Mesh in my scene, and defining a SuperMesh of a single Mesh is essentially just creating a single Mesh.
The "draw" call in the second snippet, which pre-computes a Mesh's edge map, does not actually draw anything. It is necessary to gain access to the Mesh's vertex data.
The variable "ise" is taken from the individual Meshes in the SuperMesh, and is a variable found by reading it in from the original Blender .OBJ file. It is related to how much memory should be used to store the important vertex data. It generally isn't a good idea to allocate more space than is needed for these values, as I've been told by friends and mentors who work with Blender.
It isn't well-commented, as I'm not the only one who has worked on this code, and I, unfortunately, have a limited understanding of how the second snippet could iterate through all of the triangles on a mesh and somehow miss the last two edges. Once I understand better what this code should do when properly written, I plan on heavily commenting it and using it in future applications.
Matrix-vector multiplication is not commutative, so your vertex shader has to output Projection * Model * Vertex and not the opposite.
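A tiny numeric illustration of why the order matters (plain 2x2 arithmetic; any matrix library shows the same thing):
#include <cstdio>

// Treating the vector as a column (M * v) versus a row (v * M) gives different
// results unless M is symmetric, which projection/model matrices generally are not.
int main()
{
    double M[2][2] = {{1.0, 2.0},
                      {3.0, 4.0}};
    double v[2] = {1.0, 0.0};

    double Mv[2] = {M[0][0] * v[0] + M[0][1] * v[1],   // M * v  -> (1, 3)
                    M[1][0] * v[0] + M[1][1] * v[1]};
    double vM[2] = {v[0] * M[0][0] + v[1] * M[1][0],   // v * M  -> (1, 2)
                    v[0] * M[0][1] + v[1] * M[1][1]};

    std::printf("M*v = (%g, %g)\n", Mv[0], Mv[1]);
    std::printf("v*M = (%g, %g)\n", vM[0], vM[1]);
    return 0;
}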
I solved the mystery of the undrawn lines by allocating more space to write vertex data in a different part of my code.
As for my other problems: although the order of multiplication in my vertex shader was actually alright, I had messed up another fundamental concept of vector math. The dot product of two face normals is a negative number when the normals make an obtuse angle... the way a sharp point on my model would. There was also the faulty logic above that basically said: if the face is visible, draw all of the lines on it. I re-wrote my shader to first test whether a face is visible, and then, in that same conditional block, test for sharp edges. Now, if a face is visible but does not create a sharp edge, the shader ignores that edge. Outlines appear now too, just not perfectly.
Here is a modified version of the above vertex shader:
uniform mat4 worldMatrix; /* the matrix that defines how to project a point from
object space to world space.*/
uniform mat4 viewProjMatrix; // the view (pertaining to screen size) matrix times the projection (how to project points to 3D) matrix.
uniform vec4 eyepos; // the position of the eye, given by the program.
attribute vec3 a; // one vertex on an edge, having an x,y,z, and w coordinate.
attribute vec3 b; // the other edge vertex.
attribute vec3 n1; // the normal of the face the edge is on.
attribute vec3 n2; // another normal in the case that an edge shares two faces... otherwise, this is the same as n1.
attribute float w; // an attribute given to make a binary choice between two edges when they draw on top of one another.
void main()
{
// WORLD SPACE ATTRIBUTES //
vec4 eye_world = eyepos * worldMatrix;
vec4 a_world = vec4(a.x, a.y,a.z,1.0) * worldMatrix;
vec4 b_world = vec4(b.x, b.y,b.z,1.0) * worldMatrix;
vec4 n1_world = normalize(vec4(n1.x, n1.y,n1.z,0.0) * worldMatrix);
vec4 n2_world = normalize(vec4(n2.x, n2.y,n2.z,0.0) * worldMatrix);
// END WORLD SPACE ATTRIBUTES //
// TEST CASE ATTRIBUTES //
float a_vertex = dot(eye_world - a_world, n1_world);
float b_vertex = dot(eye_world - b_world, n2_world);
float normalDot = dot(n1_world.xyz, n2_world.xyz);
float vertProduct = a_vertex * b_vertex;
float hardness = 0.0; // this would be the value for an object made of sharp angles, like a box. Take a look at its use below.
// END TEST CASE ATTRIBUTES //
gl_Position = vec4(2.0,2.0,2.0,1.0); // if all else fails, keeping this here will discard unwanted data.
if (vertProduct >= 0.1) // NOTE: face is behind the viewable portion of the object, normally uses 0.0 when not checking for silhouette
{
gl_Position = vec4(2.0,2.0,2.0,1.0);
}
else if(vertProduct < 0.1 && vertProduct >= -0.1) // NOTE: face makes almost a right angle with the eye vector
{
if(w == 0.0)
{
vec4 p = vec4(a_world.x, a_world.y, a_world.z, 1.0);
p = p * viewProjMatrix;
gl_Position = p;
}
else
{
vec4 p = vec4(b_world.x, b_world.y, b_world.z, 1.0);
p = p * viewProjMatrix;
gl_Position = p;
}
}
else // NOTE: this is the case where you can very clearly see a face.
{ // NOTE: the number that normalDot compares to should be its "hardness" value. The more negative the value, the smoother the surface.
// a.k.a. the less we care about hard edges (when the normals of the faces make an obtuse angle) on the object, the more negative
// hardness becomes on a scale of 0.0 to -1.0.
if(normalDot <= hardness) // NOTE: the dot product of the two normals is obtuse, so we are looking at a sharp edge.
{
if(w == 0.0)
{
vec4 p = vec4(a_world.x, a_world.y, a_world.z, 1.0);
p = p * viewProjMatrix;
gl_Position = p;
}
else
{
vec4 p = vec4(b_world.x, b_world.y, b_world.z, 1.0);
p = p * viewProjMatrix;
gl_Position = p;
}
}
else // NOTE: not sharp enough, just throw the vertex away
{
gl_Position = vec4(2.0,2.0,2.0,1.0);
}
}
}