I am trying to follow the Ray Tracing in One Weekend tutorial and my normals do not look the way I expect them to.
float hit_sphere(Sphere sphere, Ray r){
    vec3 oc = r.origin - sphere.center;
    float a = dot(r.direction, r.direction);
    float b = 2.0 * dot(oc, r.direction);
    float c = dot(oc, oc) - sphere.radius * sphere.radius;
    float discriminant = b * b - 4 * a * c;
    if(discriminant > 0){
        return -1;
    }
    else{
        return (-b - sqrt(discriminant)) / (2.0 * a);
    }
}

vec3 at(Ray ray, float p){
    return ray.origin + p * ray.direction;
}

void main()
{
    vec3 camera_origin = vec3(0, 0, 2);
    vec2 st = gl_FragCoord.xy / vec2(x, y);
    Ray r = Ray(camera_origin, normalize(vec3(st.x - 0.5, st.y - 0.5, 1.0)));
    Sphere sphere = {vec3(0, 0, 0), 0.5};
    float p = hit_sphere(sphere, r);
    if(p < 0.0){
        vec3 N = normalize(at(r, p) - sphere.center);
        FragColor = vec4(N.x + 1, N.y + 1, N.z + 1, 1);
    }
    else{
        // FragColor = vec4(st.xy, 1.0, 1.0);
        FragColor = vec4(1 - st.y + 0.7, 1 - st.y + 0.7, 1 - st.y + 0.9, 1.0);
    }
}
Note that the + 1 in each normal color channel is there to make the color differences easier to spot.
This is how my normals look.
However, this is not how I expect these normals to be.
They should be something like this (not exactly like this, but close).
What mistake or overlooked problem is causing this?
Note: Moving the camera back and forth doesn't change the situation.
The expression
FragColor = vec4(N.x + 1, N.y + 1, N.z + 1, 1);
creates colors in the range [0, 2], but the color channels have to be in the range [0, 1]:
FragColor = vec4(N.xyz * 0.5 + 0.5, 1);
Note that you can also use abs to represent the normals:
FragColor = vec4(abs(N.xyz), 1);
or even scale by the reciprocal of the maximum color channel:
vec3 nv_color = abs(N.xyz);
nv_color /= max(nv_color.x, max(nv_color.y, nv_color.z));
FragColor = vec4(nv_color, 1.0);
You draw the normal vector when discriminant > 0, with p = -1. In effect you always calculate normalize(at(r, -1) - sphere.center). This is wrong, because p needs to be the distance from the ray origin to the point on the sphere where the ray hits it.
When the ray hits the sphere, p is >= 0, and only then do you want to draw the normal vector.
So instead of if (p < 0.0), the test has to be:
if (p >= 0.0) {
    vec3 N = normalize(at(r, p) - sphere.center);
    FragColor = vec4(N.xyz * 0.5 + 0.5, 1);
}
Furthermore, the test of the discriminant in hit_sphere is inverted as well: the ray misses the sphere when the discriminant is negative. So discriminant > 0 has to become:
if (discriminant < 0){
    return -1;
}
else {
    return (-b - sqrt(discriminant)) / (2.0 * a);
}
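Putting both fixes together, main might look like this (a sketch based on the code in the question, with x and y still assumed to be the viewport size; note that it also flips the ray's z direction so that the camera at z = 2 actually looks toward the sphere at the origin, which is an assumption beyond the answer above):

void main()
{
    vec3 camera_origin = vec3(0, 0, 2);
    vec2 st = gl_FragCoord.xy / vec2(x, y);
    // assumption: the camera looks down -z toward the sphere at the origin
    Ray r = Ray(camera_origin, normalize(vec3(st.x - 0.5, st.y - 0.5, -1.0)));
    Sphere sphere = Sphere(vec3(0, 0, 0), 0.5);
    float p = hit_sphere(sphere, r);
    if (p >= 0.0) {
        // the ray hits the sphere: visualize the normal mapped from [-1, 1] to [0, 1]
        vec3 N = normalize(at(r, p) - sphere.center);
        FragColor = vec4(N * 0.5 + 0.5, 1.0);
    } else {
        FragColor = vec4(1.0 - st.y + 0.7, 1.0 - st.y + 0.7, 1.0 - st.y + 0.9, 1.0);
    }
}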
I am working on an OpenGL ray tracer that can load OBJ files and ray-trace them. My application loads the OBJ file with Assimp and then sends all of the triangle faces (with the primitive coordinates and the material coefficients) to the fragment shader via shader storage buffer objects. The basic structure is to render the result from the fragment shader onto a quad.
I have trouble with the ray-tracing part in the fragment shader, but first let me introduce it.
For diffuse light, Lambert's cosine law is used; for specular light, the Phong-Blinn model. In the case of total reflection, a weight variable is used so that the reflected light also affects other objects. The weight is calculated by approximating the Fresnel equation with Schlick's method. In the image below, you can see that the plane works like a mirror, reflecting the image of the cube above it.
I would like to make the cube appear as a glass object (like a glass sphere), which has both refracting and reflecting effects, or at least refracts the light. In the image above you can see a refracting effect on the cube, but it is not as good as it should be. I searched for examples of how to implement it; so far I have only recognized that the Fresnel equation has to be used, just like in the reflection part.
Here is my fragment shader:
vec3 Fresnel(vec3 F0, float cosTheta) {
return F0 + (vec3(1, 1, 1) - F0) * pow(1-cosTheta, 5);
}
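// Schlick's approximation: F0 is derived from the refractive index Ni (relative to air), then the usual (1 - cosTheta)^5 falloff is applied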
float schlickApprox(float Ni, float cosTheta){
float F0=pow((1-Ni)/(1+Ni), 2);
return F0 + (1 - F0) * pow((1 - cosTheta), 5);
}
vec3 trace(Ray ray){
vec3 weight = vec3(1, 1, 1);
const float epsilon = 0.0001f;
vec3 outRadiance = vec3(0, 0, 0);
int maxdepth=5;
for (int i=0; i < maxdepth; i++){
Hit hit=traverseBvhTree(ray);
if (hit.t<0){ return weight * lights[0].La; }
vec4 textColor = texture(texture1, vec2(hit.u, hit.v));
Ray shadowRay;
shadowRay.orig = hit.orig + hit.normal * epsilon;
shadowRay.dir = normalize(lights[0].direction);
// Ambient Light
outRadiance+= materials[hit.mat].Ka.xyz * lights[0].La*textColor.xyz * weight;
// Diffuse light based on Lambert's cosine law
float cosTheta = dot(hit.normal, normalize(lights[0].direction));
if (cosTheta>0 && traverseBvhTree(shadowRay).t<0) {
outRadiance +=lights[0].La * materials[hit.mat].Kd.xyz * cosTheta * weight;
// Specular light based on Phong-Blinn model
vec3 halfway = normalize(-ray.dir + lights[0].direction);
float cosDelta = dot(hit.normal, halfway);
if (cosDelta > 0){
outRadiance +=weight * lights[0].Le * materials[hit.mat].Ks.xyz * pow(cosDelta, materials[hit.mat].shininess); }
}
float fresnel=schlickApprox(materials[hit.mat].Ni, cosTheta);
// For refractive materials
if (materials[hit.mat].Ni < 3)
{
/*this is the under contruction part.*/
ray.orig = hit.orig - hit.normal*epsilon;
ray.dir = refract(ray.dir, hit.normal, materials[hit.mat].Ni);
}
// If the refraction index is more than 15, treat the material as mirror.
else if (materials[hit.mat].Ni >= 15) {
weight *= fresnel;
ray.orig=hit.orig+hit.normal*epsilon;
ray.dir=reflect(ray.dir, hit.normal);
}
}
return outRadiance;
}
Update 1
I updated the trace method in the shader. As far as I understand the physics of light, if there is a material which both reflects and refracts the light, I have to treat the two cases accordingly.
In the case of reflection, I added a weight to the diffuse light calculation: weight *= fresnel.
In the case of refraction, the weight is weight *= 1 - fresnel.
Moreover, I calculated ray.orig and ray.dir for each of the two cases, and the refraction is only computed when it is not a case of total internal reflection (fresnel is smaller than 1).
The modified trace method:
vec3 trace(Ray ray){
vec3 weight = vec3(1, 1, 1);
const float epsilon = 0.0001f;
vec3 outRadiance = vec3(0, 0, 0);
int maxdepth=3;
for (int i=0; i < maxdepth; i++){
Hit hit=traverseBvhTree(ray);
if (hit.t<0){ return weight * lights[0].La; }
vec4 textColor = texture(texture1, vec2(hit.u, hit.v));
Ray shadowRay;
shadowRay.orig = hit.orig + hit.normal * epsilon;
shadowRay.dir = normalize(lights[0].direction);
// Ambient Light
outRadiance+= materials[hit.mat].Ka.xyz * lights[0].La*textColor.xyz * weight;
// Diffuse light based on Lambert's cosine law
float cosTheta = dot(hit.normal, normalize(lights[0].direction));
if (cosTheta>0 && traverseBvhTree(shadowRay).t<0) {
outRadiance +=lights[0].La * materials[hit.mat].Kd.xyz * cosTheta * weight;
// Specular light based on Phong-Blinn model
vec3 halfway = normalize(-ray.dir + lights[0].direction);
float cosDelta = dot(hit.normal, halfway);
if (cosDelta > 0){
outRadiance +=weight * lights[0].Le * materials[hit.mat].Ks.xyz * pow(cosDelta, materials[hit.mat].shininess); }
}
float fresnel=schlickApprox(materials[hit.mat].Ni, cosTheta);
// For refractive/reflective materials
if (materials[hit.mat].Ni < 7)
{
bool outside = dot(ray.dir, hit.normal) < 0;
// compute refraction if it is not a case of total internal reflection
if (fresnel < 1) {
ray.orig = outside ? hit.orig-hit.normal*epsilon : hit.orig+hit.normal*epsilon;
ray.dir = refract(ray.dir, hit.normal,materials[hit.mat].Ni);
weight *= 1-fresnel;
continue;
}
// compute reflection
ray.orig= outside ? hit.orig+hit.normal*epsilon : hit.orig-hit.normal*epsilon;
ray.dir= reflect(ray.dir, hit.normal);
weight *= fresnel;
continue;
}
// If the refraction index is 7 or more, treat the material as a mirror: total reflection
else if (materials[hit.mat].Ni >= 7) {
weight *= fresnel;
ray.orig=hit.orig+hit.normal*epsilon;
ray.dir=reflect(ray.dir, hit.normal);
}
}
return outRadiance;
}
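One detail about the refraction branch above: GLSL's refract() expects eta, the ratio of the indices of refraction across the interface, not the absolute index Ni. A minimal sketch of how the ratio could be chosen depending on whether the ray enters or leaves the object (assuming Ni is the material's absolute index and the surrounding medium is air):

bool outside = dot(ray.dir, hit.normal) < 0.0;
float eta = outside ? 1.0 / materials[hit.mat].Ni   // entering: air -> material
                    : materials[hit.mat].Ni;        // leaving:  material -> air
vec3 n = outside ? hit.normal : -hit.normal;        // refract() wants the normal facing the incident ray
vec3 refractedDir = refract(ray.dir, n, eta);       // returns vec3(0.0) on total internal reflection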
Here is a snapshot of the result after the update. Slightly better, I guess.
Update 2:
I found an iterative algorithm that uses a stack to trace refracted and reflected rays in OpenGL; it is on page 68.
I modified my fragment shader according to it. It is almost fine, except for the back faces, which are completely black. Some pictures are attached.
Here is the trace method of my fragment shader:
vec3 trace(Ray ray){
vec3 color;
float epsilon=0.001;
Stack stack[8];// max depth
int stackSize = 0;// current depth
int bounceCount = 0;
vec3 coeff = vec3(1, 1, 1);
bool continueLoop = true;
while (continueLoop){
Hit hit = traverseBvhTree(ray);
if (hit.t>0){
bounceCount++;
//----------------------------------------------------------------------------------------------------------------
Ray shadowRay;
shadowRay.orig = hit.orig + hit.normal * epsilon;
shadowRay.dir = normalize(lights[0].direction);
color+= materials[hit.mat].Ka.xyz * lights[0].La * coeff;
// Diffuse light
float cosTheta = dot(hit.normal, normalize(lights[0].direction)); // based on Lambert's cosine law
if (cosTheta>0 && traverseBvhTree(shadowRay).t<0) {
color +=lights[0].La * materials[hit.mat].Kd.xyz * cosTheta * coeff;
vec3 halfway = normalize(-ray.dir + lights[0].direction);
float cosDelta = dot(hit.normal, halfway);
// Specular light
if (cosDelta > 0){
color +=coeff * lights[0].Le * materials[hit.mat].Ks.xyz * pow(cosDelta, materials[hit.mat].shininess); }
}
//---------------------------------------------------------------------------------------------------------------
if (materials[hit.mat].indicator > 3.0 && bounceCount <=2){
float eta = 1.0/materials[hit.mat].Ni;
Ray refractedRay;
refractedRay.dir = dot(ray.dir, hit.normal) <= 0.0 ? refract(ray.dir, hit.normal, eta) : refract(ray.dir, -hit.normal, 1.0/eta);
bool totalInternalReflection = length(refractedRay.dir) < epsilon;
if(!totalInternalReflection){
refractedRay.orig = hit.orig + hit.normal*epsilon*sign(dot(ray.dir, hit.normal));
refractedRay.dir = normalize(refractedRay.dir);
stack[stackSize].coeff = coeff *(1 - schlickApprox(materials[hit.mat].Ni, dot(ray.dir, hit.normal)));
stack[stackSize].depth = bounceCount;
stack[stackSize++].ray = refractedRay;
}
else{
ray.dir = reflect(ray.dir, -hit.normal);
ray.orig = hit.orig - hit.normal*epsilon;
}
}
else if (materials[hit.mat].indicator == 0){
coeff *= schlickApprox(materials[hit.mat].Ni, dot(-ray.dir, hit.normal));
ray.orig=hit.orig+hit.normal*epsilon;
ray.dir=reflect(ray.dir, hit.normal);
}
else { //Diffuse Material
continueLoop=false;
}
}
else {
color+= coeff * lights[0].La;
continueLoop=false;
}
if (!continueLoop && stackSize > 0){
ray = stack[stackSize--].ray;
bounceCount = stack[stackSize].depth;
coeff = stack[stackSize].coeff;
continueLoop = true;
}
}
return color;
}
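For reference, the Stack element type is not shown in the snippet; judging from how it is accessed, it is presumably something along these lines (a sketch, with field names taken from the usage above):

struct Stack {
    vec3 coeff;  // accumulated attenuation for the deferred ray
    int depth;   // bounce count at the moment the ray was pushed
    Ray ray;     // the deferred (refracted) ray to be traced later
};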
I am trying to write a real-time ray tracer using compute shaders in OpenGL 4.3. I know that this is a rather popular question.
I have checked this and this, but the architecture described there does not really correspond to my use case.
I am simply trying to transform the ray_color function from P. Shirley's book (here) into a non-recursive function.
The mentioned ray_color function:
color ray_color(const ray& r, const hittable& world, int depth) {
hit_record rec;
// If we've exceeded the ray bounce limit, no more light is gathered.
if (depth <= 0)
return color(0,0,0);
if (world.hit(r, 0.001, infinity, rec)) {
point3 target = rec.p + rec.normal + random_unit_vector();
return 0.5 * ray_color(ray(rec.p, target - rec.p), world, depth-1);
}
vec3 unit_direction = unit_vector(r.direction());
auto t = 0.5*(unit_direction.y() + 1.0);
return (1.0-t)*color(1.0, 1.0, 1.0) + t*color(0.5, 0.7, 1.0);
}
Here is my attempt:
vec3 no_hit_color(in Ray r) {
vec3 dir = normalize(r.direction);
float temp = 0.5 * (dir.y + 1.0);
vec3 cval = vec3(1.0 - temp) + temp * vec3(0.5, 0.7, 1.0);
return cval;
}
vec3 ray_color(in Ray r, in Scene scene, int depth) {
//
Ray r_in;
r_in.origin = r.origin;
r_in.direction = r.direction;
vec3 bcolor = vec3(1);
while (true) {
Ray r_out;
if (depth <= 0) {
//
return vec3(0);
}
HitRecord rec;
if (hit_scene(scene, r_in, 0.001, INFINITY, rec)) {
vec3 target = rec.point + random_in_hemisphere(rec.normal);
r_in = makeRay(rec.point, target - rec.point);
depth--;
bcolor *= 0.5;
} else {
bcolor *= no_hit_color(r_in);
return bcolor;
}
}
}
If I use a static value for depth, with something like #define MAX_DEPTH, I think I could implement the algorithm by making my own stack, but I would like to keep the depth as a dynamic variable that users can tweak according to their computing power.
So I would like to implement it with a while loop if possible.
My version produces a black slice near the bottom of the sphere, which does not correspond to the reference picture.
Update 1:
I am somewhat convinced that the implementation above is correct, but that the camera position from which I am generating the rays is problematic.
I have since confirmed that the implementation is indeed correct. Here are a GLSL version and a C++ version for future reference.
It should give you a direction for implementing more complex stuff later on.
// glsl version
vec3 ray_color(in Ray r, in Scene scene, int depth) {
//
Ray r_in;
r_in.origin = r.origin;
r_in.direction = r.direction;
vec3 bcolor = vec3(1);
while (true) {
if (depth <= 0) {
//
return vec3(0);
// return bcolor;
}
HitRecord rec;
if (hit_scene(scene, r_in, 0.001, INFINITY, rec)) {
vec3 target = rec.point + random_in_hemisphere(rec.normal);
r_in = makeRay(rec.point, target - rec.point);
depth--;
bcolor *= 0.5;
} else {
vec3 dir = normalize(r_in.direction);
float temp = 0.5 * (dir.y + 1.0);
bcolor *= vec3(1.0 - temp) + temp * vec3(0.5, 0.7, 1.0);
return bcolor;
}
}
}
// cpp version
color ray_color2(const Ray &r, const HittableList &scene, int depth) {
Ray r_in = Ray(r.origin, r.direction);
color rcolor = color(1);
while (true) {
HitRecord record;
if (depth <= 0) {
// final case
return color(0);
}
if (scene.hit(r_in, 0.001, INF, record)) {
// recursive case
point3 target = record.point + random_in_hemisphere(record.normal);
r_in = Ray(record.point, target - record.point);
depth--;
rcolor *= 0.5;
} else {
vec3 direction = to_unit(r_in.direction);
double temp = 0.5 * (direction.y + 1.0);
rcolor *= (1.0 - temp) * color(1.0) + temp * color(0.5, 0.7, 1.0);
return rcolor;
}
}
}
Basically, as long as the contribution of the rays can be modeled with a linear operator, it should be possible to implement the function with a while loop. Notice that the function does not use a call stack, so it can be used where the maximum bounce count (or maximum depth, whichever term you prefer) is dynamic.
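Concretely, after k bounces the loop above has computed bcolor = 0.5 * 0.5 * ... * 0.5 * no_hit_color(r_k) = 0.5^k * no_hit_color(r_k) (or vec3(0) if the depth budget runs out first), which is exactly what the recursive ray_color returns; the constant 0.5 could be replaced by any per-bounce attenuation factor without changing the structure of the loop.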
I'm writing a simple ray tracer with lighting using the Phong illumination model. But the problem is that part of the sphere displays a completely different color. For example, the sphere should be only green in this image.
I tried reducing the light intensity, and then it somehow displays correctly, like this.
This is the code for the primary rays:
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
Ray ray(gCamera);
float x = iX + j * pSize;
float y = iY - i * pSize;
ray.v = vec3(x * scale, y * scale, 0) - gCamera;
gPixels[i][j] = trace(ray);
}
}
And this is the code for the intersection (testing with a sphere at the origin, without any transformation):
double findIntersection(const Ray& ray) {
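// Transform the ray into the sphere's object space (a unit sphere at the origin) and solve the quadratic a*t^2 + b*t + c = 0 for the nearest non-negative t.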
dvec3 u = mXfmInverse * dvec4(ray.u, 1.0);
dvec3 v = mXfmInverse * dvec4(ray.v, 0.0);
double a = glm::dot(v, v);
double b = 2 * glm::dot(u, v);
double c = glm::dot(u, u) - 1;
double delta = b * b - 4 * a * c;
if (delta < 0) return -1;
double root = sqrt(delta);
double t0 = 0.5 * (-b - root) / a;
if (t0 >= 0) return t0;
double t1 = 0.5 * (-b + root) / a;
return t1 >= 0 ? t1 : -1;
}
and this is the calculation of the Phong illumination:
Material material = ray.sphere->getMaterial();
// diffuse
dvec3 center = ray.sphere->getXfm() * vec4(0, 0, 0, 1);
dvec3 normal = glm::normalize(hitPoint - center);
dvec3 lightDir = glm::normalize(light.position - hitPoint);
double lambertian = max(glm::dot(normal, lightDir), 0.0);
// specular
double specular = 0;
if (lambertian > 0) {
dvec3 viewDir = glm::normalize(-ray.v);
dvec3 reflectDir = glm::reflect(-lightDir, normal);
specular = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
}
dvec3 color = lambertian * material.diffuse + specular * material.specular;
return color * light.color;
}
I'm trying to render a Mandelbrot Set using GLSL, but all I get is a full circle...
I have checked the maths a lot, and I simply cannot find the error, so I thought maybe the problem was semantic.
Is anyone able to see what's wrong?
Also, could anyone give me insights on organization, structure, etc? I'm trying to learn proper coding, but it's hard to find material on styling.
Note: the shader can be applied over any image.
The idea is simple (you may skip this):
checkConvergence returns true if z has not diverged (i.e., abs(z)² < 4);
sumSquare returns the complex product (z.r, z.i)*(z.r, z.i) - (c.r, c.i), where K.r = Re(K) and K.i = Im(K);
iterate is the actual algorithm: it keeps squaring z until it diverges or a maximum number of iterations (tol) is reached;
clr is a parameterized function which returns an RGB color depending on n;
finally, effect is where the magic should happen, by doing 2 things:
It takes the coordinates of the pixel (GLSL normalizes them to the interval (0, 1)) and maps them to the Mandelbrot set range (-2, 2);
It calls clr on the result of iterate applied to those coordinates.
GLSLShader = love.graphics.newShader[[
vec2 c = vec2(1.0, 0.0);
int tol = 100;
vec4 black = vec4(0.0, 0.0, 0.0, 1.0);
vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
bool checkConvergence(vec2 z) {
return (z[0]*z[0] + z[1]*z[1] < 4);
}
vec2 sumSquare(vec2 z) {
return vec2(z[0]*z[0] - z[1]*z[1] - c[0], 2 * z[0] * z[1] - c[1]);
}
int iterate(vec2 z) {
int n = 0;
while (checkConvergence(z) && (n < tol)) {
vec2 z = sumSquare(z);
n = n + 1;
}
return n;
}
vec4 clr(int n){
if(n == tol){return vec4(0.0,0.0,0.0,1.0);}
int r, g, b;
r = int(255*n + 47*(1-n));
g = int(180*n + 136*(1-n));
b = int(38*n + 255*(1-n));
if (r > 255) {r = 255;}
else{
if(r<0){r = 0;}
}
if (g > 255) {g = 255;}
else{
if(g<0){g = 0;}
}
if (b > 255) {b = 255;}
else{
if(b<0){b = 0;}
}
return vec4(r, g, b, 1.0);
}
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 screen_coords){
vec2 z = vec2(texture_coords.x*4-2, texture_coords.y*4-2);
return clr(iterate(z));
}
]]
UPDATE
I've tried some suggestions from @WeatherVane:
Add C instead of subtracting it;
Start z as (0.0, 0.0) and pass the starting point of the iteration as C;
But all to no avail; I still get a circle. I also tried using more iterations.
I tried to simplify the coding without changing much.
I added a condition to the clr function, which returns green if n is smaller than 0 or greater than 1. The strange thing is that the circle is black and the rest of the screen is white, but I don't see where this white can come from (since clr never returns white for 0 < n < 1).
vec2 c = vec2(0.0, 0.0);
int tol = 1000;
vec4 black = vec4(0.0, 0.0, 0.0, 1.0);
vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
vec4 green = vec4(0.0, 1.0, 0.0, 1.0);
int iterate(vec2 z) {
int n = 0;
while ( (z[0]*z[0] + z[1]*z[1] < 4) && (n < tol) ) {
vec2 z = vec2( z[0]*z[0] - z[1]*z[1] + c[0], 2 * z[0] * z[1] + c[1] );
n = n + 1;
}
return n;
}
vec4 clr(int n){
n = n / tol;
if(n == 1){return black;}
if(n > 1 || n < 0){return green;}
int r, g, b;
r = int(255*n + 47*(1-n));
g = int(180*n + 136*(1-n));
b = int(38*n + 255*(1-n));
return vec4(r, g, b, 1.0);
}
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 screen_coords){
vec2 z = vec2(texture_coords.x*4-2, texture_coords.y*4-2);
return clr(iterate(z));
}
The starting point of the iteration should be passed in as the vector c, while z starts from (0.0, 0.0).
As you can find at https://en.wikipedia.org/wiki/Mandelbrot_set:
a complex number c is part of the Mandelbrot set if, when starting
with z = 0 and applying the iteration z = z² + c repeatedly, the
absolute value of z remains bounded however large n gets.
You are using vec4 for colors, but you are computing the RGB components with integer values instead of floats. You should cast to float and normalize each component to the (0.0, 1.0) range. I tried to correct your code, but I'm afraid I don't really know Lua or love2d and I wasn't able to use texture_coords, so I used screen_coords. The best I could do is this:
function love.load()
GLSLShader = love.graphics.newShader[[
vec4 black = vec4(0.0, 0.0, 0.0, 1.0);
vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
int max_iter = 1024;
vec4 clr(int n){
if(n == max_iter){return black;}
float m = float(n)/float(max_iter);
float r = float(mod(n,256))/32;
float g = float(128 - mod(n+64,127))/255;
float b = float(127 + mod(n,64))/255;
if (r > 1.0) {r = 1.0;}
else{
if(r<0){r = 0;}
}
if (g > 1.0) {g = 1.0;}
else{
if(g<0){g = 0;}
}
if (b > 1.0) {b = 1.0;}
else{
if(b<0){b = 0;}
}
return vec4(r, g, b, 1.0);
}
vec4 effect( vec4 color, Image texture, vec2 texture_coords, vec2 screen_coords){
vec2 c = vec2((screen_coords[0]-500)/200,(screen_coords[1]-300)/200);
vec2 z = vec2(0.0,0.0);
vec2 zn = vec2(0.0,0.0);
int n_iter = 0;
while ( (z[0]*z[0] + z[1]*z[1] < 4) && (n_iter < max_iter) ) {
zn[0] = z[0]*z[0] - z[1]*z[1] + c[0];
zn[1] = 2*z[0]*z[1] + c[1];
z[0] = zn[0];
z[1] = zn[1];
n_iter++;
}
return clr(n_iter);
}
]]
end
function love.draw()
love.graphics.setShader(GLSLShader)
love.graphics.rectangle('fill', 0,0,800,600)
love.graphics.setShader()
end
Which gave me this output:
while ( ... ) {
    vec2 z = ...
}
Get rid of the vec2: you're declaring a new z on each iteration instead of updating the outer one, so the z tested by the loop condition never changes. Besides that, follow some of @Bob__'s advice (i.e. pass the point in as iterate(vec2 c)).
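Putting those two points together, the iteration might look like this (a sketch based on the question's code, reusing its global tol; z starts at (0, 0) and the pixel coordinate is passed in as c):

int iterate(vec2 c) {
    vec2 z = vec2(0.0, 0.0);
    int n = 0;
    while ((z.x * z.x + z.y * z.y < 4.0) && (n < tol)) {
        // update the existing z instead of declaring a new one
        z = vec2(z.x * z.x - z.y * z.y + c.x, 2.0 * z.x * z.y + c.y);
        n = n + 1;
    }
    return n;
}

with effect then calling clr(iterate(texture_coords * 4.0 - 2.0)).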
Over the past ~2-3 weeks, I've been learning about physically based shading, and I just cannot wrap my head around some of the problems I'm having.
Fragment Shader
#version 430
#define PI 3.14159265358979323846
// Inputs
in vec3 inputNormal;
vec3 fNormal;
// Material
float reflectance = 1.0; // 0 to 1
float roughness = 0.5;
vec3 specularColor = vec3(1.0, 1.0, 1.0); // f0
// Values
vec3 lightVector = vec3(1, 1, 1); // Light (l)
vec3 eyeVector = vec3(2.75, 1.25, 1.25); // Camera (v)
vec3 halfVector = normalize(lightVector + eyeVector); // L + V / |L + V|
out vec4 fColor; // Output Color
// Specular Functions
vec3 D(vec3 h) // Normal Distribution Function - GGX/Trowbridge-Reitz
{
float alpha = roughness * roughness;
float alpha2 = alpha * alpha;
float NoH = dot(fNormal, h);
float finalTerm = ((NoH * NoH) * (alpha2 - 1.0) + 1.0);
return vec3(alpha2 / (PI * (finalTerm * finalTerm)));
}
vec3 Gsub(vec3 v) // Sub Function of G
{
float k = ((roughness + 1.0) * (roughness + 1.0)) / 8;
return vec3(dot(fNormal, v) / ((dot(fNormal, v)) * (1.0 - k) + k));
}
vec3 G(vec3 l, vec3 v, vec3 h) // Geometric Attenuation Term - Schlick Modified (k = a/2)
{
return Gsub(l) * Gsub(v);
}
vec3 F(vec3 v, vec3 h) // Fresnel - Schlick Modified (Spherical Gaussian Approximation)
{
vec3 f0 = specularColor; // right?
return f0 + (1.0 - f0) * pow(2, (-5.55473 * (dot(v, h)) - 6.98316) * (dot(v, h)));
}
vec3 specular()
{
return (D(halfVector) * F(eyeVector, halfVector) * G(lightVector, eyeVector, halfVector)) / 4 * ((dot(fNormal, lightVector)) * (dot(fNormal, eyeVector)));
}
vec3 diffuse()
{
float NoL = dot(fNormal, lightVector);
vec3 result = vec3(reflectance / PI);
return result * NoL;
}
void main()
{
fNormal = normalize(inputNormal);
fColor = vec4(diffuse() + specular(), 1.0);
//fColor = vec4(D(halfVector), 1.0);
}
So far I have been able to fix up some things and now I get a better result.
However, it now seems clear that the highlight is way too big; this originates from the normal distribution function (Specular D).
Your coding of GGX/Trowbridge-Reitz is wrong:
vec3 NxH = fNormal * h;
The star * is a component-wise (term-by-term) product, where you want a dot product.
Also
float alphaTerm = (alpha * alpha - 1.0) + 1.0;
is not correct, since the formula multiplies (n·m)² by (alpha * alpha - 1.0) before adding 1.0. Your expression is simply equal to alpha * alpha!
Try:
// Specular
vec3 D(vec3 h) // Normal Distribution Function - GGX/Trowbridge-Reitz
{
float alpha = roughness * roughness;
float NxH = dot(fNormal,h);
float alpha2 = alpha*alpha;
float t = ((NxH * NxH) * (alpha2 - 1.0) + 1.0);
return vec3(alpha2 / (PI * t * t));
}
In many other places you use * where you need dot; you need to correct all of these. Also, check your formulas; many seem incorrect.
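As a concrete example of the kind of formula problem to look for (a sketch going beyond the answer above): in specular(), the trailing / 4 * ((dot(fNormal, lightVector)) * (dot(fNormal, eyeVector))) multiplies by the dot products because of operator precedence, whereas the Cook-Torrance denominator is 4 (n·l)(n·v):

vec3 specular()
{
    float NoL = max(dot(fNormal, lightVector), 0.0);
    float NoV = max(dot(fNormal, eyeVector), 0.0);
    // D * F * G divided by 4 (n.l)(n.v); the small bias avoids division by zero
    return (D(halfVector) * F(eyeVector, halfVector) * G(lightVector, eyeVector, halfVector))
           / (4.0 * NoL * NoV + 0.0001);
}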