I have created a shader that works perfectly in Firefox, but in Chrome the fragment and vertex shaders cannot be linked. They compile just fine, but something goes wrong at the linking stage. I have localized the problem to the following bit of code:
else if (uLightType[i] == 1) { //point light
NdotL = dot(n, normalize(lightDir[i]));
if (NdotL > 0.0) {
distance = length(lightDir[i]);
att = (1.0 / (uLightAttenuation[i] * distance * distance));
color += vec3(uLightColor[i] * NdotL * uLightIntensity[i] * att);
}
}
This small piece of code calculates the diffuse color reflected from a point light. It's part of a larger for loop. As shown here it won't link at all, but if I remove uLightAttenuation from the att calculation, like so:
att = (1.0 / (distance * distance));
it works just fine. If I replace it with any other uniform, say uLightIntensity,
att = (1.0 / (uLightIntensity[i] * distance * distance));
again it won't work. If I replace it with a simple constant value or a float variable, strangely enough it compiles. And what is even stranger, if I remove att from the color calculation but keep the uniform where it is, it runs just fine:
att = (1.0 / (uLightAttenuation[i] * distance * distance));
color += vec3(uLightColor[i] * NdotL * uLightIntensity[i]);
The uniform is a float value, and even if it were a problem with type casting it should fail at compilation, not linking.
Here are the complete shaders, maybe I missed something elsewhere in the code.
Fragment Shader
Vertex Shader
I have managed to make it work; it turns out I had two problems. One was a division by zero when calculating att: it wouldn't let me divide by a float uniform there, so I combined uLightAttenuation and uLightIntensity into a single vec2 uniform, and after that that part worked. Secondly, when calculating color I had to reference every component individually (color[0], color[1], etc.) and work only with float variables rather than vectors. After that it worked correctly in Chrome too.
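A minimal sketch of that workaround, with hypothetical names (MAX_LIGHTS and the combined uLightAttIntensity uniform are mine, not from the original shader):

const int MAX_LIGHTS = 4; // hypothetical light count
uniform vec2 uLightAttIntensity[MAX_LIGHTS]; // x = attenuation factor, y = intensity

// ... inside the per-light loop, in the point-light branch:
float dist = length(lightDir[i]);
float att = 1.0 / (uLightAttIntensity[i].x * dist * dist);
float factor = NdotL * uLightAttIntensity[i].y * att;
// accumulate component by component instead of constructing a vec3
color[0] += uLightColor[i][0] * factor;
color[1] += uLightColor[i][1] * factor;
color[2] += uLightColor[i][2] * factor;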
I am using SSAO very nearly as per John Chapman's tutorial here, in fact using Sascha Willems' Vulkan example.
One difference is that the fragment position is saved directly to the G-buffer along with its linear depth (so there are x, y, z, and w components, w being the linear depth calculated in the G-buffer shader). Depth is calculated like this:
// Converts a depth value into linear view-space depth using the near/far planes.
float linearDepth(float depth)
{
return (2.0f * ubo.nearPlane * ubo.farPlane) / (ubo.farPlane + ubo.nearPlane - depth * (ubo.farPlane - ubo.nearPlane));
}
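For context, a minimal sketch of how the position and this linearised depth end up in that w channel (the attachment layout and the inViewPos input are assumptions, not the actual G-buffer shader):

layout (location = 0) in vec3 inViewPos; // assumed: view-space fragment position from the vertex shader
layout (location = 1) out vec4 outPositionDepth; // assumed: G-buffer attachment for position + linear depth

void main()
{
    // w carries the linearised depth that the SSAO pass samples later
    outPositionDepth = vec4(inViewPos, linearDepth(gl_FragCoord.z));
}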
My scene typically consists of a large, flat floor with a model in the centre. By large I mean a lot bigger than the far clip distance.
At high depth values (i.e. at the horizon in my example), the SSAO is generating occlusion where there should really be none - there's nothing out there except a completely flat surface.
Along with that occlusion, there comes some banding as well.
Any ideas for how to prevent these occlusions occurring?
I found this solution while I was writing the question, which works only because I have a flat floor.
I look up the normal value at each kernel sample position, and compare to the current normal, discarding any with a dot product that is close to 1. This means flat planes can't self-occlude.
Any comments on why I shouldn't do this, or better alternatives, would be very welcome!
It works for my current situation but if I happened to have non-flat geometry on the floor I'd be looking for a different solution.
vec3 normal = normalize(texture(samplerNormal, newUV).rgb * 2.0 - 1.0);
<snip>
for(int i = 0; i < SSAO_KERNEL_SIZE; i++)
{
<snip>
float sampleDepth = -texture(samplerPositionDepth, offset.xy).w;
vec3 sampleNormal = normalize(texture(samplerNormal, offset.xy).rgb * 2.0 - 1.0);
if(dot(sampleNormal, normal) > 0.99)
continue;
I'm working on a WebGL project to create isolines on a 3D surface, on macOS with an AMD GPU. My idea is to colour the pixels based on elevation in the fragment shader. With some optimizations I can achieve a relatively consistent line width, and I am happy with that. However, when I tested it on Windows it behaved differently.
Then I figured out it's because of fwidth(). I use fwidth() to prevent the fragment shader from colouring a whole horizontal plane when it happens to sit exactly at an isolevel. Please see the screenshot:
I solved this issue by adding the following GLSL line:
if (fwidth(vPositionZ) < 0.001) { /**then do not colour isoline on these pixels**/ };
It works very well on macOS since I got this:
However, on Windows with an Nvidia GPU all isolines are gone, because fwidth(vPositionZ) always evaluates to 0.0, which doesn't make sense to me.
What am I doing wrong? Is there any better way to solve the issue presented in the first screenshot? Thank you all!
EDIT:
Here I attach my fragment shader. It's simplified, but I think it contains everything relevant. I know the looping is slow, but for now I'm not worried about it.
uniform float zmin; // min elevation
uniform vec3 lineColor;
varying float vPositionZ; // elevation value for each vertex
float interval; // isoline spacing, assumed to be set elsewhere in the full shader
vec3 originColor = finalColor.rgb; // original surface color (finalColor is computed earlier in the full shader)
for ( int i = 0; i < COUNT; i ++ ) {
float elevation = zmin + float( i + 1 ) * interval;
lineColor = mix( originColor, lineColor, step( 0.001, fwidth(vPositionZ)));
if ( vPositionZ <= elevation + lineWidth && vPositionZ >= elevation - lineWidth ) {
finalColor.rgb = lineColor;
}
// same thing but without condition:
// finalColor.rgb = mix( mix( originColor, lineColor, step(elevation - lineWidth, vPositionZ) ),
// originColor,
// step(elevation + lineWidth, vPositionZ) );
}
gl_FragColor = finalColor;
Environment: WebGL 2.0, GLSL ES 3.00, Chrome browser.
Putting fwidth(vPositionZ) before the loop works. Otherwise, fwidth() evaluates to 0 for anything computed inside the loop.
I suspect this is a bug with the Nvidia GPU.
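A minimal sketch of that change against the simplified shader above (zSlope and isoColor are names I introduced; the blended colour also goes into a local so the lineColor uniform is not written to):

// derivative taken once, outside the loop, in uniform control flow
float zSlope = fwidth( vPositionZ );
vec3 isoColor = mix( originColor, lineColor, step( 0.001, zSlope ) );
for ( int i = 0; i < COUNT; i ++ ) {
    float elevation = zmin + float( i + 1 ) * interval;
    if ( vPositionZ <= elevation + lineWidth && vPositionZ >= elevation - lineWidth ) {
        finalColor.rgb = isoColor;
    }
}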
I am currently working on a raytracer just for fun and I have trouble with the refraction handling.
The source code of the whole raytracer can be found on GitHub. EDIT: the code has migrated to GitLab.
Here is an image of the render:
The right sphere is set to have a refraction index of 1.5 (glass).
On top of the refraction, I want to handle a "transparency" coefficient, defined as follows:
0 --> Object is 100% opaque
1 --> Object is 100% transparent (no trace of the original object's color)
This sphere has a transparency of 1.
Here is the code handling the refraction part. It can be found on GitHub here.
Color handleTransparency(const Scene& scene,
const Ray& ray,
const IntersectionData& data,
uint8 depth)
{
Ray refracted(RayType::Transparency, data.point, ray.getDirection());
Float_t eta = data.material->getRefraction();
if (eta != 1 && eta > Globals::Epsilon)
refracted.setDirection(Tools::Refract(ray.getDirection(), data.normal, eta));
refracted.setOrigin(data.point + Globals::Epsilon * refracted.getDirection());
return inter(scene, refracted, depth + 1);
}
// http://graphics.stanford.edu/courses/cs148-10-summer/docs/2006--degreve--reflection_refraction.pdf
Float_t getFresnelReflectance(const IntersectionData& data, const Ray& ray)
{
Float_t n = data.material->getRefraction();
Float_t cosI = -Tools::DotProduct(ray.getDirection(), data.normal);
Float_t sin2T = n * n * (Float_t(1.0) - cosI * cosI);
if (sin2T > 1.0)
return 1.0;
using std::sqrt;
Float_t cosT = sqrt(1.0 - sin2T);
Float_t rPer = (n * cosI - cosT) / (n * cosI + cosT);
Float_t rPar = (cosI - n * cosT) / (cosI + n * cosT);
return (rPer * rPer + rPar * rPar) / Float_t(2.0);
}
Color handleReflectionAndRefraction(const Scene& scene,
const Ray& ray,
const IntersectionData& data,
uint8 depth)
{
bool hasReflexion = data.material->getReflexion() > Globals::Epsilon;
bool hasTransparency = data.material->getTransparency() > Globals::Epsilon;
if (!(hasReflexion || hasTransparency) || depth >= MAX_DEPTH)
return 0;
Float_t reflectance = data.material->getReflexion();
Float_t transmittance = data.material->getTransparency();
Color reflexion;
Color transparency;
if (hasReflexion && hasTransparency)
{
reflectance = getFresnelReflectance(data, ray);
transmittance = 1.0 - reflectance;
}
if (hasReflexion)
reflexion = handleReflection(scene, ray, data, depth) * reflectance;
if (hasTransparency)
transparency = handleTransparency(scene, ray, data, depth) * transmittance;
return reflexion + transparency;
}
Tools::Refract simply calls glm::refract internally (so that I can change it easily if I want).
I don't handle separate n1 and n2 values: n2 is always taken to be 1, for air.
Am I missing something obvious?
EDIT
After adding a way to know if a ray is inside an object (and negating the normal if so), I have this:
While looking around for help, I stumbled upon this post, but I don't think the answer there resolves anything; reading it, I don't understand what I'm supposed to do at all.
EDIT 2
I've tried a lot of things and I am currently at this point:
It's better, but I'm still not sure it's right. I'm using this image as inspiration:
But that one uses two indices of refraction (to be closer to reality), while I want to simplify and always consider air as the second (incoming or outgoing) material.
What I essentially changed in my code is here:
inline Vec_t Refract(Vec_t v, const IntersectionData& data, Float_t eta)
{
Float_t n = eta;
if (data.isInside)
n = 1.0 / n;
double cosI = Tools::DotProduct(v, data.normal);
return v * n - data.normal * (-cosI + n * cosI);
}
Here is another view of the same spheres :
EDIT: I've realised that the previous version of this answer was not entirely correct, so I have edited it.
After reading all the comments and the new versions of the question, and doing some experimentation myself, I produced the following version of the refract routine:
float3 refract(float3 i, float3 n, float eta)
{
eta = 2.0f - eta;
float cosi = dot(n, i);
float3 o = (i * eta - n * (-cosi + eta * cosi));
return o;
}
This time calling it does not require any additional operations:
float3 refr = refract(rayDirection, normal, refrIdx);
The only thing I am still not sure about is inverting the refractive index when handling the inside ray intersection. In my tests the produced images didn't differ much whether or not I inverted the index.
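For reference, the convention I've usually seen (a GLSL-style sketch with names of my own, not tested against the code above) is to flip the normal and invert the ratio when the ray starts inside the object:

// eta is n_outside / n_inside (e.g. 1.0 / 1.5 when entering glass from air)
vec3 refractDir(vec3 i, vec3 n, float eta, bool inside)
{
    if (inside) { // the ray is leaving the object: flip the normal, invert the ratio
        n = -n;
        eta = 1.0 / eta;
    }
    // the builtin returns vec3(0.0) on total internal reflection
    return refract(i, n, eta);
}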
Below are some images with different indices:
For more images see the link, because the site does not allow me to put more of them here.
I am answering this as a physicist rather than a programmer; I haven't had time to read all the code, so I won't be giving the code for the fix, just the general idea.
From what you have said above, the black ring appears when n_object is less than n_air. That is usually only true if you are inside an object, say underwater or the like, but materials have been constructed with weird properties like that, so it should be supported.
In this type of situation there are rays of light that cannot be refracted, because the refraction formula would put the refracted ray on the SAME side of the interface between the materials, which obviously doesn't make sense as refraction. In that situation the surface instead acts as a reflective surface. This is what is commonly referred to as total internal reflection.
To be fully exact, almost every refractive object is also partially reflective, and the fraction of light that is reflected or transmitted (and therefore refracted) is given by the Fresnel equations. For this case, though, it would still be a good approximation to treat the surface as purely reflective beyond the critical angle and as transmitting (and therefore refracting) otherwise.
There are also situations where this black ring effect can be seen even when reflection contributes nothing (because it is dark in those directions) but transmitted light is still possible. You could set this up by, say, taking a tube of card that fits tightly around the edge of the object, pointing it directly away, and shining light only inside the tube, not outside.
I am currently working on my first OpenGL based game engine. I need normal mapping as a feature, but it isn't working correctly.
Here is an animation of what is happening:
The artifacts are affected by the angle between the light and the normals on the surface. Camera movement does not affect them in any way. I am also (at least for now) going the less efficient route, where the normal extracted from the normal map is converted into view space rather than converting everything to tangent space.
Here are the relevant pieces of my code:
Generating Tangents and Bitangents
for(int k=0;k<(int)mb->getIndexCount();k+=3)
{
unsigned int i1 = mb->getIndex(k);
unsigned int i2 = mb->getIndex(k+1);
unsigned int i3 = mb->getIndex(k+2);
JGE_v3f v0 = mb->getVertexPosition(i1);
JGE_v3f v1 = mb->getVertexPosition(i2);
JGE_v3f v2 = mb->getVertexPosition(i3);
JGE_v2f uv0 = mb->getVertexUV(i1);
JGE_v2f uv1 = mb->getVertexUV(i2);
JGE_v2f uv2 = mb->getVertexUV(i3);
JGE_v3f deltaPos1 = v1-v0;
JGE_v3f deltaPos2 = v2-v0;
JGE_v2f deltaUV1 = uv1-uv0;
JGE_v2f deltaUV2 = uv2-uv0;
float ur = deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x; // determinant of the UV delta matrix
if(ur != 0) // skip triangles with degenerate UVs
{
float r = 1.0f / ur;
JGE_v3f tangent;
JGE_v3f bitangent;
tangent = ((deltaPos1 * deltaUV2.y) - (deltaPos2 * deltaUV1.y)) * r;
tangent.normalize();
bitangent = ((deltaPos1 * -deltaUV2.x) + (deltaPos2 * deltaUV1.x)) * r;
bitangent.normalize();
tans[i1] += tangent;
tans[i2] += tangent;
tans[i3] += tangent;
btans[i1] += bitangent;
btans[i2] += bitangent;
btans[i3] += bitangent;
}
}
Calculating the TBN matrix in the Vertex Shader
(mNormal corrects the normal for non-uniform scales)
vec3 T = normalize((mVW * vec4(tangent, 0.0)).xyz);
tnormal = normalize((mNormal * n).xyz);
vec3 B = normalize((mVW * vec4(bitangent, 0.0)).xyz);
tmTBN = transpose(mat3(
T.x, B.x, tnormal.x,
T.y, B.y, tnormal.y,
T.z, B.z, tnormal.z));
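Because GLSL mat3 constructors are column-major, the transpose construction above should be equivalent to building the matrix directly from the basis vectors:

// equivalent form: the columns are T, B and the corrected normal in view space
tmTBN = mat3(T, B, tnormal);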
Finally here is where I use the sampled normal from the normal map and attempt to convert it to view space in the Fragment Shader
fnormal = normalize(nmapcolor.xyz * 2.0 - 1.0);
fnormal = normalize(tmTBN * fnormal);
"nmapcolor" is the sampled color from the normal map.
"fnormal" is then used like normal in the lighting calculations.
I have been trying to solve this for so long and have absolutely no idea how to get this working. Any help would be greatly appreciated.
EDIT - I slightly modified the code to work in world space and outputted the results. The big platform does not have normal mapping (and it works correctly) while the smaller platform does.
I added output showing which direction the normals are facing. They should both be generally the same color, but they're clearly different. It seems the tmTBN matrix isn't transforming the tangent-space normal into world (and normally view) space properly.
Well... I solved the problem. It turns out my normal mapping implementation was perfect. The problem was actually in my texture class. This is, of course, my first time writing an OpenGL rendering engine, and I did not realize that the unlock() function in my texture class saved ALL my textures as GL_SRGB_ALPHA, including normal maps. Only diffuse map textures should be GL_SRGB_ALPHA. Temporarily forcing all textures to load as GL_RGBA fixed the problem.
Can't believe I had this problem for 11 months, only to find it was something so small.
I have been searching online for a while now trying to work out why my geometric attenuation term for my physically based shader (which I posted a question about not too long ago) is producing artifacts, and I cannot seem to come up with an answer. The function I'm trying to implement can be found here: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf
This is my current iteration of the function.
vec3 Gsub(vec3 v) // Sub Function of G
{
float k = ((roughness + 1) * (roughness + 1)) / 8;
float fdotv = dot(fNormal, v);
return vec3((fdotv) / ((fdotv) * (1.0 - k) + k));
}
vec3 G(vec3 l, vec3 v, vec3 h) // Geometric Attenuation Term - Schlick Modified (k = a/2)
{
return Gsub(l) * Gsub(v);
}
This is the current result of the above in my application:
You can clearly see the strange artifacts on the left side, which should not be present.
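For comparison, this is the same term written straight from the Karis notes linked above, in scalar form and with the dot products clamped (the clamping is my own addition, not something taken from the notes):

// Schlick-GGX geometry term with k = (roughness + 1)^2 / 8, as in the Karis notes
float G1Ref(vec3 n, vec3 v, float roughness)
{
    float k = (roughness + 1.0) * (roughness + 1.0) / 8.0;
    float ndv = max(dot(n, v), 0.0); // clamp to avoid negative values at grazing angles
    return ndv / (ndv * (1.0 - k) + k);
}

float GRef(vec3 n, vec3 l, vec3 v, float roughness)
{
    return G1Ref(n, l, roughness) * G1Ref(n, v, roughness);
}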
One of the things I thought might be an issue was my normals. I believe this is the issue because, whenever I put the same function into the Disney BRDF editor (http://www.disneyanimation.com/technology/brdf.html), I get correct results. I believe it is the normals because whenever I view them in Disney's application, I get this:
These normals differ from my normals, which -should- be correct:
I use the same model in both applications, and the normals are stored inside the model file. Can anyone give any insight into this?
Additionally I'd like to mention that these are the operations done on my normals:
Vertex Shader
mat3 normalMatrix = mat3(transpose(inverse(ModelView)));
inputNormal = normalize(normalMatrix * vNormal);
Fragment Shader
fNormal = normalize(inputNormal);
P.S. Please excuse my rushed code; I've been trying to get this to work for a while.