How to convert a Shadershop formula into GLSL

I have been learning some basics of shaders recently and I came across a great visual tool: Shadershop.
But I am having trouble converting the formulas I create on that site into GLSL.
As a simple example, I created a formula on the site:
And I was able to convert it to GLSL:
I then moved on and created a two-dimensional formula in Shadershop:
But this time I have no clue how to convert the formula into GLSL the way I did before.
Any advice will be appreciated, thanks :)
UPDATE
I tried again to convert the formula following @Rabbid76's advice:
But I am still having trouble understanding:
how to split the formula into U and V
how to deal with the matrix in the formula

The Shadershop formula can be expressed as follows:
vec2 x1x2 = inverse(m) * vec2(x1, x2);
float x = -sin(x1x2.x - x1x2.y);
where m is a 2x2 matrix.
e.g.
mat2 m = mat2(
    0.1, 0.0,
    0.5, 1.0
);
For the formula for the inverse of a 2x2 matrix see www.mathwords.com (in GLSL ES 1.00 there is no built-in function for the matrix inverse):
float det_m = m[0][0]*m[1][1] - m[0][1]*m[1][0];
mat2 inv_m = mat2(m[1][1], -m[0][1], -m[1][0], m[0][0]) / det_m;
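If this comes up in more than one place, the computation can be wrapped in a small helper; a minimal sketch (the caller still has to make sure the determinant is not zero):
mat2 inverse2(mat2 m)
{
    // 2x2 inverse: swap the diagonal, negate the off-diagonal, divide by the determinant
    float det_m = m[0][0]*m[1][1] - m[0][1]*m[1][0];
    return mat2(m[1][1], -m[0][1], -m[1][0], m[0][0]) / det_m;
}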
The full fragment shader code may look like this:
uniform vec2 resolution; // viewport size in pixels (assumed to be provided by the application)

void main()
{
    vec2 st = 2.0 * gl_FragCoord.xy / resolution.xy - 1.0;
    vec2 scale = vec2(1.5, 1.5);
    st *= scale;

    mat2 m = mat2(
        0.1, 0.0,
        0.5, 1.0
    );

    vec2 x1x2 = vec2(st.x, 0.0); // fallback if m is singular
    float det_m = m[0][0]*m[1][1] - m[0][1]*m[1][0];
    if ( det_m != 0.0 )
    {
        mat2 inv_m = mat2(m[1][1], -m[0][1], -m[1][0], m[0][0]) / det_m;
        x1x2 = inv_m * st.xy;
    }

    float x = -sin(x1x2.x - x1x2.y);
    vec3 color = vec3( x, x, abs(x) );
    gl_FragColor = vec4(color, 1.0);
}
See the preview:
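Regarding the update's question of how to split the formula into U and V: the matrix product can be written out per component. This is only a restatement of the vector form above (remember that GLSL matrices are column-major, so m[column][row]):
float u = inv_m[0][0] * st.x + inv_m[1][0] * st.y; // first row of inv_m dotted with st
float v = inv_m[0][1] * st.x + inv_m[1][1] * st.y; // second row of inv_m dotted with st
float x = -sin(u - v);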

Related

How to get a smooth result with RSM (Reflective Shadow Mapping)?

I'm trying to implement a Reflective Shadow Mapping program with Vulkan.
The problem is that I get a bad result:
As you can see, the result is not smooth.
In a first pass I render the position, normal and flux from the light's point of view into 3 textures with a resolution of 512 x 512.
In a second pass, I compute the indirect illumination from the first-pass textures according to this paper (http://www.klayge.org/material/3_12/GI/rsm.pdf):
for(int i = 0; i < 151; i++)
{
    vec4 rsmProjCoords = projCoords + vec4(rsmDiskSampling[i] * 0.09, 0.0, 0.0);
    vec3 indirectLightPos  = texture(rsmPosition, rsmProjCoords.xy).rgb;
    vec3 indirectLightNorm = texture(rsmNormal,   rsmProjCoords.xy).rgb;
    vec3 indirectLightFlux = texture(rsmFlux,     rsmProjCoords.xy).rgb;
    vec3 r = worldPos - indirectLightPos;
    float distP2 = dot( r, r );
    vec3 emission = indirectLightFlux * (max(0.0, dot(indirectLightNorm, r)) * max(0.0, dot(N, -r)));
    emission *= rsmDiskSampling[i].x * rsmDiskSampling[i].x / (distP2 * distP2);
    indirectRSM += emission;
}
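For context, the rsmDiskSampling offsets are importance-sampled disk points as described in the paper: two uniform random numbers ξ1, ξ2 give an offset (ξ1 sin(2πξ2), ξ1 cos(2πξ2)), and each sample is then weighted by ξ1². A minimal sketch of generating such a sample (the ξ1 weight would need to be stored alongside the offset, e.g. in a third component):
vec3 rsmDiskSample(vec2 xi) // xi: two uniform random numbers in [0, 1]
{
    float angle = 2.0 * 3.14159265 * xi.y;
    // offset on the unit disk, with the weight term xi.x stored in z
    return vec3(xi.x * sin(angle), xi.x * cos(angle), xi.x);
}
The per-sample weight in the loop would then be sample.z * sample.z.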
The problem is fixed.
The main problem was the sampling: I was using linear sampling instead of nearest sampling:
samplerInfo.magFilter = VK_FILTER_NEAREST;
samplerInfo.minFilter = VK_FILTER_NEAREST;
Other problems were the number of VPLs used and the distance between them.

Smooth normals using only the vertex shader after displacement using Perlin noise

I am displacing vertices to form a 3D planet, using a random noise function:
float hash( float n ) { return fract(sin(n)*753.5453123); }

float snoise( in vec3 x )
{
    vec3 p = floor(x);
    vec3 f = fract(x);
    f = f*f*(3.0-2.0*f);
    float n = p.x + p.y*157.0 + 113.0*p.z;
    return mix(mix(mix( hash(n+  0.0), hash(n+  1.0), f.x),
                   mix( hash(n+157.0), hash(n+158.0), f.x), f.y),
               mix(mix( hash(n+113.0), hash(n+114.0), f.x),
                   mix( hash(n+270.0), hash(n+271.0), f.x), f.y), f.z);
}
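For context, the displaced position below comes from a getPos helper that is not shown in the question; a minimal sketch of what such a function might look like, with uAmplitude and uFrequency as hypothetical uniforms (my names, not from the original code):
// hypothetical: displace a unit-sphere point along its own direction by the noise value
vec3 getPos(vec3 p)
{
    return p * (1.0 + uAmplitude * snoise(p * uFrequency));
}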
As the points are calculated on the GPU, I have no way of calculating smooth normals (apart from flat normals using a geometry shader). I have found various methods of doing this, e.g. the neighbour method, however this leaves lots of artefacts on the terrain when performing the lighting calculations.
I am currently calculating the normals using the function below with varying theta values, however the lighting has lots of patches of bright light:
vec3 calcNormal(vec3 pos)
{
    float theta = Theta;
    vec3 vecTangent = normalize(cross(pos, vec3(1.0, 0.0, 0.0))
                    + cross(pos, vec3(0.0, 1.0, 0.0)));
    vec3 vecBitangent = normalize(cross(vecTangent, pos));
    vec3 ptTangentSample   = getPos(normalize(pos + theta * normalize(vecTangent)));
    vec3 ptBitangentSample = getPos(normalize(pos + theta * normalize(vecBitangent)));
    return normalize(cross(ptTangentSample - pos, ptBitangentSample - pos));
}
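One common way to reduce such artifacts is to use central differences, sampling the displaced surface on both sides of the point instead of comparing against pos itself; a sketch reusing the question's getPos and Theta (an assumption, not a guaranteed fix):
vec3 calcNormalCentered(vec3 pos)
{
    float theta = Theta;
    vec3 vecTangent = normalize(cross(pos, vec3(1.0, 0.0, 0.0))
                    + cross(pos, vec3(0.0, 1.0, 0.0)));
    vec3 vecBitangent = normalize(cross(vecTangent, pos));
    // sample the displaced surface on both sides along each tangent direction
    vec3 tPlus  = getPos(normalize(pos + theta * vecTangent));
    vec3 tMinus = getPos(normalize(pos - theta * vecTangent));
    vec3 bPlus  = getPos(normalize(pos + theta * vecBitangent));
    vec3 bMinus = getPos(normalize(pos - theta * vecBitangent));
    return normalize(cross(tPlus - tMinus, bPlus - bMinus));
}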

A few problems with a BRDF using the Beckmann and GGX/Trowbridge-Reitz distributions for comparison

I have been trying to wrap my head around physically based rendering for the last 2.5 weeks, and so far I have managed to learn a lot, ask a lot of questions, and get some results, although I still have a few problems I would like to fix. For the last few days I have been stuck: I want to continue working and learning, but now I don't know what else to do or how to proceed, so I need some guidance :(
One of the first problems that I cannot figure out appears when I get close to a shape: there is a cut-off problem with the BRDF function I have implemented. The second and third rows are BRDF functions using a Spherical Gaussian for the Fresnel term (Schlick's approximation). The second row uses the Beckmann distribution as the NDF and the third one uses GGX/Trowbridge-Reitz as the NDF.
I started implementing this by referring to "Real Shading in Unreal Engine 4" and a few other posts found while Googling.
What I believe the remaining things to do are:
How to blend diffuse, reflection, and specular better
Fix the BRDF cut-off problem
Evaluate whether my shaders are producing good results based on the equations (it is the first time I have gone this way, and some comments would be very helpful as a guide on how to proceed in tweaking things)
Fix the specular factor in the Phong (first row) shader; right now I use material roughness as a blend factor when I mix Phong, skybox reflection and diffuse
The code I use for the BRDFs is
// geometry term, Cook-Torrance
float G(float NdotH, float NdotV, float VdotH, float NdotL) {
    float NH2 = 2.0 * NdotH;
    float g1 = (NH2 * NdotV) / VdotH;
    float g2 = (NH2 * NdotL) / VdotH;
    return min(1.0, min(g1, g2));
}
// Fresnel reflection term, Schlick approximation (Spherical Gaussian form)
float R_Fresnel(float VdotH) {
    return F0 + (1.0 - F0) * pow(2.0, (-5.55473 * VdotH - 6.98316) * VdotH);
}
// Normal distribution function, GGX/Trowbridge-Reitz
float D_GGX(float NdotH, float roughness2) {
    float a = roughness2 * roughness2;
    float a2 = a*a;
    float t = (NdotH * NdotH) * (a2 - 1.0) + 1.0;
    return a2 / (PI * t * t);
}
// Normal distribution function, Beckmann distribution
float D_Beckmann(float NdotH, float mSquared) {
    float r1 = 1.0 / (4.0 * mSquared * pow(NdotH, 4.0));
    float r2 = (NdotH * NdotH - 1.0) / (mSquared * NdotH * NdotH);
    return r1 * exp(r2);
}
// COOK-TORRANCE BRDF
vec4 cookTorrance(Light light, vec3 direction, vec3 normal) {
    // do the lighting calculation for each fragment
    float NdotL = max(dot(normal, direction), 0.0);
    float specular = 0.0;
    if (NdotL > 0.0)
    {
        vec3 eyeDir = normalize(cameraPosition);
        // calculate intermediary values
        vec3 halfVector = normalize(direction + eyeDir);
        float NdotH = max(dot(normal, halfVector), 0.0);
        float NdotV = max(dot(normal, eyeDir), 0.0);
        float VdotH = max(dot(eyeDir, halfVector), 0.0);
        float matShininess = material.shininess / 1000.0;
        float mSquared = (0.99 - matShininess) * (0.99 - matShininess);
        float geoAtt = G(NdotH, NdotV, VdotH, NdotL);
        float roughness = D_Beckmann(NdotH, mSquared);
        float fresnel = R_Fresnel(VdotH);
        specular = (fresnel * geoAtt * roughness) / (NdotV * NdotL * PI);
    }
    vec3 finalValue = light.color * NdotL * (k + specular * (1.0 - k));
    return vec4(finalValue, 1.0);
}

vec4 cookTorrance_GGX(Light light, vec3 direction, vec3 normal) {
    // same lighting calculation as above, but with the NDF changed to GGX
    float NdotL = max(dot(normal, direction), 0.0);
    float specular = 0.0;
    if (NdotL > 0.0)
    {
        vec3 eyeDir = normalize(cameraPosition);
        // calculate intermediary values
        vec3 halfVector = normalize(direction + eyeDir);
        float NdotH = max(dot(normal, halfVector), 0.0);
        float NdotV = max(dot(normal, eyeDir), 0.0);
        float VdotH = max(dot(eyeDir, halfVector), 0.0);
        float matShininess = material.shininess / 1000.0;
        float mSquared = (0.99 - matShininess) * (0.99 - matShininess);
        float geoAtt = G(NdotH, NdotV, VdotH, NdotL);
        // NDF CHANGED TO GGX
        float roughness = D_GGX(NdotH, mSquared);
        float fresnel = R_Fresnel(VdotH);
        specular = (fresnel * geoAtt * roughness) / (NdotV * NdotL * PI);
    }
    vec3 finalValue = light.color * NdotL * (k + specular * (1.0 - k));
    return vec4(finalValue, 1.0);
}
void main() {
    //vec4 tempColor = vec4(material.diffuse, 1.0);
    vec4 tempColor = vec4(0.1);
    // interpolating normals will change the length of the normal, so renormalize it
    vec3 normal = normalize(Normal);
    vec3 I = normalize(Position - cameraPosition);
    vec3 R = reflect(I, normalize(Normal));
    vec4 reflection = texture(skybox, R);
    // fix blending
    float shininess = material.shininess / 1000.0;
    vec4 tempFinalDiffuse = mix(tempColor, reflection, shininess);
    vec4 finalValue = cookTorrance_GGX(directionalLight.light, directionalLight.position, normal) + tempFinalDiffuse;
    // OR FOR COOK-TORRANCE IN THE OTHER SHADER PROGRAM
    //vec4 finalValue = cookTorrance(directionalLight.light, directionalLight.position, normal) + tempFinalDiffuse;
    gl_FragColor = finalValue;
    //gl_FragColor = vec4(1.0); // TESTING AND DEBUGGING FRAG OUT
}
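Two things worth checking for the cut-off (assumptions on my part, not confirmed fixes): eyeDir = normalize(cameraPosition) is only a valid view vector if the fragment sits at the origin, and the division by NdotV * NdotL goes to infinity at grazing angles. A sketch of both changes, reusing the Position varying from main():
vec3 eyeDir = normalize(cameraPosition - Position); // view vector per fragment, not per scene
// ... and later, guard the denominator so grazing angles don't blow up:
specular = (fresnel * geoAtt * roughness) / max(NdotV * NdotL * PI, 1e-4);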
The results I have so far are shown in the pictures below.
EDIT: I managed to solve a few problems and implemented the environment sampling given in "Real Shading in Unreal Engine 4", but I still just can't figure out why I have that cut-off problem, and now I have a problem with reflection after sampling. :(
Also, I moved from the Phong model that I was taught in books and online tutorials to a Blinn-Phong BRDF for better comparison.
My shader now looks like this.
vec4 brdf_GGX(Light light, vec3 direction, vec3 normal) {
    float specular = 0.0;
    float matShininess = 1.0 - (material.shininess / 1000.0);
    vec2 randomPoint;
    vec4 finalColor = vec4(0.0);
    vec4 totalLambert = vec4(0.0);
    const uint numberSamples = 32;
    for (uint sampleIndex = 0; sampleIndex < numberSamples; sampleIndex++)
    {
        randomPoint = hammersley2d(sampleIndex, numberSamples);
        vec3 H = ImportanceSampleGGX(randomPoint, matShininess, normal);
        vec3 L = 2.0 * dot(normal, H) * H - normal;
        vec3 R = reflect(L, normalize(normal));
        totalLambert += texture(skybox, -R);
    }
    totalLambert = totalLambert / float(numberSamples);
    float NdotL = max(dot(normal, direction), 0.0);
    if (NdotL > 0.0)
    {
        vec3 eyeDir = normalize(cameraPosition);
        // calculate intermediary values
        vec3 halfVector = normalize(direction + eyeDir);
        float NdotH = max(dot(normal, halfVector), 0.0);
        float NdotV = max(dot(normal, eyeDir), 0.0);
        float VdotH = max(dot(eyeDir, halfVector), 0.0);
        float mSquared = clamp(matShininess * matShininess, 0.01, 0.99);
        float geoAtt = G(NdotH, NdotV, VdotH, NdotL);
        float roughness = D_Beckmann(NdotH, mSquared);
        float fresnel = R_Fresnel(VdotH);
        specular = (fresnel * geoAtt * roughness) / (NdotV * NdotL * PI);
    }
    vec3 finalValue = light.color * NdotL * (k + specular * (1.0 - k));
    return vec4(finalValue, 1.0) * totalLambert;
}
Current results look like this (NOTE: I used skybox sampling only in the third GGX model; I'll do the same for the other shaders tomorrow).
EDIT: OK, I am figuring out what is happening, but I still cannot fix it. I have problems when sampling: I have no idea how to translate the normalized ray to the proper cube map reflection after sampling. As you can notice in the pictures, I lost the correct reflection of the environment map that the sphere should show. I just have a simple/flat texture on each sphere, and now I have no idea how to fix that.
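On the lost reflections: in the "Real Shading in Unreal Engine 4" notes, the sampled light direction is built by reflecting the view vector about the importance-sampled half vector, not the normal, and that direction is used to fetch the cube map directly. A sketch of the loop under that convention (same hammersley2d / ImportanceSampleGGX helpers as above):
const uint numberSamples = 32u;
vec3 V = normalize(cameraPosition - Position);
vec4 totalLambert = vec4(0.0);
for (uint i = 0u; i < numberSamples; i++)
{
    vec2 xi = hammersley2d(i, numberSamples);
    vec3 H  = ImportanceSampleGGX(xi, matShininess, normal);
    vec3 L  = 2.0 * dot(V, H) * H - V; // reflect V about the sampled half vector
    if (dot(normal, L) > 0.0)
        totalLambert += texture(skybox, L); // fetch the environment along L directly
}
totalLambert /= float(numberSamples);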

WebGL Normal calculations from position texture

I am trying to create a procedural water puddle in WebGL with "water ripples" by vertex displacement.
The problem I'm having is that I get noise I can't explain.
Below is the first-pass vertex shader where I calculate the vertex positions that I later render to a texture, which I then use in the second pass.
void main() {
    float damping = 0.5;
    vNormal = normal;
    // wave radius
    float timemod = 0.55;
    float ttime = mod(time, timemod);
    float frequency = 2.0*PI/waveWidth;
    float phase = frequency * 0.21;
    vec4 v = vec4(position, 1.0);
    // loop through the array of start positions
    for(int i = 0; i < 200; i++){
        float cCenterX = ripplePos[i].x;
        float cCenterY = ripplePos[i].y;
        vec2 center = vec2(cCenterX, cCenterY);
        if(center.x == 0.0 && center.y == 0.0)
            center = normalize(center);
        // wave width
        float tolerance = 0.005;
        radius = sqrt(pow(uv.x - center.x, 2.0) + pow(uv.y - center.y, 2.0));
        // creating a ripple
        float w_height = (tolerance - (min(tolerance, pow(ripplePos[i].z - radius*10.0, 2.0)))) * (1.0 - ripplePos[i].z/timemod) * 5.82;
        // -2.07 at the end to keep the plane at the right height. Trial and error solution
        v.z += waveHeight*(1.0 + w_height/tolerance) / 2.0 - 2.07;
        vNormal = normal + v.z;
    }
    vPosition = v.xyz;
    gl_Position = projectionMatrix * modelViewMatrix * v;
}
And the first pass fragment shader that writes to the texture:
void main()
{
    vec3 p = normalize(vPosition);
    p.x = (p.x + 1.0)*0.5;
    p.y = (p.y + 1.0)*0.5;
    gl_FragColor = vec4(normalize(p), 1.0);
}
The second vertex shader is a standard passthrough.
The second-pass fragment shader is where I try to calculate the normals to be used in the light calculations.
void main() {
    float w = 1.0 / 200.0;
    float h = 1.0 / 200.0;
    // nearest neighbours
    vec3 p0 = texture2D(rttTexture, vUV).xyz;
    vec3 p1 = texture2D(rttTexture, vUV + vec2(-w, 0)).xyz;
    vec3 p2 = texture2D(rttTexture, vUV + vec2( w, 0)).xyz;
    vec3 p3 = texture2D(rttTexture, vUV + vec2( 0, h)).xyz;
    vec3 p4 = texture2D(rttTexture, vUV + vec2( 0, -h)).xyz;
    vec3 nVec1 = p2 - p0;
    vec3 nVec2 = p3 - p0;
    vec3 vNormal = cross(nVec1, nVec2);
    vec3 N = normalize(vNormal);
    float theZ = texture2D(rttTexture, vUV).r;
    //gl_FragColor = vec4(1., .0, 1., 1.);
    //gl_FragColor = texture2D(tDiffuse, vUV);
    gl_FragColor = vec4(vec3(N), 1.0);
}
The result is this:
The image displays the normal map, and the noise I'm referring to is the inconsistency of the blue.
Here is a live demonstration:
http://oskarhavsvik.se/jonasgerling_water_ripple/waterRTT-clean.html
I appreciate any tips and pointers, not only fixes for this problem but for the code in general; I'm here to learn.
After a brief look it seems like your problem is in storing x/y positions.
gl_FragColor = vec4(vec3(p0*0.5+0.5), 1.0);
You don't need to store them anyway, because the texel position implicitly gives the x/y value. Just change your normal points to something like this...
vec3 p2 = vec3(1, 0, texture2D(rttTexture, vUV + vec2(w, 0)).z);
Rather than 1, 0 you will want to use a scale appropriate to the size of your displayed quad relative to the wave height. Anyway, the result now looks like this.
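For completeness, a sketch of all four neighbour points under that convention, with the scale as a placeholder to be tuned as described:
float scale = 1.0; // placeholder: quad size relative to the wave height
vec3 p1 = vec3(-scale, 0.0, texture2D(rttTexture, vUV + vec2(-w, 0.0)).z);
vec3 p2 = vec3( scale, 0.0, texture2D(rttTexture, vUV + vec2( w, 0.0)).z);
vec3 p3 = vec3(0.0,  scale, texture2D(rttTexture, vUV + vec2(0.0,  h)).z);
vec3 p4 = vec3(0.0, -scale, texture2D(rttTexture, vUV + vec2(0.0, -h)).z);
vec3 N = normalize(cross(p2 - p1, p3 - p4)); // central differences across the texel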
The height/z seems to be scaled by distance from the centre, so I went looking for a normalize() and removed it...
vec3 p = vPosition;
gl_FragColor = vec4(p*0.5+0.5, 1.0);
The normals now look like this...

XNA or OpenGL sphere texture mapping

I'm trying to map a completely ordinary (unwrapped) texture onto a sphere.
I can't change my texture to a wrapped one, so I need to find a mapping function.
This is my vertex shader code:
vec3 north = vec3(0.0, 0.0, 1.0);
vec3 equator = vec3(0.0, 1.0, 0.0);
vec3 northEquatorCross = cross(equator, north);
vec3 vertexRay = normalize(gl_Vertex.xyz);
float phi = acos(vertexRay.z);
float tv = phi / (PI*tiling);
float tu = 0.0;
if (vertexRay.z == 1.0 || vertexRay.z == -1.0) {
    tu = 0.5;
} else {
    float ang_hor = acos(max(min(vertexRay.y / sin(phi), 1.0), -1.0));
    float temp = ang_hor / ((2.0*tiling) * PI);
    tu = (vertexRay.x >= 0.0) ? temp : 1.0 - temp;
}
texPosition = vec2(tu, tv);
It's straight from here:
http://blogs.msdn.com/coding4fun/archive/2006/10/31/912562.aspx
This is my fragment shader:
color = texture2D(debugTex, texPosition);
As you can see in this screenshot: http://img189.imageshack.us/img189/4695/sphereproblem.png,
it shows a crack in the sphere, and this is what I'm trying to fix.
(the texture used: http://img197.imageshack.us/img197/56/debug.jpg)
The first comment on the XNA website actually fixes the problem using:
device.RenderState.Wrap0 = WrapCoordinates.Zero;
But because I don't understand enough about XNA internals, I can't understand what this solves in this particular problem.
Around the web, some have experienced the same and reported it to be about interpolation errors, but because I'm implementing this directly as a fragment shader (per-pixel/per-fragment), I shouldn't have this problem (no interpolation of the texture UVs).
Any info/solution on this?
It's funny! I had the same problem, but it got fixed; I suspect there are some floating-point issues... here is my shader that works!
uniform float r;

void main(void) {
    float deg1, deg2, rr, sdeg1, sdeg2, cdeg2;
    // gl_Vertex is read-only, so work on a local copy
    vec4 v = gl_Vertex;
    // 32.0: grid size (this value was spliced into the shader string by the host program in the original source)
    deg1 = (v.y / 32.0) * 2.0 * 3.1415926;
    deg2 = (v.x / 32.0) * 2.0 * 3.1415926;
    sdeg1 = sin(deg1);
    sdeg2 = sin(deg2);
    cdeg2 = cos(deg2);
    v.y = r*sdeg1;
    rr = r*cos(deg1);
    if(rr < 0.0001) rr = 0.0001;
    v.x = rr*sdeg2;
    v.z = rr*cdeg2;
    vec3 vertexRay = normalize(v.xyz);
    float phi = acos(vertexRay.y);
    float tv = phi / 3.1415926;
    float sphi = sin(phi);
    float theta = 0.5;
    float temp = vertexRay.z / sphi;
    if(temp > 1.0) temp = 1.0;
    if(temp < -1.0) temp = -1.0;
    theta = acos(temp) / (2.0*3.1415926);
    float tu = 0.0;
    if(deg2 > 3.1415926) tu = theta;
    else tu = 1.0 - theta;
    gl_TexCoord[0].x = tu;
    gl_TexCoord[0].y = tv;
    // transform the modified vertex (ftransform() would use the unmodified gl_Vertex)
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
WrapCoordinates.Zero effectively signifies that the texture is wrapped horizontally, not vertically, from the perspective of the texture.
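For the OpenGL/GLSL side, one way to sidestep the seam entirely (a sketch, not the only fix) is to pass the vertex ray to the fragment shader and derive the texture coordinates per fragment, so the wrapped u coordinate is never interpolated across the seam:
// fragment shader; the vertex shader sets vRay = normalize(gl_Vertex.xyz)
varying vec3 vRay;
uniform sampler2D debugTex;

void main()
{
    vec3 ray = normalize(vRay);
    float tu = atan(ray.x, ray.y) / (2.0 * 3.1415926) + 0.5; // longitude in [0, 1]
    float tv = acos(clamp(ray.z, -1.0, 1.0)) / 3.1415926;    // latitude in [0, 1]
    gl_FragColor = texture2D(debugTex, vec2(tu, tv));
}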