I'm rather new to ShaderLab with Unity. I am trying to distort the vertices so that they are pushed backwards and towards the camera, almost like looking at the camera from a 45-degree angle. I am replicating an effect from a game for fun. This is the code used for the effect, and I've tried implementing it into a shader script like so:
float4 vert(appdata v){
    float3 position = mul(unity_ObjectToWorld, v.vertex).xyz;
    float y = position.y;
    float z = position.x;
    float3 parentTranslation = ParentMatrix._m30_m31_m32;
    position -= parentTranslation;
    position.z += AlternateLayeringScale;
    position.z -= y;
    position.y += position.z;
    position += parentTranslation + float3(0,parentTranslation.z,0);
    return position;
}
However, I get an error stating that it cannot convert from float3 to float4, and I am not sure how it was originally implemented.
The fourth component is 'w', the homogeneous coordinate (sometimes described as an inverse stretching factor). To convert from a float4 to a float3 it's best to do position.xyz / position.w, and to put it back into a float4 at the end of the vertex function you can write return fixed4(position, 1).
I chose the quite old but sufficient method of shadow mapping, which is OK overall, but I quickly discovered some self-shadowing problems:
It seems this problem appears because of the bias offset, which is necessary to eliminate shadow acne artifacts.
After some googling, it seems that there is no easy solution to this, so I tried some shader tricks which worked, but not very well.
My first idea was to compute the dot product between the light direction vector and the normal vector. If the result is lower than 0, the angle between the vectors is greater than 90 degrees, so the surface is pointing away from the light source and hence is not illuminated. This works well, except the shadows may appear too sharp and hard:
Since I was not satisfied with the results, I tried another trick: multiplying the shadow value by the absolute value of the dot product of the light direction and the normal vector (based on the normal map). It did work (the hard shadows from the previous image got a smooth transition from shadow to the regular diffuse color), except it created another artifact in situations where the normal-map normal points somewhat towards the sun but the face normal does not. It also made self-shadows much brighter (but that is fixable):
Can I do something about it, or should I just choose the lesser evil?
Shadow shader code for example 1:
vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
float depthValue = abs(fragPosViewSpace.z);
vec4 fragPosLightSpace = lightSpaceMatrix * vec4(FragPos, 1.0);
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
// transform to [0,1] range
projCoords = projCoords * 0.5 + 0.5;
// get depth of current fragment from light's perspective
float currentDepth = projCoords.z;
// keep the shadow at 0.0 when outside the far_plane region of the light's frustum.
if (currentDepth > 1.0)
{
    return 0.0;
}
// calculate bias (based on depth map resolution and slope)
float bias = max(0.005 * (1.0 - dot(normal, lightDir)), 0.0005);
vec2 texelSize = 1.0 / vec2(textureSize(material.texture_shadow, 0));
const int sampleRadius = 2;
const float sampleRadiusCount = pow(sampleRadius * 2 + 1, 2); // 25 samples for radius 2
float shadow = 0.0;
// PCF: average the depth comparison over the neighbourhood around the projected coordinate
for(int x = -sampleRadius; x <= sampleRadius; ++x)
{
    for(int y = -sampleRadius; y <= sampleRadius; ++y)
    {
        float pcfDepth = texture(material.texture_shadow, vec3(projCoords.xy + vec2(x, y) * texelSize, layer)).r;
        shadow += (currentDepth - bias) > pcfDepth ? ambientShadow : 0.0;
    }
}
shadow /= sampleRadiusCount;
Hard self shadows trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
// "Normal" is the face normal vector, "normal" is calculated from the normal map. I know there is a naming problem with that.
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = max(abs(vectorNormalDot), ambientShadow);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...
}
Dot product multiplication trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = ambientShadow * abs(vectorNormalDot);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...
I'm currently in the process of writing a Voxel Cone Tracing rendering engine with C++ and OpenGL. Everything is going rather well, except that I'm getting rather strange results for wider cone angles.
Right now, for testing purposes, all I am doing is shoot out one single cone in the direction of the fragment normal (perpendicular to the surface). I am only calculating 'indirect light'. For reference, here is the rather simple fragment shader I'm using:
#version 450 core
out vec4 FragColor;
in vec3 pos_fs;
in vec3 nrm_fs;
uniform sampler3D tex3D;
vec3 indirectDiffuse();
vec3 voxelTraceCone(const vec3 from, vec3 direction);
void main()
{
    FragColor = vec4(0, 0, 0, 1);
    FragColor.rgb += indirectDiffuse();
}

vec3 indirectDiffuse(){
    // singular cone in direction of the normal
    vec3 ret = voxelTraceCone(pos_fs, nrm_fs);
    return ret;
}

vec3 voxelTraceCone(const vec3 origin, vec3 dir) {
    float max_dist = 1.0f;
    dir = normalize(dir);
    float current_dist = 0.01f;
    float apperture_angle = 0.01f; //Angle in Radians.
    vec3 color = vec3(0.0f);
    float occlusion = 0.0f;
    float vox_size = 128.0f; //voxel map size
    while(current_dist < max_dist && occlusion < 1) {
        //Get cone diameter (tan = cathetus / cathetus)
        float current_coneDiameter = 2.0f * current_dist * tan(apperture_angle * 0.5f);
        //Get mipmap level which should be sampled according to the cone diameter
        float vlevel = log2(current_coneDiameter * vox_size);
        vec3 pos_worldspace = origin + dir * current_dist;
        vec3 pos_texturespace = (pos_worldspace + vec3(1.0f)) * 0.5f; //[-1,1] Coordinates to [0,1]
        vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); //get voxel
        vec3 color_read = voxel.rgb;
        float occlusion_read = voxel.a;
        // front-to-back accumulation of color and occlusion
        color = occlusion*color + (1 - occlusion) * occlusion_read * color_read;
        occlusion = occlusion + (1 - occlusion) * occlusion_read;
        float dist_factor = 0.3f; //Lower = better results but higher performance hit
        current_dist += current_coneDiameter * dist_factor;
    }
    return color;
}
The tex3D uniform is the voxel 3d-texture.
Under a regular Phong shader (under which the voxel values are calculated) the scene looks like this:
For reference, this is what the voxel map (tex3D) (128x128x128) looks like when visualized:
Now we get to the actual problem I'm having. If I apply the shader above to the scene, I get the following results:
For very small cone angles (apperture_angle=0.01) I get roughly what you might expect: the voxelized scene is essentially 'reflected' perpendicularly on each surface:
Now if I increase the aperture angle to, for example, 30 degrees (apperture_angle=0.52), I get this really strange 'wavy'-looking result:
I would have expected a result much more similar to the earlier one, just less specular. Instead I get mostly the outline of each object reflected in a specular manner, with some occasional pixels inside the outline. Considering this is meant to be the 'indirect lighting' in the scene, it won't look particularly good even if I add the direct light.
I have tried different values for max_dist, current_dist etc., as well as shooting several cones instead of just one. The result remains similar, if not worse.
Does someone know what I'm doing wrong here, and how to get at least remotely realistic indirect light?
I suspect that the textureLod function somehow yields the wrong result for any LOD levels above 0, but I haven't been able to confirm this.
The mipmaps of the 3D texture were not being generated correctly.
In addition, there was no hard cap on vlevel, which led to all textureLod calls that accessed any mipmap level above 1 returning a #000000 color.
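For reference, a minimal sketch of the kind of hard cap described above, assuming the 128x128x128 voxel map from the shader (so the coarsest valid mipmap level is log2(128) = 7); the exact limit depends on how many mip levels are actually allocated and generated:
float max_level = log2(vox_size); // 7.0 for a 128x128x128 voxel map
// replaces the unclamped vlevel computation inside the cone-marching loop
float vlevel = clamp(log2(current_coneDiameter * vox_size), 0.0, max_level);
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); // never samples past the coarsest mip now
The mipmaps themselves still have to be (re)built after each voxelization pass, for example with glGenerateMipmap(GL_TEXTURE_3D) on the bound voxel texture or a custom downsampling pass.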
I made a spotlight that
Projects 3D models onto a render target from each light's POV to simulate shadows
Cuts a circle out of the square of light that has been projected onto the render target as a result of the light frustum, then only lights up the pixels inside that circle (except the shadowed parts, of course), so you don't see the square edges of the projected frustum.
After doing an if check to see if the dot product of light direction and light to vertex vector is greater than .95 to get my initial cutoff, I then multiply the light intensity value inside the resulting circle by the same dot product value, which should range between .95 and 1.0.
This should give the light inside that circle a falloff from 100% lit to 0% lit toward the edge of the circle. However, there is no falloff. It's just all equally lit inside the circle. Why on earth, I have no idea. If someone could take a gander and let me know, please help, thank you so much.
float CalculateSpotLightIntensity(
float3 LightPos_VertexSpace,
float3 LightDirection_WS,
float3 SurfaceNormal_WS)
{
    //float3 lightToVertex = normalize(SurfacePosition - LightPos_VertexSpace);
    float3 lightToVertex_WS = -LightPos_VertexSpace;
    float dotProduct = saturate(dot(normalize(lightToVertex_WS), normalize(LightDirection_WS)));
    // METALLIC EFFECT (deactivate for now)
    float metalEffect = saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));
    if(dotProduct > .95 /*&& metalEffect > .55*/)
    {
        return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));
        //return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))) * dotProduct;
        //return dotProduct;
    }
    else
    {
        return 0;
    }
}
float4 LightPixelShader(PixelInputType input) : SV_TARGET
{
float2 projectTexCoord;
float depthValue;
float lightDepthValue;
float4 textureColor;
// Set the bias value for fixing the floating point precision issues.
float bias = 0.001f;
// Set the default output color to the ambient light value for all pixels.
float4 lightColor = cb_ambientColor;
/////////////////// NORMAL MAPPING //////////////////
float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);
// Expand the range of the normal value from (0, +1) to (-1, +1).
bumpMap = (bumpMap * 2.0f) - 1.0f;
// Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal!
float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));
//////////////// LIGHT LOOP ////////////////
for(int i = 0; i < NUM_LIGHTS; ++i)
{
    // Calculate the projected texture coordinates.
    projectTexCoord.x = input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
    projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
    if((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
    {
        // Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location.
        depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r;
        // Calculate the depth of the light.
        lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w;
        // Subtract the bias from the lightDepthValue.
        lightDepthValue = lightDepthValue - bias;
        float lightVisibility = shaderTextures[6 + i].SampleCmp(SampleTypeComp, projectTexCoord, lightDepthValue);
        // Compare the depth of the shadow map value and the depth of the light to determine whether to shadow or to light this pixel.
        // If the light is in front of the object then light the pixel, if not then shadow this pixel since an object (occluder) is casting a shadow on it.
        if(lightDepthValue < depthValue)
        {
            // Calculate the amount of light on this pixel.
            float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));
            if(lightIntensity > 0.0f)
            {
                // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                float spotLightIntensity = CalculateSpotLightIntensity(
                    input.lightPos_LS[i], // NOTE - this is NOT NORMALIZED!!!
                    cb_lights[i].lightDirection,
                    bumpNormal/*input.normal*/);
                lightColor += cb_lights[i].diffuseColor*spotLightIntensity* .18f; // spotlight
                //lightColor += cb_lights[i].diffuseColor*lightIntensity* .2f; // square light
            }
        }
    }
}
// Saturate the final light color.
lightColor = saturate(lightColor);
// lightColor = saturate( CalculateNormalMapIntensity(input, lightColor, cb_lights[0].lightDirection));
// TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
input.tex.x += textureTranslation;
// BLENDING
float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));
// Combine the light and texture color.
float4 finalColor = lightColor * textureColor;
/////// TRANSPARENCY /////////
//finalColor.a = 0.2f;
return finalColor;
}
Oops! It's because the range of 0.95 to 1.0 was too small to make a visible difference! So I had to expand the range to 0 to 1 by doing
float expandedRange = (dotProduct - .95)/.05f;
return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))*expandedRange*expandedRange);
Now it has a soft edge, though honestly a little too soft for me, so I'm now doing a quadratic falloff by squaring the expanded range, as you can see. Any tips on making it look nicer? Let me know, thanks.
I followed a paper called "GPU Based Algorithms for Terrain Texturing" and it says the following:
The main algorithm to apply triplanar texturing is fairly simple.
First, we check whether the slope is relatively large in the same way
that we do with slope based texturing. These regions with high slope
will be the only regions affected by the algorithm. We then check what
the larger component of the normal is, out of x and z. If x is the
larger component, we use the geometry z coordinate as the texture
coordinate s, and the geometry y coordinate as the texture coordinate
t. If z is the larger component, we use the geometry x coordinate as
the texture coordinate s, and the geometry y coordinate as the texture
coordinate t.
So I tried to implement it. This is my heightmap:
Note that I added white lines in the borders just for the experiment, so now I have maximum-height walls surrounding my map.
Now, following the article, here's my implementation in the vertex shader:
#version 430
uniform mat4 ProjectionMatrix;
uniform mat4 CameraMatrix;
uniform vec3 scale;
layout(location = 0) in vec3 vertex;
layout(location = 1) in vec3 normal;
out vec3 fsVertex;
out vec3 fsNormal;
out vec2 fsUvs;
void main()
{
    fsVertex = vertex;
    fsNormal = normalize(normal);
    if(fsNormal.y < 0.75) {
        if(fsNormal.x > fsNormal.z)
            fsUvs = vertex.zy * scale.zy;
        else
            fsUvs = vertex.xy * scale.xy;
    }
    else
        fsUvs = vertex.xz * scale.xz;
    gl_Position = ProjectionMatrix * CameraMatrix * vec4(vertex * scale, 1.0);
}
Here's the fragment shader, if it helps.
This is what I get:
Here's a further look, for proportion.
The top and left walls (of the heightmap) are rendered OK, but the bottom and right walls still suffer from stretching. I also get these weird stretched spots next to the beginning of the walls.
What could be the cause of this?
If you want to check whether the normal's x or z component is larger, you should use the abs function:
if(abs(fsNormal.x) > abs(fsNormal.z))
Furthermore, the 0.75 threshold on y seems like a coarse approximation, which is probably good enough in most cases. Actually, the maximum of abs(x), abs(y), abs(z) gives you the correct plane.
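As a rough sketch of that suggestion (not a drop-in fix), the UV selection in the question's vertex shader could pick the projection plane from the largest absolute normal component, reusing the fsNormal, fsUvs, vertex and scale names from the shader above:
vec3 n = abs(fsNormal);
if(n.x >= n.y && n.x >= n.z)
    fsUvs = vertex.zy * scale.zy; // x is dominant: project onto the yz plane
else if(n.z >= n.y)
    fsUvs = vertex.xy * scale.xy; // z is dominant: project onto the xy plane
else
    fsUvs = vertex.xz * scale.xz; // y is dominant: project onto the xz plane
This keeps the original one-plane-per-vertex selection; blending all three projections per pixel, as in the HLSL answer below, is what removes the hard seams where the dominant axis flips.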
Here is a DX11/HLSL implementation I used. GLSL conversion should be easy.
With the exponent value you can tune the blending speed at the borders. I used something like 3.
float3 SampleTriplanarTexture(Texture2D<float4> tex1, Texture2D<float4> tex2, Texture2D<float4> tex3, float3 normal, float3 pos, float exponent)
{
    //triplanar projection
    float mXY = pow(abs(normal.z), exponent);
    float mXZ = pow(abs(normal.y), exponent);
    float mYZ = pow(abs(normal.x), exponent);
    float total = 1.0f / (mXY + mXZ + mYZ);
    mXY *= total;
    mXZ *= total;
    mYZ *= total;
    return tex1.SampleLevel(linearSampler2, pos.xz, 0) * mXZ +
           tex2.SampleLevel(linearSampler2, pos.xy, 0) * mXY +
           tex3.SampleLevel(linearSampler2, pos.yz, 0) * mYZ;
}
I am doing ray casting through a 3D texture until I hit the value I am looking for. The ray casting happens inside a cube, and the cube corners are already in world coordinates, so I don't have to multiply the vertices by the model-view matrix to get the correct position.
Vertex shader
world_coordinate_ = gl_Vertex;
Fragment shader
vec3 direction = (world_coordinate_.xyz - cameraPosition_);
direction = normalize(direction);
for (float k = 0.0; k < steps; k += 1.0) {
    ....
    pos += direction*delta_step;
    float thisLum = texture3D(texture3_, pos).r;
    if(thisLum > surface_)
        ...
}
Everything works as expected; what I now want is to write the correct value to the depth buffer. The value currently written to the depth buffer is the cube coordinate, but I want the depth of pos inside the 3D texture to be written instead.
So let's say the cube is placed 10 units away from the origin in -z and its size is 10*10*10. My attempted solution, which does not work correctly, is this:
pos *= 10;
pos.z += 10;
pos.z *= -1;
vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
gl_FragDepth = depth;
The solution was:
vec4 depth_vec = ViewMatrix * gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
gl_FragDepth = depth;
One solution you might try is to draw a cube that is directly on top of the cube you're trying to raytrace. Send the cube's position in the same space as you get from your ray-tracing algorithm, and perform the same transforms to compute your "depth_vec", only do it in the vertex shader.
This way, you can see where your problems are coming from. Once you get this part of the transform to work, then you can back-port this transformation sequence into your raytracer. If that doesn't fix everything, then it would only be because your ray-tracing algorithm isn't outputting positions in the space that you think it is in.
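A minimal sketch of that debugging setup, in the same legacy GLSL style as the question, assuming the debug cube's vertices are submitted in the same world space the ray caster works in and that ViewMatrix is the uniform from the solution above; the depth_vec chain should mirror whatever transforms the ray caster ends up using:
uniform mat4 ViewMatrix;
varying float debugDepth;

void main()
{
    // Same kind of transform the fragment shader applies to 'pos', but on a known-good vertex;
    // adjust the matrix order/chain to match your ray caster.
    vec4 depth_vec = gl_ProjectionMatrix * ViewMatrix * gl_Vertex;
    debugDepth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Outputting debugDepth (or a grayscale color derived from it) from the matching fragment shader makes it easy to compare against the gl_FragDepth values the ray caster produces for the same pixels, and to spot in which space the two disagree.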