OpenGL - Water waves (with noise) - c++

I am currently in the process of making water waves, so basically I am starting from the beginning. I have created a mesh which is basically a flat square, and I have animated it in the vertex shader (below is the code which achieves that):
vtx.y = (sin(2.0 * vtx.x + a_time/1000.0 ) * cos(1.5 * vtx.y + a_time/1000.0) * 0.2);
Basically it is just moving the y position based on a sin and cos function; the results of this can be observed here!
I then tried adding some Perlin noise (as per the Perlin noise functions by Ian McEwan, available at github.com/ashima/webgl-noise) as follows:
vtx.y = vtx.y + 0.1*cnoise((a_time/5000.0)*a_vertex.yz);
the results of this can be observed here!
As you can plainly observe, there is no real "random" effect of the kind I was looking for (to simulate some basic random roughness of an ocean).
I was wondering how it would be possible for me to achieve this (also any suggestions on how to improve either of the functions that change y would also be appreciated).

The simplest solution is to use a texture that contains the needed noise. If the displacement is kept in the texture, then it is possible to apply the displacement in the vertex shader, so there is no need to modify the vertex buffer. To make the waves move, you may add an animated offset.
There are plenty of ways to fake, as you say, the "random" effect. You may take two samples from the texture, using two offsets that change differently over time, and then simply add the two displacements.
For example, see the following vertex shader:
uniform sampler2D u_heightMap;
uniform float u_time;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;
attribute vec2 uv;
void main()
{
vec3 pos = position;
vec2 offset1 = vec2(0.8, 0.4) * u_time * 0.1;
vec2 offset2 = vec2(0.6, 1.1) * u_time * 0.1;
float height1 = texture2D(u_heightMap, uv + offset1).r * 0.02;
float height2 = texture2D(u_heightMap, uv + offset2).r * 0.02;
pos.z += height1 + height2;
vec4 mvPosition = modelViewMatrix * vec4( pos, 1.0 );
gl_Position = projectionMatrix * mvPosition;
}
I've made a simple example using three.js:
var container;
var camera, scene, renderer;
var mesh;
var uniforms;
var clock = new THREE.Clock();
init();
animate();
function init() {
container = document.getElementById('container');
camera = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 0.6;
camera.position.y = 0.2;
camera.rotation.x = -0.45;
scene = new THREE.Scene();
var boxGeometry = new THREE.PlaneGeometry(0.75, 0.75, 100, 100);
var heightMap = THREE.ImageUtils.loadTexture("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAMAAACdt4HsAAAA81BMVEVZWVlRUVFdXV1EREQ8PDxBQUFWVlY3NzdKSkoyMjJqamphYWFOTk5mZmZHR0dxcXEtLS0qKip0dHRubm4aGhpTU1M+Pj4vLy8nJyeCgoJ3d3chISE0NDSKiop6eno5OTkBAQEjIyMeHh5/f38SEhKOjo6FhYUWFhaRkZELCwsPDw+bm5t8fHyfn59jY2OVlZUICAiwsLCHh4elpaWioqL///+qqqqYmJitra20tLS3t7fIyMi5ubnR0dHh4eHDw8P8/PzKysq9vb2np6fw8PDd3d3Ozs67u7vFxcX09PTY2Ni/v7/p6enV1dXT09Pl5eXt7e2kvPjWAAAMM0lEQVRYwxVW1ZbjOBAtsSwzsx1jOOlAc/f0MO7M7v9/zWr0kNg+xyVX6RIQXOhlUyIJNQwOLc/vL1ZiHuzATLhtIppQ1DzA0vd7S9nz9oBiExE/JXFo7zACFOzscM5l6yDfk/bs2vmTKhc3i8U6xIbfIGSlkUgXC5FGnft2N0kvrSQRWV+qu0sBYLu7g3t9VMicmjAPTJ74sjbWm5ubcb1YLKq84JH+X9MQW2l1UNl4s8wQmCJbRik0ISCLGKL4+OHty5dfs6qWSxNxbErO07L0Or8KV0G6Xqw3ozzvsFe1ka4cwdDEwr/pKMGgcJxg+/L6/iolQknps3A4YAwmlY4jTZsaVNJoHMvEXU2IGMnYlWy3u+YAhjDDAMwmbIrTdapdSc0d8bO1P20ftnf7cw2x4RNa9pgZWVa26PxpBZaJwh0eF34xTG5gOxJY2Ewr7ASgJ+uB2d14op22p6Obh0XBx3W/ZIbXZ2VmUFoZguV3B7JZjFFassASkkCJcte0XFl56/W6LSxAfuQLQnzq0E7G/lJiK3a4KNebLBPUi7dyUzIwyXgz9g7rgArn+OHj9wsA66N1imza+d5ybR0nZ1x6hA+fvv7a72sb4RDzUJG0XES4niblezFYqYSVGmSxYwdUdWPUdZIloifksH/5fE42y943XHW4m54en3ZStEHI/cWid39+yR1awf2+vofpEWddaRimSUVkYFwtE4SbOYjrQmyWCam6UrT3hD992CL59DzQzuPN+cizZWe4H8wYwr1LPU9YZeK0rTD6UiQOtUyOAbwUbBOzvo/asePzu8/P777lppNI7GCU9dGIODWhcMMGjzebxUYQ4J7HsMIi8x0RyfwwBNNRtuVmsSgttX/3/vVch6hbgxEZ41JA7x2PoLh7541RST20gtQXzJOp0abZJm2J0W0ialneuKFhuJ3P1/PPy1GXLzuNC0NQvP3yCPo8kV3n+/3L6U5q9N8Y9bUuWFkuS6Mro9TzWgrSygxNhfPzg54xW0bS5YT60sxdC8RyoxtRIW7qGnfjYt1b4XNAdakxivqR0l5YabpZLLPFgj/ysW9R1eNVAbL6S4sOCE2cmNlNiHmctE6/Xi7TIXfKm81yXZIqhkjIuKXdQq/eZkaC40oYohXM2tx41IAiABvTivIAIezixCCpwMgoe0OZeHu9uF5WalCi5WKTprpxSaw2FQTpIkJ0lgmMolwdYr+VVhi4EvKpgY3ejDoV4c3l363NgZLCZcRUjEVrQzq0L9OKAJaQ+AaIBI55aONs9ORQn1fb08usC9yMFTMhYTg2qMXB1LN3MC03m43gnHj6XBkn1Ep1ARKqmHApxCHYDXFrn8EV65v1jR8x22qFV/nJbOl3aBuDPy4W0VKifoyYSZhDIg8YSip/mWUonC5bRTUf1l2y7ERKMQKHVb4npMpDb7MsRdmLaLmOLMCBerzEAaIGAxckt6zVarLPH3dm3y83ZRQl2MR4hRh1uptlgucT3oyGqEoBiMhD0BThXBz328u0v0KtQLnHl+fvbwVaZlqN12vNV6NiBLY1OONms2T2Fja9Ly1GdK+4WM21DTaC1afH+xwIIvbd55c9kFHz31+ON4sxth0mHHUeBuJXZd8GuO/85KBsKh1zdVXD3cN1dw2GoKgo8ObudPf0NbCiyG/x9owT4blnzQrPiI2KKMvnOYrGzdLMV9cZIXUZWuIwf+wxB6UUbKdpMJvcFglthtDEF62Gz2+4186wiNDqEnIr3/IkExgVc1JW9lygm/X6RviRCNS0BYfFtrJsxVXeAG37ClZPL2fUZfqjYf969/j87eXnt0KFgCLfJ7LJtZM50Wi7iB+G6QAoZozELiBk0ZKApt3+muO/Kpzwu5eCBMXx3fvbr4UNO9iUfKpr5Q6BhbanUy454qA3PyRVGJiEbqKyz8ZEPYLTj55D1A6NN1WAv/337z+Px+2kCMWItn7mmMfT3bv/QsmMFg52UtEkPD4fbS2rN2sRfz9NceZ3bJ7jxU2bImybNsYuQmHhKq+PNlFCLGxyvksMCzQbQ3fIH75fAiQ8itzA3P/zFErDE6oxhKUcaUlbxa1RahG1jdIgRqZp2Qs+vbtO+xXUjTtd5nnlMhTM27v71fDz94dQwIFIpDe2Vyo0CRzizjFJXJhukA9BwJkF7n34+WO9Ah7s5ppbCQNrdfrw7c9//7wCKI4B47/yIGeXeZHk1CpQOVK3Pq12NkpbhuqZHziXMOXqfLFJi8xw//3Pc93MtuHxWZnFoFztzQSnUZU4+NuVk8TrYXCZWGqFs0BhCKfVX1kPVT27zbHev3tXMM/POmZpUEzHWg12DI4hUL67f/56zevaPXj6Vk+COUmCr48PGFoNeyPyK6i/fnzVTQU4tiSliDCiXO35gtkxTR3HaupZqYb3kSFISjiXzpCbioCnvZQ5whAymH7e3n6+B5+kBuNWlxqWqof6cm2cMtJ6pZSNB8W0coATK+UwSCoLg9Uv1x7nseNu/32vMXe7d/+y9uEcKowydv73z92ESRYRbB63zc6EYrsN5tx19UQSjALYKTFW5OCi/NfXwDu+//Hpvs7d/Y/feWyxg7tz3WBYHffDPP+lCqPu5elpfz/rpwE2HYjBA95VEuai2BeYGSiE+8un6XR7+zs/nn6/MXqwcazViwS29kgf3On0Whcu1qdFDMvKgApKWULAdiHYNlhWlcFEan/bPf73/vb91gJu+GnmM2kd6txzAqLVObCZPdT51vJKDxqKwKGG/m2mxmR9JHiFsMKE28+3ty/HXKub74G6P33+8mrTMvMMDjkedmE+BwRvoSA6hVCHgxsWw/0Rinx2z9PAqqq5PP+4vT2FE6Ya5pdfU/2xNmhMKFqtZGu0hM+rx6+gmDH6aDBtVyKz/vr99OFej5kIqzn+0pz9kJ8lsfwKx1QyrjCHWBI0x8A1+1tyvAODGilXZtjYRiRgdfy+Wj1NWkTBfTg1deAwD4Gkhu9nPkVAKqrpiQLdJKlSqS/0o86PMfINjqm53a+29vx6+2ffTA+PEGUMMx4eV670U89oK4tbrcdkDPevv1f2AasaQcGdlFJheL6Q3Aw/PQz17Zv55f2
P18KJqAWmO086SQaWkzexgzUKERKr28fhCvbOfXmAlY2AaUN0rJgSaYY/f/z4ZxbFz482zQgarvuAW1V6/PP0cMndMC9chM3w231uEeJA+PoCPNIaXbr1ZCETcOITrdSaj8Pp0NJWIkfZUre6DS+BaTLeFDYcuASpaFr1LQpruOmjxciITzPR7L982M+qSvvN0joNDInUaw2PplxrrkYTjRk16+GuoR3zNFctWq1v4KZtxwTEZpPZ725vb3/93m8VCpW1uuIYLODmsMLKBrGMKC+GsLh/Uq4rHXnY8n6hFzSxKNNy3UvzSb9/sdx9PX96e97vG7Oe9w/Pbw/bq465CUeWjKWggJZekxfXDy+usV6mJrxNiREtIw/uv+zvPuXuITyfVvn5wBzIn17efw544I06xBJBE+RQp/RoxFa/7pv648qepgfAwRzjsHH8kgnD2plMI2Jlsgo50jz9889b0S82y2WpByGEPRyhlUTGUMill/94fdg5sNSJULjsb44z5tj3urXROrIVndfKrU4t+zjTrE+9WO2K4/T8scCAcZxlXSQVMUQGSBuvUYmu07lkx6Wvw6Omn5+21DGyw+ntIRa4thGWxHE4cXI9QNTQ5XrZ6bxrMBOwY5rI1zhNLHgwZSVKnbA2elfVmKaN+OlyeSxahzBEkeWPKXA7v8PES9p+HXUsBoPX9zkhCHMA+2ozq+vWm5So+bRVyupb6e5mg2B+KAi3TFJVaD7PJlYPqutSwQxAcZryYfq+B+KN6GVozttd0+htmj9vCgh1uCPIbudCPQ2ftkbUd8Gx0OyfJ2XvsHRAYhsxmI4h0G45trtaw2CyqaCwP+0swqg8X0+fv15Q/e+7LzPnfsaGFdEmzbTLhZyCidydaWtqYG7YivkSm5yM6z6zTDwo5CRseL19Bs8zHx+vu0AVu3prAltvRPEtsHcNWFlcNIqbYMeQa1KmiRnwFoamkdFGYEYAn59A9yeYYlXkcQhWduBaGi/MtoMQGMP1dXBNAGkqXrUeWjWA5rsBmEcTy4pN4fDr/qx44iciIweQQTCbgAgjDA7ofwxocDxlnyf5AAAAAElFTkSuQmCC");
heightMap.wrapT = heightMap.wrapS = THREE.RepeatWrapping;
uniforms = {u_time: {type: "f", value: 0.0 }, u_heightMap: {type: "t",value:heightMap} };
var material = new THREE.ShaderMaterial({
uniforms: uniforms,
side: THREE.DoubleSide,
wireframe: true,
vertexShader: document.getElementById('vertexShader').textContent,
fragmentShader: document.getElementById('fragment_shader').textContent
});
mesh = new THREE.Mesh(boxGeometry, material);
mesh.rotation.x = Math.PI / 2.0;
scene.add(mesh);
renderer = new THREE.WebGLRenderer();
renderer.setClearColor( 0xffffff, 1 );
container.appendChild(renderer.domElement);
onWindowResize();
window.addEventListener('resize', onWindowResize, false);
}
function onWindowResize(event) {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
function animate() {
requestAnimationFrame(animate);
render();
}
function render() {
var delta = clock.getDelta();
uniforms.u_time.value += delta;
mesh.rotation.z += delta * 0.5;
renderer.render(scene, camera);
}
body { margin: 0px; overflow: hidden; }
<script src="http://threejs.org/build/three.min.js"></script>
<div id="container"></div>
<script id="fragment_shader" type="x-shader/x-fragment">
void main( void )
{
gl_FragColor = vec4(vec3(0.0), 1.0);
}
</script>
<script id="vertexShader" type="x-shader/x-vertex">
uniform lowp sampler2D u_heightMap;
uniform float u_time;
void main()
{
vec3 pos = position;
vec2 offset1 = vec2(1.0, 0.5) * u_time * 0.1;
vec2 offset2 = vec2(0.5, 1.0) * u_time * 0.1;
float height1 = texture2D(u_heightMap, uv + offset1).r * 0.02;
float height2 = texture2D(u_heightMap, uv + offset2).r * 0.02;
pos.z += height1 + height2;
vec4 mvPosition = modelViewMatrix * vec4( pos, 1.0 );
gl_Position = projectionMatrix * mvPosition;
}
</script>
Using a better displacement texture, or even using two different textures for the two offsets, you may achieve better results.
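For instance, here is a minimal sketch of the two-texture variant (the u_heightMapA / u_heightMapB names are assumptions, not part of the example above):
uniform sampler2D u_heightMapA;   // first noise texture (assumed name)
uniform sampler2D u_heightMapB;   // second, uncorrelated noise texture (assumed name)
uniform float u_time;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;
attribute vec2 uv;
void main()
{
vec3 pos = position;
vec2 offset1 = vec2(0.8, 0.4) * u_time * 0.1;
vec2 offset2 = vec2(0.6, 1.1) * u_time * 0.1;
// Two displacements from two independent noise patterns decorrelate
// the motion and reduce visible repetition.
float height1 = texture2D(u_heightMapA, uv + offset1).r * 0.02;
float height2 = texture2D(u_heightMapB, uv + offset2).r * 0.02;
pos.z += height1 + height2;
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}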

Related

'variable' : is not available in current GLSL version gl_TexCoord

I have coded a fragment shader in the Vizard IDE and it's not working. The code is free of compilation errors except for one, which says: "ERROR: 0:? : 'variable' : is not available in current GLSL version gl_TexCoord."
FYI, gl_TexCoord is the output of the vertex shader, which is built into Vizard. Can someone help me fix it? Here is the code:
#version 440
// All uniforms as provided by Vizard
uniform sampler2D vizpp_InputDepthTex; // Depth texture
uniform sampler2D vizpp_InputTex; // Color texture
uniform ivec2 vizpp_InputSize; // Render size of screen in pixels
uniform ivec2 vizpp_InputPixelSize; // Pixel size (1.0/vizpp_InputSize)
uniform mat4 osg_ViewMatrix; // View matrix of camera
uniform mat4 osg_ViewMatrixInverse; // Inverse of view matrix
// Your own uniforms
//uniform sampler2D u_texture;
//uniform sampler2D u_normalTexture;
uniform sampler2D g_FinalSSAO;
const bool onlyAO = false; //Only show AO pass for debugging
const bool externalBlur = false; //Store AO in alpha slot for a later blur
struct ASSAOConstants
{
vec2 ViewportPixelSize; // .zw == 1.0 / ViewportSize.xy
vec2 HalfViewportPixelSize; // .zw == 1.0 / ViewportHalfSize.xy
vec2 DepthUnpackConsts;
vec2 CameraTanHalfFOV;
vec2 NDCToViewMul;
vec2 NDCToViewAdd;
ivec2 PerPassFullResCoordOffset;
vec2 PerPassFullResUVOffset;
vec2 Viewport2xPixelSize;
vec2 Viewport2xPixelSize_x_025; // Viewport2xPixelSize * 0.25 (for fusing add+mul into mad)
float EffectRadius; // world (viewspace) maximum size of the shadow
float EffectShadowStrength; // global strength of the effect (0 - 5)
float EffectShadowPow;
float EffectShadowClamp;
float EffectFadeOutMul; // effect fade out from distance (ex. 25)
float EffectFadeOutAdd; // effect fade out to distance (ex. 100)
float EffectHorizonAngleThreshold; // limit errors on slopes and caused by insufficient geometry tessellation (0.05 to 0.5)
float EffectSamplingRadiusNearLimitRec; // if viewspace pixel closer than this, don't enlarge shadow sampling radius anymore (makes no sense to grow beyond some distance, not enough samples to cover everything, so just limit the shadow growth; could be SSAOSettingsFadeOutFrom * 0.1 or less)
float DepthPrecisionOffsetMod;
float NegRecEffectRadius; // -1.0 / EffectRadius
float LoadCounterAvgDiv; // 1.0 / ( halfDepthMip[SSAO_DEPTH_MIP_LEVELS-1].sizeX * halfDepthMip[SSAO_DEPTH_MIP_LEVELS-1].sizeY )
float AdaptiveSampleCountLimit;
float InvSharpness;
int PassIndex;
vec2 QuarterResPixelSize; // used for importance map only
vec4 PatternRotScaleMatrices[5];
float NormalsUnpackMul;
float NormalsUnpackAdd;
float DetailAOStrength;
float Dummy0;
mat4 NormalsWorldToViewspaceMatrix;
};
uniform ASSAOConstants g_ASSAOConsts;
float PSApply( in vec4 inPos, in vec2 inUV)
{ //inPos = gl_FragCoord;
float ao;
uvec2 pixPos = uvec2(inPos.xy);
uvec2 pixPosHalf = pixPos / uvec2(2, 2);
// calculate index in the four deinterleaved source array texture
int mx = int (pixPos.x % 2);
int my = int (pixPos.y % 2);
int ic = mx + my * 2; // center index
int ih = (1-mx) + my * 2; // neighbouring, horizontal
int iv = mx + (1-my) * 2; // neighbouring, vertical
int id = (1-mx) + (1-my)*2; // diagonal
vec2 centerVal = texelFetchOffset( g_FinalSSAO, ivec2(pixPosHalf), 0, ivec2(ic, 0 ) ).xy;
ao = centerVal.x;
if (true){ // change to 0 if you want to disable last pass high-res blur (for debugging purposes, etc.)
vec4 edgesLRTB = UnpackEdges( centerVal.y );
// convert index shifts to sampling offsets
float fmx = mx;
float fmy = my;
// in case of an edge, push sampling offsets away from the edge (towards pixel center)
float fmxe = (edgesLRTB.y - edgesLRTB.x);
float fmye = (edgesLRTB.w - edgesLRTB.z);
// calculate final sampling offsets and sample using bilinear filter
vec2 uvH = (inPos.xy + vec2( fmx + fmxe - 0.5, 0.5 - fmy ) ) * 0.5 * g_ASSAOConsts.HalfViewportPixelSize;
float aoH = textureLodOffset( g_FinalSSAO, uvH, 0, ivec2(ih , 0) ).x;
vec2 uvV = (inPos.xy + vec2( 0.5 - fmx, fmy - 0.5 + fmye ) ) * 0.5 * g_ASSAOConsts.HalfViewportPixelSize;
float aoV = textureLodOffset( g_FinalSSAO, uvV, 0, ivec2( iv , 0) ).x;
vec2 uvD = (inPos.xy + vec2( fmx - 0.5 + fmxe, fmy - 0.5 + fmye ) ) * 0.5 * g_ASSAOConsts.HalfViewportPixelSize;
float aoD = textureLodOffset( g_FinalSSAO, uvD, 0, ivec2( id , 0) ).x;
// reduce weight for samples near edge - if the edge is on both sides, weight goes to 0
vec4 blendWeights;
blendWeights.x = 1.0;
blendWeights.y = (edgesLRTB.x + edgesLRTB.y) * 0.5;
blendWeights.z = (edgesLRTB.z + edgesLRTB.w) * 0.5;
blendWeights.w = (blendWeights.y + blendWeights.z) * 0.5;
// calculate weighted average
float blendWeightsSum = dot( blendWeights, vec4( 1.0, 1.0, 1.0, 1.0 ) );
ao = dot( vec4( ao, aoH, aoV, aoD ), blendWeights ) / blendWeightsSum;
}
return ao;
}
void main(void)
{
// Get base values
vec2 texCoord = gl_TexCoord[0].st;
vec4 color = texture2D(vizpp_InputTex,texCoord);
float depth = texture2D(vizpp_InputDepthTex,texCoord).x;
// Do not calculate if nothing visible (for VR for instance)
if (depth>=1.0)
{
gl_FragColor = color;
return;
}
float ao = PSApply(gl_FragCoord, texCoord);
// Output the result
if(externalBlur) {
gl_FragColor.rgb = color.rgb;
gl_FragColor.a = ao;
}
else if(onlyAO) {
gl_FragColor.rgb = vec3(ao,ao,ao);
gl_FragColor.a = 1.0;
}
else {
gl_FragColor.rgb = ao*color.rgb;
gl_FragColor.a = 1.0;
}
}
gl_TexCoord is a deprecated Compatibility Profile built-in language variable and was removed after GLSL version 1.20.
If you want to use gl_TexCoord, then you have to use GLSL version 1.20 (#version 120).
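For completeness, a minimal sketch of that legacy path (using only the question's vizpp_InputTex uniform); note, however, that functions like texelFetchOffset used later in your shader require a far newer GLSL version, so this is unlikely to be an option here:
#version 120
uniform sampler2D vizpp_InputTex;
void main()
{
// gl_TexCoord is still available in GLSL 1.20 (compatibility)
vec2 texCoord = gl_TexCoord[0].st;
gl_FragColor = texture2D(vizpp_InputTex, texCoord);
}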
But you don't need the deprecated compatibility-profile built-in variable at all. Define a vertex shader output texCoord and use this output rather than gl_TexCoord:
#version 440
out vec2 texCoord;
void main()
{
texCoord = ...;
// [...]
}
Specify a corresponding input in the fragment shader:
#version 440
in vec2 texCoord;
void main()
{
vec4 color = texture2D(vizpp_InputTex, texCoord.st);
// [...]
}

Alpha channel fade animation

I'm pretty new to shaders and I've been attempting to create a shader that does an alpha reveal of a texture. I've gotten close, but I'm pretty sure there is a much better way.
This is what I have so far: https://codepen.io/tkmoney/pen/REYrpV
varying vec2 vUv;
precision highp float;
precision highp int;
uniform sampler2D texture;
uniform float mask_position;
uniform float fade_size;
void main(void) {
float mask_starting_point = (0.0 - fade_size);
float mask_ending_point = (1.0 - fade_size);
vec4 orig_color = texture2D(texture, vUv);
vec4 color = texture2D(texture, vUv);
float mask_p = smoothstep(mask_starting_point, mask_ending_point, mask_position);
//color.a *= (distance(vUv.x, split_center_point));
vec2 p = vUv;
if (p.x > (mask_p)){
color.a = 0.0;
}else{
color.a *= (smoothstep(mask_position, (mask_position - fade_size), p.x ));
}
gl_FragColor = color;
}
The fade-in does not completely reveal the entire image, as you can see. Any insight on a better way to tackle this would be great. Thanks!
What you want to do is make the area to the left of mask_position visible, but hide the area to the right. This can be achieved with step:
color.a *= 1.0 - step(mask_position, vUv.x);
If you want a smooth transition from the visible to the invisible area, then you have to use smoothstep. The start of the fading is a certain amount before mask_position and the end a certain amount after mask_position:
float start_p = mask_position-fade_size;
float end_p = mask_position+fade_size;
color.a *= 1.0 - smoothstep(start_p, end_p, vUv.x);
This will cause the start and the end of the image to never fade in completely. To compensate for this, vUv.x has to be mapped from the range [0.0, 1.0] to the range [fade_size, 1.0 - fade_size]. This can easily be done with mix:
float p = mix(fade_size, 1.0 - fade_size, vUv.x);
color.a *= 1.0 - smoothstep(start_p, end_p, p);
If the alpha channel of the final color is below a tiny threshold, the fragment can be discarded:
if ( color.a < 0.01 )
discard;
Final shader:
varying vec2 vUv;
precision highp float;
precision highp int;
uniform sampler2D texture;
uniform float mask_position;
uniform float fade_size;
void main(void) {
vec4 color = texture2D(texture, vUv);
float start_p = mask_position-fade_size;
float end_p = mask_position+fade_size;
float p = mix(fade_size, 1.0-fade_size, vUv.x);
color.a *= 1.0 - smoothstep(start_p, end_p, p);
if ( color.a < 0.01 )
discard;
gl_FragColor = color;
}
See the example:
var container;
var camera, scene, renderer;
var uniforms;
init();
animate();
function init() {
container = document.getElementById( 'container' );
camera = new THREE.Camera();
camera.position.z = 1;
scene = new THREE.Scene();
var geometry = new THREE.PlaneBufferGeometry( 2, 2 );
var texture = new THREE.TextureLoader().load( 'https://raw.githubusercontent.com/Rabbid76/graphics-snippets/master/resource/texture/background.jpg' );
uniforms = {
u_time: { type: "f", value: 1.0 },
u_resolution: { type: "v2", value: new THREE.Vector2() },
u_mouse: { type: "v2", value: new THREE.Vector2() },
texture: {type: 't', value: texture},
fade_size: { type: 'f', value: 0.2 },
mask_position: { type: 'f', value: 0 }
};
var material = new THREE.ShaderMaterial( {
uniforms: uniforms,
vertexShader: document.getElementById( 'vertexShader' ).textContent,
fragmentShader: document.getElementById( 'fragmentShader' ).textContent
} );
var mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
renderer = new THREE.WebGLRenderer( {alpha : true} );
renderer.setClearColor(0xffffff, 0.0);
renderer.setPixelRatio( window.devicePixelRatio );
container.appendChild( renderer.domElement );
onWindowResize();
window.addEventListener( 'resize', onWindowResize, false );
document.onmousemove = function(e){
uniforms.u_mouse.value.x = e.pageX
uniforms.u_mouse.value.y = e.pageY
}
}
function onWindowResize( event ) {
renderer.setSize( window.innerWidth, window.innerHeight );
uniforms.u_resolution.value.x = renderer.domElement.width;
uniforms.u_resolution.value.y = renderer.domElement.height;
}
function animate() {
requestAnimationFrame( animate );
render();
}
var mask_step = 0.01;
var mask_val = 0.0;
function render() {
if ( mask_val >= 1.0) { mask_val = 1.0; mask_step = -0.01; }
else if ( mask_val <= -0.0) { mask_val = 0.0; mask_step = 0.01; }
mask_val += mask_step;
uniforms.mask_position.value = mask_val;
uniforms.u_time.value += 0.05;
renderer.render( scene, camera );
}
<script id="vertexShader" type="x-shader/x-vertex">
void main() {
gl_Position = vec4( position, 1.0 );
}
</script>
<script id="fragmentShader" type="x-shader/x-fragment">
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D texture;
uniform float fade_size;
uniform float mask_position;
void main() {
vec2 vUv = gl_FragCoord.xy/u_resolution.xy;
vec4 color = texture2D(texture, vUv);
float start_p = mask_position-fade_size;
float end_p = mask_position+fade_size;
float p = mix(fade_size, 1.0-fade_size, vUv.x);
color.rgba *= 1.0 - smoothstep(start_p, end_p, p);
if ( color.a < 0.01 )
discard;
gl_FragColor = color;
}
</script>
<div id="container"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/100/three.min.js"></script>

Billboarding using Qt3D 2.0

I am looking for the best way to create a billboard in Qt3D. I would like a plane which faces the camera wherever it is and does not change size when the camera dollies forward or back. I have read how to do this using GLSL vertex and geometry shaders, but I am looking for the Qt3D way, unless custom shaders are the most efficient and best way of billboarding.
I have looked, and it appears I can set the matrix on a QTransform via properties, but it isn't clear to me how I would manipulate the matrix, or perhaps there is a better way? I am using the C++ API, but a QML answer would do; I could port it to C++.
If you want to draw just one billboard, you can add a plane and rotate it whenever the camera moves. However, if you want to do this efficiently with thousands or millions of billboards, I recommend using custom shaders. We did this to draw impostor spheres in Qt3D.
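For reference, here is a minimal vertex-shader sketch of the core billboarding idea (all uniform and attribute names here are assumptions, not from our code); the instanced impostor approach below generalizes it:
#version 330
in vec3 vertexPosition;            // quad corner offsets, e.g. (+/-0.5, +/-0.5, 0.0)
uniform mat4 viewMatrix;           // world -> camera
uniform mat4 viewProjectionMatrix; // projection * view
uniform vec3 billboardCenter;      // world-space billboard position (assumed uniform)
uniform float billboardSize;
void main() {
// The camera's right and up axes in world space are the first two rows
// of the view matrix (GLSL matrices are indexed [column][row]).
vec3 right = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
vec3 up    = vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);
vec3 worldPos = billboardCenter
              + right * vertexPosition.x * billboardSize
              + up    * vertexPosition.y * billboardSize;
gl_Position = viewProjectionMatrix * vec4(worldPos, 1.0);
}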
However, we didn't use a geometry shader because we were targeting systems that didn't support geometry shaders. Instead, we used only the vertex shader by placing four vertices in the origin and moved these on the shader. To create many copies, we used instanced drawing. We moved each set of four vertices according to the positions of the spheres. Finally, we moved each of the four vertices of each sphere such that they result in a billboard that is always facing the camera.
We started out by subclassing QGeometry and creating a buffer functor that creates four points, all at the origin (see spherespointgeometry.cpp). Give each point an ID that we can use later. If you use geometry shaders, the ID is not needed and you can get away with creating only one vertex.
class SpheresPointVertexDataFunctor : public Qt3DRender::QBufferDataGenerator
{
public:
SpheresPointVertexDataFunctor()
{
}
QByteArray operator ()() Q_DECL_OVERRIDE
{
const int verticesCount = 4;
// vec3 pos
const quint32 vertexSize = (3+1) * sizeof(float);
QByteArray verticesData;
verticesData.resize(vertexSize*verticesCount);
float *verticesPtr = reinterpret_cast<float*>(verticesData.data());
// Vertex 1
*verticesPtr++ = 0.0;
*verticesPtr++ = 0.0;
*verticesPtr++ = 0.0;
// VertexID 1
*verticesPtr++ = 0.0;
// Vertex 2
*verticesPtr++ = 0.0;
*verticesPtr++ = 0.0;
*verticesPtr++ = 0.0;
// VertexID 2
*verticesPtr++ = 1.0;
// Vertex 3
*verticesPtr++ = 0.0;
*verticesPtr++ = 0.0;
*verticesPtr++ = 0.0;
// VertexID3
*verticesPtr++ = 2.0;
// Vertex 4
*verticesPtr++ = 0.0;
*verticesPtr++ = 0.0;
*verticesPtr++ = 0.0;
// VertexID 4
*verticesPtr++ = 3.0;
return verticesData;
}
bool operator ==(const QBufferDataGenerator &other) const Q_DECL_OVERRIDE
{
Q_UNUSED(other);
return true;
}
QT3D_FUNCTOR(SpheresPointVertexDataFunctor)
};
For the real positions, we used a separate QBuffer. We also set color and scale, but I have omitted those here (see spheredata.cpp):
void SphereData::setPositions(QVector<QVector3D> positions, QVector3D color, float scale)
{
QByteArray ba;
ba.resize(positions.size() * sizeof(QVector3D));
QVector3D *vboData = reinterpret_cast<QVector3D *>(ba.data());
for(int i=0; i<positions.size(); i++) {
QVector3D &position = vboData[i];
position = positions[i];
}
m_buffer->setData(ba);
m_count = positions.count();
}
Then, in QML, we connected the geometry with the buffer in a QGeometryRenderer. This can also be done in C++, if you prefer (see Spheres.qml):
GeometryRenderer {
id: spheresMeshInstanced
primitiveType: GeometryRenderer.TriangleStrip
enabled: instanceCount != 0
instanceCount: sphereData.count
geometry: SpheresPointGeometry {
attributes: [
Attribute {
name: "pos"
attributeType: Attribute.VertexAttribute
vertexBaseType: Attribute.Float
vertexSize: 3
byteOffset: 0
byteStride: (3 + 3 + 1) * 4
divisor: 1
buffer: sphereData ? sphereData.buffer : null
}
]
}
}
Finally, we created custom shaders to draw the billboards. Note that because we were drawing impostor spheres, the billboard size was increased to handle raytracing in the fragment shader from awkward angles. You likely do not need the 2.0*0.6 factor in general.
Vertex shader:
#version 330
in vec3 vertexPosition;
in float vertexId;
in vec3 pos;
in vec3 col;
in float scale;
uniform vec3 eyePosition = vec3(0.0, 0.0, 0.0);
uniform mat4 modelMatrix;
uniform mat4 mvp;
out vec3 modelSpherePosition;
out vec3 modelPosition;
out vec3 color;
out vec2 planePosition;
out float radius;
vec3 makePerpendicular(vec3 v) {
if(v.x == 0.0 && v.y == 0.0) {
if(v.z == 0.0) {
return vec3(0.0, 0.0, 0.0);
}
return vec3(0.0, 1.0, 0.0);
}
return vec3(-v.y, v.x, 0.0);
}
void main() {
vec3 position = vertexPosition + pos;
color = col;
radius = scale;
modelSpherePosition = (modelMatrix * vec4(position, 1.0)).xyz;
vec3 view = normalize(position - eyePosition);
vec3 right = normalize(makePerpendicular(view));
vec3 up = cross(right, view);
float texCoordX = 1.0 - 2.0*(float(vertexId==0.0) + float(vertexId==2.0));
float texCoordY = 1.0 - 2.0*(float(vertexId==0.0) + float(vertexId==1.0));
planePosition = vec2(texCoordX, texCoordY);
position += 2*0.6*(-up - right)*(scale*float(vertexId==0.0));
position += 2*0.6*(-up + right)*(scale*float(vertexId==1.0));
position += 2*0.6*(up - right)*(scale*float(vertexId==2.0));
position += 2*0.6*(up + right)*(scale*float(vertexId==3.0));
vec4 modelPositionTmp = modelMatrix * vec4(position, 1.0);
modelPosition = modelPositionTmp.xyz;
gl_Position = mvp*vec4(position, 1.0);
}
Fragment shader:
#version 330
in vec3 modelPosition;
in vec3 modelSpherePosition;
in vec3 color;
in vec2 planePosition;
in float radius;
out vec4 fragColor;
uniform mat4 modelView;
uniform mat4 inverseModelView;
uniform mat4 inverseViewMatrix;
uniform vec3 eyePosition;
uniform vec3 viewVector;
void main(void) {
vec3 rayDirection = eyePosition - modelPosition;
vec3 rayOrigin = modelPosition - modelSpherePosition;
vec3 E = rayOrigin;
vec3 D = rayDirection;
// Sphere equation
// x^2 + y^2 + z^2 = r^2
// Ray equation is
// P(t) = E + t*D
// We substitute ray into sphere equation to get
// (Ex + Dx * t)^2 + (Ey + Dy * t)^2 + (Ez + Dz * t)^2 = r^2
float r2 = radius*radius;
float a = D.x*D.x + D.y*D.y + D.z*D.z;
float b = 2.0*E.x*D.x + 2.0*E.y*D.y + 2.0*E.z*D.z;
float c = E.x*E.x + E.y*E.y + E.z*E.z - r2;
// discriminant of sphere equation
float d = b*b - 4.0*a*c;
if(d < 0.0) {
discard;
}
float t = (-b + sqrt(d))/(2.0*a);
vec3 sphereIntersection = rayOrigin + t * rayDirection;
vec3 normal = normalize(sphereIntersection);
vec3 normalDotCamera = color*dot(normal, normalize(rayDirection));
float pi = 3.1415926535897932384626433832795;
vec3 position = modelSpherePosition + sphereIntersection;
// flat red
fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
It has been some time since we first implemented this, and there might be easier ways to do it now, but this should give you an idea of the pieces you need.

Oren-Nayar lighting in OpenGL (how to calculate view direction in fragment shader)

I'm trying to implement Oren-Nayar lighting in the fragment shader as shown here.
However, I'm getting some strange lighting effects on the terrain as shown below.
I am currently sending the shader the 'view direction' uniform as the camera's 'front' vector. I am not sure if this is correct, as moving the camera around changes the artifacts.
Multiplying the 'front' vector by the MVP matrix gives a better result, but the artifacts are still very noticeable when viewing the terrain from some angles. It is particularly noticeable in dark areas and around the edges of the screen.
What could be causing this effect?
Artifact example
How the scene should look
Vertex Shader
#version 450
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
out VS_OUT {
vec3 normal;
} vert_out;
void main() {
vert_out.normal = normal;
gl_Position = vec4(position, 1.0);
}
Tessellation Control Shader
#version 450
layout(vertices = 3) out;
in VS_OUT {
vec3 normal;
} tesc_in[];
out TESC_OUT {
vec3 normal;
} tesc_out[];
void main() {
if(gl_InvocationID == 0) {
gl_TessLevelInner[0] = 1.0;
gl_TessLevelInner[1] = 1.0;
gl_TessLevelOuter[0] = 1.0;
gl_TessLevelOuter[1] = 1.0;
gl_TessLevelOuter[2] = 1.0;
gl_TessLevelOuter[3] = 1.0;
}
tesc_out[gl_InvocationID].normal = tesc_in[gl_InvocationID].normal;
gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}
Tessellation Evaluation Shader
#version 450
layout(triangles, equal_spacing) in;
in TESC_OUT {
vec3 normal;
} tesc_in[];
out TESE_OUT {
vec3 normal;
float height;
vec4 shadow_position;
} tesc_out;
uniform mat4 model_view;
uniform mat4 model_view_perspective;
uniform mat3 normal_matrix;
uniform mat4 depth_matrix;
vec3 lerp(vec3 v0, vec3 v1, vec3 v2) {
return (
(vec3(gl_TessCoord.x) * v0) +
(vec3(gl_TessCoord.y) * v1) +
(vec3(gl_TessCoord.z) * v2)
);
}
vec4 lerp(vec4 v0, vec4 v1, vec4 v2) {
return (
(vec4(gl_TessCoord.x) * v0) +
(vec4(gl_TessCoord.y) * v1) +
(vec4(gl_TessCoord.z) * v2)
);
}
void main() {
gl_Position = lerp(
gl_in[0].gl_Position,
gl_in[1].gl_Position,
gl_in[2].gl_Position
);
tesc_out.normal = normal_matrix * lerp(
tesc_in[0].normal,
tesc_in[1].normal,
tesc_in[2].normal
);
tesc_out.height = gl_Position.y;
tesc_out.shadow_position = depth_matrix * gl_Position;
gl_Position = model_view_perspective * gl_Position;
}
Fragment Shader
#version 450
in TESE_OUT {
vec3 normal;
float height;
vec4 shadow_position;
} frag_in;
out vec4 colour;
uniform vec3 view_direction;
uniform vec3 light_position;
#define PI 3.141592653589793
void main() {
const vec3 ambient = vec3(0.1, 0.1, 0.1);
const float roughness = 0.8;
const vec4 water = vec4(0.0, 0.0, 0.8, 1.0);
const vec4 sand = vec4(0.93, 0.87, 0.51, 1.0);
const vec4 grass = vec4(0.0, 0.8, 0.0, 1.0);
const vec4 ground = vec4(0.49, 0.27, 0.08, 1.0);
const vec4 snow = vec4(0.9, 0.9, 0.9, 1.0);
if(frag_in.height == 0.0) {
colour = water;
} else if(frag_in.height < 0.2) {
colour = sand;
} else if(frag_in.height < 0.575) {
colour = grass;
} else if(frag_in.height < 0.8) {
colour = ground;
} else {
colour = snow;
}
vec3 normal = normalize(frag_in.normal);
vec3 view_dir = normalize(view_direction);
vec3 light_dir = normalize(light_position);
float NdotL = dot(normal, light_dir);
float NdotV = dot(normal, view_dir);
float angleVN = acos(NdotV);
float angleLN = acos(NdotL);
float alpha = max(angleVN, angleLN);
float beta = min(angleVN, angleLN);
float gamma = dot(view_dir - normal * dot(view_dir, normal), light_dir - normal * dot(light_dir, normal));
float roughnessSquared = roughness * roughness;
float roughnessSquared9 = (roughnessSquared / (roughnessSquared + 0.09));
// calculate C1, C2 and C3
float C1 = 1.0 - 0.5 * (roughnessSquared / (roughnessSquared + 0.33));
float C2 = 0.45 * roughnessSquared9;
if(gamma >= 0.0) {
C2 *= sin(alpha);
} else {
C2 *= (sin(alpha) - pow((2.0 * beta) / PI, 3.0));
}
float powValue = (4.0 * alpha * beta) / (PI * PI);
float C3 = 0.125 * roughnessSquared9 * powValue * powValue;
// now calculate both main parts of the formula
float A = gamma * C2 * tan(beta);
float B = (1.0 - abs(gamma)) * C3 * tan((alpha + beta) / 2.0);
// put it all together
float L1 = max(0.0, NdotL) * (C1 + A + B);
// also calculate interreflection
float twoBetaPi = 2.0 * beta / PI;
float L2 = 0.17 * max(0.0, NdotL) * (roughnessSquared / (roughnessSquared + 0.13)) * (1.0 - gamma * twoBetaPi * twoBetaPi);
colour = vec4(colour.xyz * (L1 + L2), 1.0);
}
First, I plugged your fragment shader into my renderer with my own view/normal/light vectors, and it works perfectly. So the problem has to be in the way you calculate those vectors.
Next, you say that you set view_dir to your camera's front vector. I assume you meant "the camera's front vector in world space", which would be incorrect. Since you calculate the dot products with vectors in camera space, view_dir must be in camera space too. That is, vec3(0,0,1) would be an easy way to check that. If it works, we have found your problem.
However, using (0,0,1) for the view direction is not strictly correct when you do perspective projection, because the direction from the fragment to the camera then depends on the location of the fragment on the screen. The correct formula would be view_dir = normalize(-pos), where pos is the fragment's position in camera space (that is, with the model-view matrix applied but without the projection). Further, this quantity now depends only on the fragment's location on the screen, so you can calculate it as:
view_dir = normalize(vec3(-(gl_FragCoord.xy - frame_size/2) / (frame_width/2), flen))
flen is the focal length of your camera, which you can calculate as flen = cot(fovx/2).
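In practice, a minimal sketch of the normalize(-pos) variant using the question's own model_view uniform (the view_pos varying name is an assumption):
// Tessellation evaluation shader: capture the camera-space position
// before the projection is applied (i.e. before model_view_perspective).
out vec3 view_pos;
...
view_pos = (model_view * gl_Position).xyz;

// Fragment shader:
in vec3 view_pos;
...
vec3 view_dir = normalize(-view_pos);  // direction from fragment to camera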
I know this is a long dead thread, but I've been having the same problem (for several years), and finally found the solution...
It can be partially solved by fixing the orientation of the surface normals to match the polygon winding direction, but you can also get rid of the artifacts in the shader by changing the following two lines...
float angleVN = acos(NdotV);
float angleLN = acos(NdotL);
to this...
float angleVN = acos(clamp(NdotV, -1.0, 1.0));
float angleLN = acos(clamp(NdotL, -1.0, 1.0));
(Floating-point error can push the dot products slightly outside [-1.0, 1.0], where acos is undefined and produces NaNs.)
Tada!

Shader - Simple SSS lighting issue

I am trying to create a simple subsurface scattering effect using a shader but I am facing a small issue.
Look at those screenshots. The three images represents three lighting states (above surface, really close to surface, subsurface) with various lighting colors (red and blue) and always the same subsurface color (red).
As you might notice, when the light is above the surface and really close to it, its influence appears to diminish, which is the expected behavior. The problem is that it behaves the same way for the subsurface part. This is normal according to my shader code, but in my opinion the subsurface light influence should be higher when the light gets close to the surface. I suggest you look at the screenshots for the expected result.
How can I do that ?
Here is the simplified shader code.
half ndotl = max(0.0f, dot(normalWorld, lightDir));
half inversendotl = max(0.0f, dot(normalWorld, -lightDir));
half3 lightColor = _LightColor0.rgb * ndotl; // This is the normal light color calculation
half3 subsurfacecolor = translucencyColor.rgb * inversendotl; // This is the subsurface color
half3 topsubsurfacecolor = translucencyColor.rgb; // This is used for adding subsurface color to top surface
final = subsurfacecolor + lerp(lightColor, topsubsurfacecolor * 0.5, 1 - ndotl - inversendotl);
The way you have implemented the subsurface scattering effect is very rough, and it is hard to achieve a nice result with such a simple approach.
Staying within the selected approach, I would recommend the following:
Take the distance to the light source into account, according to the inverse-square law. This applies to both components, direct light and subsurface.
Once the light is behind the surface, it is better to ignore the dot product of the inner normal and the direction to the light, because you never know how the light will travel through the object. Another reason is that the law of refraction (assuming the refraction coefficient of the object is higher than that of air) makes this dot product less influential. You may just use a step function to turn on the subsurface component once the light source is behind the surface.
So, the modified version of your shader would be as follows:
half3 toLightVector = u_lightPos - v_fragmentPos;
half lightDistanceSQ = dot(toLightVector, toLightVector);
half3 lightDir = normalize(toLightVector);
half ndotl = max(0.0, dot(v_normal, lightDir));
half inversendotl = step(0.0, dot(v_normal, -lightDir));
half3 lightColor = _LightColor0.rgb * ndotl / lightDistanceSQ * _LightIntensity0;
half3 subsurfacecolor = translucencyColor.rgb * inversendotl / lightDistanceSQ * _LightIntensity0;
half3 final = subsurfacecolor + lightColor;
Here u_lightPos is a uniform that contains the position of the light source, and v_fragmentPos is a varying that contains the position of the fragment.
Here is an example in GLSL using three.js:
var container;
var camera, scene, renderer;
var sssMesh;
var lightSourceMesh;
var sssUniforms;
var clock = new THREE.Clock();
init();
animate();
function init() {
container = document.getElementById('container');
camera = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 1, 3000);
camera.position.z = 4;
camera.position.y = 2;
camera.rotation.x = -0.45;
scene = new THREE.Scene();
var boxGeometry = new THREE.CubeGeometry(0.75, 0.75, 0.75);
var lightSourceGeometry = new THREE.CubeGeometry(0.1, 0.1, 0.1);
sssUniforms = {
u_lightPos: {
type: "v3",
value: new THREE.Vector3()
}
};
var sssMaterial = new THREE.ShaderMaterial({
uniforms: sssUniforms,
vertexShader: document.getElementById('vertexShader').textContent,
fragmentShader: document.getElementById('fragment_shader').textContent
});
var lightSourceMaterial = new THREE.MeshBasicMaterial();
sssMesh = new THREE.Mesh(boxGeometry, sssMaterial);
sssMesh.position.x = 0;
sssMesh.position.y = 0;
scene.add(sssMesh);
lightSourceMesh = new THREE.Mesh(lightSourceGeometry, lightSourceMaterial);
lightSourceMesh.position.x = 0;
lightSourceMesh.position.y = 0;
scene.add(lightSourceMesh);
renderer = new THREE.WebGLRenderer();
container.appendChild(renderer.domElement);
onWindowResize();
window.addEventListener('resize', onWindowResize, false);
}
function onWindowResize(event) {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
function animate() {
requestAnimationFrame(animate);
render();
}
function render() {
var delta = clock.getDelta();
var lightHeight = Math.sin(clock.elapsedTime * 1.0) * 0.5 + 0.7;
lightSourceMesh.position.y = lightHeight;
sssUniforms.u_lightPos.value.y = lightHeight;
sssMesh.rotation.y += delta * 0.5;
renderer.render(scene, camera);
}
body {
color: #ffffff;
background-color: #050505;
margin: 0px;
overflow: hidden;
}
<script src="http://threejs.org/build/three.min.js"></script>
<div id="container"></div>
<script id="fragment_shader" type="x-shader/x-fragment">
varying vec3 v_fragmentPos;
varying vec3 v_normal;
uniform vec3 u_lightPos;
void main(void)
{
vec3 _LightColor0 = vec3(1.0,0.5,0.5);
float _LightIntensity0 = 0.2;
vec3 translucencyColor = vec3(0.8,0.2,0.2);
vec3 toLightVector = u_lightPos - v_fragmentPos;
float lightDistanceSQ = dot(toLightVector, toLightVector);
vec3 lightDir = normalize(toLightVector);
float ndotl = max(0.0, dot(v_normal, lightDir));
float inversendotl = step(0.0, dot(v_normal, -lightDir));
vec3 lightColor = _LightColor0.rgb * ndotl / lightDistanceSQ * _LightIntensity0;
vec3 subsurfacecolor = translucencyColor.rgb * inversendotl / lightDistanceSQ * _LightIntensity0;
vec3 final = subsurfacecolor + lightColor;
gl_FragColor=vec4(final,1.0);
}
</script>
<script id="vertexShader" type="x-shader/x-vertex">
varying vec3 v_fragmentPos;
varying vec3 v_normal;
void main()
{
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
v_fragmentPos = (modelMatrix * vec4( position, 1.0 )).xyz;
v_normal = (modelMatrix * vec4( normal, 0.0 )).xyz;
gl_Position = projectionMatrix * mvPosition;
}
</script>
There is a large number of different techniques for simulating SSS.
Texture-space diffusion and shadow-map-based translucency are the most frequently used techniques.
Check this article from GPU Gems; it describes the mentioned techniques.
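As a rough sketch of the shadow-map-based variant (every name below is an assumption): render depth from the light's point of view, estimate the distance the light travels inside the object, and attenuate the transmitted light exponentially with that thickness:
uniform sampler2D u_lightDepthMap; // depth rendered from the light's view (assumed)
uniform float u_extinction;        // assumed extinction coefficient of the material
varying vec4 v_lightSpacePos;      // fragment position in the light's clip space
...
vec3 proj = v_lightSpacePos.xyz / v_lightSpacePos.w;
proj = proj * 0.5 + 0.5;                                  // map to [0,1] range
float entryDepth = texture2D(u_lightDepthMap, proj.xy).r; // where light enters
float thickness = max(proj.z - entryDepth, 0.0);          // distance inside object
vec3 transmitted = translucencyColor.rgb * exp(-thickness * u_extinction);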
Also, you may find this presentation from EA interesting. It mentions an approach that is very close to yours for rendering plants.
Spherical harmonics also work well for static geometry, but this approach is very complicated and needs precomputed irradiance transfer. Check this article, which shows the use of spherical harmonics to approximate SSS.