ALBEDO is vec3 and COLOR is vec4. I need to pass COLOR to ALBEDO in Godot. This shader works with shader_type canvas_item but is not working as a spatial material.
shader_type spatial;
uniform float amp = 0.1;
uniform vec4 tint_color = vec4(0.0, 0.5,0.99, 1);
uniform sampler2D iChannel0;
void fragment ()
{
vec2 uv = FRAGCOORD.xy / (1.0/VIEWPORT_SIZE).xy;// (1.0/SCREEN_PIXEL_SIZE) for shader_type canvas_item
vec2 p = uv +
(vec2(.5)-texture(iChannel0, uv*0.3+vec2(TIME*0.05, TIME*0.025)).xy)*amp +
(vec2(.5)-texture(iChannel0, uv*0.3-vec2(-TIME*0.005, TIME*0.0125)).xy)*amp;
vec4 a = texture(iChannel0, p)*tint_color;
ALBEDO = a.xyz; // the w channel is not important; it works without it with shader_type canvas_item, but when I use this in a 3D spatial shader the effect does not show up. What's the problem?
}
For ALPHA and ALBEDO: ALBEDO = a.xyz; is correct. For a.w, usually you would do ALPHA = a.w;. However, in this case it appears that a.w is always 1, so there is no point.
I'll pick on the rest of the code. Keep in mind that I do not know how it should look, nor do I have any idea what the texture for the sampler2D is (I'm guessing a seamless noise texture).
Check your render mode. Being from ShaderToy, there is a chance you want render_mode unshaded;, which will make lights not affect the material. See Render Modes.
For ease of use, you can use hints. In particular, write the tint color like this:
uniform vec4 tint_color: hint_color = vec4(0.0, 0.5,0.99, 1);
That way, Godot gives you a color picker in the shader parameters. See Uniforms.
You could also use hint_range(0, 1) for amp. However, I'm not sure about that.
Double check your coordinates. I suspect this FRAGCOORD.xy / (1.0/VIEWPORT_SIZE).xy should be SCREEN_UV (or UV, if it should stay with the object that has the material).
Was the original like this?
vec2 i_resolution = 1.0/SCREEN_PIXEL_SIZE;
vec2 uv = FRAGCOORD.xy/i_resolution;
As I said in the prior answer, 1.0 / SCREEN_PIXEL_SIZE is VIEWPORT_SIZE. Replace it. We have:
vec2 i_resolution = VIEWPORT_SIZE;
vec2 uv = FRAGCOORD.xy/i_resolution;
Inline:
vec2 uv = FRAGCOORD.xy/VIEWPORT_SIZE;
As I said in the prior answer, FRAGCOORD.xy/VIEWPORT_SIZE is SCREEN_UV (or UV if you don't want the material to depend on the position on screen). Replace it. We have:
vec2 uv = SCREEN_UV;
Even if that is not what you want, it is good for testing.
Try moving the camera. Is that what you want? No? Try vec2 uv = UV; instead. In fact, a variable is hard to justify at that point.
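Putting these suggestions together, the whole spatial shader might end up looking something like this (just a sketch, untested, still assuming iChannel0 is a seamless noise texture; whether you want UV or SCREEN_UV depends on the effect you are after):
shader_type spatial;
render_mode unshaded; // if lights should not affect the material

uniform float amp: hint_range(0, 1) = 0.1;
uniform vec4 tint_color: hint_color = vec4(0.0, 0.5, 0.99, 1);
uniform sampler2D iChannel0;

void fragment()
{
    vec2 uv = UV; // or SCREEN_UV, if the effect should depend on the position on screen
    vec2 p = uv +
        (vec2(0.5) - texture(iChannel0, uv * 0.3 + vec2(TIME * 0.05, TIME * 0.025)).xy) * amp +
        (vec2(0.5) - texture(iChannel0, uv * 0.3 - vec2(-TIME * 0.005, TIME * 0.0125)).xy) * amp;
    vec4 a = texture(iChannel0, p) * tint_color;
    ALBEDO = a.xyz;
}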
I am trying to implement a Streak shader, which is described here:
http://www.chrisoat.com/papers/Oat-SteerableStreakFilter.pdf
Short explanation: it samples points with a 1D kernel in a given direction. The kernel step grows exponentially with each pass. Color values are weighted based on the distance to the sampled point and summed. The result is a smooth tail/smear/light-streak effect in that direction. Here is the frag shader:
precision highp float;
uniform sampler2D u_texture;
varying vec2 v_texCoord;
uniform float u_Pass;
const float kernelSize = 4.0;
const float atten = 0.95;
vec4 streak(in float pass, in vec2 texCoord, in vec2 dir, in vec2 pixelStep) {
float kernelStep = pow(kernelSize, pass - 1.0);
vec4 color = vec4(0.0);
for(int i = 0; i < 4; i++) {
float sampleNum = float(i);
float weight = pow(atten, kernelStep * sampleNum);
vec2 sampleTexCoord = texCoord + ((sampleNum * kernelStep) * (dir * pixelStep));
vec4 texColor = texture2D(u_texture, sampleTexCoord) * weight;
color += texColor;
}
return color;
}
void main() {
vec2 iResolution = vec2(512.0, 512.0);
vec2 pixelStep = vec2(1.0, 1.0) / iResolution.xy;
vec2 dir = vec2(1.0, 0.0);
float pass = u_Pass;
vec4 streakColor = streak(pass, v_texCoord, dir, pixelStep);
gl_FragColor = vec4(streakColor.rgb, 1.0);
}
It was going to be used for a starfield type of effect. And here is the implementation on ShaderToy which works fine:
https://www.shadertoy.com/view/ll2BRG
(Note: Disregard the first shader in Buffer A; it just filters out the dim colors in the input texture to emulate a star field, since as far as I know ShaderToy doesn't allow uploading custom textures.)
But when I use the same shader in my own code and render using ping-pong FrameBuffers, it looks different. Here is my own implementation ported over to WebGL:
https://jsfiddle.net/1b68eLdr/87755/
I basically create two 512x512 buffers, ping-pong the shader 4 times, increasing the kernel size at each iteration according to the algorithm, and render the final iteration to the screen.
The problem is visible banding, and my streaks/tails seem to lose brightness a lot faster. (Note: the image is somewhat inaccurate; the lengths of the streaks are the same/correct, it's the color values that are wrong.)
I have been struggling with this for a while in desktop OpenGL / LWJGL, so I ported it over to WebGL/JavaScript and uploaded it to JSFiddle in the hope that someone can spot the problem. I suspect it's either the texture coordinates or the FrameBuffer configuration, since the shaders are exactly the same.
The reason it works on ShaderToy is that it uses a floating-point render target.
Simply use gl.FLOAT as the type of your framebuffer texture and the issue is fixed (I verified it with this modification on your JSFiddle).
So do this in your createBackingTexture():
// Just request the extension (MUST be done).
gl.getExtension('OES_texture_float');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this._width, this._height, 0, gl.RGBA, gl.FLOAT, null);
I'm working on shadows for a 2D overhead game. Right now, the shadows are just sprites with the color (0,0,0,0.1) drawn on a layer above the tiles.
The problem: When many entities or trees get clumped together, the shadows overlap, forming unnatural-looking dark areas.
I've tried drawing the shadows to a framebuffer and using a simple shader to prevent overlapping, but that led to other problems, including layering issues.
Is it possible to enable a certain blend function for the shadows that prevents "stacking", or is there a better way to do this with a shader?
If you don't want to deal with sorting issues, I think you could do this with a shader. But every object will have to be either affected by shadow or not. So tall trees could be marked as not shadow receiving, while the ground, grass, and characters would be shadow receiving.
First, make a frame buffer with a white clear color. Draw all your shadows onto it as pure black.
Then make a shadow mapping shader to draw everything in your world. This relies on you not needing all four channels of the sprite's color, because we need one of those channels to mark each sprite as shadow receiving or not. For example, if you aren't using RGB to tint your sprites, we could use the R channel. Or if you aren't fading them in and out, we could use A. I'll assume the latter here:
Vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
varying vec2 v_texCoords;
varying vec2 v_texCoordsShadowmap;
varying vec4 v_color;
uniform mat4 u_projTrans;
void main()
{
v_texCoords = a_texCoord0;
v_color = a_color;
v_color.a = v_color.a * (255.0/254.0); //this is a correction due to color float precision (see SpriteBatch's default shader)
vec4 screenPosition = u_projTrans * a_position;
v_texCoordsShadowmap = (screenPosition.xy * 0.5) + 0.5;
gl_Position = screenPosition;
}
Fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoords;
varying vec2 v_texCoordsShadowmap;
varying vec4 v_color;
uniform sampler2D u_texture;
uniform sampler2D u_textureShadowmap;
void main()
{
vec4 textureColor = texture2D(u_texture, v_texCoords);
float shadowColor = texture2D(u_textureShadowmap, v_texCoordsShadowmap).r;
shadowColor = mix(shadowColor, 1.0, v_color.a);
textureColor.rgb *= shadowColor * v_color.rgb;
gl_FragColor = textureColor;
}
These are completely untested and probably have bugs. Make sure you assign the frame buffer's color texture to "u_textureShadowmap". And for all your sprites, set their color's alpha based on how much shadow you want them to have cast on them, which will generally always be 0 or 0.1 (based on the brightness you were using before).
Draw your shadows to an FBO with blending disabled.
Draw the background, e.g. grass.
Draw the shadow texture from the FBO.
Draw all other sprites.
Let's say we are texturing a quad (two triangles). I think this question is similar to texture splatting, like in the next example:
precision lowp float;
uniform sampler2D Terrain;
uniform sampler2D Grass;
uniform sampler2D Stone;
uniform sampler2D Rock;
varying vec2 tex_coord;
void main(void)
{
vec4 terrain = texture2D(Terrain, tex_coord);
vec4 tex0 = texture2D(Grass, tex_coord * 4.0); // Tile
vec4 tex1 = texture2D(Rock, tex_coord * 4.0); // Tile
vec4 tex2 = texture2D(Stone, tex_coord * 4.0); // Tile
tex0 *= terrain.r; // Red channel - puts grass
tex1 = mix( tex0, tex1, terrain.g ); // Green channel - puts rock and mix with grass
vec4 outColor = mix( tex1, tex2, terrain.b ); // Blue channel - puts stone and mix with others
gl_FragColor = outColor; //final color
}
But I want to place just one decal on the base quad texture at a desired place.
The algorithm is just the same, but I think we don't need an extra texture with one filled layer to hold the decal's position (e.g. where the red layer != 0); somehow we must generate our own "terrain.r" (is this a float?) variable and mix the base texture and decal texture with it.
precision lowp float;
uniform sampler2D base;
uniform sampler2D decal;
uniform vec2 decal_location; // where we want to place the decal (e.g. 0.5, 0.5 is the center of the quad)
varying vec2 base_tex_coord;
varying vec2 decal_tex_coord;
void main(void)
{
vec4 v_base = texture2D(base, base_tex_coord);
vec4 v_decal = texture2D(decal, decal_tex_coord);
float decal_layer = /*somehow get our decal_layer based on decal_location*/
gl_FragColor = mix(v_base, v_decal, decal_layer);
}
How do I achieve such a thing?
Or should I just generate the splat texture on the OpenGL side and pass it to the first shader? That would give me up to 4 different decals on the quad, but would be slow for frequent updates (e.g. machine-gun hits on a wall).
float decal_layer = /*somehow get our decal_layer based on decal_location*/
Well, it's up to you how you interpret decal_location. I think a simple distance metric would suffice, but this also requires a size for the decal. Let's assume you provide this through an additional uniform decal_radius. Then we can use
decal_layer = 1.0 - clamp(length(base_tex_coord - decal_location) / decal_radius, 0.0, 1.0);
Yes, decal_layer is a float, as you've described; its range is 0 to 1. But you don't have quite enough info: you've specified decal_location but no size for the decal. You also don't know where this fragment falls within the quad; you'll need a varying vec2 quad_coord; or similar input from the vertex shader if you want to know where this fragment is relative to the quad being rendered.
But let's try a different approach. Edit the top of your 2nd example to include these uniforms:
uniform vec2 decal_location; // Location of decal relative to base_tex_coord
uniform float decal_size; // Size of decal relative to base_tex_coord
Now, in main(), you should be able to compute decal_layer with something like this:
float decal_layer = 1.0 - smoothstep(decal_size - 0.01, decal_size, max(abs(decal_location.x - base_tex_coord.x), abs(decal_location.y - base_tex_coord.y)));
Basically you're trying to get decal_layer to be 1.0 within the decal, and 0.0 outside the decal. I've added a 0.01 fuzzy edge at the boundary that you can play with. Good luck!
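For reference, here is a sketch of the question's second shader with that decal_layer computation dropped in (untested; how decal_tex_coord is generated in the vertex shader is left as in the question):
precision lowp float;
uniform sampler2D base;
uniform sampler2D decal;
uniform vec2 decal_location; // Location of decal relative to base_tex_coord
uniform float decal_size;    // Size of decal relative to base_tex_coord
varying vec2 base_tex_coord;
varying vec2 decal_tex_coord;
void main(void)
{
    vec4 v_base = texture2D(base, base_tex_coord);
    vec4 v_decal = texture2D(decal, decal_tex_coord);
    // 1.0 inside the decal square, 0.0 outside, with a 0.01-wide fuzzy edge.
    float decal_layer = 1.0 - smoothstep(decal_size - 0.01, decal_size,
        max(abs(decal_location.x - base_tex_coord.x), abs(decal_location.y - base_tex_coord.y)));
    gl_FragColor = mix(v_base, v_decal, decal_layer);
}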
I am trying to make a custom light shader and was trying a lot of different things over time.
Some of the solutions I found work better, others worse. For this question I'm using the solution which worked best so far.
My problem is that if I move the "camera" around, the light positions seem to move around, too. This solution has very slight but noticeable movement in it, and the light position seems to be above where it should be.
Default OpenGL lighting (without any shaders) works fine (steady light positions), but I need the shader for multitexturing, and I'm planning on using portions of it for lighting effects once it's working.
Vertex Source:
varying vec3 vlp, vn;
void main(void)
{
gl_Position = ftransform();
vn = normalize(gl_NormalMatrix * -gl_Normal);
vlp = normalize(vec3(gl_LightSource[0].position.xyz) - vec3(gl_ModelViewMatrix * -gl_Vertex));
gl_TexCoord[0] = gl_MultiTexCoord0;
}
Fragment Source:
uniform sampler2D baseTexture;
uniform sampler2D teamTexture;
uniform vec4 teamColor;
varying vec3 vlp, vn;
void main(void)
{
vec4 newColor = texture2D(teamTexture, vec2(gl_TexCoord[0]));
newColor = newColor * teamColor;
float teamBlend = newColor.a;
// mixing the textures and colorizing them. this works, I tested it w/o lighting!
vec4 outColor = mix(texture2D(baseTexture, vec2(gl_TexCoord[0])), newColor, teamBlend);
// apply lighting
outColor *= max(dot(vn, vlp), 0.0);
outColor.a = texture2D(baseTexture, vec2(gl_TexCoord[0])).a;
gl_FragColor = outColor;
}
What am I doing wrong?
I can't be certain any of these are the problem, but they could cause one.
First, you need to normalize your per-vertex vn and vlp in the fragment shader (by the way, try to use more descriptive variable names; viewLightPosition is a lot easier to understand than vlp). I know you normalized them in the vertex shader, but the interpolation across the triangle will denormalize them.
Second, this isn't particularly wrong so much as redundant: vec3(gl_LightSource[0].position.xyz). position.xyz is already a vec3, since the swizzle mask (.xyz) only has 3 components, so you don't need to cast it to a vec3 again.
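To illustrate the first point, here is the question's fragment shader with the per-fragment re-normalization added (a sketch, otherwise unchanged):
uniform sampler2D baseTexture;
uniform sampler2D teamTexture;
uniform vec4 teamColor;
varying vec3 vlp, vn;
void main(void)
{
    vec4 newColor = texture2D(teamTexture, vec2(gl_TexCoord[0])) * teamColor;
    vec4 outColor = mix(texture2D(baseTexture, vec2(gl_TexCoord[0])), newColor, newColor.a);
    // Interpolation denormalizes the varyings, so normalize them again per fragment.
    vec3 n = normalize(vn);
    vec3 l = normalize(vlp);
    outColor *= max(dot(n, l), 0.0);
    outColor.a = texture2D(baseTexture, vec2(gl_TexCoord[0])).a;
    gl_FragColor = outColor;
}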
Similar to a problem I had and posted about before, I'm trying to get normals to display correctly in my GLSL app.
For the purposes of my explanation, I'm using the ninjaHead.obj model provided with RenderMonkey for testing purposes (you can grab it here). Now in the preview window in RenderMonkey, everything looks great:
and the vertex and fragment code generated are, respectively:
Vertex:
uniform vec4 view_position;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
// World-space lighting
vNormal = gl_Normal;
vViewVec = view_position.xyz - gl_Vertex.xyz;
}
Fragment:
uniform vec4 color;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v* color;
}
I based my GLSL code on this but I'm not quite getting the expected results...
My vertex shader code:
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
gl_Position = pos;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_FrontColor = gl_Color;
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
// World-space lighting
vNormal = normal4*modelRotationMatrix;
vec4 tempCameraPos = vec4(cameraPosition.x,cameraPosition.y,cameraPosition.z,0);
//vViewVec = cameraPosition.xyz - pos.xyz;
vViewVec = tempCameraPos - pos;
}
My fragment shader code:
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
//gl_FragColor = gl_Color;
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v * gl_Color;
}
However my render produces this...
Does anyone know what might be causing this and/or how to make it work?
EDIT
In response to kvark's comments, here is the model rendered without any normal/lighting calculations to show all triangles being rendered.
And here is the model shaded with the normals used as colors. I believe the problem has been found! Now the question is why it is being rendered like this and how to solve it. Suggestions are welcome!
SOLUTION
Well, everyone, the problem has been solved! Thanks to kvark for all his helpful insight, which has definitely helped my programming practice, but I'm afraid the answer comes down to me being a MASSIVE tit... I had an error in the display() function of my code that set the glNormalPointer offset to a random value. It used to be this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, getNormalsBufferObject());
But should have been this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, 0);
So I guess this is a lesson: NEVER mindlessly Ctrl+C and Ctrl+V code to save time on a Friday afternoon. And when you're sure the part of the code you're looking at is right, the problem is probably somewhere else!
What is your P matrix? (I suppose it's a world->camera view transform).
vNormal = normal4*modelRotationMatrix; Why did you change the order of arguments? Doing that, you are multiplying the normal by the inverse rotation, which is not what you want. Use the standard order instead (modelRotationMatrix * normal4).
vViewVec = tempCameraPos - pos. This is entirely incorrect. pos is your vertex in homogeneous clip space, while tempCameraPos is in world space (I suppose). You need the result to be in the same space as your normal (world space), so use the world-space vertex position (modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex) in this equation.
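Putting the last two points together, the relevant vertex-shader lines might look something like this (just a sketch using the question's uniform and varying names, and assuming P is that world-to-camera view matrix):
vec4 worldPos = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
gl_Position = gl_ProjectionMatrix * P * worldPos;
// Rotate the normal with the standard matrix * vector order; w = 0 keeps it a direction.
vNormal = modelRotationMatrix * vec4(gl_Normal, 0.0);
// View vector in world space, the same space as the normal (its w component ends up 0).
vViewVec = vec4(cameraPosition, 1.0) - worldPos;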
You seem to be mixing GL versions a bit? You are passing the matrices manually via uniforms, but using the fixed-function built-ins for the vertex attributes. Hm. Anyway...
I sincerely don't like what you're doing to your normals. Have a look:
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
vNormal = normal4*modelRotationMatrix;
A normal only stores directional data; why use a vec4 for it? I believe it's more elegant to just use a vec3. Furthermore, look at what happens next: you multiply the normal by the 4x4 model rotation matrix, and additionally your normal's fourth coordinate is equal to 0, so it's not a correct vector in homogeneous coordinates. I'm not sure that's the main problem here, but I wouldn't be surprised if that multiplication gave you rubbish.
The standard way to transform normals is to multiply a vec3 by the 3x3 submatrix of the model-view matrix (since you're only interested in the orientation, not the translation). To be precise, the most correct approach is to use the inverse transpose of that 3x3 submatrix (this becomes important when you have scaling). In old OpenGL versions you had it precalculated as gl_NormalMatrix.
So instead of the above, you should use something like
// (...)
varying vec3 vNormal;
// (...)
mat3 normalMatrix = transpose(inverse(mat3(modelRotationMatrix)));
// or if you don't need scaling, this one should work too-
mat3 normalMatrix = mat3(modelRotationMatrix);
vNormal = normalMatrix * gl_Normal;
That's certainly one thing to fix in your code - I hope it solves your problem.