#version 140
in vec2 textureCoords;
out vec4 out_Color;
float alpha = 0.5;
uniform sampler2D guiTexture;
void main(void){
    out_Color = texture(guiTexture, textureCoords);
}
I am pretty (very) new to GLSL.
I want to add a transparency value (a float) to the code above (don't bother running it, I just had to include it). The value should become the fourth component (a) of the out_Color variable. However, since the texture lookup already produces all four components, I am not sure how to replace just the alpha. Is there a function that will allow me to do this?
You should take a look at pretty much any basic GLSL tutorial:
out_Color = vec4(texture(guiTexture, textureCoords).rgb, alpha);
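That constructor form is the usual answer: vec4(v.rgb, alpha) builds a new four-component vector from the sampled RGB and your own alpha. A minimal sketch of the whole shader with controllable transparency; making alpha a uniform is my assumption here, you would set it from the application:

#version 140

in vec2 textureCoords;
out vec4 out_Color;

uniform sampler2D guiTexture;
uniform float alpha; // assumed uniform; set from the application, e.g. via glUniform1f

void main(void){
    // keep the sampled RGB, override only the alpha component
    out_Color = vec4(texture(guiTexture, textureCoords).rgb, alpha);
}

Note that for the transparency to actually show, blending also has to be enabled on the application side (glEnable(GL_BLEND) with an appropriate blend function such as glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)).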
ALBEDO is vec3 and COLOR is vec4. I need to pass COLOR to ALBEDO in Godot. This shader works with shader_type canvas_item, but not on a spatial material.
shader_type spatial;
uniform float amp = 0.1;
uniform vec4 tint_color = vec4(0.0, 0.5,0.99, 1);
uniform sampler2D iChannel0;
void fragment ()
{
vec2 uv = FRAGCOORD.xy / (1.0/VIEWPORT_SIZE).xy;// (1.0/SCREEN_PIXEL_SIZE) for shader_type canvas_item
vec2 p = uv +
(vec2(.5)-texture(iChannel0, uv*0.3+vec2(TIME*0.05, TIME*0.025)).xy)*amp +
(vec2(.5)-texture(iChannel0, uv*0.3-vec2(-TIME*0.005, TIME*0.0125)).xy)*amp;
vec4 a = texture(iChannel0, p)*tint_color;
ALBEDO = a.xyz; // the w channel is not important; this works on shader_type canvas_item, but when used on a 3D spatial shader the effect does not come through. What's the problem?
}
For ALPHA and ALBEDO: ALBEDO = a.xyz; is correct. For a.w, you would usually write ALPHA = a.w;. However, in this case it appears that a.w is always 1, so there is no point.
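For reference, if a.w did vary, the end of the fragment function would look like this (just a sketch; note that writing to ALPHA also moves the material onto Godot's transparent pipeline):

    vec4 a = texture(iChannel0, p) * tint_color;
    ALBEDO = a.xyz;
    ALPHA = a.w; // only meaningful if a.w actually varies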
I'll pick on the rest of the code. Keep in mind that I do not know what it should look like, nor do I know what texture is bound to the sampler2D (I'm guessing a seamless noise texture).
Check your render mode. Since this comes from ShaderToy, there is a chance you want render_mode unshaded;, which will stop lights from affecting the material. See Render Modes.
For ease of use, you can use hints. In particular, write the tint color like this:
uniform vec4 tint_color: hint_color = vec4(0.0, 0.5, 0.99, 1.0);
So Godot gives you a color picker in the shader parameters. See Uniforms.
You could also use hint_range(0, 1) for amp. However, I'm not sure about that.
Double check your coordinates. I suspect this FRAGCOORD.xy / (1.0/VIEWPORT_SIZE).xy should be SCREEN_UV (or UV, if it should stay with the object that has the material).
Was the original like this?
vec2 i_resolution = 1.0/SCREEN_PIXEL_SIZE;
vec2 uv = FRAGCOORD.xy/i_resolution;
As I said in the prior answer, 1.0 / SCREEN_PIXEL_SIZE is VIEWPORT_SIZE. Replace it. We have:
vec2 i_resolution = VIEWPORT_SIZE;
vec2 uv = FRAGCOORD.xy/i_resolution;
Inline:
vec2 uv = FRAGCOORD.xy/VIEWPORT_SIZE;
As I said in the prior answer, FRAGCOORD.xy/VIEWPORT_SIZE is SCREEN_UV (or UV if you don't want the material to depend on the position on screen). Replace it. We have:
vec2 uv = SCREEN_UV;
Even if that is not what you want, it is good for testing.
Try moving the camera. Is that what you want? No? Then try vec2 uv = UV; instead. In fact, a variable is hard to justify at that point.
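Putting these suggestions together, a minimal sketch of the revised shader might look like this (assuming UV is the coordinate you want and that the effect should be unlit):

shader_type spatial;
render_mode unshaded;

uniform float amp: hint_range(0, 1) = 0.1;
uniform vec4 tint_color: hint_color = vec4(0.0, 0.5, 0.99, 1.0);
uniform sampler2D iChannel0;

void fragment()
{
    // UV keeps the effect on the object; use SCREEN_UV if it should depend on screen position
    vec2 uv = UV;
    vec2 p = uv +
        (vec2(0.5) - texture(iChannel0, uv * 0.3 + vec2(TIME * 0.05, TIME * 0.025)).xy) * amp +
        (vec2(0.5) - texture(iChannel0, uv * 0.3 - vec2(-TIME * 0.005, TIME * 0.0125)).xy) * amp;
    vec4 a = texture(iChannel0, p) * tint_color;
    ALBEDO = a.xyz;
}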
I've been trying to study OpenGL for a fun side project and ran into an issue while learning.
Below is a fragment shader:
#version 330 core
in vec3 Normal;
in vec3 Position;
in vec2 TexCoords;
out vec4 color;
uniform vec3 cameraPos;
uniform samplerCube skybox;
struct Material{
sampler2D diffuse0;
sampler2D specular0;
sampler2D emitter0;
sampler2D reflection0;
float shininess;
};
uniform Material material;
void main(){
    vec3 I = normalize(Position - cameraPos);
    vec3 R = reflect(I, normalize(Normal));
    float intensity = 0.0;
    intensity += texture(material.reflection0, TexCoords).x;
    vec4 rfl = texture(skybox, R);
    // this line doesn't produce anything
    color = rfl * intensity;
}
When I use the code above, my model is completely gone from view.
But if I debug it separately, for example by changing the line
color = rfl * intensity;
to
color = rfl;
this actually renders and returns the following picture:
And changing that line to
color = vec4(intensity);
it renders and returns the following picture:
I've tried changing
color = rfl * some constant
//or
color = vec4(0.5) * intensity
And both rendered my model normally. I'm stumped as to why it doesn't render when I multiply rfl and intensity together. I think there might be values that cause the multiplication to fail, but I have no idea what they might be.
When you change
color = rfl * intensity;
to
color = rfl;
the GLSL compiler will drop
uniform Material material;
due to optimization, since the uniform is no longer referenced. The same happens with skybox when you change the line to:
color = vec4(intensity);
Make sure that your binding of the texture uniforms is correct.
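A minimal host-side sketch of correct binding (program, reflectionTex and skyboxTex are assumed handles from your own setup; the unit numbers are arbitrary, they just have to match):

// Assign each sampler uniform its own texture unit, then bind the
// matching texture to that unit.
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "material.reflection0"), 0);
glUniform1i(glGetUniformLocation(program, "skybox"), 1);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, reflectionTex);   // assumed 2D texture handle
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTex); // assumed cubemap handle

If both samplers are left at the default unit 0, a sampler2D and a samplerCube end up referencing the same unit, which is invalid and commonly samples as black; color = rfl * intensity would then be fully transparent black, which could explain the model disappearing.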
I am trying to implement a blur effect in my game on mobile devices using a GLSL shader. I don't have any prior experience with writing shaders, and I don't understand whether my shader is good enough. I copied the GLSL code from a tutorial, and I don't know if the tutorial is only a demo or whether its code can also be used in practice. Here is the code of a two-pass blur shader that uses Gaussian weights (http://www.cocos2d-x.org/wiki/User_Tutorial-RenderTexture_Plus_Blur):
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform vec2 pixelSize;
uniform vec2 direction;
uniform int radius;
uniform float weights[64];
void main()
{
    gl_FragColor = texture2D(CC_Texture0, v_texCoord) * weights[0];
    for (int i = 1; i < radius; i++) {
        vec2 offset = vec2(float(i) * pixelSize.x * direction.x, float(i) * pixelSize.y * direction.y);
        gl_FragColor += texture2D(CC_Texture0, v_texCoord + offset) * weights[i];
        gl_FragColor += texture2D(CC_Texture0, v_texCoord - offset) * weights[i];
    }
}
I run this shader on every frame update (60 times per second), and with only one pass my game's framerate drops to 22 FPS on an iPhone 5S (not a bad device). This seems very strange to me: the shader does not look like it has that many instructions. Why is it so heavy?
P.S. Blur radius is 50, step is 1.
Main reasons why your shader is heavy:
1: These two calculations: v_texCoord + offset and v_texCoord - offset. Because the UV coordinates are computed in the fragment shader, the texture data has to be loaded from memory on the spot, causing cache misses.
What is a dependent texture read?
2: radius is way too large.
How to make it faster/better:
1: Calculate as much as possible in the vertex shader. Ideally, if you calculate all the UVs in the vertex shader, the GPU can move the texture memory into cache before the fragment shaders run, drastically improving performance.
2: Reduce radius to accommodate, let's say, 8-16 texture2D calls. This will probably not give you the result you are expecting; to solve that, you can use 2 textures, blurring texture A into B, then blurring B back into A, and so on, as many times as you need (see the ping-pong sketch after this list). This gives very good results; I remember Crysis 1 used it for motion blur, but I can't find the paper.
3: Eliminate those 64 uniforms and hardcode the data in the shader. I know this is not that nice, but you will gain some extra performance.
4: If you carefully calculate the UV coordinates, you can take great advantage of texture interpolation. Basically, never sample a pixel at its center; always sample in between pixels, and the hardware will average the 4 nearest pixels:
https://en.wikipedia.org/wiki/Bilinear_filtering
5: This line: precision mediump float;. Does everything have to be mediump? I would suggest removing it and doing some testing with lowp on as much as you can.
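For point 2, a rough host-side sketch of the ping-pong idea (the two render textures and both helpers are hypothetical; only the structure matters):

// Hypothetical ping-pong loop: each pass reads one render texture and
// writes the blurred result into the other, then the roles swap.
GLuint src = texA, dst = texB; // two equally sized render textures, assumed created elsewhere
for (int pass = 0; pass < numPasses; ++pass) {
    bindFramebufferFor(dst);                // hypothetical helper: attach dst as the render target
    drawFullscreenBlurPass(src);            // hypothetical helper: full-screen quad with the blur shader
    GLuint tmp = src; src = dst; dst = tmp; // swap roles for the next pass
}
// after the loop, src holds the final blurred image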
Edit:
For your shader, here is a simplified version of what you need to do:
Vertex shader:
attribute highp vec4 Position;
attribute mediump vec2 texture0UV;
varying mediump vec2 v_texCoord0;
varying mediump vec2 v_texCoord1;
varying mediump vec2 v_texCoord2;
varying mediump vec2 v_texCoord3;
varying mediump vec2 v_texCoord4;
uniform mediump vec2 texture_size;
void main()
{
gl_Position = Position;
vec2 pixel_size = vec2(1.0) / texture_size;
v_texCoord0 = texture0UV;
v_texCoord1 = texture0UV + vec2(-1.0,0.0) / texture_size + pixel_size * 0.5;
v_texCoord2 = texture0UV + vec2(0.0,-1.0) / texture_size + pixel_size * 0.5;
v_texCoord3 = texture0UV + vec2(1.0,0.0) / texture_size - pixel_size * 0.5;
v_texCoord4 = texture0UV + vec2(0.0,1.0) / texture_size - pixel_size * 0.5;
}
The last operation, pixel_size * 0.5, is required to take maximum advantage of linear interpolation. In this example the positions picked for sampling are trivial, but there is an entire discussion on how you should pick your sampling positions that is way out of the scope of this question.
Fragment shader:
varying mediump vec2 v_texCoord0;
varying mediump vec2 v_texCoord1;
varying mediump vec2 v_texCoord2;
varying mediump vec2 v_texCoord3;
varying mediump vec2 v_texCoord4;
uniform lowp sampler2D CC_Texture0;
void main()
{
mediump vec4 final_color = vec4(0.0);
final_color += texture2D(CC_Texture0,v_texCoord0);
final_color += texture2D(CC_Texture0,v_texCoord1);
final_color += texture2D(CC_Texture0,v_texCoord2);
final_color += texture2D(CC_Texture0,v_texCoord3);
final_color += texture2D(CC_Texture0,v_texCoord4);
    gl_FragColor = final_color / 5.0; // the weights uniform has to go; use fixed values instead, in this case 1/5 for each sample
}
For this to look good you need to blur the texture multiple times; even blurring it just 2 times should show a notable difference.
To speed it up you can:
Make radius a const, to allow the shader compiler to unroll the loop.
Precompute pixelSize * direction.
Decrease radius; I think 50 is too big for a mobile device.
A minimal sketch applying these three changes follows.
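Something like this (the radius of 8 and the uniform stepVector are my assumptions; CC_Texture0 is provided by cocos2d-x, as in the question):

#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform vec2 stepVector;  // assumed to be precomputed on the CPU as pixelSize * direction
uniform float weights[8]; // much smaller kernel than 64
const int RADIUS = 8;     // compile-time constant, so the compiler can unroll the loop
void main()
{
    vec4 sum = texture2D(CC_Texture0, v_texCoord) * weights[0];
    for (int i = 1; i < RADIUS; i++) {
        vec2 offset = float(i) * stepVector;
        sum += texture2D(CC_Texture0, v_texCoord + offset) * weights[i];
        sum += texture2D(CC_Texture0, v_texCoord - offset) * weights[i];
    }
    gl_FragColor = sum;
}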
I am trying to make a custom light shader and was trying a lot of different things over time.
Some of the solutions I found work better, others worse. For this question I'm using the solution which worked best so far.
My problem is, that if I move the "camera" around, the light positions seems to move around, too. This solution has very slight but noticeable movement in it and the light position seems to be above where it should be.
Default OpenGL lighting (w/o any shaders) works fine (steady light positions) but I need the shader for multitexturing and I'm planning on using portions of it for lighting effects once it's working.
Vertex Source:
varying vec3 vlp, vn;
void main(void)
{
gl_Position = ftransform();
vn = normalize(gl_NormalMatrix * -gl_Normal);
vlp = normalize(vec3(gl_LightSource[0].position.xyz) - vec3(gl_ModelViewMatrix * -gl_Vertex));
gl_TexCoord[0] = gl_MultiTexCoord0;
}
Fragment Source:
uniform sampler2D baseTexture;
uniform sampler2D teamTexture;
uniform vec4 teamColor;
varying vec3 vlp, vn;
void main(void)
{
vec4 newColor = texture2D(teamTexture, vec2(gl_TexCoord[0]));
newColor = newColor * teamColor;
float teamBlend = newColor.a;
// mixing the textures and colorizing them. this works, I tested it w/o lighting!
vec4 outColor = mix(texture2D(baseTexture, vec2(gl_TexCoord[0])), newColor, teamBlend);
// apply lighting
outColor *= max(dot(vn, vlp), 0.0);
outColor.a = texture2D(baseTexture, vec2(gl_TexCoord[0])).a;
gl_FragColor = outColor;
}
What am I doing wrong?
I can't be certain any of these are the problem, but they could cause one.
First, you need to normalize your per-vertex vn and vlp in the fragment shader (by the way, try to use more descriptive variable names; viewLightPosition is a lot easier to understand than vlp). I know you normalized them in the vertex shader, but the fragment shader's interpolation will denormalize them.
Second, this isn't particularly wrong so much as redundant: vec3(gl_LightSource[0].position.xyz). position.xyz is already a vec3, since the swizzle mask (.xyz) only has 3 components. You don't need to cast it to a vec3 again.
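A minimal sketch of the first fix inside the question's fragment shader (only the lighting line changes):

// re-normalize after interpolation; everything else stays as in the question
vec3 n = normalize(vn);
vec3 l = normalize(vlp);
outColor *= max(dot(n, l), 0.0);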
Similar to a problem I had and posted about before, I'm trying to get normals to display correctly in my GLSL app.
For the purposes of my explanation, I'm using the ninjaHead.obj model provided with RenderMonkey (you can grab it here). Now in the preview window in RenderMonkey, everything looks great:
and the vertex and fragment code generated respectively is:
Vertex:
uniform vec4 view_position;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
// World-space lighting
vNormal = gl_Normal;
vViewVec = view_position.xyz - gl_Vertex.xyz;
}
Fragment:
uniform vec4 color;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v * color;
}
I based my GLSL code on this but I'm not quite getting the expected results...
My vertex shader code:
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
gl_Position = pos;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_FrontColor = gl_Color;
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
// World-space lighting
vNormal = normal4*modelRotationMatrix;
vec4 tempCameraPos = vec4(cameraPosition.x,cameraPosition.y,cameraPosition.z,0);
//vViewVec = cameraPosition.xyz - pos.xyz;
vViewVec = tempCameraPos - pos;
}
My fragment shader code:
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
//gl_FragColor = gl_Color;
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v * gl_Color;
}
However my render produces this...
Does anyone know what might be causing this and/or how to make it work?
EDIT
In response to kvark's comments, here is the model rendered without any normal/lighting calculations to show all triangles being rendered.
And here is the model shaded with the normals used as colors. I believe the problem has been found! Now the question is why it is rendered like this, and how to solve it. Suggestions are welcome!
SOLUTION
Well everyone, the problem has been solved! Thanks to kvark for all his helpful insight, which has definitely improved my programming practice, but I'm afraid the answer comes from me being a MASSIVE tit... I had an error in the display() function of my code that set the glNormalPointer offset to a random value. It used to be this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, getNormalsBufferObject());
But should have been this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, 0);
So I guess this is a lesson. NEVER mindlessly Ctrl+C and Ctrl+V code to save time on a Friday afternoon AND... When you're sure the part of the code you're looking at is right, the problem is probably somewhere else!
What is your P matrix? (I suppose it's a world->camera view transform).
vNormal = normal4*modelRotationMatrix; Why did you change the order of arguments? Doing that, you are multiplying the normal by the inverse rotation, which you don't really want. Use the standard order instead (modelRotationMatrix * normal4).
vViewVec = tempCameraPos - pos. This is entirely incorrect. pos is your vertex in homogeneous clip space, while tempCameraPos is in world space (I suppose). You need the result in the same space as your normal (world space), so use the world-space vertex position (modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex) for this equation.
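A sketch of that fix, using the names from the question's vertex shader:

// world-space position of the vertex, then the view vector in world space
vec4 worldPos = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
vViewVec = tempCameraPos - worldPos;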
You seem to be mixing GL versions a bit: you are passing the matrices manually via uniforms, but using fixed-function attributes (gl_Vertex, gl_Normal) to pass vertex data. Hm. Anyway...
I sincerely don't like what you're doing to your normals. Have a look:
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
vNormal = normal4*modelRotationMatrix;
A normal only stores directional data; why use a vec4 for it? I believe it's more elegant to just use vec3. Furthermore, look what happens next: you multiply the normal by the 4x4 model rotation matrix, with its fourth coordinate equal to 0 (a direction, not a point, in homogeneous coordinates). I'm not sure that's the main problem here, but I wouldn't be surprised if that multiplication gave you rubbish.
The standard way to transform normals is to multiply the vec3 by the 3x3 submatrix of the model-view matrix (since you're only interested in the orientation, not the translation). To be precise, the most correct approach is to use the inverse transpose of that 3x3 submatrix (this becomes important when you have scaling). In old OpenGL versions you had it precalculated as gl_NormalMatrix.
So instead of the above, you should use something like
// (...)
varying vec3 vNormal;
// (...)
mat3 normalMatrix = transpose(inverse(mat3(modelRotationMatrix)));
// or, if you don't have scaling, this one should work too:
mat3 normalMatrix = mat3(modelRotationMatrix);
vNormal = normalMatrix * gl_Normal;
That's certainly one thing to fix in your code - I hope it solves your problem.