I am trying to write a custom light shader and have tried a lot of different things over time. Some of the solutions I found work better, others worse. For this question I'm using the one that has worked best so far.
My problem is that if I move the "camera" around, the light position seems to move around, too. With this solution the movement is very slight, but it is noticeable, and the light position seems to be above where it should be.
Default OpenGL lighting (w/o any shaders) works fine (steady light positions) but I need the shader for multitexturing and I'm planning on using portions of it for lighting effects once it's working.
Vertex Source:
varying vec3 vlp, vn;
void main(void)
{
gl_Position = ftransform();
vn = normalize(gl_NormalMatrix * -gl_Normal);
vlp = normalize(vec3(gl_LightSource[0].position.xyz) - vec3(gl_ModelViewMatrix * -gl_Vertex));
gl_TexCoord[0] = gl_MultiTexCoord0;
}
Fragment Source:
uniform sampler2D baseTexture;
uniform sampler2D teamTexture;
uniform vec4 teamColor;
varying vec3 vlp, vn;
void main(void)
{
vec4 newColor = texture2D(teamTexture, vec2(gl_TexCoord[0]));
newColor = newColor * teamColor;
float teamBlend = newColor.a;
// mixing the textures and colorizing them. this works, I tested it w/o lighting!
vec4 outColor = mix(texture2D(baseTexture, vec2(gl_TexCoord[0])), newColor, teamBlend);
// apply lighting
outColor *= max(dot(vn, vlp), 0.0);
outColor.a = texture2D(baseTexture, vec2(gl_TexCoord[0])).a;
gl_FragColor = outColor;
}
What am I doing wrong?
I can't be certain any of these are the problem, but they could cause one.
First, you need to normalize your per-vertex vn and vlp (BTW, try to use more descriptive variable names. viewLightPosition is a lot easier to understand than vlp). I know you normalized them in the vertex shader, but the fragment shader interpolation will denormalize them.
Second, this isn't so much wrong as redundant: vec3(gl_LightSource[0].position.xyz). The position.xyz is already a vec3, since the swizzle mask (.xyz) only has 3 components; you don't need to wrap it in another vec3 constructor.
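For the first point, the fix is just to re-normalize at the top of the fragment shader's main() and use the re-normalized vectors in the lighting term. A minimal sketch, keeping your variable names:
vec3 n = normalize(vn);
vec3 l = normalize(vlp);
// ...
outColor *= max(dot(n, l), 0.0);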
ALBEDO is vec3 and COLOR is vec4. I need to pass COLOR to ALBEDO in Godot. This shader works with shader_type canvas_item but does not work as a spatial material.
shader_type spatial;
uniform float amp = 0.1;
uniform vec4 tint_color = vec4(0.0, 0.5,0.99, 1);
uniform sampler2D iChannel0;
void fragment ()
{
vec2 uv = FRAGCOORD.xy / (1.0/VIEWPORT_SIZE).xy;// (1.0/SCREEN_PIXEL_SIZE) for shader_type canvas_item
vec2 p = uv +
(vec2(.5)-texture(iChannel0, uv*0.3+vec2(TIME*0.05, TIME*0.025)).xy)*amp +
(vec2(.5)-texture(iChannel0, uv*0.3-vec2(-TIME*0.005, TIME*0.0125)).xy)*amp;
vec4 a = texture(iChannel0, p)*tint_color;
ALBEDO = a.xyz; // the w channel is not important; this works with shader_type canvas_item, but on a 3D spatial material the effect does not show up. What's the problem?
}
For ALPHA and ALBEDO: ALBEDO = a.xyz; is correct. For a.w, usually you would do this: ALPHA = a.w;. However, in this case, it appears that a.w is always 1. So there is no point.
I'll pick on the rest of the code. Keep in mind that I do not know how it should look, nor do I have any idea what texture the sampler2D uses (I'm guessing a seamless noise texture).
Check your render mode. Since this comes from ShaderToy, there is a chance you want render_mode unshaded;, which stops lights from affecting the material. See Render Modes.
For ease of use, you can use hints. In particular, write the tint color like this:
uniform vec4 tint_color: hint_color = vec4(0.0, 0.5,0.99, 1);
That way Godot gives you a color picker in the shader parameters. See Uniforms.
You could also use hint_range(0, 1) for amp. However, I'm not sure about that.
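If you do try it, the declaration would look something like this (the range here is only a guess):
uniform float amp: hint_range(0.0, 1.0) = 0.1;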
Double check your coordinates. I suspect this FRAGCOORD.xy / (1.0/VIEWPORT_SIZE).xy should be SCREEN_UV (or UV, if it should stay with the object that has the material).
Was the original like this:
vec2 i_resolution = 1.0/SCREEN_PIXEL_SIZE;
vec2 uv = FRAGCOORD.xy/i_resolution;
As I said in the prior answer, 1.0 / SCREEN_PIXEL_SIZE is VIEWPORT_SIZE. Replace it. We have:
vec2 i_resolution = VIEWPORT_SIZE;
vec2 uv = FRAGCOORD.xy/i_resolution;
Inline:
vec2 uv = FRAGCOORD.xy/VIEWPORT_SIZE;
As I said in the prior answer, FRAGCOORD.xy/VIEWPORT_SIZE is SCREEN_UV (or UV if you don't want the material to depend on the position on screen). Replace it. We have:
vec2 uv = SCREEN_UV;
Even if that is not what you want, it is good for testing.
Try moving the camera. Is that what you want? No? Try vec2 uv = UV; instead. In fact, a variable is hard to justify at that point.
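Putting those replacements together, a sketch of the whole fragment function (untested; iChannel0 is still assumed to be a seamless noise texture, as guessed above):
void fragment ()
{
    vec2 uv = UV; // or SCREEN_UV, if the effect should depend on the position on screen
    vec2 p = uv +
        (vec2(0.5) - texture(iChannel0, uv * 0.3 + vec2(TIME * 0.05, TIME * 0.025)).xy) * amp +
        (vec2(0.5) - texture(iChannel0, uv * 0.3 - vec2(-TIME * 0.005, TIME * 0.0125)).xy) * amp;
    vec4 a = texture(iChannel0, p) * tint_color;
    ALBEDO = a.xyz;
}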
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and camera is zero, then the face is fully lit (according to the color of the face), otherwise it is lit proportional to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);
// Compute the vertex's normal in camera space
vec3 normal_cameraspace = normalize(( view_matrix * model_matrix * vec4(in_normal,0)).xyz);
// Vector from the vertex (in camera space) to the camera (which is at the origin)
vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);
// Compute the angle between the two vectors
float cosTheta = clamp( dot( normal_cameraspace, cameraVector ), 0,1 );
// The coefficient will create a nice looking shining effect.
// Also, we shouldn't modify the alpha channel value.
color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces which don't have the same normal. For example, a cube would have 24 vertexes instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal (a sketch of this is shown below).
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
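For completeness, the geometry shader option (the second one above) would look roughly like this. It is only a sketch: it assumes the vertex shader writes the projected position to gl_Position and also passes the camera-space position and raw vertex color through hypothetical posCamera and vertColor outputs, and it reuses the shading formula from the question.
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec3 posCamera[]; // camera-space positions (assumed vertex-shader output)
in vec4 vertColor[]; // pass-through vertex colors (assumed vertex-shader output)
out vec4 color;
void main()
{
    // One normal for the whole triangle, computed from its camera-space edges.
    // Depending on your winding convention it may need to be negated.
    vec3 n = normalize(cross(posCamera[1] - posCamera[0], posCamera[2] - posCamera[0]));
    for (int i = 0; i < 3; ++i) {
        vec3 toCamera = normalize(-posCamera[i]); // camera sits at the origin in camera space
        float cosTheta = clamp(dot(n, toCamera), 0.0, 1.0);
        color = vec4(0.3 * vertColor[i].rgb + cosTheta * vertColor[i].rgb, vertColor[i].a);
        gl_Position = gl_in[i].gl_Position; // already projected by the vertex shader
        EmitVertex();
    }
    EndPrimitive();
}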
This is simple flat shading: instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
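For reference, here is roughly how the per-face normal slots into a full fragment shader for the cube example above. This is only a sketch: it uses the question's view-based term instead of a point light, and it assumes the vertex shader now outputs the camera-space position as FragPos and the untouched vertex color, instead of doing the lighting per vertex.
#version 330 core
in vec3 FragPos; // camera-space vertex position (assumed vertex-shader output)
in vec4 color;   // pass-through vertex color
out vec4 out_frag_color;
void main(void)
{
    // Per-face normal from screen-space derivatives of the position;
    // it may need to be negated depending on your conventions.
    vec3 norm = normalize(cross(dFdx(FragPos), dFdy(FragPos)));
    // Same view-based term as in the question, now per fragment (camera at the origin)
    vec3 cameraVector = normalize(-FragPos);
    float cosTheta = clamp(dot(norm, cameraVector), 0.0, 1.0);
    out_frag_color = vec4(0.3 * color.rgb + cosTheta * color.rgb, color.a);
}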
This is how it should look. It uses the same vertices/UV coordinates that are used for DX11 and OpenGL. This scene was rendered in DirectX10.
This is how it looks in DirectX11 and OpenGL.
I don't know how this can happen. I am using the same code on top for both DX10 and DX11, and they both handle things very similarly. Do you have an idea what the problem may be and how to fix it?
I can send code if needed.
Also, using another texture:
I changed the transparent part of the texture to red:
Fragment Shader GLSL
#version 330 core
in vec2 UV;
in vec3 Color;
uniform sampler2D Diffuse;
void main()
{
//color = texture2D( Diffuse, UV ).rgb;
gl_FragColor = texture2D( Diffuse, UV );
//gl_FragColor = vec4(Color,1);
}
Vertex Shader GLSL
#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal;
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 World;
out vec2 UV;
out vec3 Color;
void main()
{
mat4 MVP = Projection * View * World;
gl_Position = MVP * vec4(vertexPosition,1);
UV = vertexUV;
Color = vertexColor;
}
Quickly said, it looks like you are using back-face culling (which is good), and the faces on the other side of your model have the wrong winding. You can confirm that this is the problem by turning back-face culling off (in OpenGL: glDisable(GL_CULL_FACE)).
The real fix (if this is the problem) is to have correct face winding, which is usually counter-clockwise. How to do that depends on where the model comes from. If you generate it yourself, correct the winding in your model generation routine. Model files created by 3D modeling software usually have correct face winding.
This is just a guess, but are you telling the system the correct number of polygons to draw? Calls like glBufferData() take the size in bytes of the data, not the number of vertices or polygons. (Maybe they should have named the parameter numBytes instead of size?) Also, the size has to contain the size of all the data. If you have color, normals, texture coordinates and vertices all interleaved, it needs to include the size of all of that.
This is made more confusing by the fact that glDrawElements() and friends take the number of vertices (or indices) as their size argument. The argument is named count, but it's not obvious that it's a vertex count, not a polygon count.
I found the error.
The reason is that I forgot to set the Texture SamplerState to Wrap/Repeat.
It was set to clamp, so the UV coordinates were clamped at 1.
A few things that you could try:
Is depth testing enabled? It seems that the inner faces of the polygons from the 'other' side are being rendered over the polygons that are closer to the viewpoint. This could happen if depth testing is disabled. Enable it just in case.
Is lighting enabled? If so, turn it off. Some flashes of white seem to appear in the rotating image, which could be because of incorrect normals...
HTH
I recently wrote a Phong shader in GLSL as part of a school assignment. I started with tutorials, then played around with the code until I got it working. It works perfectly fine as far as I can tell, but there's one line in particular I wrote where I don't understand why it works.
The vertex shader:
#version 330
layout (location = 0) in vec3 Position; // Vertex position
layout (location = 1) in vec3 Normal; // Vertex normal
out vec3 Norm;
out vec3 Pos;
out vec3 LightDir;
uniform mat3 NormalMatrix; // ModelView matrix without the translation component, and inverted
uniform mat4 MVP; // ModelViewProjection Matrix
uniform mat4 ModelView; // ModelView matrix
uniform vec3 light_pos; // Position of the light
void main()
{
Norm = normalize(NormalMatrix * Normal);
Pos = Position;
LightDir = NormalMatrix * (light_pos - Position);
gl_Position = MVP * vec4(Position, 1.0);
}
The fragment shader:
#version 330
in vec3 Norm;
in vec3 Pos;
in vec3 LightDir;
layout (location = 0) out vec4 FragColor;
uniform mat3 NormalMatrix;
uniform mat4 ModelView;
void main()
{
vec3 normalDirCameraCoords = normalize(Norm);
vec3 vertexPosLocalCoords = normalize(Pos);
vec3 lightDirCameraCoords = normalize(LightDir);
float dist = max(length(LightDir), 1.0);
float intensity = max(dot(normalDirCameraCoords, lightDirCameraCoords), 0.0) / pow(dist, 1.001);
vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);
float intSpec = max(dot(h, normalDirCameraCoords), 0.0);
vec4 spec = vec4(0.9, 0.9, 0.9, 1.0) * (pow(intSpec, 100) / pow(dist, 1.2));
FragColor = max((intensity * vec4(0.7, 0.7, 0.7, 1.0)) + spec, vec4(0.07, 0.07, 0.07, 1.0));
}
So I'm doing the method where you calculate the half vector between the light vector and the camera vector, then dot it with the normal. That's all good. However, I do two things that are strange.
Normally, everything is done in eye coordinates. However, Position, which I pass from the vertex shader to the fragment shader, is in local coordinates.
This is the part that baffles me. On the line vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords); I'm subtracting the light vector in camera coordinates with the vertex position in local coordinates. This seems utterly wrong.
In short, I understand what this code is supposed to be doing, and how the half vector method of phong shading works.
But why does this code work?
EDIT: The starter code we were provided is open source, so you can download the completed project and look at it directly if you'd like. The project is for VS 2012 on Windows (you'll need to set up GLEW, GLM, and freeGLUT), and should work on GCC with no code changes (maybe a change or two to the makefile library paths).
Note that in the source files, "light_pos" is called "gem_pos", as our light source is the little gem you move around with WSADXC. Press M to get Phong with multiple lights.
The reason this works is happenstance, but it's interesting to see why it still works.
Phong shading is three techniques in one
With Phong shading, we have three terms: specular, diffuse, and ambient; these three terms represent the three techniques used in Phong shading.
None of these terms strictly requires a particular vector space; you can make Phong shading work in world, local, or camera space as long as you are consistent. Eye space is usually used for lighting, as it is easier to work with and the conversions are simple.
But what if you are at the origin? Then the position vector is zero, and it's easy to see that there's no difference between any of the vector spaces at the origin. By coincidence, at the origin it doesn't matter what vector space you are in; it'll work.
vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);
Notice that it's basically subtracting 0; this is the only place local space is used, and it's used in the one place where it can do the least damage. Since the object is at the origin, all its vertices should be at or very close to the origin as well. At the origin, the approximation is exact; all vector spaces converge. Very close to the origin, it's very close to exact; even if we used exact reals, it would be a very small divergence, but we don't use exact reals, we use floats, which compounds the issue.
Basically, you got lucky; this wouldn't work if the object weren't at the origin. Try moving it and see!
Also, you aren't using Phong shading; you are using Blinn-Phong shading (that's the name for the replacement of reflect() with a half vector, just for reference).
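For reference, this is roughly what a consistent, all-eye-space version of the fragment shader would look like. It is only a sketch: PosEye is a hypothetical varying holding (ModelView * vec4(Position, 1.0)).xyz, which the vertex shader would have to output instead of the local-space Pos, and LightDir is kept as in the original.
#version 330
in vec3 Norm;
in vec3 PosEye;   // hypothetical eye-space position; not in the original code
in vec3 LightDir;
layout (location = 0) out vec4 FragColor;
void main()
{
    vec3 n = normalize(Norm);
    vec3 l = normalize(LightDir);
    vec3 v = normalize(-PosEye); // the camera sits at the eye-space origin
    float dist = max(length(LightDir), 1.0);
    float intensity = max(dot(n, l), 0.0) / pow(dist, 1.001);
    vec3 h = normalize(l + v);   // half vector built from two eye-space directions
    float intSpec = max(dot(h, n), 0.0);
    vec4 spec = vec4(0.9, 0.9, 0.9, 1.0) * (pow(intSpec, 100.0) / pow(dist, 1.2));
    FragColor = max(intensity * vec4(0.7, 0.7, 0.7, 1.0) + spec, vec4(0.07, 0.07, 0.07, 1.0));
}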
Similar to a problem I had and posted about before, I'm trying to get normals to display correctly in my GLSL app.
For the purposes of my explanation, I'm using the ninjaHead.obj model provided with RenderMonkey for testing (you can grab it here). In the preview window in RenderMonkey, everything looks great:
and the vertex and fragment code generated respectively is:
Vertex:
uniform vec4 view_position;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
// World-space lighting
vNormal = gl_Normal;
vViewVec = view_position.xyz - gl_Vertex.xyz;
}
Fragment:
uniform vec4 color;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v* color;
}
I based my GLSL code on this but I'm not quite getting the expected results...
My vertex shader code:
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
gl_Position = pos;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_FrontColor = gl_Color;
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
// World-space lighting
vNormal = normal4*modelRotationMatrix;
vec4 tempCameraPos = vec4(cameraPosition.x,cameraPosition.y,cameraPosition.z,0);
//vViewVec = cameraPosition.xyz - pos.xyz;
vViewVec = tempCameraPos - pos;
}
My fragment shader code:
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
//gl_FragColor = gl_Color;
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v * gl_Color;
}
However my render produces this...
Does anyone know what might be causing this and/or how to make it work?
EDIT
In response to kvark's comments, here is the model rendered without any normal/lighting calculations to show all triangles being rendered.
And here is the model shaded with the normals used as colors. I believe the problem has been found! Now, why is it being rendered like this, and how do I solve it? Suggestions are welcome!
SOLUTION
Well everyone, the problem has been solved! Thanks to kvark for all his helpful insight, which has definitely helped my programming practice, but I'm afraid the answer comes down to me being a MASSIVE tit... I had an error in the display() function of my code that set the glNormalPointer offset to a random value. It used to be this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, getNormalsBufferObject());
But should have been this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, 0);
So I guess this is a lesson. NEVER mindlessly Ctrl+C and Ctrl+V code to save time on a Friday afternoon AND... When you're sure the part of the code you're looking at is right, the problem is probably somewhere else!
What is your P matrix? (I suppose it's a world->camera view transform).
vNormal = normal4*modelRotationMatrix; Why did you change the order of the operands? Doing that, you are multiplying the normal by the inverse rotation, which you don't really want. Use the standard order instead (modelRotationMatrix * normal4).
vViewVec = tempCameraPos - pos. This is entirely incorrect. pos is your vertex in homogeneous clip space, while tempCameraPos is in world space (I suppose). You need the result in the same space as your normal (world space), so use the world-space vertex position (modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex) in this equation.
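Putting those points together, a sketch of a corrected vertex shader (it keeps your matrix names and assumes cameraPosition really is a world-space position):
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition; // assumed to be in world space
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
    // World-space position: model transforms only, no view or projection
    vec4 worldPos = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * P * worldPos;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
    // Standard matrix-times-vector order for the normal
    vNormal = modelRotationMatrix * vec4(gl_Normal, 0.0);
    // View vector in the same (world) space as the normal
    vViewVec = vec4(cameraPosition - worldPos.xyz, 0.0);
}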
You seem to be mixing GL versions a bit: you are passing the matrices manually via uniforms, but using the fixed-function attributes to pass vertex data. Hm. Anyway...
I sincerely don't like what you're doing to your normals. Have a look:
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
vNormal = normal4*modelRotationMatrix;
A normal only stores directional data; why use a vec4 for it? I believe it's more elegant to just use a vec3. Furthermore, look what happens next: you multiply the normal by the 4x4 model rotation matrix... and additionally your normal's fourth coordinate is equal to 0, so it's not a correct vector in homogeneous coordinates. I'm not sure that's the main problem here, but I wouldn't be surprised if that multiplication gave you rubbish.
The standard way to transform normals is to multiply a vec3 by the 3x3 submatrix of the model-view matrix (since you're only interested in the orientation, not the translation). To be precise, the most correct approach is to use the inverse transpose of that 3x3 submatrix (this becomes important when you have non-uniform scaling). In old OpenGL versions you had it precalculated as gl_NormalMatrix.
So instead of the above, you should use something like
// (...)
varying vec3 vNormal;
// (...)
mat3 normalMatrix = transpose(inverse(mat3(modelRotationMatrix)));
// or, if you don't have non-uniform scaling, this should work too:
mat3 normalMatrix = mat3(modelRotationMatrix);
vNormal = normalMatrix * gl_Normal;
That's certainly one thing to fix in your code - I hope it solves your problem.