I am currently trying to apply a normal map in my shader but the shading in the final image is way off.
Surfaces that should be shaded are completely bright, surfaces that should be bright are completely shaded, and the top surface, which should have the same shade regardless of rotation around the y-axis, alternates between bright and dark.
After some trial and error I found out that I can get the correct shading by changing this
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
to this
vec3 normal_viewspace = normal_matrix * normalize(vec3(0.0, 0.0, 1.0));
Diffuse and specular lighting are now working correctly,
but obviously without the normal map applied. I honestly have no idea where exactly the error is originating. I am quite new to shader programming and was following this tutorial. Below are the shader sources, with all irrelevant parts cut.
Vertex shader:
#version 450
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 tangent;
layout(location = 3) in vec3 bitangent;
layout(location = 4) in vec2 texture_coordinates;
layout(location = 0) out mat3 normal_matrix;
layout(location = 3) out vec2 texture_coordinates_out;
layout(location = 4) out vec4 vertex_position_viewspace;
layout(set = 0, binding = 0) uniform Matrices {
    mat4 world;
    mat4 view;
    mat4 projection;
} uniforms;
void main() {
    mat4 worldview = uniforms.view * uniforms.world;
    normal_matrix = mat3(worldview) * mat3(normalize(tangent), normalize(bitangent), normalize(normal));
    vec4 vertex_position_worldspace = uniforms.world * vec4(position, 1.0);
    vertex_position_viewspace = uniforms.view * vertex_position_worldspace;
    gl_Position = uniforms.projection * vertex_position_viewspace;
    texture_coordinates_out = texture_coordinates;
}
Fragment shader:
#version 450
layout(location = 0) in mat3 normal_matrix;
layout(location = 3) in vec2 texture_coordinates;
layout(location = 4) in vec4 vertex_position_viewspace;
layout(location = 0) out vec4 fragment_color;
layout(set = 0, binding = 0) uniform Matrices {
    mat4 world;
    mat4 view;
    mat4 projection;
} uniforms;
// ...
layout (set = 0, binding = 2) uniform sampler2D normal_map;
// ...
const vec4 LIGHT = vec4(1.25, 3.0, 3.0, 1.0);
void main() {
    // ...
    vec4 normal_color = texture(normal_map, texture_coordinates);
    // ...
    vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
    vec4 light_position_viewspace = uniforms.view * LIGHT;
    vec3 light_direction_viewspace = normalize((light_position_viewspace - vertex_position_viewspace).xyz);
    vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz);
    vec3 light_color_intensity = vec3(1.0, 1.0, 1.0) * 7.0;
    float distance_from_light = distance(vertex_position_viewspace, light_position_viewspace);
    float diffuse_strength = clamp(dot(normal_viewspace, light_direction_viewspace), 0.0, 1.0);
    vec3 diffuse_light = (light_color_intensity * diffuse_strength) / (distance_from_light * distance_from_light);
    // ...
    fragment_color.rgb = (diffuse_color.rgb * diffuse_light);
    fragment_color.a = diffuse_color.a;
}
There are some things I am a bit uncertain about. For example, I noticed that in the tutorial the light is called lightPosition_worldSpace, which made me think I need to multiply the light by the world matrix first, but doing so only makes my light rotate with the cube and still doesn't fix the lighting issue.
Any help or ideas on what i could be doing wrong would be greatly appreciated.
I'm the one who created the tutorial site you're referencing.
If possible, could you share a link to your normal map as well? You say that changing the line where the fragment normal is calculated from the normal map, from this
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
to one where you hardcode a value like this
vec3 normal_viewspace = normal_matrix * normalize(vec3(0.0, 0.0, 1.0));
fixes the rendering issue. That seems to indicate a problem with the normal map itself.
One way to verify this is to set your entire normal map image to the RGB value (128, 128, 255), which unpacks to (essentially) the vec3(0.0, 0.0, 1.0) value you were using in your changed line. If the object then renders correctly, the same as with the hardcoded value, the original normal map was bad.
The normal map is just a texture/image that stores the directions of the normals of your object in "tangent-space" (think of it as if you had to flatten out your entire object into a 2D surface, and then the normals for each point of that surface are plotted on the map). For each pixel, the red channel represents the X-axis, the green channel represents the Y-axis, and the blue channel represents the Z-axis.
In terms of colors, the values in a normal map range from (0, 0, 128) to (255, 255, 255) (for images where each color channel uses 8 bits/1 byte); in GLSL this corresponds to the range (0.0, 0.0, 0.5) to (1.0, 1.0, 1.0). Let's work with the GLSL range for the sake of simplicity.
When looking at the actual possible values for normals, their range is (-1.0, -1.0, 0.0) to (1.0, 1.0, 1.0), because a normal direction can point either forwards or backwards along the X-axis or Y-axis.
So when we have a color value of (0.0, 0.0, 0.5), we're actually talking about a normal direction vector (-1.0, -1.0, 0.0). Similarly, a color value of (0.5, 0.5, 0.5) means the normal direction vector (0.0, 0.0, 0.0), and a color value of (1.0, 1.0, 1.0) means a normal value of (1.0, 1.0, 1.0).
So the goal now becomes transforming the value from the normal map from the color value range ((0.0, 0.0, 0.5) to (1.0, 1.0, 1.0)) to the actual range for normals ((-1.0, -1.0, 0.0) to (1.0, 1.0, 1.0)).
If you multiply a value from a normal map by 2.0, you change the possible range of the value from (0.0, 0.0, 0.5) - (1.0, 1.0, 1.0) to (0.0, 0.0, 1.0) - (2.0, 2.0, 2.0). And then if you subtract 1.0 from the result, the range now changes from (0.0, 0.0, 1.0) - (2.0, 2.0, 2.0) to (-1.0, -1.0, 0.0) - (1.0, 1.0, 1.0), which is exactly the possible range of the normals of an object.
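Written out as a small helper, the unpacking looks like this (a minimal sketch, the same math as the line in your shader):

// Unpack a tangent-space normal from a normal map sample.
// color channels are in [0.0, 1.0]; the result channels are in
// [-1.0, 1.0] (blue stays non-negative for a valid normal map).
vec3 unpack_normal(vec3 color) {
    return normalize(color * 2.0 - 1.0);
}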
So you have to make sure that when you're creating your normal map, the range of the RGB color values is between (0, 0, 128) - (255, 255, 255).
Side note: as for why the range of the blue channel (Z-axis) in the normal map can only be between 128 and 255: a value less than 128 means a negative value on the Z-axis, i.e. the normal of the fragment points into the surface, not out of it. Since a normal map is supposed to represent the normals when the surface of the object is flattened out and facing towards you, a normal with a negative Z value would mean that at that point the surface is actually facing away from you, which doesn't really make sense, hence negative values are not allowed.
You could still try having the blue channel be a value less than 128 and see what interesting results pop out.
Also, with regards to the doubts you mentioned at the end and in the comments:
What does lightPosition_worldSpace mean?
lightPosition_worldSpace represents the coordinate at which the light is present relative to the center of the world (relative to the entire world you're rendering), hence the world-space suffix. You just need to multiply this position with your view matrix if you wish to know the position of the light in view-space (relative to your camera).
If you have a coordinate that is relative to the center of the object you're rendering, then you should multiply it with your model matrix (uniforms.world) to transform it from a coordinate relative to the center of your model into one relative to the center of the world. Since lightPosition_worldSpace is already relative to the center of the world, you don't need to multiply it with the world matrix. This is why you saw the light move with the cube when you tried to do so (the light was moved because its coordinates were treated as being relative to the cube itself).
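In code terms, a minimal sketch reusing the names from your shader:

// A world-space light only needs the view matrix:
vec4 light_position_viewspace = uniforms.view * LIGHT;
// A light defined relative to the model would need the model (world)
// matrix first; applying uniforms.world to an already world-space
// light makes it follow the cube, which is what you observed:
// vec4 wrong = uniforms.view * uniforms.world * LIGHT;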
Your comment regarding confusion with the line vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0));
This is bad on my part for not representing what vec3(0.0, 0.0, 0.0) is with a variable. This is supposed to represent the position of the camera in view-space. Since in view-space the camera is at the center, its coordinate is vec3(0.0, 0.0, 0.0).
As for why I'm doing
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0));
when
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz);
is simpler and basically the same thing, I had written it that way to make it more obvious what was happening (which it appears I failed to do).
Typically, when you have two coordinates and you want to find the direction from a source coordinate to a destination coordinate, you subtract the two to get their direction + magnitude. By normalizing that difference, you keep just the directional component, with the magnitude removed. So the equation for finding a direction from a source coordinate to a destination coordinate becomes:
direction = normalize(destination coordinate - source coordinate)
view_direction_viewspace is supposed to represent the direction from the camera towards the fragment. To calculate this, we can just subtract the position of the camera (vec3(0.0, 0.0, 0.0)) from the position of the fragment (vertex_position_viewspace.xyz) and then run normalize(...) on the difference.
I've generally tried to maintain this consistency: when I'm calculating a direction from two coordinates, I always write out the destination and source coordinates explicitly, hence the line vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0)); in the fragment shader code.
I've updated the code by putting vec3(0.0, 0.0, 0.0) into a variable cameraPosition_viewSpace and using that, to better clarify this intention.
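The updated lines look roughly like this (a sketch of the revision, not a verbatim copy of the tutorial code):

// The camera sits at the origin in view-space.
const vec3 cameraPosition_viewSpace = vec3(0.0, 0.0, 0.0);
vec3 view_direction_viewspace =
    normalize(vertex_position_viewspace.xyz - cameraPosition_viewSpace);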
Feel free to reach out through GitHub issues if you want to ask anything else or help improve the tutorial.
I haven't updated this post in a while because I have completely shifted away from using normal mapping (for now), but I still wanted to post an answer in case someone else runs into the same problem. I still can't be 100% sure, but I am fairly certain that this behavior was caused by the library I was using to load the normal map. Special thanks to sabarnac, who has been a huge help to me in solving this.
I'm trying to implement specular reflection using values sampled from a grayscale, 1D texture as the multiplicative term.
I've implemented a toggle so I can see the difference between the two, but for some reason, when the sampled color is enabled, the areas of the scene with no light display as a light gray, whereas without the sampling those same areas display as black.
Why is this? Here's the area where I'm setting the fragment color.
if (u_specularRamp == 1)
{
    specularAmount = clamp(specularAmount, 0.0, 1.0);
    vec2 texCoords = vec2(specularAmount, 0.5);
    vec4 sampledColor = texture(u_ramp_tex, texCoords);
    specularReflection = vec3(0.3 * sampledColor.x);
}
else
{
    specularReflection = vec3(0.3 * specularAmount);
}
FragColor = vec4(specularReflection, 1.0);
u_specularRamp is an integer uniform I'm passing in to toggle the sampled color on and off.
I've fixed the issue. In case anyone runs into this problem: this behaviour was caused by the texture wrap parameter being set to GL_REPEAT. If you want black where there isn't any light, set that parameter to GL_CLAMP_TO_EDGE.
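A minimal sketch of the fix at texture-setup time (rampTexture is a placeholder name): with linear filtering, GL_REPEAT makes a sample at coordinate 0.0 blend with texels from the far end of the ramp, which is where the light gray came from.

/* Clamp so sampling at the ends of the ramp never wraps around. */
glBindTexture(GL_TEXTURE_2D, rampTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);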
I see that you can pass variables from JavaScript to GLSL, but is it possible to go the other way around? Basically, I have a shader that converts a texture to three colors, red, green, and blue, based on the alpha channel.
if (texture.a == 1.0) {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
if (texture.a == 0.0) {
    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
    call0nce = 1;
}
if (texture.a > 0.0 && texture.a < 1.0) {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
I have been using the colors to help visualize things better. What I'm really trying to do is pick a random point with an alpha of 1.0 and a random point with an alpha of 0.0 and output the texture coordinates of both in a way that I can access them from JavaScript. How would I go about this?
You cannot directly pass values from GLSL back to JavaScript. GLSL, at least in WebGL 1.0, can only output pixels to textures, renderbuffers, and the backbuffer.
You can read those pixels back by calling gl.readPixels, so you can indirectly get data from GLSL into JavaScript by writing the values you want and then calling gl.readPixels.
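A minimal sketch, assuming the shader has already written its output into the currently bound framebuffer (width and height are placeholders):

// Read back a width x height block of RGBA pixels (4 bytes each).
var pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// pixels[0..3] is the RGBA value of the bottom-left pixel; the shader
// can encode values such as texture coordinates into those bytes.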
I am working on a test project where one big 3D quad is intersected by several other 3D quads (rotated differently). All quads have transparency (ranging from fully opaque to fully transparent). The small quads never overlap; well, they might, but the camera placement and the Z-buffer ensure that only those parts that may overlap actually do.
To render this, I first render the big quad to a different rgba framebuffer for later lookup.
Now I start rendering to the screen. A skybox is rendered first, then the small quads are rendered with alpha blending enabled: GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA. After that I render the big quad again, this time also with alpha blending enabled. Of course, only the pixels in front of the small quads will be rendered, because of the Z-buffer.
That's why, in the fragment shader of the small quads, I do a raytrace to find the intersection with the big quad. If the intersection point is BEHIND the small quad, I fetch the texel from the originally created framebuffer and blend this texel manually with the calculated small quad texel color.
But the result is not the same: the colors in front are rendered correctly (the GPU handles the blending there), but the colors behind them are "weird": they are lighter or darker, and I never get the same result. After consideration, I think this must be because I am not emitting the correct alpha value from my small-quads shader, so the GPU-performed blending changes the colors even more.
So, how exactly is the alpha value calculated when blending on the GPU with the above-mentioned blending method? In the OpenGL manual I find: A * S_rgba + (1 - A) * D_rgba. When I emit that result, the blending is not the same. I am quite sure this is because the result then passes through the GPU blending again.
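For reference, here is my understanding of what that blend computes, written as a manual equivalent (a sketch; note that the same source-alpha factors apply to the alpha channel itself, unless glBlendFuncSeparate is used to give alpha its own factors):

// Manual equivalent of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
// every channel, including alpha, uses the same factors.
vec4 blend_src_alpha(vec4 src, vec4 dst) {
    return src.a * src + (1.0 - src.a) * dst;
}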
The ray tracing is correct, I'm certain of that.
So, what should I do with my manual blending?
Or, should I use another method to get the right effect?
Not really necessary I believe, but here is some code (optimization is for later):
vec4 vRayOrg = vec4(0.0, 0.0, 0.0, 1.0);
vec4 vRayDir = vec4(normalize(vBBPosition.xyz), 0.0);
vec4 vPlaneOrg = mView * vec4(0.0, 0.0, 0.0, 1.0);
vec4 vPlaneNormal = mView * vec4(0.0, 1.0, 0.0, 0.0);
float div = dot(vRayDir.xyz, vPlaneNormal.xyz);
float t = dot(vPlaneOrg.xyz - vRayOrg.xyz, vPlaneNormal.xyz) / div;
vec4 pIntersection = vRayOrg + t * vRayDir;
vec3 normal = normalize(vec4(vCoord, 0.0, 0.0)).xyz;
vec3 vLight = normalize(vPosition.xyz / vPosition.w);
float distance = length(vCoord) - 1.0;
float distf = clamp(0.225 - distance, 0.0, 1.0);
float diffuse = clamp(0.25 + dot(-normal, vLight), 0.0, 1.0);
vec4 cOut = diffuse * 9.0 * distf * cGlow;
if (distance > 0.0 && t >= 0.0 && pIntersection.z <= vBBPosition.z)
{
    vec2 vTexcoord = (vProjectedPosition.xy / vProjectedPosition.w) * 0.5 + 0.5;
    vec4 cLookup = texture(tLookup, vTexcoord);
    cOut = cLookup.a * cLookup + (1.0 - cLookup.a) * cOut;
}
vFragColor = cOut;
Edit: this turned out to be correct; hopefully it still helps others with similar issues.
Is there a piece I'm missing in setting up the depth testing pipeline in OpenGL ES 2.0 (using EGL)?
I've found many questions about this but all were solved by either correctly setting up the depth buffer on context initialization:
EGLint egl_attributes[] = {
    ...
    EGL_DEPTH_SIZE, 16,
    ...
    EGL_NONE };
if (!eglChooseConfig(
        m_eglDisplay, egl_attributes, &m_eglConfig, 1, &numConfigs)) {
    cerr << "Failed to set EGL configuration" << endl;
    return EGL_FALSE;
}
or by properly enabling and clearing the depth buffer, and doing so after the context has been initialized:
// Set the viewport
glViewport(0, 0, m_display->width(), m_display->height());
// Enable culling and depth testing
glEnable(GL_CULL_FACE);
glDepthFunc(GL_LEQUAL);
glEnable(GL_DEPTH_TEST);
// Clear the color and depth buffers
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepthf(1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw elements
m_Program->drawAll();
Commenting out glEnable(GL_DEPTH_TEST), I get a scene, but without the depth-test occlusion I would love to have.
In a shader, outputting the z component of gl_Position visually works as expected (z values are in the range [0, 1]):
// Vertex Shader
uniform mat4 u_matrix;
attribute vec4 a_position;
varying float v_depth;
void main() {
    vec4 v_position = u_matrix * a_position;
    v_depth = v_position.z / v_position.w;
    gl_Position = v_position;
}
// Fragment shader
varying float v_depth;
void main() {
    gl_FragColor = vec4((v_depth < 0.0) ? 1.0 : 0.0,
                        v_depth,
                        (v_depth > 1.0) ? 1.0 : 0.0,
                        1.0);
}
All objects are a shade of pure green, darker for nearer and brighter for further, as expected. Sadly some further (brighter) objects are drawn over nearer (darker) objects.
Any ideas what I'm missing? (If nothing else I hope this summarises some issues others have been having).
It appears I wasn't missing anything. I had a rogue polygon (in a different shader program) that, when depth testing was enabled, occluded everything. The above is a correct setup.