...and have it actually work. I get the principle: you write a vertex program, something like this, say:
attribute vec3 v_pos;
attribute vec4 v_color;
attribute vec2 v_uv;
attribute vec3 v_rotation; // [angle, x, y]

uniform mat4 modelview_mat;
uniform mat4 projection_mat;

varying vec4 frag_color;
varying vec2 uv_vec;

void main (void) {
    mat4 trans_in = mat4(
        1.0, 0.0, 0.0, 50.0, // <--- Transformation matrix
        0.0, 1.0, 0.0, 50.0,
        0.0, 0.0, 1.0, 50.0,
        0.0, 0.0, 0.0, 1.0
    );
    vec4 pos = trans_in * vec4(v_pos, 1.0); // <--- apply to input

    // Mark a vertex using color to prove a transformation is actually happening...
    if (v_rotation[0] > 10.0) {
        frag_color = vec4(1.0, 0.0, 0.0, 1.0);
        gl_Position = projection_mat * vec4(pos[0], pos[1], 1.0, 1.0);
    }
    // ...and leave all the other vertices untouched.
    else {
        frag_color = v_color;
        gl_Position = projection_mat * vec4(v_pos, 1.0); // <--- Untransformed output
    }
    uv_vec = v_uv; // <--- Pass UV to fragment program
}
The problem is, this doesn't actually work.
After applying the matrix transformation trans_in * v_pos, I expect a point [1, 2, 3] to become [51, 52, 53, 1].
...but it doesn't. In fact, it renders this:
(i.e. no transformation of the point location; pos = trans_in * v_pos == vec4(v_pos, 1.0)! O_o)
Notice the red marked vertices that prove that I am actually setting the gl_Position for them; indeed, if I do this:
gl_Position = projection_mat * vec4(1.0, 1.0, 1.0, 1.0);
Each of those red points jumps down to the bottom corner, as you would expect.
I've also tried various 3x3 matrix multiplications, and while the scale operations work and, to some extent, the rotation operations work, I cannot for the life of me get any 2D translation to happen; the matrix multiplication just seems to... do nothing.
What am I doing wrong?
You got the matrix order wrong. GLSL uses column-major order, so each row in your initializer becomes a column of the matrix. This reflects the same convention that was used with the (now deprecated) GL matrix stack. It is also consistent with the transpose parameter of the glUniformMatrix*() calls, which has to be set to GL_FALSE for column-major input (where the translation part sits in elements m[12], m[13], m[14] of the 1D array).
Your matrix actually only alters the w component of your vector, which you then ignore, so it does not have any visible effect.
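For example, a version of the matrix with the translation where GLSL expects it would look roughly like this (keeping the (50, 50, 50) translation from the question):

// Column-major: each group of four values is one COLUMN of the matrix,
// so the translation belongs in the last four arguments (elements 12-14).
mat4 trans_in = mat4(
    1.0,  0.0,  0.0,  0.0,   // column 0
    0.0,  1.0,  0.0,  0.0,   // column 1
    0.0,  0.0,  1.0,  0.0,   // column 2
    50.0, 50.0, 50.0, 1.0    // column 3: translation
);
vec4 pos = trans_in * vec4(v_pos, 1.0); // [1, 2, 3] now becomes [51, 52, 53, 1]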
Related
I'm trying to draw to a cubemap in a single pass, using a geometry shader in OpenGL.
Basically I need to do this to copy the content of one cubemap into another cubemap, and the two may not have the same resolution and pixel layout.
I'm trying to achieve this by feeding a single point to the vertex shader and then, from the geometry shader, selecting each layer (face of the cubemap) and emitting a quad and texture coordinates.
So far I've tried this method emitting only two of the cubemap faces (positive and negative X) to see if it could work, but it doesn't.
Using NSight I can see that there is something wrong.
This is the source cubemap:
And this is the result cubemap:
The only face that's being drawn to is the positive X and still it's not correct.
This is my geometry shader:
#version 330 core

layout(points) in;
layout(triangle_strip, max_vertices = 8) out;

in vec3 pos[];

out vec3 frag_textureCoord;

void main()
{
    const vec4 positions[4] = vec4[4] ( vec4(-1.0, -1.0, 0.0, 0.0),
                                        vec4( 1.0, -1.0, 0.0, 0.0),
                                        vec4(-1.0,  1.0, 0.0, 0.0),
                                        vec4( 1.0,  1.0, 0.0, 0.0) );

    // Positive X
    gl_Layer = 0;
    gl_Position = positions[0];
    frag_textureCoord = vec3(1.0, -1.0, -1.0);
    EmitVertex();
    gl_Position = positions[1];
    frag_textureCoord = vec3(1.0, -1.0, 1.0);
    EmitVertex();
    gl_Position = positions[2];
    frag_textureCoord = vec3(1.0, 1.0, -1.0);
    EmitVertex();
    gl_Position = positions[3];
    frag_textureCoord = vec3(1.0, 1.0, 1.0);
    EmitVertex();
    EndPrimitive();

    // Negative X
    gl_Layer = 1;
    gl_Position = positions[0];
    frag_textureCoord = vec3(-1.0, -1.0, 1.0);
    EmitVertex();
    gl_Position = positions[1];
    frag_textureCoord = vec3(-1.0, -1.0, -1.0);
    EmitVertex();
    gl_Position = positions[2];
    frag_textureCoord = vec3(-1.0, 1.0, 1.0);
    EmitVertex();
    gl_Position = positions[3];
    frag_textureCoord = vec3(-1.0, 1.0, -1.0);
    EmitVertex();
    EndPrimitive();
}
And this is my fragment shader:
#version 150 core

uniform samplerCube AtmosphereMap;

in vec3 frag_textureCoord;
out vec4 FragColor;

void main()
{
    FragColor = texture(AtmosphereMap, frag_textureCoord) * 1.0f;
}
UPDATE
Further debugging with NSight shows that for the positive X face every fragment gets a frag_textureCoord value of vec3(~1.0, ~0.0, ~0.0) (I've used ~ since the values are not exactly those, just approximately). The negative X face, on the other hand, never reaches the fragment shader stage.
UPDATE
Changing the definition of my vertex position from vec4(x, y, z, 0.0) to vec4(x, y, z, 1.0) makes my shader render the positive X face correctly, but the negative one is still wrong: when debugging the fragment shader I see that the right color is selected and applied, but then it becomes black.
gl_Layer = 0;
This is a Geometry Shader output. Calling EmitVertex will cause the value of all output variables to become undefined. Therefore, you must always set each output for each vertex to which that output applies.
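A sketch of what that means for the positive X face (the same pattern applies to every face): gl_Layer, like the other outputs, is written again before each EmitVertex call.

// Positive X; gl_Layer is re-set for every vertex, because EmitVertex
// leaves all output variables (including gl_Layer) undefined afterwards.
gl_Layer = 0;
gl_Position = positions[0];
frag_textureCoord = vec3(1.0, -1.0, -1.0);
EmitVertex();

gl_Layer = 0;
gl_Position = positions[1];
frag_textureCoord = vec3(1.0, -1.0, 1.0);
EmitVertex();

gl_Layer = 0;
gl_Position = positions[2];
frag_textureCoord = vec3(1.0, 1.0, -1.0);
EmitVertex();

gl_Layer = 0;
gl_Position = positions[3];
frag_textureCoord = vec3(1.0, 1.0, 1.0);
EmitVertex();

EndPrimitive();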
I am currently trying to apply a normal map in my shader but the shading in the final image is way off.
Surfaces that should be shaded are completely bright, surfaces that should be bright are completely shaded, and the top surface, which should have the same shade regardless of rotation about the y-axis, alternates between bright and dark.
After some trial and error I found out that I can get the correct shading by changing this
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
to this
vec3 normal_viewspace = normal_matrix * normalize(vec3(0.0, 0.0, 1.0));
Diffuse and specular lighting are now working correctly,
but obviously without the normal map applied. I honestly have no idea where exactly the error is originating. I am quite new to shader programming and was following this tutorial. Below are the shader sources, with all irrelevant parts cut.
Vertex shader:
#version 450

layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 tangent;
layout(location = 3) in vec3 bitangent;
layout(location = 4) in vec2 texture_coordinates;

layout(location = 0) out mat3 normal_matrix;
layout(location = 3) out vec2 texture_coordinates_out;
layout(location = 4) out vec4 vertex_position_viewspace;

layout(set = 0, binding = 0) uniform Matrices {
    mat4 world;
    mat4 view;
    mat4 projection;
} uniforms;

void main() {
    mat4 worldview = uniforms.view * uniforms.world;
    normal_matrix = mat3(worldview) * mat3(normalize(tangent), normalize(bitangent), normalize(normal));

    vec4 vertex_position_worldspace = uniforms.world * vec4(position, 1.0);
    vertex_position_viewspace = uniforms.view * vertex_position_worldspace;

    gl_Position = uniforms.projection * vertex_position_viewspace;
    texture_coordinates_out = texture_coordinates;
}
Fragment shader:
#version 450

layout(location = 0) in mat3 normal_matrix;
layout(location = 3) in vec2 texture_coordinates;
layout(location = 4) in vec4 vertex_position_viewspace;

layout(location = 0) out vec4 fragment_color;

layout(set = 0, binding = 0) uniform Matrices {
    mat4 world;
    mat4 view;
    mat4 projection;
} uniforms;

// ...
layout(set = 0, binding = 2) uniform sampler2D normal_map;
// ...

const vec4 LIGHT = vec4(1.25, 3.0, 3.0, 1.0);

void main() {
    // ...
    vec4 normal_color = texture(normal_map, texture_coordinates);
    // ...
    vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);

    vec4 light_position_viewspace = uniforms.view * LIGHT;
    vec3 light_direction_viewspace = normalize((light_position_viewspace - vertex_position_viewspace).xyz);
    vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz);

    vec3 light_color_intensity = vec3(1.0, 1.0, 1.0) * 7.0;
    float distance_from_light = distance(vertex_position_viewspace, light_position_viewspace);

    float diffuse_strength = clamp(dot(normal_viewspace, light_direction_viewspace), 0.0, 1.0);
    vec3 diffuse_light = (light_color_intensity * diffuse_strength) / (distance_from_light * distance_from_light);

    // ...
    fragment_color.rgb = (diffuse_color.rgb * diffuse_light);
    fragment_color.a = diffuse_color.a;
}
There are some things I am a bit uncertain about. For example, I noticed that in the tutorial the light is called lightPosition_worldSpace, making me think I need to multiply the light by the world matrix first, but doing so only makes my light rotate with the cube and still doesn't fix my lighting issue.
Any help or ideas on what i could be doing wrong would be greatly appreciated.
I'm the one who created the tutorial site you're referencing.
If possible, could you share a link to your normal map as well? You say that changing the line where the fragment's normal is calculated from the normal map, from this
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
to one where you hardcode a value like this
vec3 normal_viewspace = normal_matrix * normalize(vec3(0.0, 0.0, 1.0));
fixes the rendering issue. That seems to indicate an issue with the normal map itself.
One way to verify this is to set your entire normal map image to the RGB value (128, 128, 255), which decodes to exactly the vec3(0.0, 0.0, 1.0) value you were using in your changed line. If the object then renders correctly, just like with the hardcoded value, that means the normal map itself was bad.
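If editing the image is inconvenient, roughly the same test can be done directly in the fragment shader by temporarily overriding the sampled value (a debugging sketch using the names from your shader, not something to keep in the final code):

vec4 normal_color = texture(normal_map, texture_coordinates);
// Debugging override: pretend the whole normal map is the flat color (128, 128, 255),
// i.e. (0.5, 0.5, 1.0) in GLSL, which decodes to the tangent-space normal (0, 0, 1).
normal_color.xyz = vec3(0.5, 0.5, 1.0);
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);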
The normal map is just a texture/image that stores the directions of the normals of your object in "tangent-space" (think of it as like if you had to flatten out your entire object into a 2D surface, and then the normals for each point of that surface is plotted on the map). For each pixel, the red channel represents the X-axis, the green channel represents the Y-axis, and the blue channel represents the Z-axis.
With colors, the range of colors in a normal map goes from (0, 0, 128) to (255, 255, 255) (for images where each color channel uses 8 bits/1 byte), but in GLSL this would be a range from (0.0, 0.0, 0.5) to (1.0, 1.0, 1.0). Let's just work with the range that is used in GLSL for the sake of simplicity.
When looking at the actual possible values for normals, their range actually is (-1.0, -1.0, 0.0) to (1.0, 1.0, 1.0) because you can have a normal direction be either forwards or backwards in either the X-axis or Y-axis.
So when we have a color value of (0.0, 0.0, 0.5), we're actually talking about a normal direction vector (-1.0, -1.0, 0.0). Similarly, a color value of (0.5, 0.5, 0.5) means the normal direction vector (0.0, 0.0, 0.0), and a color value of (1.0, 1.0, 1.0) means a normal value of (1.0, 1.0, 1.0).
So the goal now becomes transforming the value from the normal map from the color value range ((0.0, 0.0, 0.5) to (1.0, 1.0, 1.0)) to the actual range for normals ((-1.0, -1.0, 0.0) to (1.0, 1.0, 1.0)).
If you multiply a value from a normal map by 2.0, you change the possible range of the value from (0.0, 0.0, 0.5) - (1.0, 1.0, 1.0) to (0.0, 0.0, 1.0) - (2.0, 2.0, 2.0). And then if you subtract 1.0 from the result, the range now changes from (0.0, 0.0, 1.0) - (2.0, 2.0, 2.0) to (-1.0, -1.0, 0.0) - (1.0, 1.0, 1.0), which is exactly the possible range of the normals of an object.
So you have to make sure that when you're creating your normal map, the range of the RGB color values is between (0, 0, 128) - (255, 255, 255).
Side note: As for why the range of the blue channel (Z-axis) in the normal map can only be between 128 and 255: a value less than 128 means a negative value on the Z-axis, i.e. the normal of the fragment is pointing into the surface, not out of it. Since a normal map is supposed to represent the values of the normals when the surface of the object is flattened and facing towards you, having a normal with a negative Z-axis value would mean that at that point the surface is actually facing away from you, which doesn't really make sense, hence why negative values are not allowed.
You could still try having the blue channel be a value less than 128 and see what interesting results pop out.
Also with regards to the doubt you mentioned in the end and in the comments:
What does lightPosition_worldSpace mean?
lightPosition_worldSpace represents the coordinate at which the light is present relative to the center of the world (relative to the entire world you're rendering), hence the world-space suffix. You just need to multiply this position with your view matrix if you wish to know the position of the light in view-space (relative to your camera).
If you have a coordinate that is relative to the center of the object you're rendering, then you should multiply it with your model matrix (uniforms.world) to transform that coordinate from one that's relative to the center of your model to one that's relative to the center of the world. Since lightPosition_worldSpace is already the position of the light relative to the center of the world, you don't need to multiply it by the world matrix. This is why you saw the light move with the cube when you tried doing so (the light was moved because its coordinates were treated as being relative to the cube itself).
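In code terms (using the names from your shader; LIGHT_objectSpace is hypothetical, only there to illustrate the contrast):

// The light position is already given in world space, so only the view matrix is applied:
vec4 light_position_viewspace = uniforms.view * LIGHT;

// If the position were given in object space instead (hypothetical LIGHT_objectSpace),
// it would first have to be lifted into world space:
// vec4 light_position_viewspace = uniforms.view * uniforms.world * LIGHT_objectSpace;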
Your comment regarding confusion with the line vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0));
This is bad on my part for not representing what vec3(0.0, 0.0, 0.0) is with a variable. This is supposed to represent the position of the camera in view-space. Since in view-space the camera is at the center, its coordinate is vec3(0.0, 0.0, 0.0).
As for why I'm doing
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0));
when
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz);
is simpler and basically the same thing: I had written it that way to make it more obvious what was happening (which it appears I failed to do).
Typically, when you have two coordinates and you want to find the direction from a source coordinate to a destination coordinate, you subtract the two coordinates to get their direction + magnitude. By normalizing that difference, you keep just the directional component, with the magnitude removed. So the equation for finding a direction from a source coordinate to a destination coordinate becomes:
direction = normalize(destination coordinate - source coordinate)
view_direction_viewspace is supposed to represent the direction from the camera towards the fragment. To calculate this, we can just subtract the position of the camera (vec3(0.0, 0.0, 0.0)) from the position of the fragment (vertex_position_viewspace.xyz) and then run normalize(...) on the difference to get that result.
I've generally tried to maintain this consistency where when I'm calculating a direction using two coordinates I always have a destination and source coordinate explicitly written out, hence why you see the line vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0)); in the fragment shader code.
I've updated the code by replacing vec3(0.0, 0.0, 0.0) with a variable cameraPosition_viewSpace and using that, to better clarify this intention.
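Presumably the clarified lines now read something like this:

// In view space the camera sits at the origin.
const vec3 cameraPosition_viewSpace = vec3(0.0, 0.0, 0.0);

// direction = normalize(destination - source): from the camera towards the fragment.
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - cameraPosition_viewSpace);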
Feel free to reach out through GitHub issues if you want to ask anything else or help improve the tutorial.
I haven't updated this post in a while because I have completely shifted away from using normal mapping (for now), but I still wanted to post an answer in case someone else runs into the same problem. I still can't be 100% sure, but I am fairly certain that this behavior was caused by the library I was using to load the normal map. Special thanks to sabarnac, who has been a huge help to me in solving this.
After deciding to try programming in modern OpenGL, I've left behind the fixed-function pipeline and I'm not entirely sure how to get the same functionality I had before.
I'm trying to texture map quads with pixel perfect size, matching the texture size. For example, a 128x128 texture maps to a quad 128x128 in size.
This is my vertex shader.
#version 110

uniform float xpos;
uniform float ypos;
uniform float tw; // texture width in pixels
uniform float th; // texture height in pixels

attribute vec4 position;
varying vec2 texcoord;

void main()
{
    mat4 projectionMatrix = mat4( 2.0/600.0, 0.0, 0.0, -1.0,
                                  0.0, 2.0/800.0, 0.0, -1.0,
                                  0.0, 0.0, -1.0, 0.0,
                                  0.0, 0.0, 0.0, 1.0);
    gl_Position = position * projectionMatrix;
    texcoord = (gl_Position.xy);
}
This is my fragment shader:
#version 110

uniform float fade_factor;
uniform sampler2D textures[1];

varying vec2 texcoord;

void main()
{
    gl_FragColor = texture2D(textures[0], texcoord);
}
My vertex data is as such, where w and h are the width and height of the texture.
[
0, 0,
w, 0,
w, h,
0, h
]
I load a 128x128 texture and with these shaders I see the image repeated 4 times: http://i.stack.imgur.com/UY7Ts.jpg
Can anyone offer advice on the correct way to translate and scale given the tw, th, xpos, ypos uniforms?
There's a problem with this:
mat4 projectionMatrix = mat4( 2.0/600.0, 0.0, 0.0, -1.0,
                              0.0, 2.0/800.0, 0.0, -1.0,
                              0.0, 0.0, -1.0, 0.0,
                              0.0, 0.0, 0.0, 1.0);
gl_Position = position * projectionMatrix;
Transformation matrices are right-associative, i.e. you should multiply in the opposite order. Also, you normally don't specify a projection matrix in the shader; you pass it as a uniform. OpenGL provides ready-to-use uniforms for projection and modelview. In OpenGL-3 core you can reuse the uniform names to stay compatible.
// predefined by OpenGL version < 3 core:
#if __VERSION__ < 400
uniform mat4 gl_ProjectionMatrix;
uniform mat4 gl_ModelViewMatrix;
uniform mat4 gl_ModelViewProjectionMatrix;       // premultiplied gl_ProjectionMatrix * gl_ModelViewMatrix
uniform mat4 gl_ModelViewMatrixInverseTranspose; // needed for transforming normals
attribute vec4 gl_Vertex;
varying vec4 gl_TexCoord[];
#endif

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Next you must understand that texture coordinates don't address texture pixels (texels); rather, the texture should be understood as an interpolating function over the given sampling points. Texture coordinates 0 and 1 don't hit the texel centers but lie exactly on the wraparound boundary, which causes blurring. As long as your quad's on-screen size exactly matches the texture dimensions this is fine, but as soon as you want to show just a subimage things get interesting. (I leave the exact mapping as an exercise to the reader; hint: you'll have the terms 0.5/dimension and (dimension - 1)/dimension in the solution.)
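As a rough sketch of the translate-and-scale part using the uniforms from the question (assumptions: an 800x600 window, vertex positions given in pixels relative to the quad's corner as in the posted vertex data, texture coordinates simply running 0..1 across the quad, and the projection matrix kept in the shader for brevity even though a uniform is the cleaner choice):

#version 110

uniform float xpos;  // quad position in pixels
uniform float ypos;
uniform float tw;    // texture width in pixels
uniform float th;    // texture height in pixels

attribute vec4 position; // quad corners: (0,0), (tw,0), (tw,th), (0,th)
varying vec2 texcoord;

void main()
{
    // Assumed 800x600 window: map pixel coordinates to normalized device coordinates.
    // Column-major constructor: the translation (-1, -1) lives in the last column.
    mat4 projectionMatrix = mat4( 2.0/800.0, 0.0,       0.0, 0.0,
                                  0.0,       2.0/600.0, 0.0, 0.0,
                                  0.0,       0.0,      -1.0, 0.0,
                                 -1.0,      -1.0,       0.0, 1.0);

    // Translate the quad by (xpos, ypos) in pixels, then project (matrix on the left).
    vec4 pixelPos = position + vec4(xpos, ypos, 0.0, 0.0);
    gl_Position = projectionMatrix * pixelPos;

    // Texture coordinates run 0..1 across the quad, independent of screen position.
    texcoord = position.xy / vec2(tw, th);
}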
I have implemented shadow mapping with an FBO and GLSL.
It is used on a heightfield. That is, some objects (trees, plants, ...) cast shadows on the heightfield.
The problem I have is that the shadows are only visible on the ground of the heightfield, that is, where the heightfield's height = 0. As soon as there is some height involved, the shadows disappear. If I look at the shadowmap itself, everything looks fine... objects that are closer to the light are darker.
Here is my GLSL vertex shader:
uniform mat4 lightView, lightProjection;

const mat4 biasMatrix = mat4( 0.5, 0.0, 0.0, 0.0,
                              0.0, 0.5, 0.0, 0.0,
                              0.0, 0.0, 0.5, 0.0,
                              0.5, 0.5, 0.5, 1.0); // bias from [-1, 1] to [0, 1]

void main()
{
    gl_Position = ftransform();

    mat4 shadowMatrix = biasMatrix * lightProjection * lightView;
    shadowTexCoord = shadowMatrix * gl_Vertex;
}
Fragment shader:
uniform sampler2DShadow shadowmap;
varying vec4 shadowTexCoord;

void main()
{
    vec4 shadow = shadow2DProj(shadowmap, shadowTexCoord, 0.0);
    float colorshadow = shadow.r < 0.1 ? 0.5 : 1.0;

    vec4 color = vec4(1, 1, 1, 1);
    gl_FragColor = vec4(color.rgb * colorshadow, color.w);
}
Thanks a lot for any help on this!
I think there might be some confusion between the different spaces here. As written, it looks like your code would only work if gl_ModelViewMatrix for the ground contains only camera transformations. This is because ftransform basically goes
gl_Position = gl_ProjectionMatrix * (gl_ModelViewMatrix * gl_Vertex)
That means gl_Vertex is specified in object coordinates. However, the light's view matrix typically maps from world coordinates to the light's view space, so this code only works if object space = world space. Say you scale the terrain: object space no longer equals world space. Because of this you need to separate gl_ModelViewMatrix into two parts: the camera view matrix and the modeling transform (e.g. object -> world space).
I haven't tested this code, but I would try something like this:
uniform mat4 lightView, lightProjection;
uniform mat4 camView, camProj, modelTrans;

const mat4 biasMatrix = mat4( 0.5, 0.0, 0.0, 0.0,
                              0.0, 0.5, 0.0, 0.0,
                              0.0, 0.0, 0.5, 0.0,
                              0.5, 0.5, 0.5, 1.0); // bias from [-1, 1] to [0, 1]

void main()
{
    mat4 modelViewProjMatrix = camProj * camView * modelTrans;
    gl_Position = modelViewProjMatrix * gl_Vertex;

    mat4 shadowMatrix = biasMatrix * lightProjection * lightView * modelTrans;
    shadowTexCoord = shadowMatrix * gl_Vertex;
}
Technically it's faster to multiply the matrices on the CPU and only pass the exact ones you need, but for getting things working it's sometimes easier to do it this way.
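If you later move the multiplications to the CPU, the vertex shader shrinks to something like this (a sketch; modelViewProjMatrix and shadowMatrix are assumed to be uploaded already combined, e.g. via glUniformMatrix4fv):

// Both matrices are combined on the CPU and uploaded as single uniforms.
uniform mat4 modelViewProjMatrix; // camProj * camView * modelTrans
uniform mat4 shadowMatrix;        // biasMatrix * lightProjection * lightView * modelTrans

varying vec4 shadowTexCoord;

void main()
{
    gl_Position = modelViewProjMatrix * gl_Vertex;
    shadowTexCoord = shadowMatrix * gl_Vertex;
}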
Maybe you just missed it copy-pasting, but I don't see shadowTexCoord as varying in the vertex shader. This should result in a compilation error, though.