I'm trying to implement a height map with GLSL.
To do that, I need to send my image to the vertex shader and read its grey component.
glActiveTexture(GL_TEXTURE0);
Texture.bind();
glUniform1i(mShader.getUniformLocation("heightmap"), 0);
mShader.getUniformLocation wraps glGetUniformLocation and works fine for the other uniforms used in my fragment and vertex shaders, but for heightmap it returns -1...
Vertex shader code:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec4 color;
layout (location = 2) in vec2 texCoords;
layout (location = 3) in vec3 normal;
out vec3 Normal;
out vec3 FragPos;
out vec2 TexCoords;
out vec4 ourColor;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform sampler2D heightmap;
void main()
{
float bias = 0.25;
float h = 0.0;
float scale = 5.0;
h = scale * ((texture2D(heightmap, texCoords).r) - bias);
vec3 hnormal = vec3(normal.x*h, normal.y*h, normal.z*h);
vec3 position1 = position * hnormal;
gl_Position = projection * view * model * vec4(position1, 1.0f);
FragPos = vec3(model * vec4(position, 1.0f));
Normal = mat3(transpose(inverse(model))) * normal;
ourColor = color;
TexCoords = texCoords;
}
Maybe my algorithm for computing the height is bad, but the error getting the uniform location is what stops my work.
What is wrong? Any ideas?
UPD: of course it is texCoords (not TexCoords) that should be used in
h = scale * ((texture2D(heightmap, texCoords).r) - bias);
My mistake, but fixing it doesn't solve the problem. I still get the same error.
My bet is that your variable has been optimized out by the driver, or that the shader did not compile/link properly. After trying to compile your shader (on my nVidia) I got this in the logs:
0(9) : warning C7050: "TexCoords" might be used before being initialized
You should always check the GLSL compile/link logs; see
How to debug GLSL Fragment shader
especially how glGetShaderInfoLog is used.
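For reference, here is a minimal sketch of such a check (the function and variable names are illustrative, not from your code; only the glGetShader*/glGetProgram* calls matter):

#include <iostream>
#include <vector>
// plus your usual GL loader header (GLEW, GLAD, ...)

// Minimal sketch: print the compile log of a shader object and the link log
// of a program object when something went wrong.
void checkShaderAndProgram(GLuint shader, GLuint program)
{
    GLint status = GL_FALSE;

    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE)
    {
        GLint len = 0;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
        std::vector<char> log(len > 1 ? len : 1);
        glGetShaderInfoLog(shader, (GLsizei)log.size(), nullptr, log.data());
        std::cerr << "shader compile log:\n" << log.data() << std::endl;
    }

    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if (status != GL_TRUE)
    {
        GLint len = 0;
        glGetProgramiv(program, GL_INFO_LOG_LENGTH, &len);
        std::vector<char> log(len > 1 ? len : 1);
        glGetProgramInfoLog(program, (GLsizei)log.size(), nullptr, log.data());
        std::cerr << "program link log:\n" << log.data() << std::endl;
    }
}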
In the line
h = scale * ((texture2D(heightmap, TexCoords).r) - bias);
you are using TexCoords, which is an output variable that has not been set yet, so the behavior is undefined. Most likely your gfx driver threw that line away (and maybe others), removing TexCoords from the shader completely, but that is just my assumption.
What driver and gfx card have you got?
What do the logs return on your setup?
I draw vertex normals using a geometry shader. Everything shows up as expected, except that when I move the camera some lines partially disappear. At first I thought this was due to the frustum size, but I have other objects in the scene, bigger than this one, that are drawn just fine.
Before movement
After movement
If anyone could give me any pointers on how to get rid of this line-disappearing effect, I would really appreciate it.
Below is the code of my geometry shader:
#version 330 core
layout (triangles) in;
layout (line_strip, max_vertices = 6) out;
in Data{
vec4 position;
vec4 t_position;
vec4 normal;
vec2 texCoord;
vec4 color;
mat4 mvp;
mat4 view;
mat4 mv;
} received[];
out Data{
vec4 color;
vec2 uv;
vec4 normal;
vec2 texCoord;
mat4 view;
} gdata;
const float MAGNITUDE = 1.5f;
void GenerateLine(int index) {
const vec4 green = vec4(0.0f, 1.0f, 0.0f, 1.0f);
const vec4 blue = vec4(0.0f, 0.0f, 1.0f, 1.0f);
gl_Position = received[index].t_position;
//gdata.color = received[index].color;
gdata.color = green;
EmitVertex();
gl_Position = received[index].t_position + received[index].normal * MAGNITUDE;
//gdata.color = received[index].color;
gdata.color = blue;
EmitVertex();
EndPrimitive();
}
void main() {
GenerateLine(0); // First vertex normal
GenerateLine(1); // Second vertex normal
GenerateLine(2); // Third vertex normal
}
Vertex Shader
#version 330
layout(location = 0) in vec3 Position;
layout(location = 1) in vec2 TexCoord;
layout(location = 2) in vec3 Normal;
out Data{
vec4 position;
vec4 t_position;
vec4 normal;
vec2 texCoord;
vec4 color;
mat4 mvp;
mat4 view;
mat4 mv;
} vdata;
//MVP
uniform mat4 model;
uniform mat4 projection;
uniform mat4 view;
void main() {
vdata.position = vec4(Position, 1.0f);
vdata.normal = view * model * vec4(Normal, 0.0);
vdata.texCoord = TexCoord;
vdata.view = view;
vec4 modelColor = vec4(0.8f, 0.8f, 0.8f, 1.0f);
vdata.color = modelColor;
vdata.mvp = projection * view * model;
vdata.mv = view * model;
vdata.t_position = vdata.mvp * vdata.position;
gl_Position = vdata.t_position;
}
Referring to the answer by Illia (May 18 '16 at 22:53):
I originally had my m_zNear equal to 0.1, but when I switched it to 1.0 the lines stopped disappearing. I am not completely sure why that is. If anyone knows, please share it.
The disappearing lines are caused by depth clipping. The vertex is projected (multiplied with the MVP matrix) and then the vertex position is changed AFTER the projection in the geometry shader (GS). These changes in the GS cause the z-value to fall outside normalized device coordinates (NDC) in the perspective division. With a larger zNear value the projected z-value is smaller, so it does not fall outside NDC. Though if the value of MAGNITUDE in the GS were large enough, the lines would be clipped anyway, even with a larger zNear. One option to fix this is to do the projection in the GS.
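As a rough sketch of that option (not the original poster's code): the Data block already carries the untransformed position and the model-view matrix; assuming it is extended with the projection matrix as an extra member (called proj here), GenerateLine can offset in view space and only project when emitting:

// Sketch only: assumes the interface block also carries the projection matrix
// as "proj", which is not present in the original code.
void GenerateLine(int index) {
    const vec4 green = vec4(0.0f, 1.0f, 0.0f, 1.0f);
    const vec4 blue = vec4(0.0f, 0.0f, 1.0f, 1.0f);

    // View-space position of the vertex (model-view applied, no projection yet)
    vec4 viewPos = received[index].mv * received[index].position;

    gl_Position = received[index].proj * viewPos;
    gdata.color = green;
    EmitVertex();

    // Offset along the view-space normal first, then project the displaced point,
    // so clipping is done against the real clip-space position of the line's end.
    vec4 offsetPos = viewPos + normalize(received[index].normal) * MAGNITUDE;
    gl_Position = received[index].proj * offsetPos;
    gdata.color = blue;
    EmitVertex();

    EndPrimitive();
}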
If anyone else runs into this issue, the solution is the following:
glm::perspective(45.0f, m_aspectRatio, m_zNear, m_zFar);
I originally had my m_zNear equal to 0.1, but when I switched it to 1.0 the lines stopped disappearing. I am not completely sure why that is. If anyone knows, please share it.
Have you tried googling zFar and zNear? Here you go.
Or at least tried googling that miraculous glm::perspective(...) function?
I'm trying to implement some basic lighting and shading following the tutorials over here and here.
Everything is more or less working but I get some kind of strange flickering on object surfaces due to the shading.
I have two images attached to show you guys how this problem looks.
I think the problem is related to the fact that I'm passing vertex coordinates from the vertex shader to the fragment shader to compute some lighting variables, as described in the tutorials linked above.
Here is some source code (unrelated code stripped out).
Vertex Shader:
#version 150 core
in vec4 pos;
in vec4 in_col;
in vec2 in_uv;
in vec4 in_norm;
uniform mat4 model_view_projection;
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
void main(void) {
gl_Position = model_view_projection * pos;
out_col = in_col;
out_vert = pos;
out_norm = in_norm;
passed_uv = in_uv;
}
and Fragment Shader:
#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
out vec4 col;
void main(void) {
mat3 norm_mat = mat3(transpose(inverse(model_mat)));
vec3 norm = normalize(norm_mat * vec3(in_norm));
vec3 light_pos = vec3(0.0, 6.0, 0.0);
vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
vec3 col_pos = vec3(model_mat * vert_pos);
vec3 s_to_f = light_pos - col_pos;
float brightness = dot(norm, normalize(s_to_f));
brightness = clamp(brightness, 0, 1);
gl_FragColor = out_col;
gl_FragColor = vec4(brightness * light_col.rgb * gl_FragColor.rgb, 1.0);
}
As I said earlier, I guess the problem has to do with the way the vertex position is passed to the fragment shader. If I change the position values to something static, the flickering stops.
I changed all the other values to static values, too, with the same result: no flickering as long as I am not using the vertex position data passed from the vertex shader.
So, if there is someone out there with some GL-wisdom .. ;)
Any help would be appreciated.
Side note: I'm running all of this on an Intel HD 4000, in case that provides further information.
Thanks in advance!
Ivan
The names of the out variables in the vertex shader and the in variables in the fragment shader need to match. You have this in the vertex shader:
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
and this in the fragment shader:
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
These variables are associated by name, not by order. Except for passed_uv, the names do not match here. For example, you could use these declarations in the vertex shader:
out vec4 passed_col;
out vec2 passed_uv;
out vec4 passed_vert;
out vec4 passed_norm;
and these in the fragment shader:
in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;
Based on the way I read the spec, your shader program should actually fail to link. At least in the GLSL 4.50 spec, in the table on page 43, it lists "Link-Time Error" for this situation. The rules seem somewhat ambiguous in earlier specs, though.
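As an aside, on newer GLSL versions (4.10+, or with ARB_separate_shader_objects) you can also match the interface by explicit locations instead of names; the variable names below are made up for illustration, only the matching location qualifiers matter:

// Vertex shader outputs with explicit locations
layout(location = 0) out vec4 v_col;
layout(location = 1) out vec2 v_uv;

// Fragment shader inputs: names differ, but matching is done by location
layout(location = 0) in vec4 f_col;
layout(location = 1) in vec2 f_uv;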
I know that the same questions have been asked many times, but unfortunately I am unable to find the source of my problem.
With the help of tutorials I've written a small GLSL shader. Right now it works with ambient light and loads normals from a normal map. The issue is that the directional light seems to depend on my viewing angle.
Here are my shaders:
//Vertex Shader
#version 120
attribute vec3 position;
attribute vec2 texCoord;
attribute vec3 normal;
attribute vec3 tangent;
varying vec2 texCoord0;
varying mat3 tbnMatrix;
uniform mat4 transform;
void main(){
gl_Position=transform * vec4(position, 1.0);
texCoord0 = texCoord;
vec3 n = normalize((transform*vec4(normal,0.0)).xyz);
vec3 t = normalize((transform*vec4(tangent,0.0)).xyz);
t=normalize(t-dot(t,n)*n);
vec3 btTangent=cross(t,n);
tbnMatrix=transpose(mat3(t,btTangent,n));
}
//Fragment Shader
#version 120
varying vec2 texCoord0;
varying mat3 tbnMatrix;
struct BaseLight{
vec3 color;
float intensity;
};
struct DirectionalLight{
BaseLight base;
vec3 direction;
};
uniform sampler2D diffuse;
uniform sampler2D normalMap;
uniform vec3 ambientLight;
uniform DirectionalLight directionalLight;
vec4 calcLight(BaseLight base, vec3 direction,vec3 normal){
float diffuseFactor=dot(normal,normalize(direction));
vec4 diffuseColor = vec4(0,0,0,0);
if(diffuseFactor>0){
diffuseColor=vec4(base.color,1.0)* base.intensity *diffuseFactor;
}
return diffuseColor;
}
vec4 calcDirectionalLight(DirectionalLight directionalLight ,vec3 normal){
return calcLight(directionalLight.base,directionalLight.direction,normal);
}
void main(){
vec3 normal =tbnMatrix*(255.0/128.0* texture2D(normalMap,texCoord0).xyz-255.0/256.0);
vec4 totalLight = vec4(ambientLight,0) +calcDirectionalLight(directionalLight, normal);
gl_FragColor=texture2D(diffuse,texCoord0)*totalLight;
}
"transform" matrix that I send to the shader is summarily computed this way:
viewProjection=m_perspective* glm::lookAt(CameraPosition,CameraPosition+m_forward,m_up);
glm::mat4 transform = viewProjection * object_matrix;
"object_matrix" is a matrix that I get directly from physics engine.(I think it's the matrix that defines position and rotation of the object in the world space, correct me if I'm wrong.)
And I guess that "transform" matrix is computed correctly, since all the objects are drawn in the right positions. It looks like the problem is related to normals because if I set gl_FragColor = vec4(normal, 0) the color is also changing with camera rotation.
I would greatly appreciate if anyone could point me to my mistake
I've successfully rendered my scene from my light's point of view onto a depth cubemap, but I don't quite understand how I can actually project it onto my scene.
Here's a short clip of the current situation: http://youtu.be/54WXDWxqmXw
I found an implementation example on how to do it over here:
http://www.opengl.org/discussion_boards/showthread.php/174093-GLSL-cube-shadows-projecting?p=1219162&viewfull=1#post1219162
It seemed fairly easy to understand, so I figured this would be a great way to start off with, but I'm having some difficulties with the matrices (as shown in the video above).
My Vertex Shader:
#version 330 core
layout(std140) uniform ViewProjection
{
mat4 V;
mat4 P;
};
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
out vec4 posCs;
uniform mat4 M;
uniform mat4 lightView;
void main()
{
mat4 MVP = P *V *M;
gl_Position = MVP *vec4(vertexPosition,1);
UV = vertexUV;
posCs = V *M *vec4(vertexPosition,1);
}
Fragment Shader:
#version 330 core
in vec2 UV;
in vec4 posCs;
out vec4 color;
// Diffuse texture
uniform sampler2D renderTexture;
uniform samplerCubeShadow shadowCubeMap;
uniform mat4 lightView;
uniform mat4 lightProjection;
uniform mat4 camViewInv;
void main()
{
color = texture2D(renderTexture,UV).rgba;
mat4 lView = mat4(1); // The light is currently at the world origin, so we'll skip the transformation for now (The less potential error sources the better)
vec4 posLs = lView *camViewInv *posCs;
vec4 posAbs = abs(posLs);
float fs_z = -max(posAbs.x,max(posAbs.y,posAbs.z));
vec4 clip = lightProjection *vec4(0.0,0.0,fs_z,1.0);
float depth = (clip.z /clip.w) *0.5 +0.5;
vec4 r = shadowCube(shadowCubeMap,vec4(posLs.xyz,depth));
color *= r;
}
(I've only posted the relevant parts)
lightProjection is the same projection matrix that I've used to render the scene into the cubemap.
I'm not entirely sure about 'camViewInv'; from the example I've linked above I came up with this:
glm::mat4 camViewInv(
camView[0][0],camView[1][0],camView[2][0],0.0f,
camView[0][1],camView[1][1],camView[2][1],0.0f,
camView[0][2],camView[1][2],camView[2][2],0.0f,
camPos[0],camPos[1],camPos[2],1.0f
);
camView being the camera's view matrix, and camPos the camera's worldspace position.
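For a rigid view matrix (rotation plus translation only), that hand-built matrix should be nothing more than the inverse of the view matrix, so an equivalent and presumably less error-prone way to build it would be:

// Equivalent to the hand-built matrix above for a pure rotation/translation view matrix
glm::mat4 camViewInv = glm::inverse(camView);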
Everything else should be self-explanatory I believe.
I can't see anything wrong with the shaders, but I'm fairly certain the scene is rendered correctly to the cubemap (as shown in the video above). Maybe someone more versed than me can spot the issue.
// Update:
Some additional information about the creation / usage of the shadow cubemap:
Creating the cubemap texture:
unsigned int frameBuffer;
glGenFramebuffers(1,&frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER,frameBuffer);
unsigned int texture;
glGenTextures(1,&texture);
glBindTexture(GL_TEXTURE_CUBE_MAP,texture);
glTexParameteri(GL_TEXTURE_CUBE_MAP,GL_TEXTURE_COMPARE_FUNC,GL_LEQUAL);
glTexParameteri(GL_TEXTURE_CUBE_MAP,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP,GL_TEXTURE_WRAP_R,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_CUBE_MAP,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_CUBE_MAP,GL_TEXTURE_COMPARE_MODE,GL_COMPARE_R_TO_TEXTURE);
for(int i=0;i<6;i++)
{
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X +i,0,GL_DEPTH_COMPONENT,size,size,0,GL_DEPTH_COMPONENT,GL_FLOAT,0);
glFramebufferTexture2D(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,GL_TEXTURE_CUBE_MAP_POSITIVE_X +i,texture,0);
glDrawBuffer(GL_NONE);
}
The light's matrices:
glm::perspective<float>(90.f,1.f,2.f,m_distance); // Projection Matrix
// View Matrices
glm::vec3 pos = GetPosition(); // Light worldspace position
glm::lookAt(pos,pos +glm::vec3(1,0,0),glm::vec3(0,1,0));
glm::lookAt(pos,pos +glm::vec3(-1,0,0),glm::vec3(0,1,0));
glm::lookAt(pos,pos +glm::vec3(0,1,0),glm::vec3(0,0,-1));
glm::lookAt(pos,pos +glm::vec3(0,-1,0),glm::vec3(0,0,1));
glm::lookAt(pos,pos +glm::vec3(0,0,1),glm::vec3(0,1,0));
glm::lookAt(pos,pos +glm::vec3(0,0,-1),glm::vec3(0,1,0));
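For completeness, here is a rough sketch of how these matrices are presumably combined with the object's model matrix into the shadowMVP uniform used by the shadow shaders below; shadowViews, shadowProgram and model are assumed names, not from the original code:

// Sketch: render the scene depth into each cubemap face in turn.
glm::mat4 shadowProj = glm::perspective<float>(90.f, 1.f, 2.f, m_distance);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glViewport(0, 0, size, size);
glUseProgram(shadowProgram);
for (int face = 0; face < 6; ++face)
{
    // Attach the cubemap face we are about to render into
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
        GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, texture, 0);
    glClear(GL_DEPTH_BUFFER_BIT);

    glm::mat4 shadowMVP = shadowProj * shadowViews[face] * model;
    glUniformMatrix4fv(glGetUniformLocation(shadowProgram, "shadowMVP"),
        1, GL_FALSE, &shadowMVP[0][0]);
    // ... issue the draw calls for the scene geometry here ...
}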
Vertex Shader:
#version 330 core
layout(location = 0) in vec4 vertexPosition;
uniform mat4 shadowMVP;
void main()
{
gl_Position = shadowMVP *vertexPosition;
}
Fragment Shader:
#version 330 core
layout(location = 0) out float fragmentDepth;
void main()
{
fragmentDepth = gl_FragCoord.z;
}
I would suggest doing this in world space; light positions are typically defined in world space, and it will reduce the workload if you keep it that way. I removed a bunch of uniforms that you do not need if you do this in world space.
Compute the lighting direction and depth in the vertex shader:
#version 330 core
layout(std140) uniform ViewProjection
{
mat4 V;
mat4 P;
};
layout(location = 0) in vec4 vertexPosition; // W is automatically assigned 1, if missing.
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
out vec4 lightDirDepth; // Direction = xyz, Depth = w
uniform mat4 M;
uniform vec3 lightPos; // World Space Light Pos
uniform vec2 shadowZRange; // Near / Far clip plane distances for shadow's camera
float vecToDepth (vec3 Vec)
{
vec3 AbsVec = abs (Vec);
float LocalZcomp = max (AbsVec.x, max (AbsVec.y, AbsVec.z));
const float n = shadowZRange [0]; // Near plane when the shadow map was built
const float f = shadowZRange [1]; // Far plane when the shadow map was built
float NormZComp = (f+n) / (f-n) - (2.0*f*n)/(f-n)/LocalZcomp;
return (NormZComp + 1.0) * 0.5;
}
void main()
{
mat4 MVP = P *V *M;
gl_Position = MVP *vertexPosition;
UV = vertexUV;
vec3 lightDir = lightPos - (M *vertexPosition).xyz;
float lightDepth = vecToDepth (lightDir);
lightDirDepth = vec4 (lightDir, lightDepth);
}
Modified Fragment Shader (sample cubemap using light dir, and test against depth):
#version 330 core
in vec2 UV;
in vec4 lightDirDepth; // Direction = xyz, Depth = w
out vec4 color;
// Diffuse texture
uniform sampler2D renderTexture;
uniform samplerCubeShadow shadowCubeMap;
void main()
{
const float bias = 0.0001; // Prevent shadow acne
color = texture (renderTexture,UV).rgba;
float r = texture (shadowCubeMap, vec4 (lightDirDepth.xyz, lightDirDepth.w + bias));
color *= r;
}
I added two new uniforms:
lightPos -- World space position of your light
shadowZRange -- The values of your near and far plane when you built your shadow cube map, packed into a vec2
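Setting these two uniforms from the client side might look roughly like this (program stands for your lighting shader program; GetPosition() and m_distance are from your own code, and the 2.f near plane mirrors your glm::perspective call for the shadow pass):

glUseProgram(program);

// World-space light position
glm::vec3 lightPos = GetPosition();
glUniform3fv(glGetUniformLocation(program, "lightPos"), 1, &lightPos[0]);

// Near / far planes used when the shadow cube map was rendered
glm::vec2 shadowZRange(2.f, m_distance);
glUniform2fv(glGetUniformLocation(program, "shadowZRange"), 1, &shadowZRange[0]);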
Let me know if you need me to explain anything or if this does not produce meaningful results.