I draw vertex normals using a geometry shader. Everything shows up as expected, except that when I move the camera some lines partially disappear. At first I thought this was due to the frustum size, but other, bigger objects in the scene are drawn just fine.
Before movement
After movement
If anyone could give me a pointer on how to get rid of this line-disappearing effect, I would really appreciate it.
Below is the code of my geometry shader:
#version 330 core

layout (triangles) in;
layout (line_strip, max_vertices = 6) out;

in Data {
    vec4 position;
    vec4 t_position;
    vec4 normal;
    vec2 texCoord;
    vec4 color;
    mat4 mvp;
    mat4 view;
    mat4 mv;
} received[];

out Data {
    vec4 color;
    vec2 uv;
    vec4 normal;
    vec2 texCoord;
    mat4 view;
} gdata;

const float MAGNITUDE = 1.5f;

void GenerateLine(int index) {
    const vec4 green = vec4(0.0f, 1.0f, 0.0f, 1.0f);
    const vec4 blue = vec4(0.0f, 0.0f, 1.0f, 1.0f);

    gl_Position = received[index].t_position;
    //gdata.color = received[index].color;
    gdata.color = green;
    EmitVertex();

    gl_Position = received[index].t_position + received[index].normal * MAGNITUDE;
    //gdata.color = received[index].color;
    gdata.color = blue;
    EmitVertex();

    EndPrimitive();
}

void main() {
    GenerateLine(0); // First vertex normal
    GenerateLine(1); // Second vertex normal
    GenerateLine(2); // Third vertex normal
}
Vertex Shader
#version 330

layout(location = 0) in vec3 Position;
layout(location = 1) in vec2 TexCoord;
layout(location = 2) in vec3 Normal;

out Data {
    vec4 position;
    vec4 t_position;
    vec4 normal;
    vec2 texCoord;
    vec4 color;
    mat4 mvp;
    mat4 view;
    mat4 mv;
} vdata;

// MVP
uniform mat4 model;
uniform mat4 projection;
uniform mat4 view;

void main() {
    vdata.position = vec4(Position, 1.0f);
    vdata.normal = view * model * vec4(Normal, 0.0);
    vdata.texCoord = TexCoord;
    vdata.view = view;
    vec4 modelColor = vec4(0.8f, 0.8f, 0.8f, 1.0f);
    vdata.color = modelColor;
    vdata.mvp = projection * view * model;
    vdata.mv = view * model;
    vdata.t_position = vdata.mvp * vdata.position;
    gl_Position = vdata.t_position;
}
Referring to the answer by Illia (May 18 '16 at 22:53):
I originally had my m_zNear equal to 0.1, but when I switched it to 1.0 the lines stopped disappearing. I am not completely sure why that is. If anyone knows, please share.
The disappearing lines are caused by depth clipping. The vertex is projected (multiplied by the MVP matrix), and then its position is changed after projection in the geometry shader (GS). These changes can push the z-value outside normalized device coordinates (NDC) in the perspective division. With a larger zNear the projected z-value is smaller, so it no longer falls outside NDC. Still, if the value of MAGNITUDE in the GS were large enough, the lines would be clipped anyway, even with a larger zNear. One option to fix this is to do the projection in the GS.
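For reference, a minimal sketch of that fix applied to the question's GenerateLine. It assumes a projection uniform is added to the geometry shader (the Data block only carries mvp and mv) and that received[].normal is the view-space normal from the vertex shader:

uniform mat4 projection; // assumption: passed to the GS separately

void GenerateLine(int index) {
    // Offset in view space first, project last, so both endpoints
    // get valid clip-space w/z and the line is clipped correctly.
    vec4 viewPos = received[index].mv * received[index].position;

    gl_Position = projection * viewPos;
    gdata.color = vec4(0.0f, 1.0f, 0.0f, 1.0f);
    EmitVertex();

    gl_Position = projection * (viewPos + normalize(received[index].normal) * MAGNITUDE);
    gdata.color = vec4(0.0f, 0.0f, 1.0f, 1.0f);
    EmitVertex();

    EndPrimitive();
}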
If anyone else runs into this issue, the solution is the following:
glm::perspective(45.0f, m_aspectRatio, m_zNear, m_zFar);
I originally had my m_zNear equal to 0.1, but when I switched it to 1.0 the lines stopped disappearing. I am not completely sure why that is. If anyone knows, please share.
Have you tried googling zFar and zNear? Here you go.
Or at least googling that miraculous glm::perspective(...) function?
Related
I don't know the best description for my target effect, but I want it to look like this:
I made these two attempts, where the color around the diagonal line is closer to the bottom-right vertex.
#version 450

layout(binding = 0) uniform UniformBufferObject {
    mat4 model;
    mat4 view;
    mat4 proj;
} ubo;

layout(location = 0) in vec2 inPosition;
layout(location = 1) in vec3 inColor;
layout(location = 2) in vec2 inUV;

layout(location = 0) out vec3 fragColor;

vec3 azure = { 0.0f, 0.5f, 0.5f };
vec3 blue = { 0.0f, 0.0f, 1.0f };
vec3 green = { 0.0f, 1.0f, 0.0f };

float translate(float i) {
    return i + 0.5;
}

float helpfunc(int x) {
    return mix(mix(blue[x], green[x], translate(inPosition.y)), mix(azure[x], azure[x], translate(inPosition.y)), translate(inPosition.x));
}

void main() {
    gl_Position = ubo.proj * ubo.view * ubo.model * vec4(inPosition, 0.0, 1.0);
    fragColor = vec3(helpfunc(0), helpfunc(1), helpfunc(2));
}
I thought this might be caused by the color space, yet after exchanging the positions of blue and green in helpfunc, the color of the distinctive diagonal line changed but its position did not. I guess the problem is in the algorithm, but I can't figure out what it is or how to solve it.
Thanks to krOoze, I found a way to solve my problem, though I don't know if it is the best solution.
The only thing I did was move the "calculate the color" step from the vertex shader into the fragment shader: vertex colors are interpolated linearly across each triangle, so a quad split into two triangles interpolates differently on either side of the diagonal, while evaluating the bilinear mix per fragment removes the seam.
Here's my code:
// vertex shader
#version 450

layout(binding = 0) uniform UniformBufferObject {
    mat4 model;
    mat4 view;
    mat4 proj;
} ubo;

layout(location = 0) in vec2 inPosition;
layout(location = 1) in vec3 inColor;
layout(location = 2) in vec2 inUV;

layout(location = 0) out vec2 outPosition;

void main() {
    gl_Position = ubo.proj * ubo.view * ubo.model * vec4(inPosition, 0.0, 1.0);
    outPosition = inPosition;
}
// fragment shader
#version 450

layout(location = 0) in vec2 outPosition;
layout(location = 0) out vec4 outColor;

vec3 red = { 1.0f, 0.0f, 0.0f };
vec3 blue = { 0.0f, 0.0f, 1.0f };
vec3 green = { 0.0f, 1.0f, 0.0f };
vec3 white = { 1.0f, 1.0f, 1.0f };

float translate(float i) {
    return i + 0.5;
}

float helpfunc(int x) {
    return mix(mix(blue[x], green[x], translate(outPosition.y)), mix(white[x], red[x], translate(outPosition.y)), translate(outPosition.x));
}

void main() {
    outColor = vec4(helpfunc(0), helpfunc(1), helpfunc(2), 1.0);
}
And here is my outcome: a smooth gradient.
I have an OpenGL 3.3 program which has different objects in it, for example a simple cube. The cube's dimensions are 1x1x1 (vertices from -0.5, -0.5, -0.5 to 0.5, 0.5, 0.5) and it is textured with one 2D texture on each side. The texture is repeatable (seamless).
With my current code the model scaling looks like this (ignore the actual texture):
After scaling it looks like this:
In this case the texture should keep its size in the z-direction but repeat along the z-axis.
Is there a good way to scale the texture with the model's scaling to prevent it from stretching? Or do I have to create a 3D texture?
The problem I found is that in my shader I only get the (scaled) point of the cube, for example -0.5, -1.5, -0.5, but the texture's coordinates are only 2D (0.0, 0.0), and I don't know which side of the texture I have to scale since I don't know which side it is currently being rendered on.
For the sake of completeness, however, here is the vertex shader code:
#version 330 core

layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;

out vec2 TexCoord;
out vec3 FragPos;
out vec3 Normal;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = mat3(transpose(inverse(model))) * aNormal;
    TexCoord = aTexCoord;
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
The fragment shader looks like this:
#version 330 core

out vec4 FragColor;

in vec2 TexCoord;

// texture samplers
uniform sampler2D texture_diffuse1;
uniform vec4 color;

void main()
{
    FragColor = color + texture(texture_diffuse1, TexCoord);
}
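One common way to handle this (a sketch, not code from this thread): ignore the baked UVs and derive repeating texture coordinates in the fragment shader from the world-space position and normal, so one texture tile always covers one world unit no matter how the model is scaled. This assumes the FragPos and Normal outputs of the vertex shader above and GL_REPEAT wrapping on the texture:

#version 330 core

out vec4 FragColor;

in vec3 FragPos;  // world-space position from the vertex shader above
in vec3 Normal;   // world-space normal from the vertex shader above

uniform sampler2D texture_diffuse1;
uniform vec4 color;

// Pick the two world axes perpendicular to the dominant component of the
// normal and use the world position on those axes as UVs, so the texture
// tiles in world units instead of stretching with the model's scale.
vec2 worldUV(vec3 p, vec3 n)
{
    vec3 an = abs(n);
    if (an.z >= an.x && an.z >= an.y) return p.xy; // faces pointing along +/-Z
    if (an.y >= an.x)                 return p.xz; // faces pointing along +/-Y
    return p.yz;                                   // faces pointing along +/-X
}

void main()
{
    FragColor = color + texture(texture_diffuse1, worldUV(FragPos, normalize(Normal)));
}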
I tried to implement a height map with GLSL.
For it, I need to send my picture to the vertex shader and read its grey component.
glActiveTexture(GL_TEXTURE0);
Texture.bind();
glUniform1i(mShader.getUniformLocation("heightmap"), 0);
mShader.getUniformLocation uses glGetUniformLocation and works fine for the other uniform values used in the fragment and vertex shaders. But for heightmap it returns -1...
Vertex shader code:
#version 330 core

layout (location = 0) in vec3 position;
layout (location = 1) in vec4 color;
layout (location = 2) in vec2 texCoords;
layout (location = 3) in vec3 normal;

out vec3 Normal;
out vec3 FragPos;
out vec2 TexCoords;
out vec4 ourColor;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform sampler2D heightmap;

void main()
{
    float bias = 0.25;
    float h = 0.0;
    float scale = 5.0;
    h = scale * ((texture2D(heightmap, texCoords).r) - bias);
    vec3 hnormal = vec3(normal.x * h, normal.y * h, normal.z * h);
    vec3 position1 = position * hnormal;
    gl_Position = projection * view * model * vec4(position1, 1.0f);
    FragPos = vec3(model * vec4(position, 1.0f));
    Normal = mat3(transpose(inverse(model))) * normal;
    ourColor = color;
    TexCoords = texCoords;
}
Maybe my algorithm for getting the height is bad, but the error with getting the uniform location is what stops my work.
What is wrong? Any ideas?
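(For comparison, a more typical height-map displacement moves the vertex along its unit normal by the sampled height, rather than multiplying the position by a scaled normal; a minimal sketch against the inputs above, not necessarily the intended effect:

// Usual displacement: offset the vertex along its unit normal by the height.
float h = scale * (texture(heightmap, texCoords).r - bias);
vec3 displaced = position + normalize(normal) * h;
gl_Position = projection * view * model * vec4(displaced, 1.0);
)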
UPD: texCoords (not TexCoords) is of course what is used in
h = scale * ((texture2D(heightmap, texCoords).r) - bias);
My mistake, but fixing it doesn't solve the problem. I'm still getting the same error.
My bet is that your variable has been optimized out by the driver, or that the shader did not compile/link properly. After trying to compile your shader (on my nVidia card) I got this in the logs:
0(9) : warning C7050: "TexCoords" might be used before being initialized
You should always check the GLSL compile/link logs; see
How to debug GLSL Fragment shader
especially how glGetShaderInfoLog is used.
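For reference, a minimal sketch of such a check (plain C++; assumes a valid GL context and loaded function pointers; the helper name is made up):

#include <cstdio>
#include <vector>

// Compile a shader from source and print the info log on failure.
GLuint compileWithLog(GLenum type, const char* src)
{
    GLuint sh = glCreateShader(type);
    glShaderSource(sh, 1, &src, nullptr);
    glCompileShader(sh);

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        GLint len = 0;
        glGetShaderiv(sh, GL_INFO_LOG_LENGTH, &len);
        std::vector<char> log(len > 1 ? len : 1);
        glGetShaderInfoLog(sh, (GLsizei)log.size(), nullptr, log.data());
        std::fprintf(stderr, "shader compile failed:\n%s\n", log.data());
    }
    return sh;
}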
In the line
h = scale * ((texture2D(heightmap, TexCoords).r) - bias);
you are using TexCoords, which is an output variable and not yet set, so the behavior is undefined. Most likely your gfx driver throws that line away (and maybe others), removing TexCoords from the shader completely, but that is just my assumption.
What driver and gfx card have you got?
What do the logs return on your setup?
I'd like to display a simple UV sphere (exported from Blender) and generate lines along the normals using a single geometry shader.
At first, I wrote a simple geometry shader which just passes the input vertex information through to the fragment shader. For the sake of simplicity (for the example) I removed the lighting calculations from the fragment shader.
Vertex shader:
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

uniform mat4 MVP;

out vec3 VPosition;
out vec3 VNormal;

void main(void)
{
    VNormal = VertexNormal;
    gl_Position = vec4(VertexPosition, 1.0f);
}
Geometry shader:
#version 400

layout(points) in;
layout(line_strip, max_vertices = 2) out;

uniform mat4 MVP;

in vec3 VNormal[];
out vec3 fcolor;

void main(void)
{
    float size = 2.5f;

    fcolor = vec3(0.0f, 0.0f, 1.0f);
    gl_Position = MVP * gl_in[0].gl_Position;
    EmitVertex();

    fcolor = vec3(1.0f, 1.0f, 0.0f);
    gl_Position = MVP * vec4(gl_in[0].gl_Position.xyz + vec3(
        VNormal[0].x * size, VNormal[0].y * size, VNormal[0].z * size), 1.0f);
    EmitVertex();

    EndPrimitive();
}
And the fragment shader:
#version 400

in vec3 Position;
in vec3 Normal;
in vec2 TexCoords;

out vec4 FragColor;
in vec3 fcolor;

void main(void)
{
    FragColor = vec4(fcolor, 1.0f);
}
Now, in the C++ code, the primitive type to draw (here triangles):
glDrawArrays(GL_TRIANGLES, 0, meshList[idx]->getVertexBuffer()->getBufferSize());
And finally the output:
Up to this point everything is OK.
Now I want to generate strands on the sphere along the normals. To get the job done I wrote the following geometry shader (the vertex and fragment shaders are the same).
#version 400

layout(points) in;
layout(line_strip, max_vertices = 2) out;

uniform mat4 MVP;

in vec3 VNormal[];
out vec3 fcolor;

void main(void)
{
    float size = 1.0f;

    fcolor = vec3(0.0f, 0.0f, 1.0f);
    gl_Position = MVP * gl_in[0].gl_Position;
    EmitVertex();

    fcolor = vec3(1.0f, 1.0f, 0.0f);
    gl_Position = MVP * vec4(gl_in[0].gl_Position.xyz + vec3(
        VNormal[0].x * size, VNormal[0].y * size, VNormal[0].z * size), 1.0f);
    EmitVertex();

    EndPrimitive();
}
The input primitive type being points, I modified the C++ code to draw the scene:
glDrawArrays(GL_POINTS, 0, meshList[idx]->getVertexBuffer()->getBufferSize());
And the output:
Finally, if I want triangles as the input primitive and a line_strip as the output primitive of the geometry shader, I have the following shader:
#version 400

layout(triangles, invocations = 3) in;
layout(line_strip, max_vertices = 6) out;

uniform mat4 MVP;

in vec3 VNormal[];
out vec3 fcolor;

void main(void)
{
    float size = 1.0f;

    for (int i = 0; i < 3; i++)
    {
        fcolor = vec3(0.0f, 0.0f, 1.0f);
        gl_Position = MVP * gl_in[i].gl_Position;
        EmitVertex();

        fcolor = vec3(1.0f, 1.0f, 0.0f);
        gl_Position = MVP * vec4(gl_in[0].gl_Position.xyz + vec3(
            VNormal[0].x * size, VNormal[0].y * size, VNormal[0].z * size), 1.0f);
        EmitVertex();

        EndPrimitive();
    }
}
And the output is the following:
But my goal is to display the whole scene (sphere + strands) in one output using the same geometry shader. I'd like to know if this is possible. I don't think so, because a geometry shader must have exactly one input primitive type and one output primitive type, not several. But I want to be sure whether it's possible or not.
Who knows, maybe one day there'll be an extension to emit multiple primitive types from a geometry shader, but as you say it can't currently be done.
One alternative might be to draw the normal lines with triangles instead.
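For illustration, a sketch of that alternative (the halfWidth uniform is hypothetical): each strand is expanded into a thin screen-facing quad, so it can be emitted as a triangle_strip like the rest of the geometry.

#version 400

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 MVP;
uniform float halfWidth; // hypothetical: half the strand thickness in NDC units, e.g. 0.002

in vec3 VNormal[];
out vec3 fcolor;

void main(void)
{
    vec4 a = MVP * gl_in[0].gl_Position;
    vec4 b = MVP * vec4(gl_in[0].gl_Position.xyz + VNormal[0], 1.0f);

    // Screen-space direction of the strand, then its perpendicular
    // (ignores aspect ratio and assumes both ends are in front of the camera).
    vec2 dir = normalize(b.xy / b.w - a.xy / a.w);
    vec2 off = vec2(-dir.y, dir.x) * halfWidth;

    fcolor = vec3(0.0f, 0.0f, 1.0f);
    gl_Position = a + vec4(off * a.w, 0.0f, 0.0f); EmitVertex();
    gl_Position = a - vec4(off * a.w, 0.0f, 0.0f); EmitVertex();

    fcolor = vec3(1.0f, 1.0f, 0.0f);
    gl_Position = b + vec4(off * b.w, 0.0f, 0.0f); EmitVertex();
    gl_Position = b - vec4(off * b.w, 0.0f, 0.0f); EmitVertex();

    EndPrimitive();
}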
Another, but completely useless in this case, might be to use the transform feedback extension to save the vertex shader results and reuse that data with two separate geometry shaders. I only mention this as it's the closest thing I could think of to emit multiple primitive types after the vertex stage.
EDIT
The two geometry shaders for drawing normals confuse me. In the second one, max_vertices = 3, which should be 6 for 3 separate lines, and EndPrimitive should also be inside the for-loop so the 3 lines aren't connected. But you've already sorted this out by drawing GL_POINTS in the previous one. Is this intended to be structured for multiple-primitive output, if it were supported? (fixed)
Given that your geometry reuses many vertices, indexed drawing with glDrawElements would be more efficient. Although you'd still want to use glDrawArrays for drawing the normal lines, to avoid drawing duplicate vertices referenced by the index array, as in the sketch below.
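A sketch of how the two draw calls might sit together (the program, VAO, and count names are hypothetical):

// Pass 1: the indexed sphere mesh, shaded normally.
glUseProgram(meshProgram);
glBindVertexArray(meshVao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);

// Pass 2: the normal strands via the points -> line_strip geometry shader,
// drawn non-indexed so each unique vertex emits exactly one strand.
glUseProgram(strandProgram);
glBindVertexArray(strandVao);
glDrawArrays(GL_POINTS, 0, uniqueVertexCount);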
I've been learning OpenGL for the past couple of weeks and I've run into some trouble implementing a Phong shader. It appears to do no interpolation between vertices despite my use of the smooth qualifier. Am I missing something here? To give credit where credit is due, the code for the vertex and fragment shaders cribs heavily from the OpenGL SuperBible, Fifth Edition. I would highly recommend this book!
Vertex Shader:
#version 330

in vec4 vVertex;
in vec3 vNormal;

uniform mat4 mvpMatrix;  // mvp = ModelViewProjection
uniform mat4 mvMatrix;   // mv = ModelView
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;

smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;

void main(void) {
    vVaryingNormal = normalMatrix * vNormal;
    vec4 vPosition4 = mvMatrix * vVertex;
    vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
    vVaryingLightDir = normalize(vLightPosition - vPosition3);
    gl_Position = mvpMatrix * vVertex;
}
Fragment Shader:
#version 330

out vec4 vFragColor;

uniform vec4 ambientColor;
uniform vec4 diffuseColor;
uniform vec4 specularColor;

smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;

void main(void) {
    float diff = max(0.0, dot(normalize(vVaryingNormal), normalize(vVaryingLightDir)));
    vFragColor = diff * diffuseColor;
    vFragColor += ambientColor;

    vec3 vReflection = normalize(reflect(-normalize(vVaryingLightDir), normalize(vVaryingNormal)));
    float spec = max(0.0, dot(normalize(vVaryingNormal), vReflection));
    if (diff != 0) {
        float fSpec = pow(spec, 32.0);
        vFragColor.rgb += vec3(fSpec, fSpec, fSpec);
    }
}
This (public domain) image from Wikipedia shows exactly what sort of image I'm getting and what I'm aiming for -- I'm getting the "flat" image but I want the "Phong" image.
Any help would be greatly appreciated. Thank you!
edit: If it makes a difference, I'm using PyOpenGL 3.0.1 and Python 2.6.
edit2:
Solution
It turns out the problem was with my geometry; Kos was correct. For anyone else that's having this problem with Blender models, Kos pointed out that doing Edit->Faces->Set Smooth does the trick. I found that Wings 3D worked "out of the box."
As an addition to this answer, here is a simple geometry shader which will let you visualize your normals. Modify the accompanying vertex shader as needed based on your attribute locations and how you send your matrices.
But first, a picture of a giant bunny head from our friend the Stanford bunny, as an example of the result!
Major warning: do note that I get away with transforming the normals with the modelview matrix instead of a proper normal matrix. This won't work correctly if your modelview contains non-uniform scaling. Also, the lengths of your normals won't be correct, but that matters little if you just want to check their direction.
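If your modelview does contain non-uniform scaling, a proper normal matrix can be derived from it inside the vertex shader below; a two-line sketch (computing inverse() per vertex is wasteful, so in real code you'd precompute it as a uniform):

// Proper normal transform: inverse-transpose of the upper-left 3x3 of mv.
mat3 normalMatrix = transpose(inverse(mat3(mv)));
vdata.normal = vec4(normalMatrix * normal.xyz, 0.0);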
Vertex shader:
#version 330

layout(location = 0) in vec4 position;
layout(location = 1) in vec4 normal;
layout(location = 2) in mat4 mv;

out Data
{
    vec4 position;
    vec4 normal;
    vec4 color;
    mat4 mvp;
} vdata;

uniform mat4 projection;

void main()
{
    vdata.mvp = projection * mv;
    vdata.position = position;
    vdata.normal = normal;
}
Geometry shader:
#version 330

layout(triangles) in;
layout(line_strip, max_vertices = 6) out;

in Data
{
    vec4 position;
    vec4 normal;
    vec4 color;
    mat4 mvp;
} vdata[3];

out Data
{
    vec4 color;
} gdata;

void main()
{
    const vec4 green = vec4(0.0f, 1.0f, 0.0f, 1.0f);
    const vec4 blue = vec4(0.0f, 0.0f, 1.0f, 1.0f);

    for (int i = 0; i < 3; i++)
    {
        gl_Position = vdata[i].mvp * vdata[i].position;
        gdata.color = green;
        EmitVertex();

        gl_Position = vdata[i].mvp * (vdata[i].position + vdata[i].normal);
        gdata.color = blue;
        EmitVertex();

        EndPrimitive();
    }
}
Fragment shader:
#version 330

in Data
{
    vec4 color;
} gdata;

out vec4 outputColor;

void main()
{
    outputColor = gdata.color;
}
Hmm... you're interpolating the normal as a varying variable, so the fragment shader should receive a correctly interpolated per-pixel normal.
The only explanation I can think of for your result looking like the left image is that every fragment on a given face ultimately receives the same normal. You can confirm it with a fragment shader like:
void main() {
    vFragColor = vec4(normalize(vVaryingNormal), 1.0);
}
If it's the case, the question remains: Why? The vertex shader looks OK.
So maybe there's something wrong in your geometry? What is the data which you send to the shader? Are you sure you have correctly calculated per-vertex normals, not just per-face normals?
The orange lines are normals of the diagonal face, the red lines are normals of the horizontal face.
If your data looks like the above image, then even with a correct shader you'll get flat shading. Make sure that you have correct per-vertex normals like on the lower image. (They are really simple to calculate for a sphere.)
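For a sphere centered at the origin, the smooth per-vertex normal is just the normalized vertex position; a minimal sketch (using GLM, which already appears elsewhere on this page; the helper name is made up):

#include <glm/glm.hpp>

// Smooth per-vertex normal for a sphere: the direction from its center
// to the vertex.
glm::vec3 sphereNormal(const glm::vec3& vertex, const glm::vec3& center)
{
    return glm::normalize(vertex - center);
}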