I'm pretty new to GLSL and am attempting to get the two cubes I'm loading in to have different coloration based upon the normal of each of their faces. Instead, the entirety of both cubes is exactly the same color. As I rotate the camera, the coloration changes from blue to green and red, but all sides of both cubes always simultaneously remain the same color. I want something more along the lines of the top of the cubes being blue, one side being green, one side being red, etc. I don't particularly care which side is which color, just so long as the different sides aren't all the same color.
Vertex Shader
varying vec3 viewVert;
varying vec3 normal;

void main()
{
    // gl_ModelViewMatrix and gl_NormalMatrix are built-in uniforms in this
    // GLSL version, so they don't need (and shouldn't get) a redeclaration.
    // gl_Vertex is a vec4; take .xyz to store the eye-space position in a vec3.
    viewVert = (gl_ModelViewMatrix * gl_Vertex).xyz;
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}
Fragment Shader
varying vec3 viewVert;
varying vec3 normal;

void main()
{
    vec3 nor = normalize(normal);
    // Negative components are clamped to 0.0 on output, so faces whose
    // normal points along a negative axis come out black.
    gl_FragColor = vec4(nor, 1.0);
}
I assume I'm doing something wrong when transforming between spaces, causing all the normals to end up the same, but I'm not sure what. Perhaps I'm accidentally using the camera's normal instead of the normals of the faces?
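For reference, a common way to debug this is to remap the normal from [-1, 1] into [0, 1] before writing it out, so that negative components don't clamp to black. A minimal sketch, reusing the varying from the shader above:

vec3 nor = normalize(normal);
gl_FragColor = vec4(nor * 0.5 + 0.5, 1.0); // remap [-1, 1] to [0, 1]

With this, six distinct face normals should show up as six distinct colors.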
Related
Maybe someone could give me a hint. I wrote a shader that draws a circle on a plane. The circle is colored with two colors mixed together. I would like to make only the circle visible and the plane transparent. I think I need an if statement in the fragment shader, but I can't write it properly to make it work. Below I am pasting my fragment shader. I will be grateful for any hint.
fragmentShader: `
    #define PI2 6.28318530718

    uniform vec3 u_color1;
    uniform vec3 u_color2;

    varying vec2 vUv;
    varying vec3 vPosition;

    float circle(vec2 pt, vec2 center, float radius, float edge_thickness){
        vec2 p = pt - center;
        float len = length(p);
        float result = 1.0 - smoothstep(radius - edge_thickness, radius, len);
        return result;
    }

    void main (void)
    {
        vec3 col = mix(u_color1, u_color2, vUv.y);
        vec3 color = col * circle(vPosition.xy, vec2(0.0), 10.0, 0.002);
        gl_FragColor = vec4(color, 1.0);
    }
`,
On the CPU side you need:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Also make sure your rendering context has some alpha bits allocated. If not, use a different blending function (one that does not use alpha).
On the GPU side you use the alpha channel to set the transparency (for the blend function above), so:
gl_FragColor = vec4(color, alpha_transparency);
It's also usual to pass a vec4 color in RGBA form directly from the CPU side instead.
On top of all this you also need to render your stuff in the correct order; see:
OpenGL - How to create Order Independent transparency?
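Applied to the circle shader above, that means returning the circle mask through the alpha channel instead of multiplying it into the color. A minimal sketch of just the changed main(), using the names from the question:

void main (void)
{
    vec3 col = mix(u_color1, u_color2, vUv.y);
    float mask = circle(vPosition.xy, vec2(0.0), 10.0, 0.002);
    // mask is 1.0 inside the circle and 0.0 outside,
    // so the plane outside the circle renders fully transparent
    gl_FragColor = vec4(col, mask);
}

If this is a three.js ShaderMaterial, the material also needs transparent: true for the alpha channel to take effect.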
I'm doing some OpenGL stuff in Java (LWJGL) for a project, part of which includes importing 3D models in OBJ format. Everything looks OK until I try to displace vertices; then the models break up and you can see right through them.
Here is Suzanne from Blender, UV mapped with a completely black texture (for visibility's sake). In the fragment shader I'm adding some white colour to the fragment depending on the angle between its normal and the world's up vector:
So far so good. But when I apply a small Y-component displacement to the same vertices, I expect to see the faces 'stretch' up. Instead this happens:
Vertex shader:
#version 150

uniform mat4 transform;

in vec3 position;
in vec2 texCoords;
in vec3 normal;

out vec2 texcoordOut;
out float cosTheta;

void main()
{
    texcoordOut = texCoords; // pass through to the fragment shader
    vec3 vertPosModel = position;
    cosTheta = dot(vec3(0.0, 1.0, 0.0), normal);

    // Offset vertices whose normal is angled away from straight up
    if(cosTheta > 0.0 && cosTheta < 1.0)
        vertPosModel += vec3(0.0, 0.15, 0.0);

    gl_Position = transform * vec4(vertPosModel, 1.0);
}
Fragment shader:
#version 150

uniform sampler2D objTexture;

in vec2 texcoordOut;
in float cosTheta;

out vec4 fragColor;

void main()
{
    fragColor = vec4(texture(objTexture, texcoordOut.st).rgb, 1.0) + vec4(cosTheta);
}
So, your algorithm offsets a vertex's position based on a property derived from the vertex's normal. This will only produce a connected mesh if your mesh is completely smooth. That is, where two triangles meet, the normals of the shared vertices must be the same in both triangles.
If the model has discontinuous normals over the surface, then breaks can appear anywhere that the normal stops being continuous. That is, the edges where there is a normal discontinuity may become disconnected.
I'm pretty sure that Blender3D can generate a smooth version of Suzanne. So... did you generate a smooth mesh? Or is it faceted?
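For comparison, a displacement that moves every vertex along its own normal stays watertight exactly when the normals are continuous. A vertex shader sketch, reusing the attribute names from the question (displacement is a hypothetical uniform):

// Move each vertex along its normal; shared vertices with identical
// (smooth) normals move identically, so the surface stays closed.
// With faceted normals the duplicated vertices move apart and gaps open.
vec3 displaced = position + normal * displacement;
gl_Position = transform * vec4(displaced, 1.0);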
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it somehow assigns colors per face in the shader. If the angle between the face normal and the camera is zero, the face is fully lit (according to the color of the face); otherwise it is lit proportionally to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core

uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;

in vec3 in_position; // The vertex position
in vec3 in_normal;   // The computed vertex normal
in vec4 in_color;    // The vertex color

out vec4 color;      // The vertex color (pass-through)

void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);

    // Compute the vertex's normal in camera space
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);

    // Vector from the vertex (in camera space) to the camera (which is at the origin)
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);

    // Compute the angle between the two vectors
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0, 1);

    // The coefficient will create a nice looking shining effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core

in vec4 color;

out vec4 out_frag_color;

void main(void)
{
    out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is simple flat shading with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate the vertex position data for faces which don't share the same normal. For example, a cube would have 24 vertices instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal (see the sketch after this list).
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach because it is simple: you can calculate the normals ahead of time in your program and then use them to calculate the face colors in your vertex shader.
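For reference, the geometry shader variant (the second option) might look like this. This is a sketch that assumes the vertex shader passes the view-space position through an output named view_pos:

#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 view_pos[];   // view-space position from the vertex shader (assumed name)
out vec3 face_normal; // the same flat normal for all three vertices

void main()
{
    // The cross product of two triangle edges gives the face normal
    vec3 n = normalize(cross(view_pos[1] - view_pos[0],
                             view_pos[2] - view_pos[0]));
    for (int i = 0; i < 3; ++i) {
        face_normal = n;
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}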
This is simple flat shading: instead of using per-vertex normals you can evaluate the per-face normal with this GLSL snippet:
// dFdx/dFdy are the screen-space derivatives of the interpolated
// fragment position; their cross product points along the face normal.
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
I have a simple OpenGL application like this:
As you can see, I have a plane and Suzanne with a texture, and also some light. But there is also a cube. You can see just the upper half, but the problem is that the cube stays black, no matter what. I tried to apply a texture: black. I tried to give the cube just a plain color: still black.
in vec3 surfaceNormal;
in vec3 toLightVector;
smooth in vec2 textureCords;

out vec4 outputColor;

uniform sampler2D s;
uniform int hasTexture;
uniform vec3 LightColor;

void main()
{
    vec3 unitNormal = normalize(surfaceNormal);
    vec3 unitLightVector = normalize(toLightVector);

    float ndot = dot(unitNormal, unitLightVector);
    float brightness = max(ndot, 0.0);
    vec3 diffuse = brightness * LightColor * 1.5;

    if(hasTexture == 1){
        outputColor = vec4(diffuse, 1.0) * texture(s, textureCords);
    }else{
        outputColor = vec4(diffuse, 1.0) * vec4(0.4, 0.5, 0.7, 1.0);
    }
}
This is my fragment shader. I pass it a uniform hasTexture to check whether the model should be rendered with a texture or without.
For example:
cube.draw("texture.png");
or
cube.draw();
Since this method works with the two other models, I really don't know what the problem is. Maybe the texture coordinates are wrong?
Based on the given information, my guess is that brightness is 0 because ndot is either 0 or negative. I would check your normals and your toLightVector values. Even if your texture coordinates were wrong, setting the cube to a plain color should have worked.
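A quick way to confirm that is to visualize the diffuse term directly, as a debug sketch using the names from the shader above:

// Debug: output brightness as grayscale. A solid black cube means
// ndot is never positive, i.e. the normals or toLightVector are wrong.
outputColor = vec4(vec3(brightness), 1.0);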
I'm using C++, and when I implemented a diffuse shader it caused every other triangle to disappear.
I can post my render code if need be, but I believe the issue is with my normal matrix (which I wrote to be the transpose of the inverse of the model-view matrix). Here is the shader code, which is similar to that of one of the Lighthouse tutorials.
VERTEX SHADER
#version 330

layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;

uniform mat4 transform_matrix;
uniform mat4 view_model_matrix;
uniform mat4 normal_matrix;
uniform vec3 light_pos;

out vec3 light_intensity;

void main()
{
    vec3 tnorm = normalize(normal_matrix * vec4(normal, 1.0)).xyz;
    vec4 eye_coords = transform_matrix * vec4(position, 1.0);
    vec3 s = normalize(vec3(light_pos - eye_coords.xyz)).xyz;

    vec3 light_set_intensity = vec3(1.0, 1.0, 1.0);
    vec3 diffuse_color = vec3(0.5, 0.5, 0.5);

    light_intensity = light_set_intensity * diffuse_color * max(dot(s, tnorm), 0.0);
    gl_Position = transform_matrix * vec4(position, 1.0);
}
My fragment shader just outputs light_intensity in the form of a color. My model comes straight from Blender, and I have tried different export options like keeping vertex order, but nothing has worked.
This is not related to your shader.
It appears to be depth-test related. The depth order of the triangles relative to your viewport is messed up, because you do not make sure that only the pixel nearest to the camera gets drawn.
Enable depth testing and make sure you have a z buffer bound to your render target.
Read more about this here: http://www.opengl.org/wiki/Depth_Test
Only the triangles highlighted in red should be visible to the viewer. Due to the lack of a valid depth test, there is no guarantee which triangle is painted topmost, so blue triangles of the faces that should not be visible can cover parts of previously drawn red triangles.
The depth test prevents this by comparing the depth already stored in the z-buffer with the depth of the pixel about to be drawn. Only the color information of a pixel that is closer to the viewer, i.e. has a smaller z value than the value in the buffer, is written to the framebuffer, which yields a correct result.
(Backface culling would be nice too and, if the model is exported correctly, would also make it display correctly. But it would only hide the main problem, not solve it.)
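On the CPU side that typically comes down to the following plain OpenGL calls (assuming the context was created with a depth buffer):

glEnable(GL_DEPTH_TEST); // enable the depth test
glDepthFunc(GL_LESS);    // keep only fragments nearer than the stored depth

// each frame, clear the depth buffer together with the color buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);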