In GLSL, one can look up a texel from a texture with the function:
vec4 texture2D(sampler2D sampler, vec2 coord)
I'm wondering if there's a similar function or method that would return the first non-zero texel between two vec2s. In pseudo-code, it'd be something like:
vec4 textureBetween(sampler2D sampler, vec2 starting, vec2 ending)
So it'd be kind of like raycasting - you'd cast a ray from starting to ending and return the texel of the first point you hit or (0.0, 0.0, 0.0, 0.0) if nothing is hit.
This is part of an experiment I'm doing trying to use points instead of instanced meshes for certain animations.
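There is no built-in GLSL function that does this, but it can be approximated with a fixed-step march in the shader. A minimal sketch, assuming a fixed step count is acceptable (the function name, the step count and the non-zero test are illustrative choices):
// Marches from starting to ending in texture space and returns the first
// texel with any component greater than zero, or transparent black if none is hit.
vec4 textureBetween(sampler2D tex, vec2 starting, vec2 ending)
{
    const int STEPS = 64; // arbitrary; choose based on distance and texture size
    for (int i = 0; i <= STEPS; ++i)
    {
        vec2 coord = mix(starting, ending, float(i) / float(STEPS));
        vec4 texel = texture2D(tex, coord);
        if (any(greaterThan(texel, vec4(0.0))))
            return texel;
    }
    return vec4(0.0, 0.0, 0.0, 0.0);
}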
Related
I'm doing some OpenGL stuff in Java (LWJGL) for a project, part of which includes importing 3D models in OBJ format. Everything looks OK until I try to displace vertices; then the models break up and you can see right through them.
Here is Suzanne from Blender, UV-mapped with a completely black texture (for visibility's sake). In the fragment shader I'm adding some white colour to the fragment depending on the angle between its normal and the world's up vector:
So far so good. But when I apply a small Y component displacement to the same vertices, I expect to see the faces 'stretch' up. Instead this happens:
Vertex shader:
#version 150
uniform mat4 transform;
in vec3 position;
in vec2 texCoords;
in vec3 normal;
out vec2 texcoordOut; // consumed by the fragment shader below
out float cosTheta;   // consumed by the fragment shader below
void main()
{
texcoordOut = texCoords;
vec3 vertPosModel = position;
cosTheta = dot(vec3(0.0, 1.0, 0.0), normal);
if(cosTheta > 0.0 && cosTheta < 1.0)
vertPosModel += vec3(0.0, 0.15, 0.0);
gl_Position = transform * vec4(vertPosModel, 1.0);
}
Fragment shader:
#version 150
uniform sampler2D objTexture;
in vec2 texcoordOut;
in float cosTheta;
out vec4 fragColor;
void main()
{
fragColor = vec4(texture(objTexture, texcoordOut.st).rgb, 1.0) + vec4(cosTheta);
}
So, your algorithm offsets a vertex's position based on a property derived from the vertex's normal. This will only produce a connected mesh if your mesh is completely smooth: where two triangles meet, the normals of the shared vertices must be the same in both triangles.
If the model has discontinuous normals over its surface, breaks can appear anywhere the normal stops being continuous. That is, the edges where there is a normal discontinuity may become disconnected.
I'm pretty sure that Blender3D can generate a smooth version of Suzanne. So... did you generate a smooth mesh? Or is it faceted?
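To illustrate the continuity point: any offset computed only from data that adjacent triangles share exactly, such as the position itself, cannot open gaps. For example, this illustrative variation of the vertex shader body above (not a fix for the normal data) would stretch faces without cracking the mesh:
// Both triangles at a shared edge see identical positions for their shared
// vertices, so they compute identical offsets and no cracks can appear.
vec3 vertPosModel = position + vec3(0.0, 0.15 * step(0.0, position.y), 0.0);
gl_Position = transform * vec4(vertPosModel, 1.0);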
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and camera is zero, then the face is fully lit (according to the color of the face), otherwise it is lit proportional to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBOs. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, shaders work on vertices and fragments, not on whole faces. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);
// Compute the vertex's normal in camera space
vec3 normal_cameraspace = normalize(( view_matrix * model_matrix * vec4(in_normal,0)).xyz);
// Vector from the vertex (in camera space) to the camera (which is at the origin)
vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);
// Compute the angle between the two vectors
float cosTheta = clamp( dot( normal_cameraspace, cameraVector ), 0,1 );
// The coefficient will create a nice looking shining effect.
// Also, we shouldn't modify the alpha channel value.
color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces which don't have the same normal. For example, a cube would have 24 vertexes instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
This is simple flat shading: instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
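Put together, a self-contained fragment shader for this approach could look like the sketch below. It assumes FragPos is the world- or eye-space position passed in from the vertex shader; lightPos1 and diffColor1 are illustrative uniform names:
#version 330 core
in vec3 FragPos;            // position passed from the vertex shader
uniform vec3 lightPos1;     // point light position (same space as FragPos)
uniform vec3 diffColor1;    // diffuse light color
out vec4 fragColor;
void main()
{
    // Per-face normal from screen-space derivatives of the interpolated position
    vec3 x = dFdx(FragPos);
    vec3 y = dFdy(FragPos);
    vec3 norm = normalize(cross(x, y));
    // Simple diffuse term for one light
    vec3 lightDir1 = normalize(lightPos1 - FragPos);
    float diff1 = max(dot(norm, lightDir1), 0.0);
    fragColor = vec4(diff1 * diffColor1, 1.0);
}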
I'm using C++, and when I implemented a diffuse shader it caused every other triangle to disappear.
I can post my render code if need be, but I believe the issue is with my normal matrix (which I wrote to be the transpose of the inverse of the model-view matrix). Here is the shader code, which is fairly similar to one from the Lighthouse tutorials.
Vertex shader:
#version 330
layout(location=0) in vec3 position;
layout(location=1) in vec3 normal;
uniform mat4 transform_matrix;
uniform mat4 view_model_matrix;
uniform mat4 normal_matrix;
uniform vec3 light_pos;
out vec3 light_intensity;
void main()
{
vec3 tnorm = normalize(normal_matrix * vec4(normal, 1.0)).xyz;
vec4 eye_coords = transform_matrix * vec4(position, 1.0);
vec3 s = normalize(vec3(light_pos - eye_coords.xyz)).xyz;
vec3 light_set_intensity = vec3(1.0, 1.0, 1.0);
vec3 diffuse_color = vec3(0.5, 0.5, 0.5);
light_intensity = light_set_intensity * diffuse_color * max(dot(s, tnorm), 0.0);
gl_Position = transform_matrix * vec4(position, 1.0);
}
My fragment shader just outputs the "light_intensity" in the form of a color. My model is straight from Blender and I have tried different exporting options like keeping vertex order, but nothing has worked.
This is not related to your shader.
It appears to be depth-test related: the triangles end up ordered arbitrarily in depth relative to your viewpoint, because nothing ensures that only the fragment nearest to the camera gets drawn.
Enable depth testing and make sure you have a z buffer bound to your render target.
Read more about this here: http://www.opengl.org/wiki/Depth_Test
Only the triangles highlighted in red should be visible to the viewer. Due to the lack of a valid depth test, there is no guarantee which triangle is painted topmost, so blue triangles belonging to faces that should not be visible can cover parts of previously drawn red triangles.
The depth test prevents this by comparing the depth stored in the z buffer with the depth of the pixel currently being drawn: only the color of a pixel that is closer to the viewer, i.e. has a smaller z value than the value in the buffer, is written to the framebuffer, which yields a correct result.
(Backface culling would be nice too and, if the model is exported correctly, would also make it display correctly on its own. But that would only hide the main problem, not solve it.)
I'd like to write a GLSL shader program for a per-face shading. My first attempt uses the flat interpolation qualifier with provoking vertices. I use the flat interpolation for both normal and position vertex attributes which gives me the desired old-school effect of solid-painted surfaces.
Although the rendering looks correct, the shader program doesn't actually do the right job:
The light calculation is still performed on a per-fragment basis (in the fragment shader),
The position vector is taken from the provoking vertex, not the triangle's centroid (right?).
Is it possible to apply the illumination equation once, to the triangle's centroid, and then use the calculated color value for the whole primitive? How to do that?
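For reference, the flat-qualifier setup described above looks roughly like the following sketch (illustrative names, not the asker's actual code):
Vertex shader:
#version 330 core
uniform mat4 mvp;
uniform mat4 modelView;
in vec3 inPosition;
in vec3 inNormal;
flat out vec3 normalEye;    // provoking vertex's value is used for the whole triangle
flat out vec3 positionEye;  // likewise not interpolated
void main()
{
    normalEye = mat3(modelView) * inNormal;                // ignores non-uniform scaling
    positionEye = (modelView * vec4(inPosition, 1.0)).xyz;
    gl_Position = mvp * vec4(inPosition, 1.0);
}
Fragment shader:
#version 330 core
flat in vec3 normalEye;
flat in vec3 positionEye;
uniform vec3 lightPosEye;   // light position in eye space
out vec4 fragColor;
void main()
{
    // The inputs are constant across the triangle, but this still runs per
    // fragment, and positionEye is the provoking vertex's position rather than
    // the centroid, which are exactly the two issues listed above.
    vec3 n = normalize(normalEye);
    vec3 l = normalize(lightPosEye - positionEye);
    fragColor = vec4(vec3(max(dot(n, l), 0.0)), 1.0);
}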
Use a geometry shader whose input is a triangle and whose output is a triangle. Pass normals and positions to it from the vertex shader, calculate the centroid yourself (by averaging the positions), and do the lighting, passing the output color as an output variable to the fragment shader, which just reads it in and writes it out.
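A minimal sketch of such a geometry shader, assuming the vertex shader writes the eye-space position to an output called vPosEye and the light position is supplied in eye space (all names are illustrative):
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec3 vPosEye[];          // eye-space positions from the vertex shader
uniform vec3 lightPosEye;   // light position in eye space
flat out vec3 faceColor;    // one color for the whole triangle
void main()
{
    // Centroid and face normal in eye space
    vec3 c = (vPosEye[0] + vPosEye[1] + vPosEye[2]) / 3.0;
    vec3 n = normalize(cross(vPosEye[1] - vPosEye[0], vPosEye[2] - vPosEye[0]));
    // Evaluate the diffuse term once, at the centroid
    vec3 l = normalize(lightPosEye - c);
    vec3 color = vec3(max(dot(n, l), 0.0));
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        faceColor = color;
        EmitVertex();
    }
    EndPrimitive();
}
The fragment shader then only has to read faceColor and write it to its output.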
Another simple approach is to compute the face normal in the fragment shader, using screen-space derivatives of the (eye-space) position. It is very simple to implement and performs well.
I have written an example of it here (requires a WebGL capable browser):
Vertex:
attribute vec3 vertex;
uniform mat4 _mvProj;
uniform mat4 _mv;
varying vec3 fragVertexEc;
void main(void) {
gl_Position = _mvProj * vec4(vertex, 1.0);
fragVertexEc = (_mv * vec4(vertex, 1.0)).xyz;
}
Fragment:
#ifdef GL_ES
precision highp float;
#endif
#extension GL_OES_standard_derivatives : enable
varying vec3 fragVertexEc;
const vec3 lightPosEc = vec3(0,0,10);
const vec3 lightColor = vec3(1.0,1.0,1.0);
void main()
{
vec3 X = dFdx(fragVertexEc);
vec3 Y = dFdy(fragVertexEc);
vec3 normal=normalize(cross(X,Y));
vec3 lightDirection = normalize(lightPosEc - fragVertexEc);
float light = max(0.0, dot(lightDirection, normal));
// gl_FragColor = vec4(normal, 1.0); // debug output: visualize the face normal
gl_FragColor = vec4(lightColor * light, 1.0);
}
I am using GLSL to render a basic cube (made from GL_QUADS surfaces). I would like to pass the gl_Vertex content from the vertex shader into the fragment shader. Everything works if I use gl_FrontColor (vertex shader) and gl_Color (fragment shader) for this, but it doesn't work when using a plain varying (see code & image below). It appears the varying is not interpolated across the surface for some reason. Any idea what could cause this in OpenGL?
glShadeModel is set to GL_SMOOTH - I can't think of anything else that could cause this effect right now.
Vertex Shader:
#version 120
varying vec4 frontSideValue;
void main() {
frontSideValue = gl_Vertex;
// assumed reconstruction: the original snippet uses transformPos without showing its definition
vec4 transformPos = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_Position = transformPos;
}
Fragment Shader:
#version 120
varying vec4 frontSideValue;
void main() {
gl_FragColor = frontSideValue;
}
The result looks just like you are using color values outside the range [0,1]. You are basically using the untransformed vertex position, which may be well outside that range. Your cube seems to be centered around the origin, so the unsharp band you see is the narrow transition region where the values actually are in the range [0,1].
With the built-in gl_FrontColor, the value appears to get clamped before the interpolation.
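To get the same picture with the plain varying, you can clamp the value per vertex yourself before it is interpolated; a minimal sketch:
#version 120
varying vec4 frontSideValue;
void main() {
    // Clamp per vertex, mimicking what the fixed-function gl_FrontColor path
    // appears to do before interpolation.
    frontSideValue = clamp(gl_Vertex, 0.0, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}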