Back face culling for linestrips - opengl

I have a circle in 3D space (red in the image) with normals (white).
This circle is drawn as a line strip.
The problem: I need to draw only those pixels whose normals point toward the camera (the angle between the normal and the camera vector is < 90°), using discard in the fragment shader. Like back face culling, but for lines.
The red part of the circle is what I need to draw, and the black part is what I need to discard in the fragment shader.
A good example is the 3DS Max rotation gizmo, where the back sides of the lines are hidden:
So, in the fragment shader I have:
if (condition)
    discard;
Help me come up with this condition. Handling both orthographic and perspective cameras would be good.

Well, you already described your condition:
(angle between normal and camera vector is < 90)
You have to forward your normals to the fragment shader (don't forget to re-normalize them in the FS; the interpolation changes the length). And you need the viewing vector in the same space as your normals, so you might transform the normal to eye space, use world space, or even transform the view direction/camera location into object space. Since the condition angle(N,V) >= 90 degrees is the same as cos(angle(N,V)) <= 0 (assuming normalized vectors), you can simply use the dot product:
if (dot(N, V) <= 0)
    discard;
UPDATE:
As you pointed out in the comments, you have the "classical" GL matrices available, so it makes sense to do this transformation in eye space. In the vertex shader, you put

in vec4 vertex;   // object space position
in vec3 normal;   // object space normal direction

out vec3 normal_eyespace;
out vec3 vertex_eyespace;

uniform mat3 normalMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

void main()
{
    normal_eyespace = normalize(normalMatrix * normal);
    vec4 v = modelViewMatrix * vertex;
    vertex_eyespace = v.xyz;
    gl_Position = projectionMatrix * v;
}
and in the fragment shader, you can simply do

in vec3 normal_eyespace;
in vec3 vertex_eyespace;

void main()
{
    if (dot(normalize(normal_eyespace), normalize(-vertex_eyespace)) <= 0)
        discard;
    // ...
}
Note: this code assumes modern GLSL with in/out instead of attribute/varying qualifiers. I also assume no builtin attributes. But that code should be easily adaptable to older GL.
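Since the question asks about both camera types: the code above handles the perspective case, where the view vector differs per fragment. For an orthographic camera the viewing direction in eye space is the constant -z axis, so the vector toward the camera is simply (0, 0, 1) and the test reduces to the sign of the eye-space normal's z component. A minimal sketch of that variant:

// Orthographic camera: the vector toward the camera is the constant +z axis
// in eye space, so dot(N, V) reduces to the normal's z component.
if (normal_eyespace.z <= 0.0)
    discard;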

Related

Project cubemap to 2D texture

I'd like to debug my render-to-cubemap function by projecting the whole thing to a 2D texture, just like this one:
In my render-from-texture shader I only have the UV texture coordinates available (ranging from (0,0) to (1,1)). How can I project the cubemap to the screen in a single draw call?
You can do this by rendering 6 quads and using 3D texture coordinates (s,t,p) that point to the corners of the cube, i.e. the 8 variations of (±1, ±1, ±1).
The 2D UV coordinates (s,t), i.e. the 4 variations of (0/1, 0/1), are only usable for the individual sides of the CUBE_MAP, not for the whole thing.
Look for txr_skybox here:
Normal mapping gone horribly wrong
to see how the CUBE_MAP is used in the fragment shader.
PS: in OpenGL the texture coordinates are called s,t,p,q instead of u,v,w,...
Here related QA:
rendering cube map layout, understanding glTexCoord3f parameters
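To make the 6-quad idea concrete, here is a minimal sketch of the fragment-shader side (the names cubeTex and stp are illustrative, not from the answer above): each quad is drawn with a vec3 texture coordinate per vertex set to the matching cube corner, and the interpolated direction is fed straight into the cubemap lookup; the vertex shader only needs to pass the vec3 through and position the quad on screen.

#version 330 core
uniform samplerCube cubeTex;   // the cubemap to debug (illustrative name)
in vec3 stp;                   // per-vertex cube corner, e.g. (+1,-1,+1), interpolated across the quad
out vec4 fragColor;

void main()
{
    // The direction vector selects both the cube face and the texel on it
    fragColor = texture(cubeTex, stp);
}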
My answer is essentially the same as the accepted one, but I have used this very technique to debug my depth cubemap (used for shadow casting) in my current project, so I thought I would include a working sample of the fragment shader code I used.
Unfolding the cubemap
This is supposed to be rendered to a rectangle with aspect ratio 3/4 drawn directly on top of the screen, with s,t going from (0,0) in the lower-left corner to (1,1) in the upper-right corner.
Note that in this case the cubemap I use is inverted, that is, objects on the +(x,y,z) side of the cubemap origin are rendered to -(x,y,z), and the direction I chose as up for the top/bottom quads is completely arbitrary; so to get this example to work you may need to change some signs or swap s and t a few times. Also note that I only read one channel here, since it is a depth map.
Fragment shader code for an unfolded cubemap like the one in the question:
//Should work in most other versions
#version 400 core
uniform samplerCube dynamic_texture;
out vec4 out_color;
in vec2 ST;

void main()
{
    //In this example I use a depth map with only 1 channel, but the projection should work with a colored cubemap too; just replace this with a vec3 or vec4
    float depth = 0;
    vec2 localST = ST;

    //Scale texture coordinates such that each quad has local coordinates from 0,0 to 1,1
    localST.t = mod(localST.t*3, 1);
    localST.s = mod(localST.s*4, 1);

    //Due to the way my depth cubemap is rendered, objects on the -x,y,z side are projected to the positive x,y,z side

    //Are we inside the column where the top/bottom quads are drawn?
    if (ST.s*4 > 1 && ST.s*4 < 2)
    {
        //Bottom (-y) quad
        if (ST.t*3.f < 1)
        {
            vec3 dir = vec3(localST.s*2-1, 1, localST.t*2-1);//Get the lower y texture, which is projected to the +y part of my cubemap
            depth = texture(dynamic_texture, dir).r;
        }
        //Top (+y) quad
        else if (ST.t*3.f > 2)
        {
            vec3 dir = vec3(localST.s*2-1, -1, -localST.t*2+1);//Due to the (arbitrary) direction I chose as up in my depth view matrix, I multiply the latter coordinate by -1 here
            depth = texture(dynamic_texture, dir).r;
        }
        else//Front (-z) quad
        {
            vec3 dir = vec3(localST.s*2-1, -localST.t*2+1, 1);
            depth = texture(dynamic_texture, dir).r;
        }
    }
    //If not, only these ranges should be drawn
    else if (ST.t*3.f > 1 && ST.t*3 < 2)
    {
        if (ST.x*4.f < 1)//Left (-x) quad
        {
            vec3 dir = vec3(-1, -localST.t*2+1, localST.s*2-1);
            depth = texture(dynamic_texture, dir).r;
        }
        else if (ST.x*4.f < 3)//Right (+x) quad (front was done above)
        {
            vec3 dir = vec3(1, -localST.t*2+1, -localST.s*2+1);
            depth = texture(dynamic_texture, dir).r;
        }
        else//Back (+z) quad
        {
            vec3 dir = vec3(-localST.s*2+1, -localST.t*2+1, -1);
            depth = texture(dynamic_texture, dir).r;
        }
    }
    else//Top/bottom column, but outside where we need to put something
    {
        discard;//No need to add fancy semi-transparent borders for the quads, this is just for debugging purposes after all
    }

    out_color = vec4(vec3(depth), 1);
}
Here is a screenshot of this technique used to render my depth-map in the lower-right corner of the screen (rendering with a point-light source placed at the very center of an empty room with no other objects than the walls and the player character):
Equirectangular projection
I must, however, say that I prefer using an equirectangular projection for debugging cubemaps, as it doesn't have any holes in it. Luckily, these are even easier to make than unfolded cubemaps: just use a fragment shader like the one below (still with s,t going from (0,0) to (1,1), lower-left to upper-right corner), but this time with aspect ratio 1/2:
//Should work in most other versions
#version 400 core
uniform samplerCube dynamic_texture;
out vec4 out_color;
in vec2 ST;

void main()
{
    float phi = ST.s*3.1415*2;
    float theta = (-ST.t+0.5)*3.1415;
    vec3 dir = vec3(cos(phi)*cos(theta), sin(theta), sin(phi)*cos(theta));

    //In this example I use a depth map with only 1 channel, but the projection should work with a colored cubemap too
    float depth = texture(dynamic_texture, dir).r;

    out_color = vec4(vec3(depth), 1);
}
Here is a screenshot where an equirectangular projection is used to display my depth-map in the lower-right corner:

OpenGL shader to shade each face similar to MeshLab's visualizer

I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it somehow assigns colors per face in the shader. If the angle between the face normal and the camera is zero, the face is fully lit (according to the color of the face); otherwise it is lit proportionally to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBOs. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex positions, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core

uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;

in vec3 in_position;   // The vertex position
in vec3 in_normal;     // The computed vertex normal
in vec4 in_color;      // The vertex color

out vec4 color;        // The vertex color (pass-through)

void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);

    // Compute the vertex's normal in camera space
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);

    // Vector from the vertex (in camera space) to the camera (which is at the origin)
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);

    // Compute the angle between the two vectors
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0, 1);

    // The coefficient will create a nice looking shining effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core

in vec4 color;
out vec4 out_frag_color;

void main(void)
{
    out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces which don't have the same normal. For example, a cube would have 24 vertexes instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
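To illustrate the first approach, here is a rough CPU-side sketch (using GLM; the container layout and the fact that positions, colors and indices have already been loaded from the .ply are assumptions) that expands an indexed triangle list into duplicated vertices carrying one flat normal per face:

#include <vector>
#include <glm/glm.hpp>

struct Vertex { glm::vec3 position; glm::vec3 normal; glm::vec4 color; };

// Expand indexed triangles into non-indexed vertices with one normal per face.
std::vector<Vertex> buildFlatShadedVertices(const std::vector<glm::vec3>& positions,
                                            const std::vector<glm::vec4>& colors,
                                            const std::vector<unsigned>& indices)
{
    std::vector<Vertex> out;
    out.reserve(indices.size());
    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        const glm::vec3& a = positions[indices[i]];
        const glm::vec3& b = positions[indices[i + 1]];
        const glm::vec3& c = positions[indices[i + 2]];
        glm::vec3 n = glm::normalize(glm::cross(b - a, c - a)); // per-face normal
        out.push_back({a, n, colors[indices[i]]});
        out.push_back({b, n, colors[indices[i + 1]]});
        out.push_back({c, n, colors[indices[i + 2]]});
    }
    return out; // upload to a VBO and draw with glDrawArrays(GL_TRIANGLES, 0, out.size())
}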
This is simple flat shading. Instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
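Putting those two snippets together, a self-contained fragment shader might look like the sketch below (names such as FragPos, lightPos1, diffColor1 and objectColor are assumptions carried over from or added to the snippet above):

#version 330 core

in vec3 FragPos;          // world-space position passed from the vertex shader

uniform vec3 lightPos1;   // world-space light position
uniform vec3 diffColor1;  // light colour
uniform vec3 objectColor; // base surface colour

out vec4 out_color;

void main()
{
    // Per-face normal from screen-space derivatives of the position
    vec3 norm = normalize(cross(dFdx(FragPos), dFdy(FragPos)));

    // Simple Lambertian diffuse term
    vec3 lightDir1 = normalize(lightPos1 - FragPos);
    float diff1 = max(dot(norm, lightDir1), 0.0);
    vec3 diffuse = diff1 * diffColor1;

    out_color = vec4(diffuse * objectColor, 1.0);
}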

Point Sprites for particle system

Are point sprites the best choice for building a particle system?
Are point sprites present in the newer versions of OpenGL and in the drivers of the latest graphics cards? Or should I do it using VBOs and GLSL?
Point sprites are indeed well suited for particle systems. But they don't have anything to do with VBOs and GLSL; they are a completely orthogonal feature. No matter whether you use point sprites or not, you always have to use VBOs for uploading the geometry, be it just points, pre-made sprites or whatever, and you always have to put this geometry through a set of shaders (in modern OpenGL, of course).
That being said, point sprites are very well supported in modern OpenGL, just not as automatically as with the old fixed-function approach. What is not supported are the point attenuation features that let you scale a point's size based on its distance to the camera; you have to do this manually inside the vertex shader. In the same way you have to do the texturing of the point manually in an appropriate fragment shader, using the special input variable gl_PointCoord (which tells you where inside the [0,1] square of the whole point the current fragment is). For example, a basic point sprite pipeline could look this way:
...
glPointSize(whatever); //specify size of points in pixels
glDrawArrays(GL_POINTS, 0, count); //draw the points
vertex shader:
uniform mat4 mvp;

layout(location = 0) in vec4 position;

void main()
{
    gl_Position = mvp * position;
}
fragment shader:
uniform sampler2D tex;

layout(location = 0) out vec4 color;

void main()
{
    color = texture(tex, gl_PointCoord);
}
And that's all. Of course those shaders just do the most basic drawing of textured sprites, but are a starting point for further features. For example to compute the sprite's size based on its distance to the camera (maybe in order to give it a fixed world-space size), you have to glEnable(GL_PROGRAM_POINT_SIZE) and write to the special output variable gl_PointSize in the vertex shader:
uniform mat4 modelview;
uniform mat4 projection;
uniform vec2 screenSize;
uniform float spriteSize;

layout(location = 0) in vec4 position;

void main()
{
    vec4 eyePos = modelview * position;
    vec4 projVoxel = projection * vec4(spriteSize, spriteSize, eyePos.z, eyePos.w);
    vec2 projSize = screenSize * projVoxel.xy / projVoxel.w;
    gl_PointSize = 0.25 * (projSize.x + projSize.y);
    gl_Position = projection * eyePos;
}
This would make all point sprites have the same world-space size (and thus a different screen-space size in pixels).
But point sprites, while still being perfectly supported in modern OpenGL, have their disadvantages. One of the biggest is their clipping behaviour. Points are clipped at their center coordinate (because clipping is done before rasterization and thus before the point gets "enlarged"). So if the center of the point is outside of the screen, the rest of it that might still reach into the viewing area is not shown; at worst, once the point is half-way out of the screen, it will suddenly disappear. This is, however, only noticeable (or annoying) if the point sprites are too large. If they are very small particles that don't cover much more than a few pixels each anyway, then this won't be much of a problem, and I would still regard particle systems as the canonical use case for point sprites; just don't use them for large billboards.
But if this is a problem, then modern OpenGL offers many other ways to implement point sprites, apart from the naive way of pre-building all the sprites as individual quads on the CPU. You can still render them just as a buffer full of points (and thus in the way they are likely to come out of your GPU-based particle engine). To actually generate the quad geometry then, you can use the geometry shader, which lets you generate a quad from a single point. First you do only the modelview transformation inside the vertex shader:
uniform mat4 modelview;

layout(location = 0) in vec4 position;

void main()
{
    gl_Position = modelview * position;
}
Then the geometry shader does the rest of the work. It combines the point position with the 4 corners of a generic [0,1]-quad and completes the transformation into clip-space:
const vec2 corners[4] = {
    vec2(0.0, 1.0), vec2(0.0, 0.0), vec2(1.0, 1.0), vec2(1.0, 0.0) };

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 projection;
uniform float spriteSize;

out vec2 texCoord;

void main()
{
    for(int i = 0; i < 4; ++i)
    {
        vec4 eyePos = gl_in[0].gl_Position;                  //start with point position
        eyePos.xy += spriteSize * (corners[i] - vec2(0.5));  //add corner position
        gl_Position = projection * eyePos;                   //complete transformation
        texCoord = corners[i];                               //use corner as texCoord
        EmitVertex();
    }
}
In the fragment shader you would then of course use the custom texCoord varying instead of gl_PointCoord for texturing, since we're no longer drawing actual points.
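For completeness, a minimal sketch of the matching fragment shader for this geometry-shader path (essentially the earlier one, with the texCoord varying replacing gl_PointCoord):

uniform sampler2D tex;

in vec2 texCoord;

layout(location = 0) out vec4 color;

void main()
{
    color = texture(tex, texCoord);
}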
Or another possibility (and maybe faster, since I remember geometry shaders having a reputation for being slow) would be to use instanced rendering. This way you have an additional VBO containing the vertices of just a single generic 2D quad (i.e. the [0,1]-square) and your good old VBO containing just the point positions. What you then do is draw this single quad multiple times (instanced), while sourcing the individual instances' positions from the point VBO:
glVertexAttribPointer(0, ...points...);
glVertexAttribPointer(1, ...quad...);
glVertexAttribDivisor(0, 1); //advance only once per instance
...
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count); //draw #count quads
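Spelled out a bit more (the buffer names pointVBO and quadVBO are illustrative, and a bound VAO is assumed), the attribute setup could look like this:

// Per-instance particle positions: one vec4 per particle
glBindBuffer(GL_ARRAY_BUFFER, pointVBO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glVertexAttribDivisor(0, 1);                            // advance only once per instance

// Per-vertex quad corners: the 4 corners of the generic [0,1] square
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count);  // draw #count quads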
And in the vertex shader you then assemble the per-point position with the actual corner/quad-position (which is also the texture coordinate of that vertex):
uniform mat4 modelview;
uniform mat4 projection;
uniform float spriteSize;

layout(location = 0) in vec4 position;
layout(location = 1) in vec2 corner;

out vec2 texCoord;

void main()
{
    vec4 eyePos = modelview * position;                 //transform to eye-space
    eyePos.xy += spriteSize * (corner - vec2(0.5));     //add corner position
    gl_Position = projection * eyePos;                  //complete transformation
    texCoord = corner;
}
This achieves the same as the geometry shader based approach: properly clipped point sprites with a consistent world-space size. If you actually want to mimic the screen-space pixel size of actual point sprites, you need to put some more computational effort into it. But this is left as an exercise, and it would be quite the opposite of the world-to-screen transformation from the point sprite shader.

What is wrong with my shadow map implementation?

I'm trying to implement shadow mapping in my game. The shaders you see below result in correctly drawn shadows for the game's map, but all the models walking around on the map are completely black.
Here's a screenshot:
I suspect the problem lies with the calculation of the world position. The gl_Vertex is not transformed in any way. Because the map is generated with absolute world coordinates, the transformation with the light matrix results in a correct relative position that can be used to perform the shadow mapping.
However, my 3D models are all very close to the origin. So if their coordinates are plainly transformed using the light matrix, they will most likely never be lit.
I'm wondering if this could be the case, and if so, how I could fix it.
Here's my vertex shader:
#version 120

uniform mat4x4 LightMatrixValue;

varying vec4 shadowMapPosition;
varying vec3 worldPos;

void main(void)
{
    vec4 modelPos = gl_Vertex;
    worldPos = modelPos.xyz / modelPos.w;

    vec4 lightPos = LightMatrixValue * modelPos;
    shadowMapPosition = 0.5 * (lightPos.xyzw + lightPos.wwww);

    gl_Position = ftransform();
}
And the fragment shader:
varying vec4 shadowMapPosition;
varying vec3 worldPos;

uniform sampler2D depthMap;
uniform vec4 LightPosition;

void main(void)
{
    vec4 textureColour = gl_Color;

    vec3 realShadowMapPosition = shadowMapPosition.xyz / shadowMapPosition.w;
    float depthSm = texture2D(depthMap, realShadowMapPosition.xy).r;

    if (depthSm < realShadowMapPosition.z - 0.0001)
    {
        textureColour = vec4(0, 0, 0, 1);
    }

    gl_FragColor = textureColour;
}
I write this here because it will not fit in a comment; I hope it solves your problem.
For rendering you have, on the one hand, your models with their model matrices to position them, and on the other hand your view and projection matrices that transform your models into screen space.
For creating your shadow map (the simplest approach, and I guess the one you have chosen) you render the scene from the view of the light source, so you apply the view and projection matrix of your light source, which maps the x, y and z values to screen space. The x and y values give the position in the image, the z value is used for the depth-buffer test and is the value you write into your color buffer (which you later use as the shadow map).
For the final rendering you load this shadow map and the view and projection matrix of the light into your shader. To display the scene you apply the view and projection matrix of the camera to the vertices; for the shadow map lookup you apply the view and projection matrix of the light to the vertex (like you did when rendering the shadow map). When you apply the light's view and projection matrix to the vertex you get the same mapping as in the shadow map pass; now you only need to transform the x and y coordinates into texture coordinates, look up the stored z value, and compare it with the calculated one.
Transforming the model's world position into screen space (from the light's point of view):
This part is often done in the vertex or geometry shader:
shadowMapPosition = matLightViewProjection * modelWoldPos;
This is often done in the fragment shader:
shadowMapPosition = shadowMapPosition / shadowMapPosition.w;
// Add an offset to prevent self-shadowing and moiré patterns
shadowMapPosition.z += 0.0005;
// Look up the stored z value (note: .xy, the x/y position in the shadow map)
float distanceFromLight = texture2D(depthMap, shadowMapPosition.xy).z;
Now just compare distanceFromLight with shadowMapPosition.z to see whether the fragment is in shadow or not.
So in your second pass you do the steps of the shadow mapping again, except that you don't write the data you calculate but compare it to the data you calculated in the pass before.
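Applied to the shader in the question, the likely fix for the black models is to bring each model into world space before multiplying with the light matrix. A minimal sketch, assuming a ModelMatrix uniform is supplied per model (the map, which is already in absolute world coordinates, can use an identity matrix):

#version 120

uniform mat4 LightMatrixValue;
uniform mat4 ModelMatrix;       // assumed uniform: object-to-world transform of the model

varying vec4 shadowMapPosition;
varying vec3 worldPos;

void main(void)
{
    vec4 modelPos = ModelMatrix * gl_Vertex;                    // world-space position
    worldPos = modelPos.xyz / modelPos.w;

    vec4 lightPos = LightMatrixValue * modelPos;                // light's clip space
    shadowMapPosition = 0.5 * (lightPos.xyzw + lightPos.wwww);  // remap to [0,1]

    gl_Position = ftransform();
}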

GLSL: How to get pixel x,y,z world position?

I want to adjust the colors depending on their xyz position in the world.
I tried this in my fragment shader:
varying vec4 verpos;

void main()
{
    vec4 c;
    c.x = verpos.x;
    c.y = verpos.y;
    c.z = verpos.z;
    c.w = 1.0;
    gl_FragColor = c;
}
but it seems that the colors change depending on my camera angle/position. How do I make the coordinates independent of my camera position/angle?
Here's my vertex shader:
varying vec4 verpos;

void main()
{
    gl_Position = ftransform();
    verpos = gl_ModelViewMatrix * gl_Vertex;
}
Edit 2: changed the title; I want world coordinates, not screen coordinates!
Edit3: added my full code
In the vertex shader you have gl_Vertex (or something else if you don't use the fixed pipeline), which is the position of the vertex in model coordinates. Multiply the model matrix by gl_Vertex and you'll get the vertex position in world coordinates. Assign this to a varying variable, then read its value in the fragment shader and you'll get the position of the fragment in world coordinates.
Now the problem in this is that you don't necessarily have any model matrix if you use the default modelview matrix of OpenGL, which is a combination of both model and view matrices. I usually solve this problem by having two separate matrices instead of just one modelview matrix:
model matrix (maps model coordinates to world coordinates), and
view matrix (maps world coordinates to camera coordinates).
So just pass two different matrices to your vertex shader separately. You can do this by defining
uniform mat4 view_matrix;
uniform mat4 model_matrix;
at the beginning of your vertex shader. Then, instead of ftransform(), write:
gl_Position = gl_ProjectionMatrix * view_matrix * model_matrix * gl_Vertex;
In the main program you must write values to both of these new matrices. First, to get the view matrix, do the camera transformations with glLoadIdentity(), glTranslate(), glRotate() or gluLookAt() or whatever you prefer, as you would normally do, but then call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to get the matrix data into an array. Secondly, in a similar way, to get the model matrix, call glLoadIdentity(); again, do the object transformations with glTranslate(), glRotate(), glScale() etc., and finally call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to get the matrix data out of OpenGL so you can send it to your vertex shader. Note especially that you need to call glLoadIdentity() before beginning to transform the object. Normally you would first transform the camera and then transform the object, which would result in one matrix that does both the view and model functions. But because you're using separate matrices, you need to reset the matrix after the camera transformations with glLoadIdentity().
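As a rough sketch of that procedure (the camera and object transforms are placeholders, program is assumed to be your linked shader program, and the GL 2.0+ uniform functions are assumed to be available, e.g. via GLEW):

#include <GL/gl.h>
#include <GL/glu.h>

void uploadMatrices(GLuint program)
{
    GLfloat view_matrix[16], model_matrix[16];

    glMatrixMode(GL_MODELVIEW);

    // View matrix: camera transform only
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 5.0,   // eye
              0.0, 0.0, 0.0,   // center
              0.0, 1.0, 0.0);  // up
    glGetFloatv(GL_MODELVIEW_MATRIX, view_matrix);

    // Model matrix: reset first, then object transform only
    glLoadIdentity();
    glTranslatef(1.0f, 0.0f, 0.0f);
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
    glGetFloatv(GL_MODELVIEW_MATRIX, model_matrix);

    // Hand both matrices to the vertex shader
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "view_matrix"), 1, GL_FALSE, view_matrix);
    glUniformMatrix4fv(glGetUniformLocation(program, "model_matrix"), 1, GL_FALSE, model_matrix);
}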
gl_FragCoord contains the pixel coordinates, not world coordinates.
Or you could just divide the z coordinate by the w coordinate, which essentially undoes the perspective projection, giving you your original world coordinates.
i.e.
depth = gl_FragCoord.z / gl_FragCoord.w;
Of course, this will only work for non-clipped coordinates..
But who cares about clipped ones anyway?
You need to pass the World/Model matrix as a uniform to the vertex shader, and then multiply it by the vertex position and send it as a varying to the fragment shader:
/*Vertex Shader*/
layout (location = 0) in vec3 Position;

uniform mat4 World;
uniform mat4 WVP;

//World-space FragPos
out vec4 FragPos;

void main()
{
    FragPos = World * vec4(Position, 1.0);
    gl_Position = WVP * vec4(Position, 1.0);
}
/*Fragment Shader*/
layout (location = 0) out vec4 Color;
...
in vec4 FragPos;

void main()
{
    Color = FragPos;
}
The easiest way is to pass the world-position down from the vertex shader via a varying variable.
However, if you really must reconstruct it from gl_FragCoord, the only way to do this is to invert all the steps that led to the gl_FragCoord coordinates. Consult the OpenGL specs, if you really have to do this, because a deep understanding of the transformations will be necessary.
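For reference, a hedged sketch of that inversion in the fragment shader (the uniform names invViewProj and viewportSize are assumptions, and the default depth range [0,1] is assumed):

uniform mat4 invViewProj;   // inverse(projection * view)
uniform vec2 viewportSize;  // viewport width and height in pixels

vec3 worldFromFragCoord()
{
    // Window coordinates -> normalized device coordinates in [-1,1]
    vec3 ndc = vec3(gl_FragCoord.xy / viewportSize, gl_FragCoord.z) * 2.0 - 1.0;

    // Undo projection and view, then the perspective divide
    vec4 world = invViewProj * vec4(ndc, 1.0);
    return world.xyz / world.w;
}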