I would like to use shaders to increase my in-game performance.
As you can see, I am cutting the far-away pixels out of rendering with the discard keyword.
However, for some reason this is not increasing FPS. I get the exact same 5 FPS that I got without any shaders at all! The low FPS comes from my choice to render an absolute ton of trees, but since they are not seen... then why are they causing lag!?
My fragment shader code:
varying vec4 position_in_view_space;
uniform sampler2D color_texture;

void main()
{
    float dist = distance(position_in_view_space,
                          vec4(0.0, 0.0, 0.0, 1.0));
    if (dist < 1000.0)
    {
        gl_FragColor = gl_Color;
        // color near origin
    }
    else
    {
        //kill;
        discard;
        //gl_FragColor = vec4(0.3, 0.3, 0.3, 1.0);
        // color far from origin
    }
}
My vertex shader code:
varying vec4 position_in_view_space;

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
    position_in_view_space = gl_ModelViewMatrix * gl_Vertex;
    // transformation of gl_Vertex from object coordinates
    // to view coordinates
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
(I have a very slightly modified fragment shader for the terrain texturing and a lower cutoff distance for the trees; the ship in that screenshot is too far from the trees. But it is basically the same idea.)
So, could someone please tell me why discard is not improving performance? If it can't, is there another way to keep unwanted vertices from rendering (or to render fewer vertices farther away) to increase performance?
Using discard will actually prevent the GPU from using the early depth test before invoking the fragment shader. More importantly, discard does not save any work: the vertex shader still runs for every tree vertex, and the fragment shader still runs for every covered fragment; the fragment is merely thrown away at the end.
It's much simpler to adjust the far plane of the projection matrix so unwanted vertices fall outside the view frustum and are clipped before any fragment work happens.
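For example, with the fixed-function matrix stack (a minimal sketch; aspectRatio and the 500.0 cutoff are placeholder values you would tune for your scene):

    // Rebuild the projection with a closer far plane; geometry beyond it
    // is clipped during primitive assembly and never reaches the rasterizer.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0,        // vertical field of view in degrees
                   aspectRatio, // window width / height
                   0.1,         // near plane
                   500.0);      // far plane: the new render cutoff
    glMatrixMode(GL_MODELVIEW);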
Also consider not issuing the draw calls for distant trees in the first place.
You can, for example, group the trees per island, check per island whether it is close enough for its trees to render, and simply not call glDraw* when it is not, as in the sketch below.
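A minimal host-side sketch of that idea (Island, its fields, drawTrees, and the cutoff distance are hypothetical; substitute your own scene structures):

    #include <vector>

    struct Island {
        float x, z;   // island center (hypothetical layout)
        // ... tree VBO handles, etc.
    };

    const float TREE_DRAW_DISTANCE = 1000.0f;  // same cutoff the shader used

    void drawTrees(const std::vector<Island>& islands, float camX, float camZ)
    {
        for (const Island& island : islands) {
            float dx = island.x - camX;
            float dz = island.z - camZ;
            // Skip the draw call entirely: no vertex shading, no rasterization.
            if (dx * dx + dz * dz > TREE_DRAW_DISTANCE * TREE_DRAW_DISTANCE)
                continue;
            // glDrawArrays(...) / glDrawElements(...) for this island's trees
        }
    }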
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and camera is zero, then the face is fully lit (according to the color of the face), otherwise it is lit proportional to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBOs. I can even assign per-vertex colors. However, I don't know how I can achieve a similar per-face effect, since as far as I know, shaders work on vertices and fragments, not faces. A quick search revealed questions like this, but I got confused when the answers talked about duplicating vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicated vertex array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);

    // Compute the vertex's normal in camera space
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);

    // Vector from the vertex (in camera space) to the camera (which is at the origin)
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);

    // Compute the angle between the two vectors
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0, 1);

    // The coefficient will create a nice looking shining effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
    out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute and duplicate the vertex position data for faces that don't share the same normal. For example, a cube would have 24 vertices instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
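A sketch of that precomputation (hypothetical minimal types; it assumes indexed triangles like those loaded from the *.ply files mentioned above):

    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    // Expand indexed triangles into flat arrays where each face's three
    // vertices carry that face's normal (duplicating shared positions).
    void buildFlatShadedArrays(const std::vector<Vec3>& positions,
                               const std::vector<unsigned>& indices,
                               std::vector<Vec3>& outPositions,
                               std::vector<Vec3>& outNormals)
    {
        for (size_t i = 0; i + 2 < indices.size(); i += 3) {
            Vec3 a = positions[indices[i]];
            Vec3 b = positions[indices[i + 1]];
            Vec3 c = positions[indices[i + 2]];
            Vec3 n = cross(sub(b, a), sub(c, a)); // un-normalized face normal;
                                                  // the vertex shader normalizes it
            outPositions.push_back(a); outNormals.push_back(n);
            outPositions.push_back(b); outNormals.push_back(n);
            outPositions.push_back(c); outNormals.push_back(n);
        }
    }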
This is simple flat shading. Instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
// Screen-space derivatives of the fragment position lie in the triangle's
// plane, so their cross product is the face normal.
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
Then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
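Put together, a minimal fragment shader using both snippets might look like this (a sketch; FragPos, lightPos1, and diffColor1 are assumed to be supplied by your vertex shader and uniforms):

    #version 330 core
    in vec3 FragPos;              // world-space position from the vertex shader
    uniform vec3 lightPos1;       // light position in the same space as FragPos
    uniform vec3 diffColor1;      // diffuse light color
    out vec4 out_color;

    void main()
    {
        // Face normal from screen-space derivatives of the position.
        vec3 norm = normalize(cross(dFdx(FragPos), dFdy(FragPos)));

        // Basic Lambertian diffuse term.
        vec3 lightDir1 = normalize(lightPos1 - FragPos);
        float diff1 = max(dot(norm, lightDir1), 0.0);
        out_color = vec4(diff1 * diffColor1, 1.0);
    }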
I'm using C++, and when I implemented a diffuse shader, it caused every other triangle to disappear.
I can post my render code if need be, but I believe the issue is with my normal matrix (which I wrote to be the inverse transpose of the modelview matrix). Here is the shader code, which is similar to that of the Lighthouse3D tutorials.
VERTEX SHADER
#version 330
layout(location=0) in vec3 position;
layout(location=1) in vec3 normal;
uniform mat4 transform_matrix;
uniform mat4 view_model_matrix;
uniform mat4 normal_matrix;
uniform vec3 light_pos;
out vec3 light_intensity;
void main()
{
    vec3 tnorm = normalize(normal_matrix * vec4(normal, 1.0)).xyz;
    vec4 eye_coords = transform_matrix * vec4(position, 1.0);
    vec3 s = normalize(vec3(light_pos - eye_coords.xyz)).xyz;
    vec3 light_set_intensity = vec3(1.0, 1.0, 1.0);
    vec3 diffuse_color = vec3(0.5, 0.5, 0.5);
    light_intensity = light_set_intensity * diffuse_color * max(dot(s, tnorm), 0.0);
    gl_Position = transform_matrix * vec4(position, 1.0);
}
My fragment shader just outputs light_intensity as a color. My model comes straight from Blender, and I have tried different export options, like keeping vertex order, but nothing has worked.
This is not related to your shader.
It appears to be depth-test related: the triangles are drawn in an arbitrary order relative to your viewport, because you do not make sure that only the pixel nearest to your camera gets drawn.
Enable depth testing and make sure you have a z buffer bound to your render target.
Read more about this here: http://www.opengl.org/wiki/Depth_Test
Only the triangles highlighted in red should be visible to the viewer. Due to the lack of a valid depth test, there is no way to guarantee which triangle is painted topmost. Thus, blue triangles of faces that should not be visible cover parts of previously drawn red triangles.
The depth test prevents this by comparing the depth stored in the z-buffer with the depth of the pixel about to be drawn. Only the color of a pixel that is closer to the viewer, i.e. has a smaller z-value than the z-value in the buffer, is written to the framebuffer, which yields a correct result.
(Backface culling would be nice too and, if the model is correctly exported, would also let it display correctly. But it would only hide the main problem, not solve it.)
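In code, the fix is small (a sketch; how the depth buffer itself is allocated depends on your windowing library):

    // At startup: the context must have been created with a depth buffer
    // (e.g. GLUT_DEPTH, or SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24) with SDL).
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);  // keep the fragment nearest to the camera

    // Every frame: clear depth together with color.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);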
VC++ 2010, OpenGL, GLSL, SDL
I am moving over to shaders and have run into a problem that originally occurred while working with the fixed-function pipeline. That is, the position of the light seems to point in whatever direction my camera faces. In the fixed-function pipeline it was just the specular highlight, which was fixable with:
glLightModelf(GL_LIGHT_MODEL_LOCAL_VIEWER, 1.0f);
Here are the two shaders:
Vertex
varying vec3 lightDir, normal;

void main()
{
    normal = normalize(gl_NormalMatrix * gl_Normal);
    lightDir = normalize(vec3(gl_LightSource[0].position));
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
Fragment
varying vec3 lightDir, normal;
uniform sampler2D tex;

void main()
{
    vec3 ct, cf;
    vec4 texel;
    float intensity, at, af;

    intensity = max(dot(lightDir, normalize(normal)), 0.0);
    cf = intensity * gl_FrontMaterial.diffuse.rgb +
         gl_FrontMaterial.ambient.rgb;
    af = gl_FrontMaterial.diffuse.a;
    texel = texture2D(tex, gl_TexCoord[0].st);
    ct = texel.rgb;
    at = texel.a;
    gl_FragColor = vec4(ct * cf, at * af);
}
Any help would be much appreciated!
The question is: What coordinate system (reference frame) do you want the lights to be in? Probably "the world".
OpenGL's fixed-function pipeline, however, has no notion of world coordinates, because it uses a modelview matrix, which transforms directly from model (object) coordinates to eye (camera) coordinates. In order to have "fixed" lights, you could do one of these:
The classic OpenGL approach is to, every frame, set up the modelview matrix to be the view transform only (that is, the coordinate system you want to specify your light positions in) and then use glLight to set the position (which is specified to apply the current modelview matrix to the input); see the sketch after this list.
Since you are using shaders, you could also have separate model and view matrices and have your shader apply both (rather than using ftransform) to vertices, but only the view matrix to lights. However, this means more per-vertex matrix operations and is probably not an especially good idea unless you are looking for clarity rather than performance.
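A sketch of the first (classic) approach (gluLookAt stands in for whatever view transform you use; the eye/center variables and light position are placeholders):

    // Each frame, load only the view transform onto the modelview stack...
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeX, eyeY, eyeZ,            // camera position
              centerX, centerY, centerZ,   // look-at target
              0.0, 1.0, 0.0);              // up vector

    // ...then set the light. glLight applies the current modelview matrix,
    // so this world-space position ends up correctly in eye space.
    const GLfloat lightPos[4] = { 10.0f, 20.0f, 10.0f, 1.0f };  // w = 1: positional
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

    // Only now push per-object model transforms and draw.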
I just started writing a Phong shader.
vert:
varying vec3 normal, eyeVec;

#define MAX_LIGHTS 8
#define NUM_LIGHTS 3

varying vec3 lightDir[MAX_LIGHTS];

void main()
{
    gl_Position = ftransform();
    normal = gl_NormalMatrix * gl_Normal;
    vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
    eyeVec = -vVertex.xyz;

    int i;
    for (i = 0; i < NUM_LIGHTS; ++i) {
        lightDir[i] = vec3(gl_LightSource[i].position.xyz - vVertex.xyz);
    }
}
I know that I need to pass the camera position as a uniform, but how, and where do I put this value?
PS: I'm using OpenGL 2.0.
You don't need to pass the camera position, because, well, there is no camera in OpenGL.
Lighting calculations are performed in eye space, i.e. after the multiplication with the modelview matrix, which also performs the "camera positioning". So you actually already have the right things in place. Using ftransform() is a little inefficient, as you're redoing half of what it does (gl_ModelViewMatrix * gl_Vertex); you can replace it with gl_Position = gl_ProjectionMatrix * vVertex.
So if your lights seem to move when your "camera" transforms, you're not transforming the light positions properly. Either precompute the transformed light positions, or transform them in the shader as well; which you choose is more a matter of convention than a hard constraint when using shaders.
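For example, the in-shader variant might look like this sketch, built on the vertex shader above (viewMatrix and lightPosWorld are assumed uniforms that your application must set; everything else is unchanged):

    #define MAX_LIGHTS 8
    #define NUM_LIGHTS 3

    varying vec3 normal, eyeVec;
    varying vec3 lightDir[MAX_LIGHTS];

    uniform mat4 viewMatrix;                // camera transform only, no model part
    uniform vec3 lightPosWorld[MAX_LIGHTS]; // fixed light positions in world space

    void main()
    {
        gl_Position = ftransform();
        normal = gl_NormalMatrix * gl_Normal;
        vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
        eyeVec = -vVertex.xyz;

        for (int i = 0; i < NUM_LIGHTS; ++i) {
            // Bring each light into the same (eye) space as vVertex.
            vec3 lightPosEye = (viewMatrix * vec4(lightPosWorld[i], 1.0)).xyz;
            lightDir[i] = lightPosEye - vVertex.xyz;
        }
    }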
I'm looking for a way to access OpenGL states from a shader. The GLSL Quick Reference Guide, an awesome resource, couldn't really help me out on this one.
In the example I'm working on I have the two following shaders:
Vertex:
void main()
{
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
Fragment:
uniform sampler2D tex;
uniform float flattening;
void main(void)
{
    vec4 texel;
    texel = texture2D(tex, gl_TexCoord[0].st);
    texel.r *= gl_Color.r;
    texel.g *= gl_Color.g;
    texel.b *= gl_Color.b;
    texel.a *= gl_Color.a;
    gl_FragColor = texel;
}
When I'm rendering polygons that are not textured, their alpha values are correct, but they're assigned the color black.
1. What conditional check can I set up so that the variable 'texel' is set to vec4(1.0, 1.0, 1.0, 1.0), rather than sampled from a texture, when GL_TEXTURE_2D is disabled?
2. Would processing be faster if I wrote different shaders for the different texturing modes and switched between them where I now use glEnable/glDisable(GL_TEXTURE_2D)?
Sorry, but you can't access that kind of state from GLSL, period.
In fact, in future GLSL versions you have to send all uniforms/attributes yourself, i.e. no automagic gl_ModelViewMatrix, gl_LightPosition, gl_Normal or the like. Only basic stuff like gl_Position and gl_FragColor will be available.
That sort of voids your second question, but you can always use #ifdefs to enable/disable parts of your shader if you find that more convenient than writing separate shaders for the different texture modes.
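For example, one fragment shader source compiled into two programs (a sketch; your application prepends the TEXTURED define when building the textured variant, e.g. by passing two strings to glShaderSource):

    // Compiled once as-is (untextured) and once with "#define TEXTURED"
    // prepended by the application.
    uniform sampler2D tex;

    void main(void)
    {
    #ifdef TEXTURED
        vec4 texel = texture2D(tex, gl_TexCoord[0].st);
    #else
        vec4 texel = vec4(1.0, 1.0, 1.0, 1.0);  // behave as if sampling plain white
    #endif
        gl_FragColor = texel * gl_Color;
    }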
Relatedly, note that branching is generally rather slow, so if you need speed, avoid it as much as possible. (This matters especially for irregular dynamic branching, as fragments are processed in SIMD blocks and all fragments in a block must compute the same instructions, even if they apply to only one or a few fragments.)
One method I've seen is to bind a 1x1 texture with a single texel of (1.0, 1.0, 1.0, 1.0) when rendering polygons that are not textured. A texture switch should be less expensive than a shader switch and a 1x1 texture will fit entirely inside the texture cache.
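A sketch of that setup, done once at startup (bind whiteTex instead of a real texture for untextured draws):

    // A single opaque white texel: sampling it anywhere yields (1,1,1,1),
    // so "texel * gl_Color" degenerates to just the vertex color.
    GLuint whiteTex = 0;
    const GLubyte whitePixel[4] = { 255, 255, 255, 255 };

    glGenTextures(1, &whiteTex);
    glBindTexture(GL_TEXTURE_2D, whiteTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, whitePixel);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);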