In an OpenGL vertex shader, gl_Position doesn't get homogenized

I was expecting gl_Position to automatically get homogenized (divided by w), but it doesn't seem to work. Why do the following produce different results?
1)
void main() {
    vec4 p;
    // ... omitted ...
    gl_Position = projectionMatrix * p;
}
2)
    // ... same as above ...
    p = projectionMatrix * p;
    gl_Position = p / p.w;
I think the two are supposed to generate the same results, but that's not the case: 1) doesn't work, while 2) works as expected. Could it possibly be a precision problem? Am I missing something? This is driving me almost crazy; help needed. Many thanks in advance!

The perspective divide cannot be done before clipping, which happens after the vertex shader has completed, so there is no way you could observe the w divide in the vertex shader.
The GL will do the perspective divide before rasterization of the triangles, and therefore before the fragment shader runs.
What are you trying to do that does not work?

From the GLSL spec 1.2:
    "The variable gl_Position is available only in the vertex language and is intended for writing the homogeneous vertex position."
So it's not automatically homogenized.
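To illustrate the intended pattern, here is a minimal GLSL 1.2 sketch (the ndcPos varying is a hypothetical addition for cases where you want the post-divide position yourself; it is not part of the original question):

varying vec3 ndcPos; // only if you need the post-divide position in the fragment shader

void main() {
    vec4 clipPos = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_Position = clipPos;            // leave homogeneous; GL clips, then divides by w
    ndcPos = clipPos.xyz / clipPos.w; // divide manually only for your own use
}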

Related

How to get homogeneous screen space coordinates in OpenGL

I'm studying OpenGL and I've got a little 3D scene with some objects. In the GLSL vertex shader I multiply the vertices by matrices like this:
vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
gl_Position = vertexPos;
vertexPos is a vec4 varying variable and I pass it to the fragment shader.
Here is how the scene renders normally:
[image: normal render]
But then I want to do a debug render. I write in the fragment shader:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0);
vertexPos is multiplied by all the matrices, including the perspective matrix, so I assumed I would get a smooth gradient from the center of the screen to the right edge, because the coordinates are mapped into the -1 to 1 square. But it looks like they are in screen space without perspective deformation applied. Here is what I see:
(don't look at the red line and the light source; they use a different shader)
[image: debug render]
If I divide it by about 15 it will look like this:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0)/15.0;
[image: divided by 15]
Can someone please explain to me why the coordinates aren't homogeneous, and why the scene still renders correctly with perspective distortion?
P.S. If I try to use gl_Position in the fragment shader instead of vertexPos, it doesn't work.
A so-called perspective division is applied to gl_Position after it's computed in a vertex shader:
gl_Position.xyz /= gl_Position.w;
But it doesn't happen to your varyings unless you do it manually. Thus, you need to add
vertexPos.xyz /= vertexPos.w;
at the end of your vertex shader. Make sure to do it after you copy the value to gl_Position; you don't want the division to happen twice.
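Putting this together, a sketch of the full vertex shader (the uniform declarations are assumed; the matrix names are taken from the question):

uniform mat4 viewMatrix, worldMatrix, modelMatrix;
varying vec4 vertexPos;

void main() {
    vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
    gl_Position = vertexPos;      // copy first, so gl_Position stays homogeneous for clipping
    vertexPos.xyz /= vertexPos.w; // then divide only the varying
}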

Parts of my floor plane disappear

I'm having a problem rendering. The object in question is a large plane consisting of two triangles. It should cover most of the area of the window, but parts of it disappear and reappear as the camera moves and turns (I never see the whole plane, though).
Note that the missing parts are NOT whole triangles.
I have messed around with the camera to find out where this is coming from, but I haven't found anything.
I haven't added view frustum culling yet.
I'm really stuck, as I have no idea which part of my code I even need to look at to solve this. Searches mainly turn up questions about whole triangles missing, which is not what's happening here.
Any pointers to what the cause of the problem may be?
Edit:
I downscaled the plane and added another texture that's better suited for testing.
Now I have found this behaviour:
This looks like I expect it to: [image]
If I move forward a bit more, this happens: [image]
It looks like the geometry behind the camera is flipped and rendered even though it should be invisible?
Edit 2:
My vertex and fragment shaders:
#version 330

in vec3 position;
in vec2 textureCoords;
out vec4 pass_textureCoords;
uniform mat4 MVP;

void main() {
    gl_Position = MVP * vec4(position, 1);
    pass_textureCoords = vec4(textureCoords / gl_Position.w, 0, 1 / gl_Position.w);
    gl_Position = gl_Position / gl_Position.w;
}
#version 330

in vec4 pass_textureCoords;
out vec4 fragColor;
uniform sampler2D textureSampler;

void main()
{
    fragColor = texture(textureSampler, pass_textureCoords.xy / pass_textureCoords.w);
}
Many drivers do not handle big triangles that cross the z-plane very well: depending on your precision settings and the driver's internals, these triangles may very well generate invalid coordinates outside of the supported numerical range.
To make sure this is not the issue, try to manually tessellate the floor in a few more divisions, instead of only having two triangles for the whole floor.
Doing so is quite straightforward. You'd have something like this pseudocode:
division_size_x = width / max_x_divisions
division_size_y = height / max_y_divisions

for i from 0 to max_x_divisions:
    for j from 0 to max_y_divisions:
        vertex0 = { i * division_size_x, j * division_size_y }
        vertex1 = { (i+1) * division_size_x, j * division_size_y }
        vertex2 = { (i+1) * division_size_x, (j+1) * division_size_y }
        vertex3 = { i * division_size_x, (j+1) * division_size_y }
        OutputTriangle(vertex0, vertex1, vertex2)
        OutputTriangle(vertex2, vertex3, vertex0)
Apparently there was an error in my matrices that caused problems with vertices behind the camera. I deleted all the divisions by w in my shaders and did gl_Position = -gl_Position (initially just to test something), and it works now.
I still need to figure out the exact problem, but it is working for now.
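For reference, a sketch of what the shaders look like with the manual divisions deleted (leaving the -gl_Position experiment aside): the hardware interpolates varyings perspective-correctly on its own, so no 1/w bookkeeping is needed.

#version 330
in vec3 position;
in vec2 textureCoords;
out vec2 pass_textureCoords;
uniform mat4 MVP;

void main() {
    gl_Position = MVP * vec4(position, 1.0); // leave homogeneous; GL does the divide
    pass_textureCoords = textureCoords;      // interpolated perspective-correctly by GL
}

#version 330
in vec2 pass_textureCoords;
out vec4 fragColor;
uniform sampler2D textureSampler;

void main() {
    fragColor = texture(textureSampler, pass_textureCoords);
}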

How can I use something like "discard;" to increase performance?

I would like to use shaders to increase my in-game performance.
As you can see, I am cutting out the far-away pixels from rendering, using the discard function.
However, for some reason this is not increasing FPS. I get the exact same 5 FPS that I got without any shaders at all! The low FPS is because I chose to render an absolute ton of trees, but since they are not seen... then why are they causing lag!?
My fragment shader code:
varying vec4 position_in_view_space;
uniform sampler2D color_texture;

void main()
{
    float dist = distance(position_in_view_space,
                          vec4(0.0, 0.0, 0.0, 1.0));
    if (dist < 1000.0)
    {
        gl_FragColor = gl_Color; // color near origin
    }
    else
    {
        discard;
        //gl_FragColor = vec4(0.3, 0.3, 0.3, 1.0); // color far from origin
    }
}
My vertex shader code:
varying vec4 position_in_view_space;

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;

    // transformation of gl_Vertex from object coordinates to view coordinates
    position_in_view_space = gl_ModelViewMatrix * gl_Vertex;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
(I have a very slightly modified fragment shader for the terrain texturing and a lower distance for the trees (the ship in that screenshot is too far from the trees), but it is basically the same idea.)
So, could someone please tell me why discard is not improving performance? If it can't, is there another way to keep unwanted vertices from rendering (or to render fewer vertices farther away) to increase performance?
Using discard will actually prevent the GPU from doing the early depth test (using the depth buffer before invoking the fragment shader), so it can even hurt performance.
It's much simpler to adjust the far plane of the projection matrix so that unwanted vertices fall outside the viewing volume.
Also consider not issuing the draw calls for distant trees in the first place: you can, for example, group the trees per island, check per island whether it is close enough for its trees to render, and simply not call glDraw* when it's not.

OpenGL GLSL SSAO Implementation

I'm trying to implement Screen Space Ambient Occlusion (SSAO) based on the R5 Demo found here: http://blog.nextrevision.com/?p=76
In fact, I'm trying to adapt their SSAO - Linear shader to fit into my own little engine.
1) I calculate view-space surface normals and linear depth values.
I store them in an RGBA texture using the following shader:
Vertex:
varNormalVS = normalize(vec3(vmtInvTranspMatrix * vertexNormal));
depth = (modelViewMatrix * vertexPosition).z;
depth = (-depth - nearPlane) / (farPlane - nearPlane);
gl_Position = pvmtMatrix * vertexPosition;
Fragment:
gl_FragColor = vec4(varNormalVS.x, varNormalVS.y, varNormalVS.z, depth);
For my linear depth calculation I referred to: http://www.gamerendering.com/2008/09/28/linear-depth-texture/
Is it correct?
The texture seems to be correct, but maybe it is not?
2) The actual SSAO Implementation:
As mentioned above, the original can be found here: http://blog.nextrevision.com/?p=76
or, more conveniently, on pastebin: http://pastebin.com/KaGEYexK
In contrast to the original, I only use 2 input textures, since one of my textures stores both: normals as RGB and linear depth as alpha.
My second texture, the random normal texture, looks like this:
http://www.gamerendering.com/wp-content/uploads/noise.png
I use almost exactly the same implementation but my results are wrong.
Before going into detail I want to clear up some questions first:
1) The SSAO shader uses projectionMatrix and its inverse matrix.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
2) Having a combined normal and depth texture instead of two separate ones.
In my opinion this is the biggest difference between the R5 implementation and my implementation attempt. I think this should not be a big problem; however, due to the different depth textures, this is the most likely source of problems.
Please note that R5_clipRange looks like this:
vec4 R5_clipRange = vec4(nearPlane, farPlane, nearPlane * farPlane, farPlane - nearPlane);
Original:
float GetDistance(in vec2 texCoord)
{
    //return texture2D(R5_texture0, texCoord).r * R5_clipRange.w;
    const vec4 bitSh = vec4(1.0 / 16777216.0, 1.0 / 65535.0, 1.0 / 256.0, 1.0);
    return dot(texture2D(R5_texture0, texCoord), bitSh) * R5_clipRange.w;
}
I have to admit I do not understand the code snippet. My depth is stored in the alpha of my texture, and I thought it should be enough to just do this:
return texture2D(texSampler0, texCoord).a * R5_clipRange.w;
Correct or wrong?
Your normal texture seems wrong. My guess is that your vmtInvTranspMatrix is a model-view matrix; however, it should be a model-view-projection matrix (note you need screen-space normals, not view-space normals). The depth calculation is correct.
I've implemented SSAO once, and the normal texture looks like this (note there is no blue here):
[image: normal texture]
1) The SSAO shader uses projectionMatrix and its inverse matrix.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
If you mean the second pass where you are rendering a quad to compute the actual SSAO, yes. You can avoid the multiplication by the orthographic projection matrix altogether: if you render a screen quad with [x,y] coordinates ranging from -1 to 1, you can use a really simple vertex shader:
in vec2 in_Position;
out vec2 texcoord;
const vec2 madd = vec2(0.5, 0.5);

void main(void)
{
    gl_Position = vec4(in_Position, -1.0, 1.0);
    texcoord = in_Position.xy * madd + madd;
}
2) Having a combined normal and depth texture instead of two separate ones.
Nah, that won't cause problems. It's a common practice to do so.
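For instance, reading both values back from the combined texture could look like this (the sampler and uniform names follow the question; this assumes the normals were written out unencoded, exactly as in the posted fragment shader):

uniform sampler2D texSampler0; // RGB = surface normal, A = linear depth
uniform vec4 R5_clipRange;     // (near, far, near*far, far-near)

float GetDistance(in vec2 texCoord)
{
    return texture2D(texSampler0, texCoord).a * R5_clipRange.w;
}

vec3 GetNormal(in vec2 texCoord)
{
    return normalize(texture2D(texSampler0, texCoord).rgb);
}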

GLSL Phong Light, Camera Problem

I just started writing a Phong shader.
vert:
varying vec3 normal, eyeVec;
#define MAX_LIGHTS 8
#define NUM_LIGHTS 3
varying vec3 lightDir[MAX_LIGHTS];

void main() {
    gl_Position = ftransform();
    normal = gl_NormalMatrix * gl_Normal;
    vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
    eyeVec = -vVertex.xyz;
    int i;
    for (i = 0; i < NUM_LIGHTS; ++i) {
        lightDir[i] = vec3(gl_LightSource[i].position.xyz - vVertex.xyz);
    }
}
I know that I need to get the camera position via a uniform, but how, and where do I put this value?
P.S. I'm using OpenGL 2.0.
You don't need to pass the camera position, because, well, there is no camera in OpenGL.
Lighting calculations are performed in eye space, i.e. after the multiplication with the modelview matrix, which also performs the "camera positioning". So you actually already have the right things in place. Using ftransform() is a little inefficient, though, as you're repeating half of what it does (gl_ModelViewMatrix * gl_Vertex); you can get the equivalent of ftransform() by computing gl_Position = gl_ProjectionMatrix * vVertex from the eye-space position you already have.
So if your lights seem to move when your "camera" transforms, you're not transforming the light positions properly. Either precompute the transformed light positions, or transform them in the shader as well; with shaders it's more a matter of chosen convention than a hard constraint.
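For the shader-side option, a sketch with hypothetical uniforms (viewMatrix and worldLightPos are placeholder names, not part of the original code):

varying vec3 normal, eyeVec;
#define MAX_LIGHTS 8
#define NUM_LIGHTS 3
varying vec3 lightDir[MAX_LIGHTS];
uniform mat4 viewMatrix;                // the "camera" part of the modelview transform
uniform vec3 worldLightPos[MAX_LIGHTS]; // light positions kept in world space

void main() {
    vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * vVertex; // reuses vVertex instead of ftransform()
    normal = gl_NormalMatrix * gl_Normal;
    eyeVec = -vVertex.xyz;
    for (int i = 0; i < NUM_LIGHTS; ++i) {
        // bring each light into the same (eye) space as vVertex before subtracting
        lightDir[i] = (viewMatrix * vec4(worldLightPos[i], 1.0)).xyz - vVertex.xyz;
    }
}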