I'm studying OpenGL and I've got a little 3D scene with some objects. In the GLSL vertex shader I multiply the vertices by matrices like this:
vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
gl_Position = vertexPos;
vertexPos is a vec4 varying variable, and I pass it to the fragment shader.
Here is how the scene renders normally:
normal render
But then I want to do a debug render. I write in the fragment shader:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0);
vertexPos is multiplied by all the matrices, including the perspective matrix, so I assumed I would get a smooth gradient from the center of the screen to the right edge, because the coordinates are mapped into a -1 to 1 square. But it looks like they are in screen space without the perspective deformation applied. Here is what I see:
(don't look at the red line and the light source; they use a different shader)
debug render
If I divide it by about 15 it looks like this:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0)/15.0;
divided by 15
Can someone please explain to me why the coordinates aren't homogeneous, yet the scene still renders correctly with perspective distortion?
P.S. If I try to use gl_Position in the fragment shader instead of vertexPos, it doesn't work.
A so-called perspective division is applied to gl_Position after it's computed in a vertex shader:
gl_Position.xyz /= gl_Position.w;
But it doesn't happen to your varyings unless you do it manually. Thus, you need to add
vertexPos.xyz /= vertexPos.w;
at the end of your vertex shader. Make sure to do it after you copy the value to gl_Position; you don't want the division applied twice.
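Putting it together, a minimal sketch of the corrected vertex shader, reusing the question's uniform names:
uniform mat4 viewMatrix, worldMatrix, modelMatrix;

varying vec4 vertexPos;

void main()
{
    vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
    gl_Position = vertexPos;       // copy to gl_Position first...
    vertexPos.xyz /= vertexPos.w;  // ...then divide only the varying
}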
Related
The goal is to select an object on screen and then draw a small coordinate system on this object as an overlay. But I want this coordinate system to always be the same size no matter how far away the object is from the camera.
I start with a position in world space that I pass to the vertex shader as a uniform. I then do:
gl_Position = uViewProjection * vec4(uPosition, 1.0);
This gets passed to the geometry shader.
What I would like to do now is to let the geometry shader draw a line segment going from gl_Position in the direction of the projected x-axis.
That itself is no problem:
I just do:
vec4 x = uViewProjection * vec4(1.0, 0.0, 0.0, 0.0);
Now I add this vector x to gl_Position and draw the line.
This works, but the line gets smaller or bigger depending on the camera's distance from the position. I think that is because gl_Position has not yet been divided by gl_Position.w?
But I would like the line to always be a quarter of the screen size in length.
I know the perspective divide happens right before the fragment shader, but I think I have to use it beforehand in some way in order to achieve my goal.
What am I doing wrong? What am I missing?
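For what it's worth, the hunch about the missing divide points in the right direction: for a fixed on-screen length, compute the offset in NDC (where a quarter of the screen is 0.5 units, since NDC spans -1 to 1) and scale it back up by w so it survives the later perspective divide. A minimal geometry-shader sketch of that idea, reusing the question's uViewProjection and uPosition (untested, a sketch rather than a drop-in solution):
vec4 base = uViewProjection * vec4(uPosition, 1.0);
vec4 tip  = uViewProjection * vec4(uPosition + vec3(1.0, 0.0, 0.0), 1.0);
// Direction of the projected x-axis in NDC (after the perspective divide).
vec2 ndcDir = normalize(tip.xy / tip.w - base.xy / base.w);
// A quarter of the screen is 0.5 NDC units.
vec2 ndcTip = base.xy / base.w + ndcDir * 0.5;
gl_Position = base;
EmitVertex();
// Multiply by base.w so the fixed NDC offset survives the divide.
gl_Position = vec4(ndcTip * base.w, base.z, base.w);
EmitVertex();
EndPrimitive();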
I want to view a flat fullscreen texture as if it were spherical, by transforming it in a post-process shader.
I figure I have to apply a projection matrix to the texture coordinate in the shader.
I found this website: http://www.songho.ca/opengl/gl_projectionmatrix.html which taught me a lot about the inner workings of the projection matrix.
But how do I apply it? I thought I would have to multiply the texture coordinate by the third row of the projection matrix, with a calculated z value added to make it spherical. My efforts don't show any result, though.
EDIT: I see the same issue here: http://lists.openscenegraph.org/pipermail/osg-users-openscenegraph.org/2008-April/009765.html
I think that after you multiply the texture coords by the projection matrix, you have to do a perspective division and move from 3D to 2D (since the texture is 2D). This is the same as with shadow mapping.
// in fragment shader:
vec4 proj = uniformModelViewProjMatrix * tex_coords; // to clip space
proj.xyz /= proj.w;                                  // perspective division -> NDC in [-1, 1]
proj.xyz += vec3(1.0);                               // bias...
proj.xyz *= 0.5;                                     // ...and scale into [0, 1] texture space
vec4 col = texture2D(sampler, proj.xy);
or look at http://www.ozone3d.net/tutorials/glsl_texturing_p08.php (for texture2DProj)
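For reference, a sketch of the texture2DProj variant, assuming the usual scale-and-bias matrix (mapping [-1, 1] into [0, 1], as in shadow mapping) has been premultiplied into the transform on the CPU side; uniformBiasedMVPMatrix is a hypothetical name:
vec4 proj = uniformBiasedMVPMatrix * tex_coords;
vec4 col = texture2DProj(sampler, proj); // divides proj.xy by proj.w internally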
In my shader I already have a special variable that holds the entire content of the previously rendered screen. It is stored in
uniform sampler2D _GrabTexture;
Its content should be:
(As a side note, I'm using Unity's GrabPass{} to get the entire screen. Also please ignore Unity's GUI)
Now how can I render one more pass, using _GrabTexture as a texture for my plane model, so the result is exactly the same as my _GrabTexture?
(The point is that I can apply some effects like blur, sharpen, etc. to that screen texture before rendering one more pass, so the plane's texture ends up stylized.)
I'm trying this in the final pass. The variable is declared in both the vertex and fragment shaders.
uniform sampler2D _GrabTexture;
varying vec4 v_Position;
Vertex shader:
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
v_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Store the screen coordinate of each of the model's vertices as a varying, to be used in the fragment shader.
Fragment shader:
void main()
{
gl_FragColor = texture2D(_GrabTexture, vec2(v_Position));
}
Now use that stored varying position to access the _GrabTexture screen texture. Since it covers the entire screen, my v_Position, which is already in screen coordinates, should fetch the right pixel.
But the result is
As you can see, the plane's texture is 'sort of' showing the previously rendered screen, but the coordinates are not right. How can I fix it so the result is the same as in the first image?
You're overcomplicating this. OpenGL tells you the on-screen fragment position in the fragment shader's built-in variable gl_FragCoord.
With GLSL 1.30 or newer you have texelFetch, to which you can pass gl_FragCoord (as ivec2(gl_FragCoord.xy)) as the source coordinate directly. Or you translate gl_FragCoord into texture-space coordinates; see https://stackoverflow.com/a/5879551/524368
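A minimal sketch of the gl_FragCoord approach, assuming a uniform that holds the viewport size in pixels (uScreenSize is a made-up name; in Unity the screen size is available via _ScreenParams):
uniform sampler2D _GrabTexture;
uniform vec2 uScreenSize; // viewport size in pixels (hypothetical uniform)

void main()
{
    // gl_FragCoord is in window (pixel) coordinates; normalize to [0, 1].
    vec2 uv = gl_FragCoord.xy / uScreenSize;
    gl_FragColor = texture2D(_GrabTexture, uv);
}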
I'm trying to implement Screen Space Ambient Occlusion (SSAO) based on the R5 demo found here: http://blog.nextrevision.com/?p=76
In fact, I'm trying to adapt their "SSAO - Linear" shader to fit into my own little engine.
1) I calculate view-space surface normals and linear depth values.
I store them in an RGBA texture using the following shader:
Vertex:
varNormalVS = normalize(vec3(vmtInvTranspMatrix * vertexNormal));
depth = (modelViewMatrix * vertexPosition).z;
depth = (-depth-nearPlane)/(farPlane-nearPlane);
gl_Position = pvmtMatrix * vertexPosition;
Fragment:
gl_FragColor = vec4(varNormalVS.x, varNormalVS.y, varNormalVS.z, depth);
For my linear depth calculation I referred to: http://www.gamerendering.com/2008/09/28/linear-depth-texture/
Is it correct?
The texture seems to be correct, but maybe it is not?
2) The actual SSAO Implementation:
As mentioned above the original can be found here: http://blog.nextrevision.com/?p=76
or faster: on pastebin http://pastebin.com/KaGEYexK
In contrast to the original I only use 2 input textures, since one of my textures stores both: normals as RGB and linear depth as alpha.
My second Texture, the random normal texture, looks like this:
http://www.gamerendering.com/wp-content/uploads/noise.png
I use almost exactly the same implementation but my results are wrong.
Before going into detail I want to clear up some questions first:
1) The SSAO shader uses the projectionMatrix and its inverse matrix.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
2) Having a combined normal and depth texture instead of two separate ones.
In my opinion this is the biggest difference between the R5 implementation and my attempt. I think this should not be a big problem; however, due to the different depth textures, this is the most likely cause of problems.
Please note that R5_clipRange looks like this:
vec4 R5_clipRange = vec4(nearPlane, farPlane, nearPlane * farPlane, farPlane - nearPlane);
Original:
float GetDistance (in vec2 texCoord)
{
    //return texture2D(R5_texture0, texCoord).r * R5_clipRange.w;
    // Decodes a [0, 1] depth that was packed across the texture's four
    // 8-bit RGBA channels, then scales it back to view-space units
    // by R5_clipRange.w = farPlane - nearPlane.
    const vec4 bitSh = vec4(1.0 / 16777216.0, 1.0 / 65535.0, 1.0 / 256.0, 1.0);
    return dot(texture2D(R5_texture0, texCoord), bitSh) * R5_clipRange.w;
}
I have to admit I do not understand the code snippet. My depth is stored in the alpha channel of my texture, and I thought it should be enough to just do this:
return texture2D(texSampler0, texCoord).a * R5_clipRange.w;
Correct or wrong?
Your normal texture seems wrong. My guess is that your vmtInvTranspMatrix is a model-view matrix; however, it should be a model-view-projection matrix (note that you need screen-space normals, not view-space normals). The depth calculation is correct.
I've implemented SSAO once and the normal texture looks like this (note there is no blue here):
1) The SSAO shader uses the projectionMatrix and its inverse matrix.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
If you mean the second pass where you are rendering a quad to compute the actual SSAO, yes. You can avoid the multiplication by the orthographic projection matrix altogether. If you render a screen quad with [x, y] coordinates ranging from -1 to 1, you can use a really simple vertex shader:
in vec2 in_Position;  // quad corner in [-1, 1]
out vec2 texcoord;

const vec2 madd = vec2(0.5, 0.5);

void main(void)
{
    gl_Position = vec4(in_Position, -1.0, 1.0);
    texcoord = in_Position.xy * madd + madd;  // map [-1, 1] to [0, 1]
}
2) Having a combined normal and depth texture instead of two separate ones.
Nah, that won't cause problems. It's a common practice to do so.
VC++ 2010, OpenGL, GLSL, SDL
I am moving over to shaders and have run into a problem that originally occurred while working with the fixed-function OpenGL pipeline. That is, the position of the light seems to follow whatever direction my camera faces. In the fixed-function pipeline it was just the specular highlight, which was fixable with:
glLightModelf(GL_LIGHT_MODEL_LOCAL_VIEWER, 1.0f);
Here are the two shaders:
Vertex
varying vec3 lightDir,normal;
void main()
{
normal = normalize(gl_NormalMatrix * gl_Normal);
lightDir = normalize(vec3(gl_LightSource[0].position));
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_Position = ftransform();
}
Fragment
varying vec3 lightDir,normal;
uniform sampler2D tex;
void main()
{
vec3 ct,cf;
vec4 texel;
float intensity,at,af;
intensity = max(dot(lightDir,normalize(normal)),0.0);
cf = intensity * (gl_FrontMaterial.diffuse).rgb +
gl_FrontMaterial.ambient.rgb;
af = gl_FrontMaterial.diffuse.a;
texel = texture2D(tex,gl_TexCoord[0].st);
ct = texel.rgb;
at = texel.a;
gl_FragColor = vec4(ct * cf, at * af);
}
Any help would be much appreciated!
The question is: What coordinate system (reference frame) do you want the lights to be in? Probably "the world".
OpenGL's fixed-function pipeline, however, has no notion of world coordinates, because it uses a modelview matrix, which transforms directly from model coordinates to eye (camera) coordinates. In order to have “fixed” lights, you could do one of these:
The classic OpenGL approach is to, every frame, set up the modelview matrix to be the view transform only (that is, the transform into the coordinate system you want to specify your light positions in) and then use glLight to set the position (which is specified to have the current modelview matrix applied to the input).
Since you are using shaders, you could also have separate model and view matrices and have your shader apply both (rather than using ftransform) to vertices, but only the view matrix to lights. However, this means more per-vertex matrix operations and is probably not an especially good idea unless you are looking for clarity rather than performance.
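A minimal sketch of that second option, assuming separate uModel/uView/uProjection uniforms and a world-space light position (all names hypothetical):
uniform mat4 uModel, uView, uProjection;
uniform vec3 uLightPosWorld; // light position in world coordinates

varying vec3 lightDir, normal;

void main()
{
    mat4 modelView = uView * uModel;
    vec4 eyePos = modelView * gl_Vertex;              // vertex in eye space
    normal = normalize(mat3(modelView) * gl_Normal);  // fine as long as there is no non-uniform scaling
    // The light is transformed by the view matrix only, so it stays fixed in the world.
    vec3 lightEye = vec3(uView * vec4(uLightPosWorld, 1.0));
    lightDir = normalize(lightEye - eyePos.xyz);      // direction to a point light
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = uProjection * eyePos;
}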