I want to view a flat fullscreen texture as if it were spherical, by transforming it in a postprocess shader.
I figure I have to apply a projection matrix to the texture coordinates in the shader.
I found this website: http://www.songho.ca/opengl/gl_projectionmatrix.html which taught me a lot about the inner workings of the projection matrix.
But how do I apply it? I thought I would have to multiply the texture coordinate, with a calculated z value added to make it spherical, by the third row of the projection matrix. My efforts don't show any result though.
EDIT: I see the same issue here: http://lists.openscenegraph.org/pipermail/osg-users-openscenegraph.org/2008-April/009765.html
I think after you multiply the texture coords by the projection matrix you have to do a perspective division and move from 3D to 2D (since the texture is 2D). This is the same as with shadow mapping.
// in fragment shader:
vec4 proj = uniformModelViewProjMatrix * tex_coords; // project the coordinate
proj.xyz /= proj.w;                                  // perspective division -> NDC in [-1,1]
proj.xyz += vec3(1.0);                               // remap from [-1,1] ...
proj.xyz *= 0.5;                                     // ... to [0,1] texture space
vec4 col = texture2D(sampler, proj.xy);
or look at http://www.ozone3d.net/tutorials/glsl_texturing_p08.php (for texture2DProj)
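For reference, texture2DProj performs the division by the last coordinate internally, so if the scale and bias from [-1,1] to [0,1] are folded into the matrix beforehand, the lookup collapses to a one-liner. A sketch (biasedMVPMatrix is an assumed name for that pre-combined matrix):
// biasedMVPMatrix = scale-bias matrix * uniformModelViewProjMatrix, built on the CPU
vec4 proj = biasedMVPMatrix * tex_coords;
vec4 col = texture2DProj(sampler, proj); // divides proj.xy by proj.w internally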
I'm studying OpenGL and I've got a little 3D scene with some objects. In the GLSL vertex shader I multiply the vertices by matrices like this:
vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
gl_Position = vertexPos;
vertexPos is a vec4 varying variable and I pass it to the fragment shader.
Here is how the scene renders normally:
normal render
But then I want to do a debug render. I write in the fragment shader:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0);
vertexPos is multiplied by all the matrices, including the perspective matrix, and I assumed that I would get a smooth gradient from the center of the screen to the right edge, because the coordinates are mapped into the -1 to 1 square. But it looks like they are in screen space without the perspective deformation applied. Here is what I see:
(don't look at the red line and the light source, they use a different shader)
debug render
If I divide it by about 15 it looks like this:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0)/15.0;
divided by 15
Can someone please explain to me why the coordinates aren't homogeneous and the scene still renders correctly with perspective distortion?
P.S. If I try to use gl_Position in the fragment shader instead of vertexPos, it doesn't work.
A so-called perspective division is applied to gl_Position after it's computed in a vertex shader:
gl_Position.xyz /= gl_Position.w;
But it doesn't happen to your varyings unless you do it manually. Thus, you need to add
vertexPos.xyz /= vertexPos.w;
at the end of your vertex shader. Make sure to do it after you copy the value to gl_Position; you don't want to do the division twice.
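A minimal sketch of the corrected vertex shader, assuming the matrix names from the question:
varying vec4 vertexPos;
uniform mat4 viewMatrix, worldMatrix, modelMatrix;

void main(void)
{
    vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
    gl_Position = vertexPos;      // the hardware divides this copy by w on its own
    vertexPos.xyz /= vertexPos.w; // manual division for the debug varying only
}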
I'm using spherically billboarded sprites along with 3D objects. Because the quad leans backwards to match the camera angle, it intersects with 3D objects immediately behind it. It is more noticeable when the camera angle is very large. The following link provides a very clear visual.
http://answers.unity3d.com/questions/582680/billboard-issue-in-front-of-3d-object.html
Is there an efficient way to resolve this?
The best solution I could come up with was to use cylindrical billboarding for depth calculations and spherical for the quad's actual position. This allows you to use spherical billboarding while ensuring the quad's depth remains constant.
For reference, here are the billboarding ModelView matrices. [x] means the value is left as is.
Cylindrical mvMatrix    Spherical mvMatrix
[1][x][0][x]            [1][0][0][x]
[0][x][0][x]            [0][1][0][x]
[0][x][1][x]            [0][0][1][x]
[x][x][x][x]            [x][x][x][x]
First modify the ModelViewMatrix for cylindrical billboarding and generate a depth vertex as such:
depthV = projectionMatrix * (mvm * vertex);
Next set the second column values for spherical billboarding and create the quad as usual:
mvm[1][0] = 0; mvm[1][2] = 0; mvm[1][1] = 1;
gl_Position = projectionMatrix * (mvm * vertex);
Finally send depthV to the fragment shader and use it for the depth calculation.
float ndcDepth = depthV.z / depthV.w; // perspective division to NDC
gl_FragDepth = ((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0; // NDC -> window-space depth
Scaling should be done before applying the ModelView matrices.
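Putting the pieces together, the vertex shader might look like this (a sketch; the uniform names are assumptions):
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
varying vec4 depthV;

void main(void)
{
    mat4 mvm = modelViewMatrix;
    // Cylindrical: clear the first and third columns, keep the second (up axis).
    mvm[0][0] = 1.0; mvm[0][1] = 0.0; mvm[0][2] = 0.0;
    mvm[2][0] = 0.0; mvm[2][1] = 0.0; mvm[2][2] = 1.0;
    depthV = projectionMatrix * (mvm * gl_Vertex);

    // Spherical: additionally clear the second column.
    mvm[1][0] = 0.0; mvm[1][1] = 1.0; mvm[1][2] = 0.0;
    gl_Position = projectionMatrix * (mvm * gl_Vertex);
}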
I'm trying to simulate reflection on a plane with render-to-texture.
My only problem is how to adjust the texture coordinates correctly for the current view.
In the shader I multiply the texture coordinates by a rotation matrix.
The rotation matrix is set up with:
glm::vec3 v1 = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 v2 = glm::vec3(camlocation[0], camlocation[1], 0.0f);
if (glm::length(v2) != 0.0f)
{
    v2 = glm::normalize(v2);
}
float alpha = glm::angle(v1, v2);
texturematrix = glm::mat4(1.0f);
texturematrix = glm::translate(texturematrix, glm::vec3(0.5f, 0.5f, 0.0f));
texturematrix = glm::rotate(texturematrix, alpha, glm::vec3(0.0f, 0.0f, 1.0f));
texturematrix = glm::translate(texturematrix, glm::vec3(-0.5f, -0.5f, 0.0f));
I don't know if it's the right way, but the reflection looks wrong.
EDIT:
Step 1: I bind a framebuffer and my reflection texture and render my model, a teapot for example.
In the shader I invert the Z position.
Step 2: I bind the texture again and draw the plane. In the shader I use
vec4 texcoord = texturematrix * vec4(VertexIn.texcoord, 1.0, 1.0);
vec4 firsttex = texture(reflectionMap, texcoord.xy);
Step 3: I draw the real model.
vec4 texcoord = texturematrix * vec4(VertexIn.texcoord, 1.0, 1.0);
OK, one mistake was the third coordinate: it must be 0.0.
Now it looks better, but still wrong. I have to add the current eye direction to the camera location angle.
http://fs1.directupload.net/images/141207/temp/7wk8lvms.png
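For comparison, planar reflections rendered with a mirrored camera are often sampled in screen space instead of through a hand-built rotation matrix. A fragment shader sketch (this is not the poster's approach; viewportSize is an assumed uniform, and the reflection texture is assumed to be viewport-sized):
uniform sampler2D reflectionMap; // the mirrored scene, rendered at viewport size
uniform vec2 viewportSize;

void main(void)
{
    // gl_FragCoord is in window pixels; divide to get [0,1] texture space
    vec2 texcoord = gl_FragCoord.xy / viewportSize;
    gl_FragColor = texture(reflectionMap, texcoord);
}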
I'm trying to implement shadow mapping in my game. The shaders you see below result in correctly drawn shadows for the game's map, but all the models walking around on the map are completely black.
Here's a screenshot:
I suspect the problem lies with the calculation of the world position. The gl_Vertex is not transformed in any way. Because the map is generated with absolute world coordinates, the transformation with the light matrix results in a correct relative position that can be used to perform the shadow mapping.
However, my 3D models are all very close to the origin. So if their coordinates are plainly transformed using the light matrix, they will most likely never be lit.
I'm wondering if this could be the case, and if so, how I could fix it.
Here's my vertex shader:
#version 120
uniform mat4x4 LightMatrixValue;
varying vec4 shadowMapPosition;
varying vec3 worldPos;
void main(void)
{
    vec4 modelPos = gl_Vertex;
    worldPos = modelPos.xyz / modelPos.w;
    vec4 lightPos = LightMatrixValue * modelPos;
    // scale and bias: maps clip space [-w, w] to [0, w]; the division by w happens in the fragment shader
    shadowMapPosition = 0.5 * (lightPos.xyzw + lightPos.wwww);
    gl_Position = ftransform();
}
And the fragment shader:
varying vec4 shadowMapPosition;
varying vec3 worldPos;
uniform sampler2D depthMap;
uniform vec4 LightPosition;
void main(void)
{
    vec4 textureColour = gl_Color;
    vec3 realShadowMapPosition = shadowMapPosition.xyz / shadowMapPosition.w;
    float depthSm = texture2D(depthMap, realShadowMapPosition.xy).r;
    // in shadow when the stored depth is closer to the light (small bias against self-shadowing)
    if (depthSm < realShadowMapPosition.z - 0.0001)
    {
        textureColour = vec4(0, 0, 0, 1);
    }
    gl_FragColor = textureColour;
}
I write this here because it will not fit above; I hope it will solve your problem.
For rendering you on the one hand have your models with their model matrix to position each element; on the other hand you have your view and projection matrix that transforms your model into screen space.
For creating your shadow map (the simplest approach, and I guess the one you have chosen) you render the scene from the view of the light source, so you apply the view and projection matrix of your light source, which maps the x, y and z values to screen space. The x and y values give the position in the image, the z is used for the depth buffer test and is the color you write to your color buffer (which you later use as the shadow map).
For the final rendering you load this shadow map and the view and projection matrix of the light into your shader. For the display in the scene you apply the view and projection matrix of the camera to the vertices; for the shadow map lookup you apply the view and projection matrix of the light to the vertex (like you did when rendering the shadow map). When you apply the light's view and projection matrix to the vertex you have the same mapping as in the shadow map pass; now you only need to transform your x and y coordinates to texture coordinates and look up the stored z value, which you compare with the calculated one.
Transforming the models world position into screen space (from the lights point of view)
This part is often done in the vertex or geometry shader:
shadowMapPosition = matLightViewProjection * modelWorldPos;
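Expanded into a complete vertex shader, that line might look like this (a sketch; modelMatrix and matLightViewProjection are assumed uniform names, and note that modelWorldPos must really be in world space, which is the poster's issue with models near the origin):
uniform mat4 modelMatrix;            // model space -> world space
uniform mat4 matLightViewProjection; // light view * light projection
varying vec4 shadowMapPosition;

void main(void)
{
    vec4 modelWorldPos = modelMatrix * gl_Vertex;
    shadowMapPosition = matLightViewProjection * modelWorldPos;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // camera path as usual
}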
This is often done in the fragment shader:
shadowMapPosition = shadowMapPosition / shadowMapPosition.w;
// Subtract a small offset to prevent self-shadowing and moiré patterns
shadowMapPosition.z -= 0.0005;
// look up the stored z value
float distanceFromLight = texture2D(depthMap, shadowMapPosition.xy).z;
Now just compare distanceFromLight with shadowMapPosition.z to see if the object is in shadow or not.
So in your second pass you do the steps of the shadow mapping again, except that you don't draw the data you calculate but compare it to the data you calculated in the pass before.
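Completing the fragments above, the comparison itself might read like this (a sketch; it assumes the [-1,1] to [0,1] scale and bias was already applied, as in the question's vertex shader):
// in shadow when the stored depth is closer to the light than this fragment
if (distanceFromLight < shadowMapPosition.z)
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // shadowed
else
    gl_FragColor = gl_Color;                 // lit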
I'm trying to implement Screen Space Ambient Occlusion (SSAO) based on the R5 Demo found here: http://blog.nextrevision.com/?p=76
In fact I'm trying to adapt their SSAO-Linear shader to fit into my own little engine.
1) I calculate view-space surface normals and linear depth values.
I store them in an RGBA texture using the following shader:
Vertex:
varNormalVS = normalize(vec3(vmtInvTranspMatrix * vertexNormal));
depth = (modelViewMatrix * vertexPosition).z;
depth = (-depth - nearPlane) / (farPlane - nearPlane); // linearize to [0,1]
gl_Position = pvmtMatrix * vertexPosition;
Fragment:
gl_FragColor = vec4(varNormalVS.x, varNormalVS.y, varNormalVS.z, depth);
For my linear depth calculation I referred to: http://www.gamerendering.com/2008/09/28/linear-depth-texture/
Is it correct?
The texture seems to be correct, but maybe it is not?
2) The actual SSAO Implementation:
As mentioned above the original can be found here: http://blog.nextrevision.com/?p=76
or faster: on pastebin http://pastebin.com/KaGEYexK
In contrast to the original I only use 2 input textures, since one of my textures stores both: normals as RGB and linear depth as alpha.
My second Texture, the random normal texture, looks like this:
http://www.gamerendering.com/wp-content/uploads/noise.png
I use almost exactly the same implementation, but my results are wrong.
Before going into detail I want to clear up some questions first:
1) The SSAO shader uses projectionMatrix and its inverse matrix.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or Wrong?
2) Having a combined normal and depth texture instead of two separate ones.
In my opinion this is the biggest difference between the R5 implementation and my implementation attempt. I think this should not be a big problem; however, due to the different depth textures this is most likely to cause problems.
Please note that R5_clipRange looks like this
vec4 R5_clipRange = vec4(nearPlane, farPlane, nearPlane * farPlane, farPlane - nearPlane);
Original:
float GetDistance(in vec2 texCoord)
{
    //return texture2D(R5_texture0, texCoord).r * R5_clipRange.w;
    // unpack a depth value that was packed across the RGBA channels into a single float
    const vec4 bitSh = vec4(1.0 / 16777216.0, 1.0 / 65535.0, 1.0 / 256.0, 1.0);
    return dot(texture2D(R5_texture0, texCoord), bitSh) * R5_clipRange.w;
}
I have to admit I do not understand the code snippet. My depth is stored in the alpha channel of my texture and I thought it should be enough to just do this:
return texture2D(texSampler0, texCoord).a * R5_clipRange.w;
Correct or Wrong?
Your normal texture seems wrong. My guess is that your vmtInvTranspMatrix is a model-view matrix. However, it should be the model-view-projection matrix (note you need screen-space normals, not view-space normals). The depth calculation is correct.
I've implemented SSAO once and the normal texture looks like this (note there is no blue here):
1) The SSAO shader uses projectionMatrix and its inverse matrix.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or Wrong?
If you mean the second pass where you are rendering a quad to compute the actual SSAO, yes. You can avoid the multiplication by the orthographic projection matrix altogether. If you render a screen quad with [x,y] dimensions ranging from -1 to 1, you can use a really simple vertex shader:
attribute vec2 in_Position; // quad corners in [-1,1] (declarations assumed)
varying vec2 texcoord;

const vec2 madd = vec2(0.5, 0.5);

void main(void)
{
    gl_Position = vec4(in_Position, -1.0, 1.0); // z = -1.0: on the near plane
    texcoord = in_Position.xy * madd + madd;    // remap [-1,1] to [0,1]
}
2) Having a combined normal and depth texture instead of two separate ones.
Nah, that won't cause problems. It's a common practice to do so.
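A sketch of reading such a combined texture back in the SSAO pass (normalDepthMap is an assumed sampler name; texcoord comes from the quad vertex shader above):
uniform sampler2D normalDepthMap; // normals in rgb, linear depth in a
varying vec2 texcoord;

void main(void)
{
    vec4 nd = texture2D(normalDepthMap, texcoord);
    vec3 normalVS = normalize(nd.rgb); // re-normalize after filtering
    float linearDepth = nd.a;          // in [0,1], as written in the first pass
    gl_FragColor = vec4(normalVS * 0.5 + 0.5, linearDepth); // debug view
}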