How can I get a vec3 with the world position of a vertex?
Let's say I want white pixels for the positions of a cube at Y = 1 in world space and black pixels at Y = 0…
I tried
(vertex shader)
[...]
varying float whiteness;
[...]
vec4 posWorld = gl_ProjectionMatrix * gl_Vertex;
whiteness = clamp(posWorld.y,0.0,1.0);
[...]
(fragment shader)
[...]
varying float whiteness;
[...]
gl_FragColor.rgb = vec3(whiteness);
[...]
But that gives me weird results: the surface shading still depends on the camera angle and height.
How can I just get the vertex position in world space x,y,z?
Read up on how points are transformed from their local space into the coordinates of your screen:
worldMatrix * vertex = worldSpace
viewMatrix * worldSpace = viewSpace
projectionMatrix * viewSpace = screenSpace
You should pass the world matrix into the shader and multiply the vertex by it if you want the vertex's world-space position:
vec4 posWorld = worldMatrix * gl_Vertex;
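To make the difference concrete, here is a minimal plain-Python sketch (no GL, made-up matrices) of the point above: the world-space Y used for the whiteness clamp comes from the world matrix alone, with no camera matrix involved.

```python
# A minimal sketch (plain Python, no GL) showing that worldMatrix * vertex,
# not projectionMatrix * vertex, yields the world-space position. The
# matrices here are made-up examples.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A world matrix that translates the model up by 1 unit on Y.
world = [
    [1, 0, 0, 0],
    [0, 1, 0, 1],   # +1 translation on Y
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

local_vertex = [0.0, 0.0, 0.0, 1.0]   # cube vertex at the model origin
world_pos = mat_vec(world, local_vertex)

whiteness = min(max(world_pos[1], 0.0), 1.0)  # clamp(posWorld.y, 0.0, 1.0)
print(whiteness)  # 1.0 -- white, independent of any camera matrix
```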
Related
My understanding is that you can convert gl_FragCoord to a point in world coordinates in the fragment shader if you have the inverse of the view-projection matrix, the screen width, and the screen height. First, you convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates by dividing by the width and height respectively, then scaling and offsetting them into the range [-1, 1]. Next, you transform by the inverse view-projection matrix to get a world-space point, which is only usable after you divide by the w component.
Below is the fragment shader code I have that isn't working. Note inverse_proj is actually set to the inverse view projection matrix:
#version 450
uniform mat4 inverse_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;
void main()
{
    // Convert screen coordinates to normalized device coordinates (NDC)
    vec4 ndc = vec4(
        (gl_FragCoord.x / screen_width - 0.5) * 2,
        (gl_FragCoord.y / screen_height - 0.5) * 2,
        0,
        1);

    // Convert NDC through inverse clip coordinates to view coordinates
    vec4 clip = inverse_proj * ndc;
    vec3 view = (1 / ndc.w * clip).xyz;
    // ...
}
First, you convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates
While simultaneously ignoring the fact that NDC space is three-dimensional (as is window space). You also forgot that the transformation from clip space to NDC space involves a division, which you did not undo. Well, you did sort of try to undo it, but only after transforming by the inverse clip transformation.
Undoing the vertex post-processing transformations uses all four components of gl_FragCoord (though you could make do with just 3). The first step is undoing the viewport transform, which requires access to the parameters given to glViewport (the depth range is available in the shader through the built-in gl_DepthRange).
That gives you the NDC coordinate. Then you have to undo the perspective divide. gl_FragCoord.w holds the value 1/clipW, and clipW was the divisor in that operation, so dividing by gl_FragCoord.w gets you back into clip space.
From there, you can multiply by the inverse of the projection matrix. Though if you want world-space, the projection matrix you invert must be a world-to-projection, rather than just pure projection (which is normally camera-to-projection).
In-code:
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
           (gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;

vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
Where viewport is a uniform containing the four parameters specified by the glViewport function, in the same order as given to that function.
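If it helps, here is a small plain-Python sketch of those two undo steps (viewport transform, then perspective divide) round-tripping a made-up clip-space point; the numbers, viewport, and default [0, 1] depth range are all arbitrary choices for illustration.

```python
# A sketch (plain Python) of undoing the viewport transform and the
# perspective divide described above. Values are made up; the depth
# range is the default [0, 1].

def forward(clip, viewport, depth_range=(0.0, 1.0)):
    """clip -> NDC -> window coordinates, as OpenGL's fixed steps do."""
    x, y, z, w = clip
    ndc = (x / w, y / w, z / w)
    vx, vy, vw, vh = viewport
    n, f = depth_range
    win_x = (ndc[0] * 0.5 + 0.5) * vw + vx
    win_y = (ndc[1] * 0.5 + 0.5) * vh + vy
    win_z = ndc[2] * (f - n) / 2 + (f + n) / 2
    win_w = 1.0 / w                       # what gl_FragCoord.w holds
    return (win_x, win_y, win_z, win_w)

def reconstruct_clip(frag, viewport, depth_range=(0.0, 1.0)):
    """Undo the viewport transform, then the perspective divide."""
    fx, fy, fz, fw = frag
    vx, vy, vw, vh = viewport
    n, f = depth_range
    ndc_x = (2.0 * fx - 2.0 * vx) / vw - 1.0
    ndc_y = (2.0 * fy - 2.0 * vy) / vh - 1.0
    ndc_z = (2.0 * fz - n - f) / (f - n)
    # gl_FragCoord.w == 1/clipW, so dividing by it multiplies by clipW.
    return tuple(c / fw for c in (ndc_x, ndc_y, ndc_z, 1.0))

viewport = (0, 0, 800, 600)
clip = (1.2, -0.4, 2.0, 4.0)            # some clip-space point with w = 4
frag = forward(clip, viewport)
back = reconstruct_clip(frag, viewport)
print(back)  # recovers (1.2, -0.4, 2.0, 4.0) up to float error
```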
I figured out the problems with my code. First, as Nicol pointed out, gl_FragCoord.z (depth) needs to be converted from window coordinates back to NDC. Also, there was a mistake in the original code where I wrote 1 / ndc.w * clip instead of clip / clip.w.
As BDL noted, it would be more efficient to pass the world position as a varying to the fragment shader. Still, the code below is a short way to achieve the desired result entirely in the fragment shader (e.g. for screen-space programs that don't have a world position per fragment and you want the view vector per fragment).
#version 450
uniform mat4 inverse_view_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;
void main()
{
    // Convert screen coordinates to normalized device coordinates (NDC)
    vec4 ndc = vec4(
        (gl_FragCoord.x / screen_width - 0.5) * 2.0,
        (gl_FragCoord.y / screen_height - 0.5) * 2.0,
        (gl_FragCoord.z - 0.5) * 2.0,
        1.0);

    // Convert NDC through inverse clip coordinates to view coordinates
    vec4 clip = inverse_view_proj * ndc;
    vec3 vertex = (clip / clip.w).xyz;
    // ...
}
I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:
World-space positions with linearized depth as w-component.
World-space normal vectors
Noise 4x4 texture
I'll first list the complete shader and then briefly walk through the steps:
#version 330 core
in VS_OUT {
vec2 TexCoords;
} fs_in;
uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;
uniform vec3 samples[64];
uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))
const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;
void main( void )
{
    float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;

    // Fragment's view-space position and normal
    vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
    vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
    vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
    normal = normalize(normal * 2.0 - 1.0);
    normal = normalize(viewNormal * normal); // Normal from world to view-space

    // Use change-of-basis matrix to reorient sample kernel around origin's normal
    vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);

    // Loop through the sample kernel
    float occlusion = 0.0;
    for(int i = 0; i < 64; ++i)
    {
        // get sample position
        vec3 sample = tbn * samples[i]; // From tangent to view-space
        sample = sample * radius + origin;

        // project sample position (to get its position on the screen/texture)
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth
        float sampleDepth = texture(texPosDepth, offset.xy).w;

        // range check & accumulate
        // float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
        occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);
    }
    occlusion = 1.0 - (occlusion / 64.0f);

    gl_FragColor = vec4(vec3(occlusion), 1.0);
}
The result, however, is not right. The occlusion buffer is mostly all white and doesn't show any occlusion, but if I move really close to an object I can see some weird noise-like results, as you can see below:
This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.
I'll briefly walk through the steps (and choices) I've taken in case any of you figure something goes wrong in one of the steps.
view-space positions/normals
John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment I simply take those and multiply them with the view matrix to get them to view-space.
I take a similar approach for the normal vectors. I take the world-space normal vectors from a buffer texture, transform them to [-1,1] range and multiply them with transpose(inverse(mat3(..))) of view matrix.
The view-space position and normals are visualized as below:
This looks correct to me.
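As a sanity check on the transpose(inverse(...)) step used for the normals, here is a small plain-Python sketch (toy, made-up matrices) of why the naive transform fails under non-uniform scale while the inverse-transpose keeps the normal perpendicular to the surface:

```python
# A small sketch of why normals use transpose(inverse(M)) rather than M
# itself: with a non-uniform scale, transforming the normal by M breaks
# perpendicularity, while the inverse-transpose preserves it.

def mat3_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def mat3_inverse_transpose(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    # cofactor matrix / det == transpose(inverse(m))
    return [
        [(e*i - f*h)/det, (f*g - d*i)/det, (d*h - e*g)/det],
        [(c*h - b*i)/det, (a*i - c*g)/det, (b*g - a*h)/det],
        [(b*f - c*e)/det, (c*d - a*f)/det, (a*e - b*d)/det],
    ]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

scale = [[2.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]  # non-uniform scale on X
tangent = [1.0, 1.0, 0.0]          # a surface direction
normal = [1.0, -1.0, 0.0]          # perpendicular to it

t2 = mat3_vec(scale, tangent)                       # transform the surface
n_wrong = mat3_vec(scale, normal)                   # naive normal transform
n_right = mat3_vec(mat3_inverse_transpose(scale), normal)

print(dot(t2, n_wrong))  # 3.0 -- no longer perpendicular
print(dot(t2, n_right))  # 0.0 -- perpendicularity preserved
```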
Orient hemisphere around normal
The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:
std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0, randomFloats(generator) * 2.0 - 1.0, 0.0f);
noise = glm::normalize(noise);
ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
I can visualize the noise in the fragment shader so that seems to work.
sample depths
I transform all samples from tangent to view-space (the samples are random in [-1, 1] on the x and y axes and [0, 1] on the z-axis) and translate them to the fragment's current view-space position (origin).
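A plain-Python sketch of a sample kernel matching that description (the lerp-based falloff that clusters samples near the origin follows Chapman's tutorial; the exact constants here are illustrative):

```python
# Sketch of generating 64 hemisphere samples: x and y in [-1, 1],
# z in [0, 1], normalized, then scaled so samples cluster toward the
# origin. Constants (0.1 falloff floor, quadratic ramp) are assumptions
# based on the tutorial, not taken from the question's code.
import random

def lerp(a, b, t):
    return a + t * (b - a)

def make_kernel(n=64, seed=1):
    rng = random.Random(seed)
    kernel = []
    for i in range(n):
        s = [rng.uniform(-1.0, 1.0),
             rng.uniform(-1.0, 1.0),
             rng.uniform(0.0, 1.0)]
        length = sum(c * c for c in s) ** 0.5
        s = [c / length for c in s]                   # onto the hemisphere
        scale = lerp(0.1, 1.0, (i / float(n)) ** 2)   # cluster near origin
        kernel.append([c * scale for c in s])
    return kernel

kernel = make_kernel()   # 64 samples, all in the +z hemisphere
```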
I then sample from linearized depth buffer (which I visualize below when looking close to an object):
and finally compare sampled depth values to current fragment's depth value and add occlusion values. Note that I do not perform a range-check since I don't believe that is the cause of this behavior and I'd rather keep it as minimal as possible for now.
I don't know what is causing this behavior. I believe it is somewhere in sampling the depth values. As far as I can tell I am working in the right coordinate system, linearized depth values are in view-space as well and all variables are set somewhat properly.
I have a deferred renderer that I have created. It writes the normal and depth values to a floating point texture. From that I can get a specific fragment's position in view space. But I want to get the pixel's position in world space.
I thought that to get the pixel from VS to WS I would have to multiply it by the camera's inverse world matrix. That doesn't seem to be right though...
The depthMap is the depth texture, the w component is the clipPos.z / clipPos.w. (passed down from the vertex shader as clipPos = gl_Position)
Then in my screen quad shader I do this
vec2 texCoord = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
vec2 xy = texCoord * 2.0 - 1.0;
vec4 vertexPositionProjected = vec4( xy, depthMap.w, 1.0 );
vec4 vertexPositionVS = projectionInverseMatrix * vertexPositionProjected;
vertexPositionVS.xyz /= vertexPositionVS.w;
vertexPositionVS.w = 1.0;
// This next line I don't think is correct?
vec3 worldPosition = (camWorldInv * vec4( vertexPositionVS.xyz, 1.0 )).rgb;
The end goal here is to create a fog algorithm that bases the fog calculation both on the distance away from the camera as well as the fragment's height (in world space).
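For reference, here is a toy plain-Python sketch of the view-to-world step I'm attempting: the matrix that takes view space back to world space is the inverse of the view matrix (i.e. the camera's own world/transform matrix). The translation-only view matrix below is made up just to check the round trip.

```python
# A sketch (plain Python, made-up matrices) of the view -> world step:
# the inverse of the view matrix maps view-space points back to world
# space.

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

view = translation(0, 0, -5)         # camera sitting at world z = +5
view_inverse = translation(0, 0, 5)  # == the camera's world matrix

world_point = [1.0, 2.0, 3.0, 1.0]
view_point = mat_vec(view, world_point)       # [1, 2, -2, 1] in view space
recovered = mat_vec(view_inverse, view_point)

print(recovered)  # [1.0, 2.0, 3.0, 1.0]
```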
I have a 2D mode which displays moving sprites over the world. Each sprite has a rotation.
When I try to implement the same in a 3D world, over a sphere, I run into a problem calculating the sprite rotation so that the sprite looks like it is moving toward its direction of travel. I'm aware that the sprite is a billboard and the rotation will be 2D only, so it will not be 100% rotated toward the direction, but it should at least look reasonable to the eye.
I've tried taking the vector to the (world's) north into account in my rotation, but there are still a lot of cases, as the camera moves around the sphere, where the sprite arrow does not point in the direction of movement.
Can anyone direct me to a solution?
-------- ADDITION -----------
More explanation: I have a 2D world (x, y). In this world I have a point that moves in a direction (an angle is stored in the object). The rotations are calculated in the fragment shader, of course.
In the 3D world, I convert this (x, y) to an (x, y, z) by the simple sphere formula.
My sphere (world) origin is (0, 0, 0) with radius 1.
The angle (stored in the point for its direction of movement) is also used in 2D for rotating the texture (as shown above in the first image). The problem is the rotation of the texture in 3D: the rotation should take into account both the point's direction angle and the camera.
-------- ADDITION -----------
My fragment shader for 2D, in case it helps, plus a few more pictures of what I'm after.
varying vec2 TextureCoord;
varying vec2 TextureSize;
uniform sampler2D sampler;
varying float angle;
uniform vec4 uColor;
void main()
{
    vec2 calcedCoord = gl_PointCoord;
    float c = cos(angle);
    float s = sin(angle);
    vec2 trans = vec2(-0.5, -0.5);
    mat2 rot = mat2(c, s, -s, c);

    calcedCoord = calcedCoord + trans;
    calcedCoord = rot * calcedCoord;
    calcedCoord = calcedCoord - trans;

    vec2 realTexCoord = TextureCoord + (calcedCoord * TextureSize);
    vec4 fragColor = texture2D(sampler, realTexCoord);
    gl_FragColor = fragColor * uColor;
}
After struggling a lot with this issue I came to this solution.
Instead of attaching the direction angle to each sprite as an attribute, I send the next sprite location instead, and calculate the 2D angle in the vertex shader as follows:
varying float angle;
attribute vec3 nextPointAtt;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vec4 nextPnt = gl_ModelViewProjectionMatrix * vec4(nextPointAtt, gl_Vertex.w);

    vec2 ver = gl_Position.xy / gl_Position.w;
    vec2 nextVer = nextPnt.xy / nextPnt.w;

    vec2 d = nextVer - ver;
    angle = atan(d.y, d.x);
}
The angle will be used in the fragment shader (Look at my question for the fragment shader code).
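The project-then-atan step above can be sketched in plain Python with made-up post-divide coordinates, to show what angle the fragment shader receives:

```python
# A quick sketch (plain Python) of the angle computation in the vertex
# shader above: take the screen-space difference between two projected
# points (already divided by w) and feed it to atan2. The coordinates
# are made-up NDC values.
import math

ver = (0.0, 0.0)          # current sprite position after the w-divide
next_ver = (0.5, 0.5)     # next position, also after the w-divide

d = (next_ver[0] - ver[0], next_ver[1] - ver[1])
angle = math.atan2(d[1], d[0])
print(math.degrees(angle))  # ~45 degrees: up and to the right on screen
```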
I am trying to move an object depending on the camera position. Here is my vertex shader:
uniform mat4 osg_ViewMatrixInverse;
void main(){
    vec4 position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    vec3 camPos = osg_ViewMatrixInverse[3].xyz;

    if( camPos.z > 1000.0 )
        position.z = position.z + 1.0;
    if( camPos.z > 5000.0 )
        position.z = position.z + 10.0;
    if( camPos.z < 300.0 )
        position.z = position.z + 300.0;

    gl_Position = position;
}
But when the camera's vertical position is less than 300 or more than 1000 the model simply disappears, though in the second case it should be moved by just one unit. I read that inside the shader the coordinates are different from world coordinates, which is why I multiply by the projection and modelview matrices to get world coordinates. Maybe I am wrong at this point? Forgive me if it's a simple question, but I couldn't find the answer.
UPDATE: camPos is in world coordinates, but position is not. Maybe it has to do with the fact that I am using osg_ViewMatrixInverse (passed by OpenSceneGraph) to get the camera position, and the built-in gl_ProjectionMatrix and gl_ModelViewMatrix to get the vertex coordinates? How do I translate position into world coordinates?
The problem is that you are transforming the position into clip coordinates (by multiplying gl_Vertex by the projection and modelview matrices) and then performing a world-space operation on those clip coordinates, which does not give the results you want.
Simply perform your transformations before you multiply by the modelview and projection matrices:
uniform mat4 osg_ViewMatrixInverse;
void main() {
    vec4 position = gl_Vertex;
    vec3 camPos = osg_ViewMatrixInverse[3].xyz;

    if( camPos.z > 1000.0 )
        position.z = position.z + 1.0;
    if( camPos.z > 5000.0 )
        position.z = position.z + 10.0;
    if( camPos.z < 300.0 )
        position.z = position.z + 300.0;

    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * position;
}
gl_Position is in clip space; the value you output for each coordinate must be >= -gl_Position.w and <= gl_Position.w, or it will be clipped. If all of a primitive's coordinates are outside this range, nothing will be drawn. The reason is that after the vertex shader completes, OpenGL divides the clip-space coordinates by w to produce coordinates in the range [-1, 1] (NDC). Anything outside this volume will not be on screen.
What you should actually do here is add these coordinates to your object-space position and then perform the transformation from object-space to clip-space. Colonel Thirty Two's answer already does a very good job of showing how to do this; I just wanted to explain exactly why you should not apply this offset to the clip-space coordinates.
Figured it out:
uniform mat4 osg_ViewMatrixInverse;
uniform mat4 osg_ViewMatrix;
void main(){
    vec3 camPos = osg_ViewMatrixInverse[3].xyz;
    vec4 position_in_view_space = gl_ModelViewMatrix * gl_Vertex;
    vec4 position_in_world_space = osg_ViewMatrixInverse * position_in_view_space;

    if( camPos.z > 1000.0 )
        position_in_world_space.z = position_in_world_space.z + 700.0;
    if( camPos.z > 5000.0 )
        position_in_world_space.z = position_in_world_space.z + 1000.0;
    if( camPos.z < 300.0 )
        position_in_world_space.z = position_in_world_space.z + 200.0;

    position_in_view_space = osg_ViewMatrix * position_in_world_space;
    vec4 position_in_object_space = gl_ModelViewMatrixInverse * position_in_view_space;
    gl_Position = gl_ModelViewProjectionMatrix * position_in_object_space;
}
One needs to transform gl_Vertex (which is in object-space coordinates) into world coordinates by going through view-space coordinates (maybe there is a direct conversion I don't see); then one can modify them and transform back into object-space coordinates.
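The same round trip can be sketched in plain Python with toy translation matrices, to confirm the world-space offset ends up where intended once it is carried back to object space:

```python
# A sketch (plain Python, toy matrices) of the round trip in the shader
# above: object -> view -> world, offset Z in world space, then back
# world -> view -> object. The model and view matrices are made up.

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

model = translation(10, 0, 0)     # object placed at world x = 10
view = translation(0, 0, -5)      # camera at world z = +5
view_inv = translation(0, 0, 5)
model_inv = translation(-10, 0, 0)

vertex = [0.0, 0.0, 0.0, 1.0]                       # object-space vertex
in_view = mat_vec(mat_mul(view, model), vertex)     # gl_ModelViewMatrix step
in_world = mat_vec(view_inv, in_view)               # undo the view part
in_world[2] += 200.0                                # offset in world space
back_view = mat_vec(view, in_world)
back_object = mat_vec(model_inv, mat_vec(view_inv, back_view))

print(back_object)  # [0.0, 0.0, 200.0, 1.0] -- the offset, in object space
```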