GLSL vertex shader gl_Position value - opengl

I'm creating a game that uses an orthographic view (2D). I'm trying to understand the value of gl_Position in the vertex shader.
From what I understand, the x and y coordinates map to a screen position in the range -1 to 1, but I'm quite confused about the role of z and w; I only know that the w value should be set to 1.0.
For the moment I just use gl_Position.xyw = vec3(Position, 1.0);, where Position is the 2D vertex position.
I use OpenGL 3.2.

Remember that OpenGL must also work for 3D, and it's easier to expose the 3D details than to create a separate interface just for 2D.
The Z component sets the depth of the vertex: points outside -1..1 (after the perspective divide) will not be drawn, and values between -1 and 1 are tested against the depth buffer to see whether the fragment is behind some previously drawn triangle, in which case it is discarded.
The w component is used for the perspective divide and lets the GPU interpolate values in a perspective-correct way. Otherwise textures look weird.
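For a purely 2D setup, a minimal vertex shader might look like the sketch below; the attribute name Position is taken from the question and assumed to already be in the -1..1 range. Note that writing only gl_Position.xyw leaves z undefined, so it is safer to write all four components:
#version 150 core

in vec2 Position;   // assumed: 2D position already in the -1..1 range

void main(void) {
    // z = 0.0 puts the vertex in the middle of the depth range;
    // w = 1.0 means the perspective divide leaves x, y and z unchanged,
    // so no perspective effect occurs.
    gl_Position = vec4(Position, 0.0, 1.0);
}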

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading: even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact, no 2D projection of any part of a sphere can be truly equiangular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in a GLSL fragment shader:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead refocus your effort on alternative ways of authoring your environment map. I recommend trying an equidistant cylindrical projection for the horizon, capping it with solid colors above/below a fixed latitude, and using billboards for objects that cannot be represented in that projection.
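For illustration, sampling such an equidistant cylindrical horizon strip in a fragment shader could look roughly like the sketch below. The sampler name, the cap colors and the cut-off latitude are assumptions, not part of the original setup, and z is treated as the up axis as in the snippets above:
#version 400 core
in vec3 cube_edge;
out vec3 color;

uniform sampler2D horizon_sampler;   // hypothetical 2D panorama strip
uniform vec3 sky_color;              // assumed solid color above the cap latitude
uniform vec3 ground_color;           // assumed solid color below it

const float PI = 3.14159265358979;
const float cap_latitude = 0.35 * PI;   // assumed cut-off above/below the horizon

void main(void) {
    vec3 dir = normalize(cube_edge);
    float lat = asin(clamp(dir.z, -1.0, 1.0));   // latitude, z up
    float lon = atan(dir.y, dir.x);              // longitude around the horizon

    if (lat > cap_latitude) {
        color = sky_color;
    } else if (lat < -cap_latitude) {
        color = ground_color;
    } else {
        // Equidistant cylindrical: u proportional to longitude, v to latitude.
        vec2 uv = vec2(lon / (2.0 * PI) + 0.5,
                       lat / (2.0 * cap_latitude) + 0.5);
        color = texture(horizon_sampler, uv).rgb;
    }
}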
Your problem is that the geometry on which the environment is mapped is too small. You are not looking at the environment but at the inside of a small cube in which you are sitting. The environment map should behave as if you were always at the center of the map and the environment were infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position will be 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)
It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated cube vertices. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube directly for the texture lookup.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
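Fleshed out, the vertex shader above might look like the following sketch; the attribute and uniform names (inVertex, projection, view) are assumptions, and the view matrix would typically have its translation stripped so the cube stays centered on the camera:
#version 400 core

in vec3 inVertex;            // assumed cube vertex attribute

out vec3 cube_edge;

uniform mat4 projection;     // assumed uniform names
uniform mat4 view;           // usually with the translation removed

void main(void) {
    cube_edge = inVertex;
    vec4 clipPos = projection * view * vec4(inVertex, 1.0);
    // z is swizzled to w so that after the perspective divide z/w == 1.0,
    // i.e. the cube always lands on the far plane.
    gl_Position = clipPos.xyww;
}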

OpenGL: Mapping texture on a sphere using spherical coordinates

I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/sqrt(length(modelPositionVarying.xyz)))/M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I also would do if the model itself has texture coordinates, I get the result as shown in the image below. The vertices are shown as points and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinates (u) of two adjacent red/blue vertices have values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0, but nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam between two texture-coordinate topologies, where you want the texture coordinate to wrap seamlessly from 1.0 back to 0.0, requires the mesh to handle it explicitly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates will have a 0.0 texture coordinate and will be connected to the vertices coming from the right (in your example). The other will have a 1.0 texture coordinate and will be connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
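A minimal sketch of that per-fragment approach, assuming the interpolated model-space position is available as a varying (the names modelPositionVarying and earthTexture are placeholders; for a unit sphere the normalized position doubles as the normal), using the same longitude/latitude formula as above:
#version 330 core

in vec3 modelPositionVarying;   // interpolated model-space position of the unit sphere
out vec4 fragColor;

uniform sampler2D earthTexture; // assumed equirectangular earth texture

const float PI = 3.14159265358979;

void main(void) {
    vec3 p = normalize(modelPositionVarying);
    // Computed per fragment, so the 1.0 -> 0.0 wrap happens between
    // neighbouring pixels instead of being interpolated across a triangle.
    vec2 uv = vec2(atan(p.y, p.x) / (2.0 * PI) + 0.5,
                   acos(p.z) / PI);
    fragColor = texture(earthTexture, uv);
}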

Passing data into a vertex shader for perspective divide

In OpenGL and GLSL, I am just learning about perspective projection and the vertex shader. However, I am a little confused about what data actually needs to be passed to the vertex shader, and what needs to be done in the shader code itself.
I have two questions:
Suppose I have a triangle defined in 3D coordinates (x,y,z). Do I need to pass a 4D vector with values (x,y,z,w), where w = z? Or do I just pass the 3D vector? The reason I ask is that I know that somewhere in the pipeline, the x and y coordinates are divided by the w component, in the perspective divide.
In the vertex shader code, do I need to manually divide the x and y components by the w component myself? Or is this taken care of automatically?
Thanks!
The OpenGL implementation does the perspective divide in between the vertex and fragment shaders.
You can input whatever you want into a vertex shader; the perspective divide happens after the vertex shader, on the gl_Position variable, which is a vec4.
(I don't know how tessellation shaders fit into this)
As we know, there are 6 spaces in OpenGL:
model/object space -- homogeneous model coordinates: (x, y, z, 1) in true units
`model-world transform`
world space -- world coordinates: (x, y, z, 1) in true units
`world-view transform`
view/camera/eye space -- eye coordinates: (x, y, z, 1) in true units
`perspective projection` / `orthographic projection`
clip space -- homogeneous clip coordinates: (x, y, z, w); this is what you actually output from the vertex shader (gl_Position)
`perspective division`
NDC space -- normalized device coordinates: x -> (-1, 1), y -> (-1, 1), z -> (-1, 1)
`viewport transform`
window space -- window coordinates: (x, y) in pixels, depth coordinate: z -> (0, 1)
So the answer is:
Q1: you just pass the 3D vector; the projection matrix automatically turns (x, y, z, 1) into clip coordinates (x, y, z, w), so you do not set w = z yourself.
Q2: no, you don't. Everything after the clip-space output (the perspective divide and the viewport transform) is done automatically by OpenGL, and you cannot control it from the vertex shader.
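A minimal vertex shader illustrating both answers might look like the sketch below (attribute and uniform names are assumptions): you pass a plain 3D position, multiply by the matrices, and write the resulting vec4 to gl_Position; the divide by w is done later by the fixed-function stage:
#version 330 core

in vec3 position;            // plain 3D vertex position, no w needed from the application

uniform mat4 modelView;      // assumed uniform names
uniform mat4 projection;

void main(void) {
    // The projection matrix produces the clip-space w component;
    // OpenGL performs the perspective divide by w after this shader runs.
    gl_Position = projection * modelView * vec4(position, 1.0);
}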

Sampling data from a shadow map texture using automatic comparison via the texture2D function

I've got a sampler2DShadow in my shader and I want to use it to implement shadow mapping. My shadow texture has the proper parameters set, with GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE and GL_TEXTURE_COMPARE_FUNC set to GL_LEQUAL (meaning that the comparison should return 1 if the r value of my coordinates is less than or equal to the depth value fetched from the texture). This texture is bound to the GL_DEPTH_ATTACHMENT of an FBO rendered in light-space coordinates.
What coordinates should I give the texture2D function in my final fragment shader? I currently have a
smooth in vec4 light_vert_pos
declared in my fragment shader, which is computed in the vertex shader as
light_vert_pos = light_projection_camera_matrix*modelview*in_Vertex;
I would assume I could multiply my lighting by the value
texture2D(shadowmap,(light_vert_pos.xyz)/light_vert_pos.w)
but this does not seem to work. Since light_vert_pos is only in post-projective coordinates (the matrix used to create it is the matrix I use to create the depth buffer in the FBO), should I manually clamp the three x/y/z components to [0,1]?
You don't say how you generated your depth values. So I'll assume you generated your depth values by rendering triangles using normal projection. That is, you transform the geometry to camera space, transform it to projection space, and let the rasterization pipeline handle things from there as normal.
In order to make shadow mapping work, your texture coordinates must match what the rasterizer did.
The output of a vertex shader is clip-space. From there, you get the perspective divide, followed by the viewport transform. The latter uses the values from glViewport and glDepthRange to compute the window-space XYZ. The window-space Z is the depth value written to the depth buffer.
Note that this is all during the depth pass: the generation of the depth values for the shadow map.
However, you can take some shortcuts. If your glViewport range was set to the same size as the texture (which is generally how it's done), then you can ignore the viewport transform. You will still need the glDepthRange you used in the depth pass.
In your fragment shader, you can perform the perspective divide, which puts the coordinates in normalized device coordinate (NDC) space. That space is [-1, 1] in all directions. Your texture coordinates are [0, 1], so you need to divide the X and Y by two and add 0.5 to them:
vec3 ndc_space_values = light_vert_pos.xyz / light_vert_pos.w;
vec3 texCoords;
texCoords.xy = ndc_space_values.xy * 0.5 + 0.5;
To compute the Z value, you need to know the near and far values you use for glDepthRange.
texCoords.z = ((f-n) * 0.5) * ndc_space_values.z + ((n+f) * 0.5);
Where n and f are the glDepthRange near and far values. You can of course precompute some of these and pass them as uniforms. Or, if you use the default range of near=0 and far=1, you get
texCoords.z = ndc_space_values.z * 0.5 + 0.5;
Which looks familiar somehow.
Aside:
Since you defined your inputs with in rather than varying, you have to be using GLSL 1.30 or above. So why are you using texture2D (which is an old function) rather than texture?
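Putting the pieces together (and using texture() rather than texture2D(), as the aside suggests), the lookup could be sketched like this; the default depth range of near=0, far=1 is assumed, and the output is just the raw comparison result:
#version 330 core

smooth in vec4 light_vert_pos;     // clip-space position in the light's projection

uniform sampler2DShadow shadowmap;

out vec4 fragColor;

void main(void) {
    // Perspective divide into NDC, then remap [-1, 1] to [0, 1].
    vec3 ndc_space_values = light_vert_pos.xyz / light_vert_pos.w;
    vec3 texCoords = ndc_space_values * 0.5 + 0.5;   // assumes glDepthRange(0, 1) in the depth pass

    // With GL_COMPARE_R_TO_TEXTURE and GL_LEQUAL the lookup returns 1.0
    // where texCoords.z <= stored depth, i.e. the fragment is lit.
    float lit = texture(shadowmap, texCoords);
    fragColor = vec4(vec3(lit), 1.0);
}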

Fragment Shader Eye-Space unscaled depth coordinate

I'm trying to use the unscaled (true distance from the front clipping plane) distance to objects in my scene in a GLSL fragment shader. The gl_FragCoord.z value is smaller than I expect. In my vertex shader, I just use ftransform() to set gl_Position. I'm seeing values between 2 and 3 when I expect them to be between 15 and 20.
How can I get the real eye-space depth?
Thanks!
Pass whatever you want down as a varying from the vertex shader.
The Z value available in the fragment shader has gone through normalization based on your z-near/z-far (from the projection matrix) and glDepthRange, so it is not directly what you're after. Technically, you could try to reconstruct it by reverting the various operations OpenGL performs on Z after the vertex shader, but that's probably more trouble (starting with the fact that reverting the projection is non-linear) than just passing down exactly what you want.
As a side note, the Z you would compute with gl_ModelViewMatrix * gl_Vertex is measured from the point of view (the eye), not from the near plane.
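A sketch of the suggested approach in the same compatibility-profile style as the question (ftransform, built-in matrices); the varying name eyeDepth and the scale factor are assumptions:
Vertex shader:
varying float eyeDepth;

void main(void) {
    // Eye space looks down the negative Z axis, so negate to get a positive
    // distance from the eye; subtract the near-plane distance if you want the
    // distance from the front clipping plane instead.
    eyeDepth = -(gl_ModelViewMatrix * gl_Vertex).z;
    gl_Position = ftransform();
}
Fragment shader:
varying float eyeDepth;

void main(void) {
    // eyeDepth is the interpolated eye-space distance (e.g. 15 to 20 units here);
    // 20.0 is an arbitrary scale just for visualization.
    gl_FragColor = vec4(vec3(eyeDepth / 20.0), 1.0);
}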