Passing data into a vertex shader for perspective divide - OpenGL

In OpenGL and GLSL, I am just learning about perspective projection and the vertex shader. However, I am a little confused about what data actually needs to be passed to the vertex shader, and what needs to be done in the shader code itself.
I have two questions:
Suppose I have a triangle defined in 3D coordinates (x,y,z). Do I need to pass a 4D vector with values (x,y,z,w), where w = z? Or do I just pass the 3D vector? The reason I ask is that I know that somewhere in the pipeline, the x and y coordinates are divided by the w component, in the perspective divide.
In the vertex shader code, do I need to manually divide the x and y components by the w component myself? Or is this taken care of automatically?
Thanks!

The OpenGL implementation does the perspective divide in between the vertex and fragment shaders.
You can input whatever you want into a vertex shader; the perspective divide happens after the vertex shader, on the gl_Position variable, which is a vec4.
(I don't know how tessellation shaders fit into this)
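For example, a minimal vertex shader along these lines (attribute and uniform names are illustrative): you pass a plain 3D position, output a vec4 gl_Position, and the divide by w is done for you afterwards.

```glsl
#version 330 core
layout(location = 0) in vec3 inPosition;   // plain 3D position; no need to precompute w
uniform mat4 uModelViewProjection;         // assumed combined model-view-projection matrix

void main()
{
    // Output clip coordinates; OpenGL divides x, y and z by w after the vertex shader.
    gl_Position = uModelViewProjection * vec4(inPosition, 1.0);
}
```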

As we know, there are six coordinate spaces in OpenGL:
model/object space -- homogeneous model coordinates: (x, y, z, 1) in model units
`model-world transform`
world space -- world coordinates: (x, y, z, 1) in world units
`world-view transform`
view/camera/eye space -- eye coordinates: (x, y, z, 1) in world units
`perspective projection` / `orthographic projection`
clip space -- homogeneous clip coordinates: (x, y, z, w); this is what the vertex shader actually hands to OpenGL (gl_Position)
`perspective division`
NDC space -- normalized device coordinates: x, y, z each in (-1, 1) with the default conventions
`viewport transform`
window space -- window coordinates: (x, y) in pixels, depth coordinate: z in (0, 1)
So the answers are:
Q1: You just pass the 3D position (or equivalently (x, y, z, 1)); the projection matrix converts those eye coordinates into clip coordinates (x, y, z, w), so you do not set w = z yourself.
Q2: No, you don't. Everything after the clip-space output (gl_Position) is done automatically by OpenGL, and you cannot control it in the vertex shader.
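As a sketch, here is how those spaces typically appear in a vertex shader (uniform and attribute names are illustrative, not mandated by OpenGL):

```glsl
#version 330 core
layout(location = 0) in vec3 inPosition;   // model/object space
uniform mat4 uModel;        // model-world transform
uniform mat4 uView;         // world-view transform
uniform mat4 uProjection;   // perspective or orthographic projection

void main()
{
    vec4 worldPos = uModel * vec4(inPosition, 1.0); // world space, w stays 1
    vec4 eyePos   = uView  * worldPos;              // view/eye space, w stays 1
    gl_Position   = uProjection * eyePos;           // clip space: w now carries the divisor
    // Perspective division, NDC and window coordinates are produced by OpenGL after this point.
}
```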

Related

OpenGL: Mapping texture on a sphere using spherical coordinates

I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/sqrt(length(modelPositionVarying.xyz)))/M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I also would do if the model itself has texture coordinates, I get the result as shown in the image below. The vertices are shown as points and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinates (u) of two adjacent red/blue vertices have values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0, but nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam between two texture-coordinate topologies, where you want the texture coordinate to wrap seamlessly from 1.0 back to 0.0, requires the mesh to handle it explicitly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates will have a 0.0 texture coordinate and will be connected to the vertices coming from the right (in your example). The other will have a 1.0 texture coordinate and will be connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
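A minimal sketch of that alternative, assuming the (unnormalized) vertex normal is passed down as a varying; the varying name, the sampler name and the M_PI define are illustrative, not taken from the question:

```glsl
// --- fragment shader ---
#define M_PI 3.14159265358979323846
varying vec3 normalVarying;        // interpolated vertex normal (for a unit sphere, the position direction)
uniform sampler2D earthTexture;    // assumed sampler holding the earth texture

void main()
{
    vec3 n = normalize(normalVarying);                 // renormalize after interpolation
    float u = atan(n.y, n.x) / (2.0 * M_PI) + 0.5;     // longitude -> [0, 1], same mapping as in the question
    float v = acos(n.z) / M_PI;                        // colatitude -> [0, 1]
    gl_FragColor = texture2D(earthTexture, vec2(u, v));
}
```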

OpenGL mixed perspective and orthographic projection

How can one mix orthographic and perspective projection in OpenGL?
Some 2D elements have to be drawn in screen space (no scaling, rotation, etc.).
These 2D elements have a z position; they have to appear in front of/behind other 3D elements.
So I set up an orthographic projection, draw all 2D elements, then set up a perspective projection and draw all 3D elements.
The result is that all 2D elements are drawn on top. It seems that the z values from the ortho projection and the z values from the perspective projection are not compatible (GL_DEPTH_TEST).
Separately, all 2D and all 3D elements work fine; the problem appears when I try to mix them.
Does the perspective projection change the z values? In what way?
Is it possible to use z values from the ortho projection mixed with z values from the perspective projection for depth testing, or is this whole concept flawed?
Bare OpenGL 1.5.
It seems that the z values from the ortho projection and the z values from the perspective projection are not compatible (GL_DEPTH_TEST).
That is indeed the case. The perspective transformation maps Z values nonlinearly to depth buffer values. The usual way to address this is to copy the depth buffer into a depth texture after the perspective pass, use it as an additional input in the fragment shader of the orthographically drawn geometry, undo the nonlinearity on the sampled depth, compare the incoming Z coordinate against it, and discard the fragment appropriately.
It's also possible to emit linear depth values from the fragment shaders of the perspective-drawn geometry; however, the depth nonlinearity of perspective projection has its purpose: without it you lose depth precision where it matters most, close to the point of view.
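A minimal sketch of the first approach for the orthographic pass's fragment shader. This assumes shader support (i.e. going beyond the bare OpenGL 1.5 in the question), that the perspective pass's depth buffer has been copied into a depth texture, and that the default glDepthRange of [0, 1] is in use; all uniform and varying names are illustrative:

```glsl
// --- fragment shader for the orthographically drawn 2D elements ---
uniform sampler2D uDepthTex;   // depth buffer of the perspective pass, copied into a texture
uniform float uNear;           // near plane of the perspective projection
uniform float uFar;            // far plane of the perspective projection
uniform vec2  uViewportSize;   // viewport size in pixels
varying float vEyeDistance;    // linear eye-space distance of this 2D element, set in its vertex shader

// Undo the perspective nonlinearity: window-space depth [0, 1] -> linear eye-space distance.
float linearizeDepth(float d)
{
    float ndcZ = d * 2.0 - 1.0;
    return 2.0 * uNear * uFar / (uFar + uNear - ndcZ * (uFar - uNear));
}

void main()
{
    vec2 uv = gl_FragCoord.xy / uViewportSize;                        // sample the 3D scene's depth at this pixel
    float sceneDistance = linearizeDepth(texture2D(uDepthTex, uv).r);
    if (vEyeDistance > sceneDistance)
        discard;                                                      // the 2D element is behind the 3D scene here
    gl_FragColor = vec4(1.0);                                         // otherwise shade the 2D element as usual
}
```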

GLSL vertex shader gl_Position value

I'm creating a game that uses an orthographic view (2D). I'm trying to understand the value of gl_Position in the vertex shader.
From what I understand, the x and y coordinates translate to screen position in the range -1 to 1, but I'm quite confused about the role of z and w; I only know that the w value should be set to 1.0.
For the moment I just use gl_Position.xyw = vec3(Position, 1.0);, where Position is the 2D vertex position.
I use OpenGL 3.2.
Remember that OpenGL must also work for 3D, and it's easier to expose the 3D details than to create a new interface for just 2D.
The Z component sets the depth of the vertex. Points outside [-1, 1] (after the perspective divide) will not be drawn; values within [-1, 1] are checked against the depth buffer to see whether the fragment is behind some previously drawn triangle, and it is not drawn if it should be hidden.
The w component is for the perspective divide and allows the GPU to interpolate values in a perspective-correct way; otherwise textures look weird.
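For instance, a minimal 2D vertex shader along those lines (GLSL 1.50 to match OpenGL 3.2; uLayerDepth is an assumed per-element uniform, not something the question defines):

```glsl
#version 150
in vec2 Position;           // 2D vertex position, already in the -1..1 range as in the question
uniform float uLayerDepth;  // assumed: depth in [-1, 1], used only for the depth test

void main()
{
    // w = 1.0 so the perspective divide changes nothing; z decides which 2D element is in front.
    gl_Position = vec4(Position, uLayerDepth, 1.0);
}
```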

In GLSL, what is the formula used to compute gl_FragCoord from gl_Position?

Please correct me if I'm wrong.
When using vertex and pixel shaders, we usually provide the code to compute the output gl_Position of the vertex shader.
Then, we find ourselves with the input gl_FragCoord in the pixel shader.
What are the names of the operations performed by OpenGL to compute gl_FragCoord from gl_Position? Is it correct that those are "projection" and the "clip coordinates transform"?
Then, what exactly are the transformations performed during those operations?
In other words, what is the mathematical relation between gl_FragCoord and gl_Position that I could use to replace the OpenGL function?
Thank you very much for any contribution.
gl_Position is in post-projection homogeneous coordinates.
It's worth noting that gl_Position is the (4D, clip-space) position of a vertex, while gl_FragCoord is the window-space position of a fragment (its xy components are pixel coordinates; it also carries the fragment depth in z).
The operations that happen in between are
primitive assembly (e.g. to generate a triangle from 3 vertices)
clipping (i.e. cutting the triangle into multiple triangles that are all inside the view, if it does not initially fit)
perspective divide (x, y, z divided by w to produce normalized device coordinates)
viewport application
rasterization (take those triangles and generate covered fragments from them)
So, while you can write down the formula that maps a vertex position to the 2D space of the viewport, it's not in and of itself that useful, as the generated fragments do not align directly with the vertex positions. The formula to get the 2D window coordinate of a vertex is
2d_coord_vertex = viewport.xy + viewport.wh * (1 + gl_Position.xy / gl_Position.w)/2
Again, this is not gl_FragCoord. Check the details on rasterization in the GL specification if you want more in-depth knowledge.
I'm not sure exactly what you mean by "replace the OpenGL function", but rasterizing is non-trivial and way beyond the scope of an SO answer.
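As an illustration only (not a drop-in replacement for the rasterizer), a small GLSL-style helper implementing the formula above, where viewport is assumed to hold (x, y, width, height) as returned by glGetIntegerv(GL_VIEWPORT, ...):

```glsl
// Map a clip-space position (gl_Position) to 2D window coordinates.
vec2 clipToWindow(vec4 clipPos, vec4 viewport)
{
    vec2 ndc = clipPos.xy / clipPos.w;                    // perspective divide -> normalized device coordinates
    return viewport.xy + viewport.zw * (ndc + 1.0) * 0.5; // viewport transform: [-1, 1] -> pixels
}
```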

Fragment Shader Eye-Space unscaled depth coordinate

I'm trying to use the unscaled (true distance from the front clipping plane) distance to objects in my scene in a GLSL fragment shader. The gl_FragCoord.z value is smaller than I expect. In my vertex shader, I just use ftransform() to set gl_Position. I'm seeing values between 2 and 3 when I expect them to be between 15 and 20.
How can I get the real eye-space depth?
Thanks!
Pass whatever you want down as a varying from the vertex shader.
The Z value available in the fragment shader has gone through a normalization based on your z-near/z-far (from the projection matrix) and glDepthRange, so it is not directly what you're after. Technically, you could try to reconstruct it by reverting the various OpenGL operations on Z that happen after the vertex shader, but it's probably more trouble (starting with the fact that reverting the projection is non-linear) than just passing down exactly what you want.
As a side note, the Z you would compute with gl_ModelViewMatrix * gl_Vertex is measured from the point of view (the camera), not from the near clipping plane.
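A minimal sketch combining both points, in legacy GLSL to match the ftransform() style of the question: pass the eye-space Z down as a varying, then offset by the near-plane distance to measure from the front clipping plane. vEyeDistance and uNear are illustrative names:

```glsl
// --- vertex shader ---
varying float vEyeDistance;                       // assumed varying name

void main()
{
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex; // eye space: camera at origin, looking down -Z
    vEyeDistance = -eyePos.z;                     // positive distance from the camera along the view axis
    gl_Position = ftransform();
}

// --- fragment shader ---
varying float vEyeDistance;
uniform float uNear;                              // assumed uniform: distance to the near clipping plane

void main()
{
    // Distance measured from the front (near) clipping plane, as asked in the question.
    float distFromNear = vEyeDistance - uNear;
    gl_FragColor = vec4(vec3(distFromNear / 20.0), 1.0);  // visualize, scaled to the expected ~20 unit range
}
```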