I would like to output in my pixelshader the actual camera distance per pixel. This means, the result should (aside from some slight variations in precision and clamping) not depend upon the near/far clipping planes.
Also, a plane lying exactly on the near plane (and parallel to it) should therefore not output a uniform result, because pixels in the middle of the nearplane are closer to the camera than pixels on the border.
How can I calculate the actual camera distance for each pixel in a pixelshader? (actual shader language doesn't matter)
You send the camera position in a uniform.
You send the position to the fragment shader in a varying.
You use the built-in distance function with those vectors.
It looks something like the sketch below; don't bother overcomplicating it with the .w if you don't need it.
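A minimal GLSL sketch of that idea, assuming a world-space setup; the names model, inPosition, worldPos and cameraPos are illustrative, not from the original answer:
// vertex shader
uniform mat4 model;              // assumed model matrix uniform
in vec3 inPosition;
out vec3 worldPos;
...
worldPos = (model * vec4(inPosition, 1.0)).xyz;
// fragment shader
uniform vec3 cameraPos;          // world-space camera position, set by the application
in vec3 worldPos;
...
float dist = distance(worldPos, cameraPos);   // actual per-pixel camera distance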
Alternatively, multiply the vertex position by the ModelView matrix, and use length() on that.
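A sketch of that eye-space variant (the camera sits at the eye-space origin, so no camera uniform is needed):
// vertex shader
varying vec3 eyePos;
...
eyePos = (gl_ModelViewMatrix * gl_Vertex).xyz;
// fragment shader
varying vec3 eyePos;
...
float dist = length(eyePos);     // distance to the camera, independent of near/far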
I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see, the EAC projection shrinks the pixels in the middle a little and expands them near the corners, so that they cover a more nearly equal area.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom-line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
Your problem is that the geometry on which the environment is placed is too small: you are not looking at a distant environment but at the inside of a small cube that you are sitting in. An environment map should behave as if you are always at its center and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader; after the perspective division the depth is then exactly 1.0, which is the depth of the far plane (with swizzling this is simply gl_Position = clipPos.xyww).
It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated cube vertices. For a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube directly to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;   // z == w, so depth is 1.0 (the far plane) after the perspective division
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
I am drawing a stack of decals on a quad. Same geometry, different textures. Z-fighting is the obvious result. I cannot control the rendering order or use glPolygonOffset due to batched rendering. So I adjust depth values inside the vertex shader.
gl_Position = uMVPMatrix * pos;
gl_Position.z += aDepthLayer * uMinStep * gl_Position.w;
gl_Position holds clip coordinates. That means a change in z will move a vertex along its view ray and bring it to the front or push it to the back. To get normalized device coordinates, the clip coordinates are divided by gl_Position.w (which, for a standard perspective projection, equals -z_eye). As a result the depth buffer does not have a linear distribution and has higher resolution towards the near plane. By premultiplying by gl_Position.w that should be compensated, and I should be able to apply a flat amount (uMinStep) in NDC.
That minimum step should be something like 1/(2^GL_DEPTH_BITS - 1). Or, since NDC space goes from -1.0 to 1.0, it might have to be twice that amount. However it does not work with these values. The minStep is roughly 0.00000006, but it does not bring a texture to the front. Nor does doubling that value. If I drop a zero (scale by 10), it works. (Yay, that's something!)
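For reference, the arithmetic behind that estimate, assuming a 24-bit depth buffer and the default glDepthRange(0, 1) (assumptions not stated in the question):
depth_window = 0.5 * z_ndc + 0.5
one depth-buffer step = 1 / (2^24 - 1) ≈ 0.00000006
corresponding NDC step: delta_z_ndc = 2 * 0.00000006 ≈ 0.00000012
corresponding clip-space step: delta_z_clip = delta_z_ndc * gl_Position.w (which is what the premultiplication accounts for)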
But it does not work evenly along the frustum. A value that brings a texture in front of another while the quad is close to the near plane does not necessarily do the same when the quad is close to the far plane. The same effect happens when I make the frustum deeper. I would expect that behaviour if I was changing eye coordinates, because of the nonlinear z-Buffer distribution. But it seems that premultiplying gl_Position.w is not enough to counter that.
Am I missing some part of the transformations that happen to clip coords? Do I need to use a different formula in general? Do I have to include the depth range [0,1] somehow?
Could the different behaviour along the frustum be a result of nonlinear floating point precision instead of nonlinear z-Buffer distribution? So maybe the calculation is correct, but the minStep just cannot be handled correctly by floats at some point in the pipeline?
The general question: How do I calculate a z-Shift for gl_Position (clip coordinates) that will create a fixed change in the depth buffer later? How can I make sure that the z-Shift will bring one texture in front of another no matter where in the frustum the quad is placed?
Some material:
OpenGL depth buffer faq
https://www.opengl.org/archives/resources/faq/technical/depthbuffer.htm
The same, with more readable formulas (but beware of some typos)
https://www.opengl.org/wiki/Depth_Buffer_Precision
Calculation from eye coordinates to the z-buffer. Most of that already happens when I multiply by the projection matrix.
http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
Explanation about the elements in the projection matrix that turn into the A and B parts in most depth buffer calculation formulas.
http://www.songho.ca/opengl/gl_projectionmatrix.html
As far as I understand, the location of a point/pixel cannot be a fraction, at least on a raster graphics system where the hardware uses pixels to display images.
Then, why and how does OpenGL use fractional values for plotting pixels?
For example, how is it possible: glVertex2f(0.15f, 0.51f); ?
This command does not plot any pixels. It merely defines the location of a point in 3D space (you'll notice that there are 3 coordinates, while for a pixel on the screen you'd only need 2). This is the starting point for the OpenGL pipeline. This point then goes through a lot of transformations before it ends up on the screen.
Also, the coordinates are unitless. For example, if you decide that your viewport spans 0.0f to 1.0f, these coordinates make a lot of sense. Basically you have to think of these points in terms of mathematics, not pixels.
I would suggest some reading on how OpenGL transformations work, for example here, here or the tutorial here.
The vectors you pass into OpenGL are not viewport positions but arbitrary numbers in some vector space. Only after a chain of transformations are these numbers mapped to viewport pixel positions. With the old fixed-function pipeline this could be anything that can be represented by a vector-matrix multiplication.
These days, where everything is programmable (shaders) the mapping can very well be any kind of function you can think of. For example the values you pass into glVertex (immediate mode call, but available to shaders with OpenGL-2.1) may be interpreted as polar coordinates in the vertex shader:
This is a perfectly valid OpenGL-2.1 vertex shader that interprets the vertex position as being in polar coordinates. Note that because triangles and lines have straight edges while polar coordinates are curvilinear, this gives good visual results only for points or highly tessellated primitives.
#version 110
void main() {
gl_Position =
gl_ModelViewProjectionMatrix
* vec4( gl_Vertex.y*vec2(sin(gl_Vertex.x),cos(gl_Vertex.x)) , 0, 1);
}
As you can see, the values passed to glVertex are actually arbitrary, unitless components of vectors in some vector space. Only by applying some transformation to viewport space do these vectors gain meaning. Hence it makes no sense to impose a certain value range on the values that go into the vertex attribute.
Vertex and pixel are very different things.
It's quite possible to have all your vertices within one pixel (although in this case you probably need help with LODing).
You might want to start here...
http://www.glprogramming.com/blue/ch01.html
Specifically...
Primitives are defined by a group of one or more vertices. A vertex defines a point, an endpoint of a line, or a corner of a polygon where two edges meet. Data (consisting of vertex coordinates, colors, normals, texture coordinates, and edge flags) is associated with a vertex, and each vertex and its associated data are processed independently, in order, and in the same way.
And...
Rasterization produces a series of frame buffer addresses and associated values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed into the last stage, per-fragment operations, which performs the final operations on the data before it's stored as pixels in the frame buffer.
For your example, before glVertex2f(0.15f, 0.51f) ends up on the screen, there are many transforms to be done. Making a complex thing crudely simple: after moving your vertex to view space (applying the camera position and direction), the magic here is (1) the projection matrix, and (2) the viewport setting.
Internally, OpenGL "screen coordinates" (normalized device coordinates) are in a cube from (-1, -1, -1) to (1, 1, 1):
http://www.matrix44.net/cms/wp-content/uploads/2011/03/ogl_coord_object_space_cube.png
The projection matrix 'squeezes' the frustum into this cube (which you do in the vertex shader), assuming you have a perspective transform; if the projection is orthographic, the viewing volume is just a box, limited by the near and far values (and, in both cases, scaling factors):
http://www.songho.ca/opengl/files/gl_projectionmatrix01.png
EDIT: Maybe better example here:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#The_Projection_matrix
(EDIT: the Z coordinate is used as the depth value.) When fragments are finally transferred to pixels on a texture/framebuffer/screen, their coordinates are scaled and offset according to the viewport settings:
https://www3.ntu.edu.sg/home/ehchua/programming/opengl/images/GL_2DViewportAspectRatio.png
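As a concrete worked example (assuming identity model-view and projection matrices and glViewport(0, 0, 800, 600), which are not part of the question):
x_window = (0.15 * 0.5 + 0.5) * 800 = 460
y_window = (0.51 * 0.5 + 0.5) * 600 = 453
so glVertex2f(0.15f, 0.51f) would end up near pixel (460, 453).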
Hope this helps!
I'm having a bit of trouble getting a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I am storing the depth in the alpha component of a floating-point render target. The code for that shader looks something like this:
Define the clip position as a varying
varying vec4 clipPos;
...
In the vertex shader assign the position
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shader I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When the fogNear is set to 0, I would have thought I get a smooth transition of fog from right in front of the camera to its draw distance. However this is what I see:
When I set the fogNear to 0.995, then I get something more like what Im expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is neither very small nor too large, and neither are the camera near and far values. All the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off directly writing the window-space z coordinate to your depth texture, which is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case. The NDC and window-space z values are not a linear representation of the distance to the camera plane. It is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. However, since you only need the z coordinate here, you could simply store the clip-space w coordinate, since typically that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near, far] range you use in your projection matrix; but specifying fog distances in eye-space units (which normally are identical to world-space units) is more intuitive anyway (see the sketch below).
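A minimal sketch of that second suggestion, reusing the clipPos varying from the question (fogNear and fogFar would then be given in eye-space units; depthMap.w mirrors the question's own notation):
// first pass: store eye-space depth instead of NDC z
gl_FragColor.w = clipPos.w;    // for a standard perspective projection this is -z_eye, in [near, far]
// later pass: fog based on eye-space depth
float depth = depthMap.w;
fogFactor = smoothstep(fogNear, fogFar, depth);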
Is there a way using OpenGL or GLUT to project a point from the model-view matrix into an associated texture matrix? If not, is there a commonly used library that achieves this? I want to modify the texture of an object according to a ray cast in 3D space.
The simplest case would be:
A ray is cast which intersects a quad, mapped with a single texture.
The point of intersection is converted to a value in texture space, clamped to [0.0, 1.0] in the x and y axes.
A 3x3 patch of pixels centered around the rounded value of the resulting texture point is set to an alpha value of 0 (or another RGBA value convenient for the desired effect).
To illustrate here is a more complex version of the question using a sphere, the pink box shows the replaced pixels.
I just specify texture points for mapping in OpenGL; I don't actually know how the pixels are projected onto the sphere. Basically I need to do the inverse of that projection, but I don't quite know how to do that math, especially for more complex shapes like a sphere or an arbitrary convex hull. I assume that you can somehow find the planar polygon of the shape that the ray is intersecting, and from there the inverse projection of a quad or triangle would be trivial.
Some equations, articles and/or example code would be nice.
There are a few ways you could accomplish what you're trying to do:
Project a world-coordinate point into normalized device coordinates (NDCs) by doing the model-view and projection matrix multiplications yourself (or, if you're using old-style OpenGL, call gluProject), and perform the perspective division step. If you use a depth coordinate of zero, this would correspond to intersecting your ray at the imaging plane. The only other correction you'd need is to map from NDCs (which are in the range [-1,1] in x and y) into texture space by dividing the resulting coordinate by two and then shifting by .5 (see the sketch after this list).
Skip the ray tracing all together, and bind your texture as a framebuffer attachment to a framebuffer object, and then render a big point (or sprite) that modifies the colors in the neighborhood of the intersection as you want. You could use the same model-view and projection matrices, and will (probably) only need to update the viewport to match the texture resolution.
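A sketch of the first option's math in GLSL syntax (the same expressions work on the CPU with a library such as GLM; projection, modelView and worldPoint are assumed names):
vec4 clip = projection * modelView * vec4(worldPoint, 1.0);
vec3 ndc  = clip.xyz / clip.w;           // perspective division; x and y are in [-1,1]
vec2 tex  = ndc.xy * 0.5 + 0.5;          // divide by two, shift by .5 -> [0,1] texture space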
So I found a solution that is a little complicated, but does the trick.
For complex geometry you must determine which quad or triangle was intersected, and use this as the plane. The quad must be planar (obviously).
Draw a plane under the identity transform with dimensions 1x1x0, mapping the texture onto points identical to the model geometry.
Transform the plane, and store the inverse of each transform matrix in a stack
Find the point at which the plane is intersected.
Transform this point using the inverse matrix stack until it returns to the identity transform (it should then have no depth).
Convert this point from 1x1 space into pixel space by multiplying the point by the number of pixels and rounding. Or start your 2D combining logic here.
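A sketch of steps 4-6 in GLSL syntax (the same math works on the CPU, e.g. with GLM; invStack, hitPoint, texWidth and texHeight are assumed names, with invStack being the product of the stored inverse transforms):
vec4 local  = invStack * vec4(hitPoint, 1.0);                  // undo the stored transforms; z should be ~0
vec2 uv     = clamp(local.xy, 0.0, 1.0);                       // position on the 1x1 plane
ivec2 texel = ivec2(round(uv * vec2(texWidth, texHeight)));    // pixel coordinates in the texture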