In OpenGL you can linearize a depth value like so:
float linearize_depth(float d,float zNear,float zFar)
{
float z_n = 2.0 * d - 1.0;
return 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
(Source: https://stackoverflow.com/a/6657284/10011415)
However, Vulkan handles depth values somewhat differently (https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/). I don't quite understand the math behind it. What changes would I have to make to the function to linearize a depth value with Vulkan?
The important difference between OpenGL and Vulkan here is that the normalized device coordinates (NDC) have a different range for z (the depth). In OpenGL it's -1 to 1 and in Vulkan it's 0 to 1.
However, in OpenGL, the value that ends up in a depth texture has already been mapped to the 0 to 1 range by the depth-range (viewport) transform, so when you read from the texture you get a value between 0 and 1. That is the case in your example, since the first line of your function maps it back to -1 to 1.
In Vulkan, the depth is between 0 and 1 both in NDC and in the depth buffer, so the function above still gives the correct result. You can simplify it a bit, though:
float linearize_depth(float d,float zNear,float zFar)
{
return zNear * zFar / (zFar + d * (zNear - zFar));
}
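The simplification is just algebra: substituting z_n = 2.0 * d - 1.0 into the OpenGL formula and collecting terms in the denominator gives
2 * zNear * zFar / (zFar + zNear - (2*d - 1) * (zFar - zNear))
  = 2 * zNear * zFar / (2*zFar - 2*d * (zFar - zNear))
  = zNear * zFar / (zFar + d * (zNear - zFar))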
I have the following glm perspective projection
world.cameraProjection = glm::perspective(glm::radians(45.0f), app.aspectRatio, 0.0f, 100.0f);
world.cameraProjection = glm::scale(world.cameraProjection, world.scale);
world.cameraView = glm::translate(world.cameraProjection, world.camera);
And I would like to work out the value I have to use to draw a line that has pixel-perfect width.
I know the width of my screen, and I'm now trying to translate the fraction of the viewport that represents 1 pixel into a distance for the glm cameraView, so that even if the zoom changes, the line I'm drawing always appears the same size on the screen.
Is there a function in glm to do this?
When using perspective projection, the x- and y-distance that represents one pixel depends on the depth (z-distance).
A projected size in normalized device space can be transformed to a size in view space with:
aspect = width / height
tanFov = tan(fov_y / 2.0);
dist_x = ndc_dist_x * z_eye * tanFov * aspect;
dist_y = ndc_dist_y * z_eye * tanFov;
Since the NDC range of -1 to 1 spans the full viewport height, one pixel corresponds to an NDC distance of:
ndc_dist_y = 2.0 / height
So if you know the z-distance of an object, the formula is:
dist = z_eye * tan(fov_y / 2.0) * 2.0 / height
where fov_y is the vertical field of view in radians, height is the height of the viewport in pixels, and z_eye is the absolute distance from the object to the camera.
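As a rough illustration (a sketch only; pixelWorldSize and viewportHeight are made-up names, not from your code), the formula can be wrapped like this:
#include <cmath>

// Sketch: view-space length covered by one pixel vertically, at eye distance
// zEye, for a vertical field of view fovY (in radians) and a viewport that is
// viewportHeight pixels tall.
float pixelWorldSize(float zEye, float fovY, float viewportHeight)
{
    return zEye * std::tan(fovY * 0.5f) * 2.0f / viewportHeight;
}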
Consider using a parallel projection (orthographic projection) instead, where the size of the projection does not depend on the distance from the camera.
I am writing a simple path tracer and I want to make a preview with OpenGL.
But since the OpenGL pipeline uses a projection matrix, the images rendered with OpenGL and with path tracing are a bit different.
For OpenGL rendering I use the basic pipeline, where the final transformation matrix is:
m_transformation = PersProjTrans * CameraRotateTrans * CameraTranslationTrans * TranslationTrans * RotateTrans * ScaleTrans
Parameters for perspective projection are: 45.0f, WINDOW_WIDTH, WINDOW_HEIGHT, 0.01, 10000.0f.
And in my path tracer I generate rays from camera like this:
m_width_recp = 1.0 / m_width;
m_height_recp = 1.0 / m_height;
m_ratio = (double)m_width / m_height;
m_x_spacing = (2.0 * m_ratio) / (double)m_width;
m_y_spacing = (double)2.0 / (double)m_height;
m_x_spacing_half = m_x_spacing * 0.5;
m_y_spacing_half = m_y_spacing * 0.5;
x_jitter = (Xi1 * m_x_spacing) - m_x_spacing_half;
y_jitter = (Xi2 * m_y_spacing) - m_y_spacing_half;
Vec pixel = m_position + m_direction * 2.0;
pixel = pixel - m_x_direction * m_ratio + m_x_direction * (x * 2.0 * m_ratio * m_width_recp) + x_jitter;
pixel = pixel + m_y_direction - m_y_direction*(y * 2.0 * m_height_recp + y_jitter);
return Ray(m_position, (pixel-m_position).norm());
Here are images rendered by OpenGL (1) and the path tracer (2).
The problem is the distance between the scene and the camera: it's not constant. In some cases the path-traced image looks "closer" to the objects and in some cases vice versa.
I haven't found anything on Google.
Invert the view-projection matrix as used for OpenGL rendering and use that to transform screen pixel positions into rays for your ray tracer.
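As a rough sketch of that (assuming glm; makeCameraRay and the Ray constructor shown here are illustrative, not your existing code), using the same proj and view matrices as the OpenGL preview:
#include <glm/glm.hpp>

// Sketch: build the primary ray for pixel (x, y) by unprojecting two NDC
// points through the inverse view-projection matrix, so the path tracer sees
// exactly the same frustum as the OpenGL preview. Ray is assumed to take an
// origin and a normalized direction.
Ray makeCameraRay(const glm::mat4& proj, const glm::mat4& view,
                  int x, int y, int width, int height)
{
    glm::mat4 invVP = glm::inverse(proj * view);

    // pixel center -> NDC in [-1, 1]; flip ndcY if your pixel rows grow downwards
    float ndcX = ((x + 0.5f) / width)  * 2.0f - 1.0f;
    float ndcY = ((y + 0.5f) / height) * 2.0f - 1.0f;

    // unproject one point on the near plane and one on the far plane
    glm::vec4 p0 = invVP * glm::vec4(ndcX, ndcY, -1.0f, 1.0f);
    glm::vec4 p1 = invVP * glm::vec4(ndcX, ndcY,  1.0f, 1.0f);
    glm::vec3 origin = glm::vec3(p0) / p0.w;
    glm::vec3 dir    = glm::normalize(glm::vec3(p1) / p1.w - origin);

    return Ray(origin, dir);
}
Jitter for anti-aliasing can still be added to the pixel center before the NDC conversion.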
Alright, so I started looking up tutorials on OpenGL for the sake of making a Minecraft mod. I still don't know too much about it, because I figured I really shouldn't have to for the small modification I want to make, but this is giving me such a headache. All I want to do is be able to properly map a texture to an irregular concave quad.
Like this:
I went into OpenGL to figure out how to do this before I tried running code in the game. I've read that I need to do a perspective-correct transformation, and I've gotten it to work for trapezoids, but for the life of me I can't figure out how to do it if both pairs of edges aren't parallel. I've looked at this: http://www.imagemagick.org/Usage/scripts/perspective_transform, but I really don't have a clue where the "8 constants" this guy is talking about came from or if it will even help me. I've also been told to do calculations with matrices, but I've got no idea how much of that OpenGL does or doesn't take care of.
I've looked at several posts regarding this, and the answer I need is somewhere in those, but I can't make heads or tails of 'em. I can never find a post that tells me what arguments I'm supposed to put in the glTexCoord4f() method in order to have the perspective-correct transform.
If you're thinking of the "Getting to know the Q coordinate" page as a solution to my problem, I'm afraid I've already looked at it, and it has failed me.
Is there something I'm missing? I feel like this should be a lot easier. I find it hard to believe that OpenGL, with all its bells and whistles, would have nothing for making something other than a rectangle.
So, I hope I haven't pissed you off too much with my cluelessness, and thanks in advance.
EDIT: I think I need to make clear that I know OpenGL does the perspective transform for you when your view of the quad is not orthogonal. I'd know to just change the z coordinates or my FOV. I'm looking to smoothly texture a non-rectangular quadrilateral, not to put a rectangular shape in a certain FOV.
OpenGL will do a perspective-correct transform for you. I believe you're simply facing the issue of quad vs. triangle interpolation. The difference between affine and perspective-correct transforms is related to the geometry being in 3D, where the interpolation in image space is non-linear. Think of looking down a road: the evenly spaced lines appear more frequent in the distance. Anyway, back to triangles vs. quads...
Here are some related posts:
How to do bilinear interpolation of normals over a quad?
Low polygon cone - smooth shading at the tip
https://gamedev.stackexchange.com/questions/66312/quads-vs-triangles
Applying color to single vertices in a quad in opengl
An answer to this one provides a possible solution, but it's not simple:
The usual approach to solve this, is by performing the interpolation "manually" in a fragment shader, that takes into account the target topology, in your case a quad. Or in short you have to perform barycentric interpolation not based on a triangle but on a quad. You might also want to apply perspective correction.
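To make that more concrete, here is a minimal sketch of the bilinear part of such an interpolation (plain C++ with glm for illustration; in a real renderer this math would run per fragment in the shader, and it omits the perspective-correction term the quote mentions):
#include <glm/glm.hpp>

// Sketch: bilinearly interpolate the four corner texture coordinates of a quad.
// uv is the local quad coordinate in [0,1]x[0,1]; t00, t10, t01, t11 are the
// texcoords at the corners (u=0,v=0), (1,0), (0,1) and (1,1).
glm::vec2 quadTexCoord(glm::vec2 uv,
                       glm::vec2 t00, glm::vec2 t10,
                       glm::vec2 t01, glm::vec2 t11)
{
    glm::vec2 bottom = glm::mix(t00, t10, uv.x);
    glm::vec2 top    = glm::mix(t01, t11, uv.x);
    return glm::mix(bottom, top, uv.y);
}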
The first thing you should know is that nothing is easy with OpenGL. It's a very complex state machine with a lot of quirks and a poor interface for developers.
That said, I think you're confusing a lot of different things. To draw a textured rectangle with perspective correction, you simply draw a textured rectangle in 3D space after setting the projection matrix appropriately.
First, you need to set up the projection you want. From your description, you need to create a perspective projection. In OpenGL, there are usually two main matrices you're concerned with: projection and model-view. The projection matrix is sort of like your "camera".
How you do the above depends on whether you're using legacy OpenGL (below version 3.0) or core profile (modern, 3.0 or greater) OpenGL. This page describes two ways to do it, depending on which you're using.
// Builds a standard OpenGL perspective projection matrix (column-major) in m; fov is in degrees.
void BuildPerspProjMat(float *m, float fov, float aspect, float znear, float zfar)
{
float xymax = znear * tan(fov * PI_OVER_360);
float ymin = -xymax;
float xmin = -xymax;
float width = xymax - xmin;
float height = xymax - ymin;
float depth = zfar - znear;
float q = -(zfar + znear) / depth;
float qn = -2 * (zfar * znear) / depth;
float w = 2 * znear / width;
w = w / aspect;
float h = 2 * znear / height;
m[0] = w;
m[1] = 0;
m[2] = 0;
m[3] = 0;
m[4] = 0;
m[5] = h;
m[6] = 0;
m[7] = 0;
m[8] = 0;
m[9] = 0;
m[10] = q;
m[11] = -1;
m[12] = 0;
m[13] = 0;
m[14] = qn;
m[15] = 0;
}
and here is how to use it in OpenGL 1 / OpenGL 2 code:
float m[16] = {0};
float fov=60.0f; // in degrees
float aspect=1.3333f;
float znear=1.0f;
float zfar=1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(m);
// okay we can switch back to modelview mode
// for all other matrices
glMatrixMode(GL_MODELVIEW);
With modern OpenGL 3.0+ code, we must use GLSL shaders and uniform variables to pass the transformation matrices:
float m[16] = {0};
float fov=60.0f; // in degrees
float aspect=1.3333f;
float znear=1.0f;
float zfar=1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);
glUseProgram(shaderId);
GLint projMatLoc = glGetUniformLocation(shaderId, "projMat");
glUniformMatrix4fv(projMatLoc, 1, GL_FALSE, m);
RenderObject();
glUseProgram(0);
Since I've not used Minecraft, I don't know whether it gives you a projection matrix to use or if you have the other information to construct it.
I need to write a function which takes a sub-rectangle from a 2D texture (non-power-of-two) and copies it to a destination sub-rectangle of an output 2D texture, using a shader (no glSubImage or similar).
Also, the source and the destination may not have the same size, so I need to use linear filtering (or even mipmapping).
void CopyToTex(GLuint dest_tex,GLuint src_tex,
GLuint src_width,GLuint src_height,
GLuint dest_width,GLuint dest_height,
float srcRect[4],
GLuint destRect[4]);
Here srcRect is in normalized 0-1 coordinates, that is, the rectangle [0,1]x[0,1] touches the centers of the border pixels of the input texture.
To achieve a good result when the source and destination dimensions don't match, I want to use GL_LINEAR filtering.
I want this function to behave in a coherent manner, i.e. calling it multiple times with many subrects shall produce the same result as one invocation with the union of the subrects; that is, the linear sampler should sample the exact centers of the input pixels.
Moreover, if the input rectangle fits the destination rectangle exactly, an exact copy should occur.
This seems to be particularly hard.
What I've got now is something like this:
//Setup RTT, filtering and program
float vertices[4] = {
float(destRect[0]) / dest_width * 2.0 - 1.0,
float(destRect[1]) / dest_height * 2.0 - 1.0,
//etc..
};
float texcoords[4] = {
(srcRect[0] * (src_width - 1) + 0.5) / src_width - 0.5 / dest_width,
(srcRect[1] * (src_height - 1) + 0.5) / src_height - 0.5 / dest_height,
(srcRect[2] * (src_width - 1) + 0.5) / src_width + 0.5 / dest_width,
(srcRect[3] * (src_height - 1) + 0.5) / src_height + 0.5 / dest_height,
};
glBegin(GL_QUADS);
glTexCoord2f(texcoords[0], texcoords[1]);
glVertex2f(vertices[0], vertices[1]);
glTexCoord2f(texcoords[2], texcoords[1]);
glVertex2f(vertices[2], vertices[1]);
//etc...
glEnd();
To write this code I followed the information from this page.
This seems to work as intended in some corner cases (exact copy, copying a row or a column of one pixel).
My hardest test case is to perform an exact copy of a 2xN rectangle when both the input and output textures are bigger than 2xN.
I probably have some problem with offsets and scaling (the trivial ones don't work).
Solution:
The 0.5 / dest_width (and 0.5 / dest_height) part in the definition of the texcoords was wrong.
An easy way to work around it is to remove that part completely.
float texcoords[4] = {
(srcRect[0] * (src_width - 1) + 0.5) / src_width,
(srcRect[1] * (src_height - 1) + 0.5) / src_height,
(srcRect[2] * (src_width - 1) + 0.5) / src_width,
(srcRect[3] * (src_height - 1) + 0.5) / src_height
};
Instead, we draw a smaller quad by offsetting the vertices by:
float dx = 1.0 / (dest_rect[2] - dest_rect[0]) - epsilon;
float dy = 1.0 / (dest_rect[3] - dest_rect[1]) - epsilon;
// assume glTexCoord for every vertex
glVertex2f(vertices[0] + dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[3] - dy);
glVertex2f(vertices[0] + dx, vertices[3] - dy);
In this way we draw a quad which passes through the exact center of every border pixel.
Since OpenGL may or may not draw the border pixels in this case, we need the epsilons.
I believe that my original solution (don't offset vertex coords) can still work, but it needs a bit of extra math to compute the right offsets for the texcoords.
I am reading the depth buffer of a scene; however, as I rotate the camera I notice that towards the edges of the screen the depth is returned as closer to the camera. I think the angle of incidence has an effect on the depth buffer, but since I am drawing a quad to the framebuffer, I do not want this to happen (this is not actually the case of course, but it sums up what I need).
I linearize the depth with the following:
float linearize(float depth) {
float zNear = 0.1;
float zFar = 40.0;
return (2.0 * zNear) / (zFar + zNear - depth * (zFar - zNear));
}
I came up with the following to correct for this, but it's not quite right yet. 45.0 is half the vertical angle of the camera, and side is the distance from the center of the screen.
const float angleVert = 45.0 / 180.0 * 3.17;
float sideAdjust(vec2 coord, float depth) {
float angA = cos(angleVert);
float side = (coord.y - 0.5);
if (side < 0.0) side = -side;
side *= 2.0;
float depthAdj = angA * side;
return depth / depthAdj;
}
To illustrate the problem, here is a drawing of the depth results for a flat surface in front of the camera:
c
/ | \
/ | \
/ | \
closer further closer
This is what I have; what I need is:
c
| | |
| | |
| | |
even even even
One way to do it would be to find the position P in eye space. Consider P as a vector from the origin to the point. Project P onto the eye direction vector (which in eye space is always (0,0,-1)). The length of the projected vector is what you need.
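A minimal sketch of that idea (host-side C++ with glm for clarity; invProjection is assumed to be the inverse of the projection matrix used to render the scene, and depth is the [0,1] value read from the depth texture):
#include <glm/glm.hpp>

// Sketch: reconstruct the eye-space position P from a depth-buffer sample and
// its normalized screen coordinate, then take the length of P projected onto
// the view direction (0,0,-1), i.e. -P.z.
float eyeAxisDistance(glm::vec2 screenUV, float depth, const glm::mat4& invProjection)
{
    glm::vec4 ndc(screenUV * 2.0f - 1.0f,   // x, y in [-1, 1]
                  depth * 2.0f - 1.0f,      // OpenGL [0,1] depth -> NDC z
                  1.0f);
    glm::vec4 eye = invProjection * ndc;    // unproject
    eye /= eye.w;
    return -eye.z;                          // projection of P onto (0,0,-1)
}
For a flat surface facing the camera this value is the same for every pixel, which is the "even, even, even" behaviour from the second sketch.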