Heightmap rendering with quads - opengl

I am trying to design a "low poly" terrain renderer, which takes in a heightmap and draws a tile/quad for each point (rather than a pixel/vertex).
I am having some trouble finding a way to "stitch" the quads together such that the centre of the quad is at the height specified in the heightmap.
What I have done so far is to simply set one vertex equal to the height of the "current" quad and the rest to the heights of the adjacent quads, for example something like:
Quad # (x, y) with height from heightmap as "height(x,y)":
B   C
|---|      y
|   |      ^
|---|      |
A   D      --> x
With vertex heights:
heightA = height(x, y)
heightB = height(x, y + 1)
heightC = height(x + 1, y + 1)
heightD = height(x + 1, y)
Though this does join the quads together, it means that corner vertex A of the quad gets the specified height rather than the centre.
tl;dr: Currently a corner vertex of the quad has the sampled height; I would like the centroid of the quad to have this height instead.

Set
heightA = (height(x-1, y-1) + height(x-1, y) + height(x, y-1) + height(x, y))/4
heightB = (height(x-1, y) + height(x-1, y+1) + height(x, y) + height(x, y+1))/4
heightC = (height(x, y) + height(x, y+1) + height(x+1, y) + height(x+1, y+1))/4
heightD = (height(x, y-1) + height(x, y) + height(x+1, y-1) + height(x+1, y))/4
I.e. sample the height map at half-pixel locations with a bilinear interpolation. This has the same effect as box-blurring the height map, which is not such a good thing. You might use a different kernel, but essentially you cannot do much better.
The typical solution is to place the mesh vertices at the centers of the height-map texels; this way you preserve the height-map resolution without any spatial shifts.
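For reference, a rough C++ sketch of the averaging approach above (assuming a border-clamped lookup with the same signature as the height(x, y) used in the question); each corner value is computed once and shared by the four quads that meet there:
#include <functional>

// cornerHeight(height, cx, cy) averages the four height-map samples around corner (cx, cy),
// i.e. the samples of quads (cx-1, cy-1), (cx-1, cy), (cx, cy-1) and (cx, cy).
float cornerHeight(const std::function<float(int, int)>& height, int cx, int cy)
{
    return 0.25f * (height(cx - 1, cy - 1) + height(cx - 1, cy) +
                    height(cx,     cy - 1) + height(cx,     cy));
}

// For quad (x, y):
// heightA = cornerHeight(height, x,     y);
// heightB = cornerHeight(height, x,     y + 1);
// heightC = cornerHeight(height, x + 1, y + 1);
// heightD = cornerHeight(height, x + 1, y);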

OpenGL matrix setup for tiled rendering

After reading datenwolf's 2011 answer concerning tile-based render setup in OpenGL, I attempted to implement his solution. The source image looks like this (at 800 x 600).
The resulting image with 2x2 tiles, each tile at 800 x 600, looks like this.
As you can see they don't exactly match, though I can see something vaguely interesting has happened. I'm sure I've made an elementary error somewhere but I can't quite see it.
I'm doing 4 passes where:
w, h are 2,2 (2x2 tiles)
x, y are (0,0) (1,0) (0,1) and (1,1) in each of the 4 passes
MyFov is 1.30899692 (75 degrees)
MyWindowWidth, MyWindowHeight are 800, 600
MyNearPlane, MyFarPlane are 0.1, 200.0
The algorithm to calculate the frustum for each tile is:
auto aspect = static_cast<float>(MyWindowWidth) / static_cast<float>(MyWindowHeight);
auto right = -0.5f * Math::Tan(MyFov) * MyShaderData.Camera_NearPlane;
auto left = -right;
auto top = aspect * right;
auto bottom = -top;
auto shift_X = (right - left) / static_cast<float>(w);
auto shift_Y = (top - bottom) / static_cast<float>(h);
auto frustum = Math::Frustum(left + shift_X * static_cast<float>(x),
                             left + shift_X * static_cast<float>(x + 1),
                             bottom + shift_Y * static_cast<float>(y),
                             bottom + shift_Y * static_cast<float>(y + 1),
                             MyShaderData.Camera_NearPlane,
                             MyShaderData.Camera_FarPlane);
where Math::Frustum is:
template<class T>
Matrix4x4<T> Frustum(T left, T right, T bottom, T top, T nearPlane, T farPlane)
{
    Matrix4x4<T> r(InitialiseAs::InitialiseZero);
    r.m11 = (static_cast<T>(2) * nearPlane) / (right - left);
    r.m22 = (static_cast<T>(2) * nearPlane) / (top - bottom);
    r.m31 = (right + left) / (right - left);
    r.m32 = (top + bottom) / (top - bottom);
    r.m33 = -(farPlane + nearPlane) / (farPlane - nearPlane);
    r.m34 = static_cast<T>(-1);
    r.m43 = -(static_cast<T>(2) * farPlane * nearPlane) / (farPlane - nearPlane);
    return r;
}
For completeness, my Matrix4x4 layout is:
struct
{
    T m11, m12, m13, m14;
    T m21, m22, m23, m24;
    T m31, m32, m33, m34;
    T m41, m42, m43, m44;
};
Can anyone spot my error?
Edit:
So derhass explained it to me: a much easier way of doing things is to simply scale and translate the projection matrix. For testing I scaled up by 2x and modified my translation matrix as follows (changing the translation for each tile):
auto scale = Math::Scale(2.f, 2.f, 1.f);
auto translate = Math::Translate(0.5f, 0.5f, 0.f);
auto projection = Math::Perspective(MyFov,
                                    static_cast<float>(MyWindowWidth) / static_cast<float>(MyWindowHeight),
                                    MyShaderData.Camera_NearPlane,
                                    MyShaderData.Camera_FarPlane);
MyShaderData.Camera_Projection = scale * translate * projection;
The resulting image is below (stitching 4 together) - the discontinuities in the image are caused by the post processing I think, so that's another issue I might have to deal with at some point.
This isn't a real answer to the question, but it might be a useful alternative approach to what you are trying to solve here. In my opinion, datenwolf's solution in his answer to the stackoverflow question you are referring to is more complicated than it needs to be, so I'm presenting my alternative here.
Foreword: I assume standard OpenGL matrix conventions, so that the vertex transformation with matrix M is done as v' = M * v (like the fixed-function pipeline did).
When a scene is rendered with some projection matrix P, you can extract any axis-aligned sub-rectangle of said scene by applying a scale and translation operation after the projection matrix is applied.
The key point is that the viewing volume is defined as the [-1,1]^3 cube in NDC space. The clip space (which is what P transforms the data to) is just the homogeneous representation of that volume. As the typical 4x4 transformation matrices all work in homogeneous space, we don't really need to care about w at all and can simply define the transformations as if we were in NDC space.
Since you only need some 2D tiling, z should be left as-is, and only some scale and translation in x and y is required. When composing transformations A and B into a single Matrix C as C=A*B, following the aforementioned conventions this results in B being applied first, and A last (since C*v == A*B*v == A*(B*v)). So to modify the result after projection, we have to pre-multiply some transformations to P and we are done:
P'=S(sx,sy,1) * T(tx,ty,0) * P
The construction of P' will work with any valid projection matrix P, no matter if it is a perspective or ortho transform. In the ortho case, what this does is quite clear. In the perspective case, this actually modifies both the field of view and also shifts the frustum to an asymmetric one.
When you want to tile the image into a grid of m times n segments, it is clear that sx=m and sy=n. As I did use the S * T order (by choice), T is applied before the scale, so for each tile, (tx,ty) is just the vector moving the center of the tile to the new center (which will be the origin). As NDC space is 2 units wide and tall, for a tile x,y, the transformation is
tx = -(-1 + 2/(2*m) + (2/m) * x)
ty = -(-1 + 2/(2*n) + (2/n) * y)
//     ^    ^         ^
//     |    |         |
//     |    |         +-- size of each tile in NDC space
//     |    |
//     |    +-- half the size (as the center offset)
//     |
//     +-- left/bottom border of NDC space
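As a small sketch of the above, here is how P' could be built for tile (x, y) of an m x n grid, reusing the Matrix4x4 type and the Math::Scale / Math::Translate helpers that appear in the question's edit (their exact signatures are assumed), with the v' = M * v convention:
Matrix4x4<float> TileProjection(const Matrix4x4<float>& P, int m, int n, int x, int y)
{
    // move the centre of tile (x, y) to the NDC origin...
    float tx = -(-1.0f + 2.0f / (2 * m) + (2.0f / m) * x);
    float ty = -(-1.0f + 2.0f / (2 * n) + (2.0f / n) * y);
    // ...then scale the tile up so it fills the whole [-1,1] range
    return Math::Scale(static_cast<float>(m), static_cast<float>(n), 1.0f) *
           Math::Translate(tx, ty, 0.0f) * P;
}
For the 2x2 case in the question this reproduces the scale of 2 and the 0.5 translations used above.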

Aligning coordinate systems

Let's say I have 2 coordinate systems, as shown in the attached image.
How can I align these coordinate systems? I know that I need to rotate the second coordinate system around X by 180 degrees and then translate it to (0, 0) of the first coordinate system, but I am having trouble doing it and getting wrong results. I will really appreciate any detailed answer.
EDIT: Actually, (0, 0) of the second coordinate system is at the same point as Y of the first coordinate system.
The important piece of information is where the second coordinate system's origin is, namely at (a, b).
Once you know that, all you need is:
QPointF pos1; // original position
QTransform t;
t.scale(1, -1);
t.translate(a, -b+1);
QPointF pos2 = pos1 * t;
You have to find correct values of a,b,c,d,Ox,Oy in:
X = a * x + b * y + Ox
Y = c * x + d * y + Oy
where x,y are the coordinates of the point in one system and X,Y in the other one.
In your case, a = 1, b = 0, c = 0, d = -1.
Ox,Oy is the offset between the Origins.
see https://en.wikipedia.org/wiki/Change_of_basis
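A minimal sketch of that mapping with the values above (a = 1, b = 0, c = 0, d = -1); Ox and Oy stand for whatever origin offset applies in your case:
struct Point { float x, y; };

// X = a*x + b*y + Ox,  Y = c*x + d*y + Oy,  with a = 1, b = 0, c = 0, d = -1
Point toFirstSystem(Point p, float Ox, float Oy)
{
    return { p.x + Ox, -p.y + Oy };
}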
Here's the problem with rotating 180 degrees: it affects not only the direction of your Y coordinate but also X.
Y              X <--+
^                   |
|      =>           v
+--> X              Y
What you probably meant to do was translate to the point at Y and then invert your Y coordinate. You can do this like so:
Translate Y to new origin
Scale (1, -1)
Y              +--> X
^              |
|      =>      v
+--> X         Y
After more thought, I have to wonder, why are you doing this transformation in the first place? Is it because of differences in OpenGL coordinates and the coordinates Qt uses for its window?
If this is the case, you can actually alter your projection matrix... If you're using an orthographic matrix, for instance:
Try glOrtho (0, X, 0, Y, -1, 1); instead of the traditional glOrtho (0, X, Y, 0, -1, 1);
If you opt to do this, you will probably need to change the winding direction of your polygon "front faces". OpenGL defaults to CCW, change it to glFrontFace (GL_CW) and this should prevent weird lighting / culling behavior.
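As a rough sketch, that suggestion amounts to the following fixed-function setup (X and Y being the viewport width and height, as in the glOrtho call above):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, X, 0, Y, -1, 1);   // bottom = 0, top = Y: the Y axis now points up
glMatrixMode(GL_MODELVIEW);

glFrontFace(GL_CW);           // front faces wind the other way once Y is flipped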

How would I get this to UV map correctly?

Alright, so I have my code to draw out a big landscape using C++ and DirectX. I had it textured with one texture and then needed to add more. I saw people doing it with a single texture image that contained two textures, so that's what I made: a 256x128 image. My problem now is that my terrain automatically generated the UV coordinates for one texture, so it is now displaying both textures. I need to make it so that when the height of the world is high enough it uses one texture, and everything under that uses the other. My code for the UV coordinates:
Vertices[y * WIDTH * x].U = x / 1.28;
Vertices[y * WIDTH * x].V = y / 1.28;
Those are my mapping coordinates; X is the current X value of the vertex being drawn and Y is its current y position. The heightmap is 128x128, so I divided by 1.28 to make it so that each polygon had the texture UV mapped onto it. The height is calculated as well, since I am loading a heightmap, and I'm trying to get it so that when a point is high enough it UV maps one half of the image, and otherwise it UV maps the other half. Someone please help!
bool topTexture = height[x][y] > threshold;
float u = x / 1.28;
float v = y / 1.28;
Vertices[y * WIDTH * x].U = (u - (int)u) / 2 + (topTexture ? 0.5 : 0);
Vertices[y * WIDTH * x].V = (v - (int)v) / 2 + (topTexture ? 0.5 : 0);
You may want to blend the two textures at the threshold level; in that case, you have to do it in the pixel shader.

Generating a normal map from a height map?

I'm working on procedurally generating patches of dirt using randomized fractals for a video game. I've already generated a height map using the midpoint displacement algorithm and saved it to a texture. I have some ideas for how to turn that into a texture of normals, but some feedback would be much appreciated.
My height texture is currently a 257 x 257 gray-scale image (height values are scaled for visibility purposes):
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
So given the 3D coordinates of A, B, C, and D, would it make sense to:
split the four into two triangles: ABC and BCD
calculate the normals of those two faces via cross product
split into two triangles: ACD and ABD
calculate the normals of those two faces
average the four normals
...or is there a much easier method that I'm missing?
Example GLSL code from my water surface rendering shader:
#version 130
uniform sampler2D unit_wave;
noperspective in vec2 tex_coord;
const vec2 size = vec2(2.0, 0.0);   // the two samples of each derivative are 2 texels apart
const ivec3 off = ivec3(-1, 0, 1);  // texel offsets for the neighbour fetches

vec4 wave = texture(unit_wave, tex_coord);
float s11 = wave.x;                                          // centre height
float s01 = textureOffset(unit_wave, tex_coord, off.xy).x;   // left  (-1,  0)
float s21 = textureOffset(unit_wave, tex_coord, off.zy).x;   // right (+1,  0)
float s10 = textureOffset(unit_wave, tex_coord, off.yx).x;   // below ( 0, -1)
float s12 = textureOffset(unit_wave, tex_coord, off.yz).x;   // above ( 0, +1)
vec3 va = normalize(vec3(size.xy, s21 - s01));               // tangent along x
vec3 vb = normalize(vec3(size.yx, s12 - s10));               // tangent along y
vec4 bump = vec4(cross(va, vb), s11);
The result is a bump vector: xyz=normal, a=height
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
No. Each pixel of the image represents a vertex of the grid, so intuitively, from symmetry, its normal is determined by heights of neighboring pixels (i-1,j), (i+1,j), (i,j-1), (i,j+1).
Given a function f : ℝ² → ℝ that describes a surface in ℝ³, a unit normal at (x,y) is given by
v = (−∂f/∂x, −∂f/∂y, 1) and n = v/|v|.
It can be proven that the best approximation to ∂f/∂x from two samples is achieved by:
∂f/∂x(x,y) = (f(x+ε,y) − f(x−ε,y))/(2ε)
To get a better approximation you need to use at least four points, thus adding a third point (i.e. (x,y)) doesn't improve the result.
Your heightmap is a sampling of some function f on a regular grid. Taking ε=1 you get:
2v = (f(x−1,y) − f(x+1,y), f(x,y−1) − f(x,y+1), 2)
Putting it into code would look like:
// sample the height map:
float fx0 = f(x-1,y), fx1 = f(x+1,y);
float fy0 = f(x,y-1), fy1 = f(x,y+1);
// the spacing of the grid in same units as the height map
float eps = ... ;
// plug into the formulae above:
vec3 n = normalize(vec3((fx0 - fx1)/(2*eps), (fy0 - fy1)/(2*eps), 1));
A common method is using a Sobel filter for a weighted/smooth derivative in each direction.
Start by sampling a 3x3 area of heights around each texel (here, [4] is the pixel we want the normal for).
[6][7][8]
[3][4][5]
[0][1][2]
Then,
//float s[9] contains above samples
vec3 n;
n.x = scale * -(s[2]-s[0]+2*(s[5]-s[3])+s[8]-s[6]);
n.y = scale * -(s[6]-s[0]+2*(s[7]-s[1])+s[8]-s[2]);
n.z = 1.0;
n = normalize(n);
Where scale can be adjusted to match the heightmap's real-world depth relative to its size.
If you think of each pixel as a vertex rather than a face, you can generate a simple triangular mesh.
+--+--+
|\ |\ |
| \| \|
+--+--+
|\ |\ |
| \| \|
+--+--+
Each vertex has an x and y coordinate corresponding to the x and y of the pixel in the map. The z coordinate is based on the value in the map at that location. Triangles can be generated explicitly or implicitly by their position in the grid.
What you need is the normal at each vertex.
A vertex normal can be computed by taking an area-weighted average of the surface normals for each of the triangles that meet at that point.
If you have a triangle with vertices v0, v1, v2, then you can use a vector cross product (of two vectors that lie on two of the sides of the triangle) to compute a vector in the direction of the normal and scaled proportionally to the area of the triangle.
Vector3 contribution = Cross(v1 - v0, v2 - v1);
Each of your vertices that aren't on the edge will be shared by six triangles. You can loop through those triangles, summing up the contributions, and then normalize the vector sum.
Note: You have to compute the cross products in a consistent way to make sure the normals are all pointing in the same direction. Always pick two sides in the same order (clockwise or counterclockwise). If you mix some of them up, those contributions will be pointing in the opposite direction.
For vertices on the edge, you end up with a shorter loop and a lot of special cases. It's probably easier to create a border around your grid of fake vertices and then compute the normals for the interior ones and discard the fake borders.
for each interior vertex V {
    Vector3 sum(0.0, 0.0, 0.0);
    for each of the six triangles T that share V {
        const Vector3 side1 = T.v1 - T.v0;
        const Vector3 side2 = T.v2 - T.v1;
        const Vector3 contribution = Cross(side1, side2);
        sum += contribution;
    }
    sum.Normalize();
    V.normal = sum;
}
If you need the normal at a particular point on a triangle (other than one of the vertices), you can interpolate by weighing the normals of the three vertices by the barycentric coordinates of your point. This is how graphics rasterizers treat the normal for shading. It allows a triangle mesh to appear like smooth, curved surface rather than a bunch of adjacent flat triangles.
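For example, a small sketch of that interpolation, reusing the hypothetical Vector3 type from the pseudocode above (assumed to also have scalar operator*, operator+ and Normalize()):
// n0, n1, n2 are the vertex normals; (w0, w1, w2) are the barycentric coordinates
// of the point inside the triangle, with w0 + w1 + w2 == 1.
Vector3 InterpolatedNormal(const Vector3& n0, const Vector3& n1, const Vector3& n2,
                           float w0, float w1, float w2)
{
    Vector3 sum = n0 * w0 + n1 * w1 + n2 * w2;
    sum.Normalize();   // the weighted sum is generally not unit length
    return sum;
}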
Tip: For your first test, use a perfectly flat grid and make sure all of the computed normals are pointing straight up.

Perspective correct texture mapping; z distance calculation might be wrong

I'm making a software rasterizer, and I've run into a bit of a snag: I can't seem to get perspective-correct texture mapping to work.
My algorithm is to first sort the coordinates to plot by y. This returns a highest, lowest and center point. I then walk across the scanlines using the deltas:
// ordering by y is put here
order[0] = &a_Triangle.p[v_order[0]];
order[1] = &a_Triangle.p[v_order[1]];
order[2] = &a_Triangle.p[v_order[2]];
float height1, height2, height3;
height1 = (float)((int)(order[2]->y + 1) - (int)(order[0]->y));
height2 = (float)((int)(order[1]->y + 1) - (int)(order[0]->y));
height3 = (float)((int)(order[2]->y + 1) - (int)(order[1]->y));
// x
float x_start, x_end;
float x[3];
float x_delta[3];
x_delta[0] = (order[2]->x - order[0]->x) / height1;
x_delta[1] = (order[1]->x - order[0]->x) / height2;
x_delta[2] = (order[2]->x - order[1]->x) / height3;
x[0] = order[0]->x;
x[1] = order[0]->x;
x[2] = order[1]->x;
And then we render from order[0]->y to order[2]->y, increasing x_start and x_end by a delta. When rendering the top part, the deltas are x_delta[0] and x_delta[1]. When rendering the bottom part, the deltas are x_delta[0] and x_delta[2]. Then we linearly interpolate between x_start and x_end on our scanline. UV coordinates are interpolated in the same way, ordered by y, starting at begin and end, to which deltas are applied each step.
This works fine except when I try to do perspective correct UV mapping. The basic algorithm is to take UV/z and 1/z for each vertex and interpolate between them. For each pixel, the UV coordinate becomes UV_current * z_current. However, this is the result:
The inverted part shows you where the deltas are flipped. As you can see, the two triangles both seem to be going towards different points on the horizon.
Here's what I use to calculate the Z at a point in space:
float GetZToPoint(Vec3 a_Point)
{
    Vec3 projected = m_Rotation * (a_Point - m_Position);
    // #define FOV_ANGLE 60.f
    // static const float FOCAL_LENGTH = 1 / tanf(_RadToDeg(FOV_ANGLE) / 2);
    // static const float DEPTH = HALFHEIGHT * FOCAL_LENGTH;
    float zcamera = DEPTH / projected.z;
    return zcamera;
}
Am I right, is it a z buffer issue?
ZBuffer has nothing to do with it.
The ZBuffer is only useful when triangles are overlapping and you want to make sure that they are drawn correctly (e.g. correctly ordered in Z). The ZBuffer will, for every pixel of the triangle, determine if a previously placed pixel is nearer to the camera, and if so, not draw the pixel of your triangle.
Since you are drawing 2 triangles which don't overlap, this cannot be the issue.
I've made a software rasterizer in fixed point once (for a mobile phone), but I don't have the sources on my laptop. So let me check tonight how I did it. In essence, what you've got is not bad! A thing like this could be caused by a very small error.
A general tip for debugging this is to have a few test triangles (slope on the left side, slope on the right side, 90 degree angles, etc.) and step through them with the debugger to see how your logic deals with the cases.
EDIT:
Pseudocode of my rasterizer (only U, V and Z are taken into account; if you also want to do Gouraud shading you have to do everything for R, G and B similar to what you are doing for U, V and Z):
The idea is that a triangle can be broken down in 2 parts. The top part and the bottom part. The top is from y[0] to y[1] and the bottom part is from y[1] to y[2]. For both sets you need to calculate the step variables with which you are interpolating. The below example shows you how to do the top part. If needed I can supply the bottom part too.
Please note that I already calculate the needed interpolation offsets for the bottom part in the 'pseudocode' fragment below.
first order the coords(x,y,z,u,v) in the order so that coord[0].y < coord[1].y < coord[2].y
next check if any 2 sets of coordinates are identical (only check x and y). If so don't draw
exception: does the triangle have a flat top? if so, the first slope will be infinite
exception2: does the triangle have a flat bottom (yes triangles can have these too ;^) ) then the last slope too will be infinite
calculate 2 slopes (left side and right side)
leftDeltaX = (x[1] - x[0]) / (y[1]-y[0]) and rightDeltaX = (x[2] - x[0]) / (y[2]-y[0])
how the second part of the triangle is calculated depends on whether the left side of the triangle is really on the left side (or needs swapping)
code fragment:
if (leftDeltaX < rightDeltaX)
{
    leftDeltaX2  = (x[2]-x[1]) / (y[2]-y[1])
    rightDeltaX2 = rightDeltaX
    leftDeltaU   = (u[1]-u[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaU2  = (u[2]-u[1]) / (y[2]-y[1])
    leftDeltaV   = (v[1]-v[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaV2  = (v[2]-v[1]) / (y[2]-y[1])
    leftDeltaZ   = (z[1]-z[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaZ2  = (z[2]-z[1]) / (y[2]-y[1])
}
else
{
    swap(leftDeltaX, rightDeltaX);
    leftDeltaX2  = leftDeltaX;
    rightDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
    leftDeltaU   = (u[2]-u[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaU2  = leftDeltaU
    leftDeltaV   = (v[2]-v[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaV2  = leftDeltaV
    leftDeltaZ   = (z[2]-z[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaZ2  = leftDeltaZ
}
set the currentLeftX and currentRightX both on x[0]
set currentLeftU on leftDeltaU, currentLeftV on leftDeltaV and currentLeftZ on leftDeltaZ
calc start and endpoint for first Y range: startY = ceil(y[0]); endY = ceil(y[1])
prestep x, u, v and z for the fractional part of y for subpixel accuracy (I guess this is also needed for floats; for my fixed-point algorithms this was needed to make the lines and textures give the illusion of moving in much finer steps than the resolution of the display)
calculate where x should be at y[1]: halfwayX = (x[2]-x[0]) * (y[1]-y[0]) / (y[2]-y[0]) + x[0]
and same for U and V and z: halfwayU = (u[2]-u[0]) * (y[1]-y[0]) / (y[2]-y[0]) + u[0]
and using the halfwayX calculate the stepper for the U and V and z:
if (halfwayX - x[1] == 0) { slopeU = 0; slopeV = 0; slopeZ = 0; } else { slopeU = (halfwayU - u[1]) / (halfwayX - x[1]); } // (and same for v and z)
do clipping for the Y top (so calculate where we are going to start to draw in case the top of the triangle is off screen (or off the clipping rectangle))
for (y = startY; y < endY; y++)
{
    is Y past the bottom of the screen? stop rendering!
    calc startX and endX for the first horizontal line
    leftCurX = ceil(startx); leftCurY = ceil(endy);
    clip the line to be drawn to the left horizontal border of the screen (or clipping region)
    prepare a pointer to the destination buffer (doing it through array indexes every time is too slow)
    unsigned int *buf = destbuf + (y * pitch) + startX; (unsigned int in case you are doing 24 bit or 32 bit rendering)
    also prepare your ZBuffer pointer here (if you are using this)
    for (x = startX; x < endX; x++)
    {
        now for perspective texture mapping (using no bilinear interpolation) you do the following:
        float tv = startV / startZ;
        float tu = startU / startZ;
        tv %= texturePitch;  //make sure the texture coordinates stay on the texture if they are too wide/high
        tu %= texturePitch;  //I'm assuming square textures here. With fixed point you could have used &=
        unsigned int *textPtr = textureBuf + tu + (tv * texturePitch);  //in case of fixed point one could have shifted the tv. Now we have to multiply every time.
        int destColTm = *(textPtr);  //this is the color (if we only use texture mapping) we'll be needing for the pixel
        optional: check the zbuffer whether the previously plotted pixel at this coordinate is higher or lower than ours.
        plot the pixel
        startZ += slopeZ; startU += slopeU; startV += slopeV;  //update all interpolators
    } end of x loop
    leftCurX += leftDeltaX; rightCurX += rightDeltaX; leftCurU += leftDeltaU; leftCurV += leftDeltaV; leftCurZ += leftDeltaZ;  //update Y interpolators
} end of y loop
//this is the end of the first part. We now have drawn half the triangle. from the top, to the middle Y coordinate.
// we now basically do the exact same thing but now for the bottom half of the triangle (using the other set of interpolators)
let me know if this helps you solve the problem you are facing!
I don't know that I can help with your question, but one of the best books on software rendering that I read at the time is available online: Graphics Programming Black Book by Michael Abrash.
If you are interpolating 1/z, you need to multiply UV/z by z, not 1/z. Assuming you have this:
UV = UV_current * z_current
and z_current is interpolating 1/z, you should change it to:
UV = UV_current / z_current
And then you might want to rename z_current to something like one_over_z_current.
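In other words (a minimal sketch with illustrative names): interpolate u/z, v/z and 1/z linearly across the scanline, then divide per pixel:
// Per vertex, store u/z, v/z and 1/z and interpolate those linearly in screen space.
struct Attribs { float u_over_z, v_over_z, one_over_z; };

// Per pixel, recover the perspective-correct texture coordinates.
void perspectiveUV(const Attribs& a, float& u, float& v)
{
    float z = 1.0f / a.one_over_z;   // recover depth from the interpolated 1/z
    u = a.u_over_z * z;              // (u/z) / (1/z)
    v = a.v_over_z * z;
}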