Copy texture sub-rectangles using shaders and RTT - C++

I need to write a function that takes a sub-rectangle from a 2D (non-power-of-two) texture and copies it to a destination sub-rectangle of an output 2D texture, using a shader (no glTexSubImage2D or similar).
Also, the source and the destination may not have the same size, so I need to use linear filtering (or even mipmapping).
void CopyToTex(GLuint dest_tex, GLuint src_tex,
               GLuint src_width, GLuint src_height,
               GLuint dest_width, GLuint dest_height,
               float srcRect[4],
               GLuint destRect[4]);
Here srcRect is in normalized 0-1 coordinates, i.e. the rectangle [0,1]x[0,1] touches the centers of the border pixels of the input texture.
To achieve a good result when the source and destination dimensions don't match, I want to use GL_LINEAR filtering.
I want this function to behave coherently: calling it multiple times with many sub-rects must produce the same result as a single invocation with the union of the sub-rects; that is, the linear sampler should sample the exact centers of the input pixels.
Moreover, if the input rectangle fits the destination rectangle exactly, an exact copy should occur.
This seems to be particularly hard.
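One way to read this convention: a normalized coordinate s (0 at the center of the first texel, 1 at the center of the last) maps to the texture coordinate (s * (size - 1) + 0.5) / size. A minimal helper sketch of that mapping:

float to_texcoord(float s, float tex_size)
{
    // 0 -> center of the first texel, 1 -> center of the last texel
    return (s * (tex_size - 1.0f) + 0.5f) / tex_size;
}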
What I've got now is something like this:
// Setup RTT, filtering and program
float vertices[4] = {
    float(destRect[0]) / dest_width * 2.0 - 1.0,
    float(destRect[1]) / dest_height * 2.0 - 1.0,
    //etc..
};
float texcoords[4] = {
    (srcRect[0] * (src_width - 1) + 0.5) / src_width - 0.5 / dest_width,
    (srcRect[1] * (src_height - 1) + 0.5) / src_height - 0.5 / dest_height,
    (srcRect[2] * (src_width - 1) + 0.5) / src_width + 0.5 / dest_width,
    (srcRect[3] * (src_height - 1) + 0.5) / src_height + 0.5 / dest_height,
};
glBegin(GL_QUADS);
glTexCoord2f(texcoords[0], texcoords[1]);
glVertex2f(vertices[0], vertices[1]);
glTexCoord2f(texcoords[2], texcoords[1]);
glVertex2f(vertices[2], vertices[1]);
//etc...
glEnd();
To write this code I followed the information from this page.
This seems to work as intended in some corner cases (exact copy, copying a row or a column of one pixel).
My hardest test case is to perform an exact copy of a 2xN rectangle when both the input and output textures are bigger than 2xN.
I probably have some problem with offsets and scaling (the trivial ones don't work).

Solution:
The ±0.5/dest_width part in the definition of the texcoords was wrong.
An easy workaround is to remove that part completely.
float texcoords[4] = {
    (srcRect[0] * (src_width - 1) + 0.5) / src_width,
    (srcRect[1] * (src_height - 1) + 0.5) / src_height,
    (srcRect[2] * (src_width - 1) + 0.5) / src_width,
    (srcRect[3] * (src_height - 1) + 0.5) / src_height
};
Instead, we draw a smaller quad, by offsetting the vertices by:
float dx = 1.0 / (destRect[2] - destRect[0]) - epsilon;
float dy = 1.0 / (destRect[3] - destRect[1]) - epsilon;
// assume a glTexCoord2f call before every vertex, as above
glVertex2f(vertices[0] + dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[3] - dy);
glVertex2f(vertices[0] + dx, vertices[3] - dy);
This way we draw a quad whose edges pass through the exact centers of the border pixels.
Since OpenGL may or may not draw the border pixels in this case, we need the epsilons.
I believe that my original solution (don't offset the vertex coords) can still work, but it needs a bit of extra math to compute the right offsets for the texcoords.
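Putting the pieces together, a minimal sketch of the whole draw (assuming the RTT target, GL_LINEAR filtering and the copy program are already set up; the epsilon value here is only illustrative):

const float eps = 1e-5f;
float vertices[4] = {
    float(destRect[0]) / dest_width  * 2.0f - 1.0f,
    float(destRect[1]) / dest_height * 2.0f - 1.0f,
    float(destRect[2]) / dest_width  * 2.0f - 1.0f,
    float(destRect[3]) / dest_height * 2.0f - 1.0f,
};
float texcoords[4] = {
    (srcRect[0] * (src_width - 1) + 0.5f) / src_width,
    (srcRect[1] * (src_height - 1) + 0.5f) / src_height,
    (srcRect[2] * (src_width - 1) + 0.5f) / src_width,
    (srcRect[3] * (src_height - 1) + 0.5f) / src_height,
};
// inset the quad so its edges pass through the border pixel centers
float dx = 1.0f / (destRect[2] - destRect[0]) - eps;
float dy = 1.0f / (destRect[3] - destRect[1]) - eps;
glBegin(GL_QUADS);
glTexCoord2f(texcoords[0], texcoords[1]); glVertex2f(vertices[0] + dx, vertices[1] + dy);
glTexCoord2f(texcoords[2], texcoords[1]); glVertex2f(vertices[2] - dx, vertices[1] + dy);
glTexCoord2f(texcoords[2], texcoords[3]); glVertex2f(vertices[2] - dx, vertices[3] - dy);
glTexCoord2f(texcoords[0], texcoords[3]); glVertex2f(vertices[0] + dx, vertices[3] - dy);
glEnd();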

Related

OpenGL Texture render

I am rendering 3 squares onto which I have stretched the texture, and as I move them in space I change the coordinates of the squares' vertices. Everything works, but as soon as I want to render another image whose texture is already in the scene, for some reason it is not displayed at the position I need; instead it is superimposed on the same texture already in the scene. That is, when I want to render two identical images in different parts of the scene, they overlap each other (I can see this because the image contains almost-transparent pixels, so the overlap is visible where they stack), even though the console shows that their coordinates are different. What's wrong?
I use SharpGL
private void DrawTexture(int idTexture)
{
    gl.PushMatrix();
    gl.LoadIdentity();
    gl.Scale(-0.15, 0.28, 1);
    gl.Rotate(180, 0, 0, 0);
    gl.BindTexture(GL_TEXTURE_2D, _textureParams[idTexture].textures);
    gl.Begin(GL_QUADS);
    float dist = DistanceTextureMultipler;
    gl.TexCoord(0, 0);
    gl.Vertex(_textureParams[idTexture].textureScaleX * (-1 + _textureParams[idTexture].texturePostitonX / dist),
              _textureParams[idTexture].textureScaleY * (-1 + _textureParams[idTexture].texturePostitonY / dist));
    gl.TexCoord(1, 0);
    gl.Vertex(_textureParams[idTexture].textureScaleX * (1 + _textureParams[idTexture].texturePostitonX / dist),
              _textureParams[idTexture].textureScaleY * (-1 + _textureParams[idTexture].texturePostitonY / dist));
    gl.TexCoord(1, 1);
    gl.Vertex(_textureParams[idTexture].textureScaleX * (1 + _textureParams[idTexture].texturePostitonX / dist),
              _textureParams[idTexture].textureScaleY * (1 + _textureParams[idTexture].texturePostitonY / dist));
    gl.TexCoord(0, 1);
    gl.Vertex(_textureParams[idTexture].textureScaleX * (-1 + _textureParams[idTexture].texturePostitonX / dist),
              _textureParams[idTexture].textureScaleY * (1 + _textureParams[idTexture].texturePostitonY / dist));
    gl.End();
    gl.PopMatrix();
}

Linearize depth

In OpenGL you can linearize a depth value like so:
float linearize_depth(float d, float zNear, float zFar)
{
    float z_n = 2.0 * d - 1.0;
    return 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
(Source: https://stackoverflow.com/a/6657284/10011415)
However, Vulkan handles depth values somewhat differently (https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/). I don't quite understand the math behind it, what changes would I have to make to the function to linearize a depth value with Vulkan?
The important difference between OpenGL and Vulkan here is that the normalized device coordinates (NDC) have a different range for z (the depth). In OpenGL it's -1 to 1 and in Vulkan it's 0 to 1.
However, in OpenGL when the depth is stored into a depth texture and you read from it, the value is further normalized to 0 to 1. This seems to be the case in your example, since the first line of your function maps it back to -1 to 1.
In Vulkan, your depth is always between 0 and 1, so the above function works in Vulkan as well. You can simplify it a bit though:
float linearize_depth(float d, float zNear, float zFar)
{
    return zNear * zFar / (zFar + d * (zNear - zFar));
}
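For reference, the simplification follows from substituting z_n = 2*d - 1 into the OpenGL formula:

2 * zNear * zFar / (zFar + zNear - (2*d - 1) * (zFar - zNear))
  = 2 * zNear * zFar / (2*zFar - 2*d * (zFar - zNear))
  = zNear * zFar / (zFar + d * (zNear - zFar))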

Duplicate OpenGL orthographic projection behaviour without OpenGL

I'm encountering a problem trying to replicate the OpenGL behaviour in an environment without OpenGL.
Basically I need to create an SVG file from a list of lines my program creates. These lines are created using an orthographic projection.
I'm sure that these lines are calculated correctly, because if I use them with an OpenGL context with an orthographic projection and save the result to an image, the image is correct.
The problem arises when I use exactly the same lines without OpenGL.
I've replicated the OpenGL projection and view matrices and I process every line point like this:
3D_output_point = projection_matrix * view_matrix * 3D_input_point
and then I calculate its screen (SVG file) position like this:
2D_point_x = (windowWidth / 2) * 3D_point_x + (windowWidth / 2)
2D_point_y = (windowHeight / 2) * 3D_point_y + (windowHeight / 2)
I calculate the orthographic projection matrix like this:
float range = 700.0f;
float l, t, r, b, n, f;
l = -range;
r = range;
b = -range;
t = range;
n = -6000;
f = 8000;
matProj.SetValore(0, 0, 2.0f / (r - l));
matProj.SetValore(0, 1, 0.0f);
matProj.SetValore(0, 2, 0.0f);
matProj.SetValore(0, 3, 0.0f);
matProj.SetValore(1, 0, 0.0f);
matProj.SetValore(1, 1, 2.0f / (t - b));
matProj.SetValore(1, 2, 0.0f);
matProj.SetValore(1, 3, 0.0f);
matProj.SetValore(2, 0, 0.0f);
matProj.SetValore(2, 1, 0.0f);
matProj.SetValore(2, 2, (-1.0f) / (f - n));
matProj.SetValore(2, 3, 0.0f);
matProj.SetValore(3, 0, -(r + l) / (r - l));
matProj.SetValore(3, 1, -(t + b) / (t - b));
matProj.SetValore(3, 2, -n / (f - n));
matProj.SetValore(3, 3, 1.0f);
and the view matrix this way:
CVettore position, lookAt, up;
position.AssegnaCoordinate(rtRay->m_pCam->Vp.x, rtRay->m_pCam->Vp.y, rtRay->m_pCam->Vp.z);
lookAt.AssegnaCoordinate(rtRay->m_pCam->Lp.x, rtRay->m_pCam->Lp.y, rtRay->m_pCam->Lp.z);
up.AssegnaCoordinate(rtRay->m_pCam->Up.x, rtRay->m_pCam->Up.y, rtRay->m_pCam->Up.z);
up[0] = -up[0];
up[1] = -up[1];
up[2] = -up[2];
CVettore zAxis, xAxis, yAxis;
float length, result1, result2, result3;
// zAxis = normal(lookAt - position)
zAxis[0] = lookAt[0] - position[0];
zAxis[1] = lookAt[1] - position[1];
zAxis[2] = lookAt[2] - position[2];
length = sqrt((zAxis[0] * zAxis[0]) + (zAxis[1] * zAxis[1]) + (zAxis[2] * zAxis[2]));
zAxis[0] = zAxis[0] / length;
zAxis[1] = zAxis[1] / length;
zAxis[2] = zAxis[2] / length;
// xAxis = normal(cross(up, zAxis))
xAxis[0] = (up[1] * zAxis[2]) - (up[2] * zAxis[1]);
xAxis[1] = (up[2] * zAxis[0]) - (up[0] * zAxis[2]);
xAxis[2] = (up[0] * zAxis[1]) - (up[1] * zAxis[0]);
length = sqrt((xAxis[0] * xAxis[0]) + (xAxis[1] * xAxis[1]) + (xAxis[2] * xAxis[2]));
xAxis[0] = xAxis[0] / length;
xAxis[1] = xAxis[1] / length;
xAxis[2] = xAxis[2] / length;
// yAxis = cross(zAxis, xAxis)
yAxis[0] = (zAxis[1] * xAxis[2]) - (zAxis[2] * xAxis[1]);
yAxis[1] = (zAxis[2] * xAxis[0]) - (zAxis[0] * xAxis[2]);
yAxis[2] = (zAxis[0] * xAxis[1]) - (zAxis[1] * xAxis[0]);
// -dot(xAxis, position)
result1 = ((xAxis[0] * position[0]) + (xAxis[1] * position[1]) + (xAxis[2] * position[2])) * -1.0f;
// -dot(yaxis, eye)
result2 = ((yAxis[0] * position[0]) + (yAxis[1] * position[1]) + (yAxis[2] * position[2])) * -1.0f;
// -dot(zaxis, eye)
result3 = ((zAxis[0] * position[0]) + (zAxis[1] * position[1]) + (zAxis[2] * position[2])) * -1.0f;
// Set the computed values in the view matrix.
matView.SetValore(0, 0, xAxis[0]);
matView.SetValore(0, 1, yAxis[0]);
matView.SetValore(0, 2, zAxis[0]);
matView.SetValore(0, 3, 0.0f);
matView.SetValore(1, 0, xAxis[1]);
matView.SetValore(1, 1, yAxis[1]);
matView.SetValore(1, 2, zAxis[1]);
matView.SetValore(1, 3, 0.0f);
matView.SetValore(2, 0, xAxis[2]);
matView.SetValore(2, 1, yAxis[2]);
matView.SetValore(2, 2, zAxis[2]);
matView.SetValore(2, 3, 0.0f);
matView.SetValore(3, 0, result1);
matView.SetValore(3, 1, result2);
matView.SetValore(3, 2, result3);
matView.SetValore(3, 3, 1.0f);
The results I get from OpenGL and from the SVG output are quite different, but in two days I couldn't come up with a solution.
This is the OpenGL output
And this is my SVG output
As you can see, its rotation isn't correct.
Any idea why? The line points are the same and the matrices too, hopefully.
Passing the matrices I was creating didn't work. I mean, the matrices were wrong, I think, because OpenGL didn't show anything.
So I tried the opposite: I created the matrices in OpenGL and used them with my code. The result is better, but not perfect yet.
Now I think I'm doing something wrong when mapping the 3D points to 2D screen points, because the points I get are inverted in Y and some lines still don't match perfectly.
This is what I get using the OpenGL matrices and my previous approach to map 3D points to 2D screen space (this is the SVG, not OpenGL render):
Ok this is the content of the view matrix I get from OpenGL:
This is the projection matrix I get from OpenGL:
And this is the result I get with those matrices and by changing my 2D point Y coordinate calculation like bofjas said:
It looks like some rotations are missing. My camera has a rotation of 30° on both the X and Y axes, and it looks like they're not computed correctly.
Now I'm using the same matrices OpenGL does. So I think that I'm doing some wrong calculations when I map the 3D point into 2D screen coordinates.
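For reference, a minimal sketch of the NDC-to-SVG mapping with the Y axis flipped (SVG's Y axis grows downward while OpenGL's NDC Y grows upward); the variable names here are only illustrative:

// ndc_x, ndc_y are the x/y of projection_matrix * view_matrix * 3D_input_point
float svg_x = (windowWidth / 2.0f) * ndc_x + (windowWidth / 2.0f);
float svg_y = (windowHeight / 2.0f) * (-ndc_y) + (windowHeight / 2.0f); // flip Y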
Rather than debugging your own code, you can use transform feedback to compute the projections of your lines using the OpenGL pipeline. Rather than rasterizing them on the screen you can capture them in a memory buffer and save directly to the SVG afterwards. Setting this up is a bit involved and depends on the exact setup of your OpenGL codepath, but it might be a simpler solution.
As for your own code, it looks like you either mixed up x and y coordinates somewhere, or row-major and column-major matrices.
I've solved this problem in a really simple way. Since when I draw using OpenGL it's working, I've just created the matrices in OpenGL and then retrieved them with glGet(). Using those matrices everything is ok.
You're looking for a specialized version of orthographic (oblique) projections called isometric projections. The math is really simple if you want to know what's inside the matrix. Have a look on Wikipedia
OpenGL loads matrices in column-major order (the opposite of C++). For example, this matrix:
[ 1,  2,  3,  4,
  5,  6,  7,  8,
  9, 10, 11, 12,
 13, 14, 15, 16]
is laid out in memory like this:
1
5
9
13
2
...
So I suppose you should transpose those matrices from OpenGL (if you're treating them as row-major).
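If you need to do that conversion by hand, a minimal sketch (names are illustrative) of copying a row-major 4x4 matrix into the column-major layout OpenGL expects:

void ToColumnMajor(const float rowMajor[16], float columnMajor[16])
{
    // element (r, c) of the row-major matrix goes to index c*4 + r
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            columnMajor[c * 4 + r] = rowMajor[r * 4 + c];
}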

OpenGL tune from one color to another

I have made a small particle system. What I would now like to do is have the particles fade from one color to another during their lifetime, for example from black to white or from yellow to red.
I use the glColor() functions to set the color of the particle.
How do I do this?
You have to blend the colors yourself:
calculate a blend factor between 0 and 1 and mix the colors
float blend = lifeTime / maxLifeTime;
float red = (destRed * blend) + (srcRed * (1.0 - blend));
float green = (destGreen * blend) + (srcGreen * (1.0 - blend));
float blue = (destBlue * blend) + (srcBlue * (1.0 - blend));
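To apply it with the glColor() calls mentioned in the question, a small usage sketch using the values computed above:

glColor3f(red, green, blue); // set the interpolated color before emitting the particle's vertices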
regards
ron

Rotating/aligning texture coordinates

I have a 1024x1 gradient texture that I want to map to a quad. This gradient should be aligned along the line (p1,p2) inside that quad. The texture has the GL_CLAMP_TO_EDGE property, so it will fill the entire quad.
I now need to figure out the texture coordinates for the four corners (A,B,C,D) of the quad, but I can't wrap my head around the required math.
I tried to calculate the angle between (p1,p2) and then rotate the corner points around the center of the line between (p1,p2), but I couldn't get this to work right. It seems a bit excessive anyway - is there an easier solution?
Are you using shaders? If yes, then just assign your quad default UVs from 0 to 1. Then, based on the slope of the p1-p2 segment, calculate the rotation angle in degrees (don't forget to convert it to radians). Then, in the vertex shader, construct a 2x2 rotation matrix and rotate the UVs by the amount defined by the segment. At the end, pass the rotated UVs into the fragment shader and use them with your gradient texture sampler.
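A minimal CPU-side sketch of that idea (the same matrix could equally be built in a vertex shader); it assumes p1 and p2 are given in the quad's 0-1 UV space and rotates about the quad centre, which is an assumption rather than something stated above:

#include <cmath>

struct Vec2 { float x, y; };

Vec2 rotateUV(Vec2 uv, Vec2 p1, Vec2 p2)
{
    float angle = std::atan2(p2.y - p1.y, p2.x - p1.x); // segment angle, in radians
    float c = std::cos(angle), s = std::sin(angle);
    Vec2 d = { uv.x - 0.5f, uv.y - 0.5f };              // rotate about the quad centre
    return { c * d.x + s * d.y + 0.5f,
            -s * d.x + c * d.y + 0.5f };
}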
I found another approach that actually works as I want it to, using only additions, multiplications and one division, sparing the expensive sqrt.
I first calculate the slope of the (p1,p2) line and the one orthogonal to it. I then work out the intersection points of the slope (starting at p1) and the orthogonal starting at each corner.
Vector2 slope = { p2.x - p1.x, p2.y - p1.y };
Vector2 ortho = { -slope.y, slope.x };
float div = 1 / (slope.y * ortho.x - slope.x * ortho.y);
Vector2 A = {
    (ortho.x * -p1.y + ortho.y * p1.x) * div,
    (slope.x * -p1.y + slope.y * p1.x) * div
};
Vector2 B = {
    (ortho.x * -p1.y + ortho.y * (p1.x - 1)) * div,
    (slope.x * -p1.y + slope.y * (p1.x - 1)) * div
};
Vector2 C = {
    (ortho.x * (1 - p1.y) + ortho.y * p1.x) * div,
    (slope.x * (1 - p1.y) + slope.y * p1.x) * div
};
Vector2 D = {
    (ortho.x * (1 - p1.y) + ortho.y * (p1.x - 1)) * div,
    (slope.x * (1 - p1.y) + slope.y * (p1.x - 1)) * div
};