I am rendering three squares with a texture stretched over each of them, and as I move them in space I change the coordinates of the squares' vertices. Everything works, but as soon as I want to render another image whose texture is already in the scene, it is not displayed at the position I need; instead it is superimposed on the existing copy of that texture. That is, when I want to render two identical images in different parts of the scene, they overlap each other (I can see this because the image contains almost transparent pixels, and when the second copy is added the overlap becomes visible), even though the console shows that their coordinates are different. What's wrong?
I use SharpGL.
private void DrawTexture(int idTexture)
{
    var tp = _textureParams[idTexture];
    float dist = DistanceTextureMultipler;

    gl.PushMatrix();
    gl.LoadIdentity();
    gl.Scale(-0.15, 0.28, 1);
    gl.Rotate(180, 0, 0, 0);
    gl.BindTexture(GL_TEXTURE_2D, tp.textures);

    gl.Begin(GL_QUADS);
    gl.TexCoord(0, 0); gl.Vertex(tp.textureScaleX * (-1 + tp.texturePostitonX / dist), tp.textureScaleY * (-1 + tp.texturePostitonY / dist));
    gl.TexCoord(1, 0); gl.Vertex(tp.textureScaleX * ( 1 + tp.texturePostitonX / dist), tp.textureScaleY * (-1 + tp.texturePostitonY / dist));
    gl.TexCoord(1, 1); gl.Vertex(tp.textureScaleX * ( 1 + tp.texturePostitonX / dist), tp.textureScaleY * ( 1 + tp.texturePostitonY / dist));
    gl.TexCoord(0, 1); gl.Vertex(tp.textureScaleX * (-1 + tp.texturePostitonX / dist), tp.textureScaleY * ( 1 + tp.texturePostitonY / dist));
    gl.End();
    gl.PopMatrix();
}
I'm trying to retain the ratio and sizes of the rendered content when resizing my window/framebuffer texture, on which I'm rendering exclusively on the xy-plane (z = 0), and I would like to have an orthographic projection.
Some general questions: do I need to resize both glm::ortho and the viewport, or just one of them? Do both of them have to have the same amount of pixels, or just the same aspect ratio? What about their x and y offsets?
I understand that I need to update the texture size.
What I have tried (among many other things):
One texture attachment on my framebuffer, a GL_RGB texture on GL_COLOR_ATTACHMENT0.
I render my framebuffer's attached texture to an ImGui window with ImGui::Image(). When this ImGui window is resized, I resize the texture assigned to my FBO using the function below (I do not reattach the texture with glFramebufferTexture2D!):
void Texture2D::SetSize(glm::ivec2 size)
{
    Bind();
    this->size = size;
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, size.x, size.y, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    Unbind();
}
I also update the member variable windowSize.
In my rendering loop (for the framebuffer) I use:
view = glm::lookAt(glm::vec3(0, 0, 1), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
float projWidth = windowSize.x;
float projHeight = windowSize.y;
proj = glm::ortho(-projWidth/2, projWidth/2, -projHeight/2, projHeight/2, -1.f, 1.f);
shaderProgram.SetMat4("view", view, 1);
shaderProgram.SetMat4("proj", proj, 1);
shaderProgram.SetMat4("world", world, 1); // world is the identity matrix for now
glViewport(-windowSize.x/2, -windowSize.y/2, windowSize.x, windowSize.y);
I do have some variables here that I use to try to implement panning, but first I want to get resizing to work correctly.
EDIT:
OK, I have now tried different parameters for the viewport and the orthographic projection matrix:
auto aspect = (float)(windowSize.x)/(float)windowSize.y;
glViewport(0, 0, windowSize.x, windowSize.y);
proj = glm::ortho<float>(-aspect, aspect, -1/aspect, 1/aspect, 1, -1);
Resizing the window still stretches my render texture, but now it does so uniformly: x is scaled as much as y. My square sprites remain squares, but become smaller as I decrease the window size.
[GIF demonstrating the resize behaviour]
I've found a way to resize the 2D interface so that it scales based on the aspect ratio. Give this a try and see if it solves your issue:
auto aspectx = (float)windowSize.x / (float)windowSize.y;
auto aspecty = (aspectx < 16.f / 9.f ? aspectx : 16.f / 9.f) / (16.f / 9.f);
float srcaspect = 4.f / 3.f;
float scale = 0.5f * (float)windowSize.y;
proj = glm::ortho(scale - (scale * aspectx / aspecty) - (scale - (scale * srcaspect)),
                  scale + (scale * aspectx / aspecty) - (scale - (scale * srcaspect)),
                  scale - (scale / aspecty),
                  scale + (scale / aspecty),
                  -1.f, 1.f);
And for future reference, do this if you want the orthographic projection scaled within the aspect ratio. The source aspect is what scales the object to fit within the original screen aspect:
float srcaspect = 4.f / 3.f;
float dstaspect = (float)windowSize.x / (float)windowSize.y;
float scale = 0.5f * (bottom - top);   // top/bottom: vertical extents of the desired view
float offset = scale + top;
// yscale: the vertical scale factor (aspecty in the snippet above)
proj = glm::ortho<float>(offset - (scale * dstaspect / yscale) - (offset - (scale * srcaspect)),
                         offset + (scale * dstaspect / yscale) - (offset - (scale * srcaspect)),
                         offset - (scale / yscale),
                         offset + (scale / yscale),
                         1, -1);
I believe there were problems with my handling of FBOs, as I have now found a satisfactory solution, one that also seems to be the obvious one: simply use the width and height of the viewport (and of the FBO-attached texture) like this:
proj = glm::ortho<float>(-zoom * viewportSize.x / 2, zoom * viewportSize.x / 2, -zoom * viewportSize.y / 2, zoom * viewportSize.y / 2, 1, -1);
You might also want to divide the four parameters by a constant (possibly much larger than 2) to make the zooming range more natural around 1.
While I tried this exact code before, I did not actually recreate the framebuffer; I simply called glTexImage2D() on the attached texture.
It seems as though one needs to delete and recreate the whole framebuffer. This was also stated to be safer in some posts here on SO, but I never found any source saying it was actually required.
So, lesson learnt: delete and create a new framebuffer. Window resizing is not something that happens very often.
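For reference, a minimal sketch of what "delete and recreate" can look like; the function name and parameters here are illustrative, not from the original code:

// Hypothetical sketch: tear down and rebuild the FBO and its color texture on resize.
void RecreateFramebuffer(GLuint& fbo, GLuint& colorTex, int width, int height)
{
    glDeleteFramebuffers(1, &fbo);   // deleting is a no-op for id 0
    glDeleteTextures(1, &colorTex);

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}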
I'm encountering a problem trying to replicate the OpenGL behaviour in an environment without OpenGL.
Basically I need to create an SVG file from a list of lines my program creates. These lines are created using an orthographic projection.
I'm sure these lines are calculated correctly, because if I use them in an OpenGL context with an orthographic projection and save the result to an image, the image is correct.
The problem arises when I use exactly the same lines without OpenGL.
I've replicated the OpenGL projection and view matrices and I process every line point like this:
3D_output_point = projection_matrix * view_matrix * 3D_input_point
and then I calculate its screen (SVG file) position like this:
2D_point_x = (windowWidth / 2) * 3D_point_x + (windowWidth / 2)
2D_point_y = (windowHeight / 2) * 3D_point_y + (windowHeight / 2)
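As an aside, SVG's y axis points down while NDC's y axis points up, so for SVG output the y mapping needs a flip. A sketch of the full mapping (the function name is mine; for a pure orthographic pipeline the w component is 1, so clip space equals NDC):

// Map an NDC point to SVG pixel coordinates.
void NdcToSvg(float ndcX, float ndcY, int windowWidth, int windowHeight,
              float& svgX, float& svgY)
{
    svgX = (windowWidth / 2.0f) * ndcX + windowWidth / 2.0f;
    svgY = (windowHeight / 2.0f) * (-ndcY) + windowHeight / 2.0f;  // flip y for SVG
}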
I calculate the orthographic projection matrix like this:
float range = 700.0f;
float l, t, r, b, n, f;
l = -range;
r = range;
b = -range;
t = range;
n = -6000;
f = 8000;
matProj.SetValore(0, 0, 2.0f / (r - l));
matProj.SetValore(0, 1, 0.0f);
matProj.SetValore(0, 2, 0.0f);
matProj.SetValore(0, 3, 0.0f);
matProj.SetValore(1, 0, 0.0f);
matProj.SetValore(1, 1, 2.0f / (t - b));
matProj.SetValore(1, 2, 0.0f);
matProj.SetValore(1, 3, 0.0f);
matProj.SetValore(2, 0, 0.0f);
matProj.SetValore(2, 1, 0.0f);
matProj.SetValore(2, 2, (-1.0f) / (f - n));
matProj.SetValore(2, 3, 0.0f);
matProj.SetValore(3, 0, -(r + l) / (r - l));
matProj.SetValore(3, 1, -(t + b) / (t - b));
matProj.SetValore(3, 2, -n / (f - n));
matProj.SetValore(3, 3, 1.0f);
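For reference, written out (row-major, translation in the bottom row), the code above builds

$$P=\begin{pmatrix}\tfrac{2}{r-l}&0&0&0\\[2pt]0&\tfrac{2}{t-b}&0&0\\[2pt]0&0&\tfrac{-1}{f-n}&0\\[2pt]-\tfrac{r+l}{r-l}&-\tfrac{t+b}{t-b}&\tfrac{-n}{f-n}&1\end{pmatrix}$$

which maps z into [0, 1], D3D-style. OpenGL's glOrtho instead uses -2/(f-n) and -(f+n)/(f-n), mapping z into [-1, 1], so the depth convention here already differs from what the GL pipeline produces.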
I calculate the view matrix this way:
CVettore position, lookAt, up;
position.AssegnaCoordinate(rtRay->m_pCam->Vp.x, rtRay->m_pCam->Vp.y, rtRay->m_pCam->Vp.z);
lookAt.AssegnaCoordinate(rtRay->m_pCam->Lp.x, rtRay->m_pCam->Lp.y, rtRay->m_pCam->Lp.z);
up.AssegnaCoordinate(rtRay->m_pCam->Up.x, rtRay->m_pCam->Up.y, rtRay->m_pCam->Up.z);
up[0] = -up[0];
up[1] = -up[1];
up[2] = -up[2];
CVettore zAxis, xAxis, yAxis;
float length, result1, result2, result3;
// zAxis = normal(lookAt - position)
zAxis[0] = lookAt[0] - position[0];
zAxis[1] = lookAt[1] - position[1];
zAxis[2] = lookAt[2] - position[2];
length = sqrt((zAxis[0] * zAxis[0]) + (zAxis[1] * zAxis[1]) + (zAxis[2] * zAxis[2]));
zAxis[0] = zAxis[0] / length;
zAxis[1] = zAxis[1] / length;
zAxis[2] = zAxis[2] / length;
// xAxis = normal(cross(up, zAxis))
xAxis[0] = (up[1] * zAxis[2]) - (up[2] * zAxis[1]);
xAxis[1] = (up[2] * zAxis[0]) - (up[0] * zAxis[2]);
xAxis[2] = (up[0] * zAxis[1]) - (up[1] * zAxis[0]);
length = sqrt((xAxis[0] * xAxis[0]) + (xAxis[1] * xAxis[1]) + (xAxis[2] * xAxis[2]));
xAxis[0] = xAxis[0] / length;
xAxis[1] = xAxis[1] / length;
xAxis[2] = xAxis[2] / length;
// yAxis = cross(zAxis, xAxis)
yAxis[0] = (zAxis[1] * xAxis[2]) - (zAxis[2] * xAxis[1]);
yAxis[1] = (zAxis[2] * xAxis[0]) - (zAxis[0] * xAxis[2]);
yAxis[2] = (zAxis[0] * xAxis[1]) - (zAxis[1] * xAxis[0]);
// -dot(xAxis, position)
result1 = ((xAxis[0] * position[0]) + (xAxis[1] * position[1]) + (xAxis[2] * position[2])) * -1.0f;
// -dot(yaxis, eye)
result2 = ((yAxis[0] * position[0]) + (yAxis[1] * position[1]) + (yAxis[2] * position[2])) * -1.0f;
// -dot(zaxis, eye)
result3 = ((zAxis[0] * position[0]) + (zAxis[1] * position[1]) + (zAxis[2] * position[2])) * -1.0f;
// Set the computed values in the view matrix.
matView.SetValore(0, 0, xAxis[0]);
matView.SetValore(0, 1, yAxis[0]);
matView.SetValore(0, 2, zAxis[0]);
matView.SetValore(0, 3, 0.0f);
matView.SetValore(1, 0, xAxis[1]);
matView.SetValore(1, 1, yAxis[1]);
matView.SetValore(1, 2, zAxis[1]);
matView.SetValore(1, 3, 0.0f);
matView.SetValore(2, 0, xAxis[2]);
matView.SetValore(2, 1, yAxis[2]);
matView.SetValore(2, 2, zAxis[2]);
matView.SetValore(2, 3, 0.0f);
matView.SetValore(3, 0, result1);
matView.SetValore(3, 1, result2);
matView.SetValore(3, 2, result3);
matView.SetValore(3, 3, 1.0f);
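Written out in the same row-major layout, this builds the look-at matrix

$$V=\begin{pmatrix}x_x&y_x&z_x&0\\x_y&y_y&z_y&0\\x_z&y_z&z_z&0\\-x\cdot p&-y\cdot p&-z\cdot p&1\end{pmatrix}$$

where x, y, z are the camera basis vectors and p is the camera position. Note that zAxis here is normalize(lookAt - position), the D3D look-at convention; OpenGL's gluLookAt effectively uses the negated forward vector for its third row, another place where the two pipelines can diverge.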
The results I get from OpenGL and from the SVG output are quite different, but after two days I couldn't come up with a solution.
This is the OpenGL output:
And this is my SVG output:
As you can see, its rotation isn't correct.
Any idea why? The line points are the same, and hopefully the matrices too.
Passing the matrices I was creating to OpenGL didn't work. I mean, the matrices were probably wrong, because OpenGL didn't show anything.
So I tried the opposite: I created the matrices in OpenGL and used them with my code. The result is better, but not perfect yet.
Now I think I'm doing something wrong when mapping the 3D points to 2D screen points, because the points I get are inverted in Y and some lines still don't match perfectly.
This is what I get using the OpenGL matrices and my previous approach to mapping 3D points to 2D screen space (this is the SVG, not the OpenGL render):
OK, this is the content of the view matrix I get from OpenGL:
This is the projection matrix I get from OpenGL:
And this is the result I get with those matrices, after changing my 2D point Y coordinate calculation as bofjas said:
It looks like some rotations are missing. My camera has a rotation of 30° on both the X and Y axes, and it looks like these aren't computed correctly.
Now I'm using the same matrices OpenGL does, so I think I'm doing some calculation wrong when I map the 3D points to 2D screen coordinates.
Rather than debugging your own code, you can use transform feedback to compute the projections of your lines using the OpenGL pipeline. Instead of rasterizing them on the screen, you capture them in a memory buffer and save them directly to the SVG afterwards. Setting this up is a bit involved and depends on the exact setup of your OpenGL code path, but it might be a simpler solution.
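A minimal sketch of that capture path, assuming a vertex shader that writes its transformed position to a varying named outPos (all names here are illustrative):

// Declare which varying to capture, then relink the program.
const GLchar* varyings[] = { "outPos" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);

// Buffer to receive one vec4 per line vertex.
GLuint tfBuf;
glGenBuffers(1, &tfBuf);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuf);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, numVerts * 4 * sizeof(GLfloat), nullptr, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuf);

glEnable(GL_RASTERIZER_DISCARD);      // skip rasterization; we only want the vertices
glBeginTransformFeedback(GL_LINES);
glDrawArrays(GL_LINES, 0, numVerts);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Read the captured positions back and write them into the SVG.
std::vector<GLfloat> out(numVerts * 4);
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, out.size() * sizeof(GLfloat), out.data());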
As for your own code, it looks like you either mixed up x and y coordinates somewhere, or row-major and column-major matrices.
I've solved this problem in a really simple way. Since drawing with OpenGL worked, I just created the matrices in OpenGL and then retrieved them with glGet(). Using those matrices, everything is OK.
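For the legacy fixed-function pipeline that looks something like this:

// Retrieve the current fixed-function matrices (column-major, 16 floats each).
GLfloat view[16], proj[16];
glGetFloatv(GL_MODELVIEW_MATRIX, view);
glGetFloatv(GL_PROJECTION_MATRIX, proj);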
You're looking for a specialized version of orthographic (oblique) projections called an isometric projection. The math is really simple if you want to know what's inside the matrix. Have a look at Wikipedia.
OpenGL loads matrices in column-major order (the opposite of C/C++ arrays). For example, this matrix:
[ 1,  2,  3,  4,
  5,  6,  7,  8,
  9, 10, 11, 12,
 13, 14, 15, 16]
is expected in memory laid out column by column:
1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15, 4, 8, 12, 16
So I suppose you should transpose the matrices you get from OpenGL (if you're treating them as row-major).
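A minimal sketch of that transpose (an illustrative helper, not from the answer):

// Convert a 4x4 matrix between row-major and column-major layouts.
void Transpose4x4(const float in[16], float out[16])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[c * 4 + r] = in[r * 4 + c];
}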
Here is the code which draws a cylinder primitive:
// Top cap
glBegin(GL_TRIANGLE_FAN);
glVertex3f(0, height / 2.0f, 0);
for (int i = 0; i < SIDENUMS + 1; i++)
{
    glVertex3f(radius * cos(i * 2 * PI / SIDENUMS), height / 2.0f, radius * sin(i * 2 * PI / SIDENUMS));
}
glEnd();

// Side walls
glBegin(GL_QUADS);
for (int i = 0; i < SIDENUMS + 1; i++)
{
    glVertex3f(radius * cos(i * 2 * PI / SIDENUMS), height / 2.0f, radius * sin(i * 2 * PI / SIDENUMS));
    glVertex3f(radius * cos(i * 2 * PI / SIDENUMS), -height / 2.0f, radius * sin(i * 2 * PI / SIDENUMS));
    glVertex3f(radius * cos((i + 1) * 2 * PI / SIDENUMS), -height / 2.0f, radius * sin((i + 1) * 2 * PI / SIDENUMS));
    glVertex3f(radius * cos((i + 1) * 2 * PI / SIDENUMS), height / 2.0f, radius * sin((i + 1) * 2 * PI / SIDENUMS));
}
glEnd();

// Bottom cap
glBegin(GL_TRIANGLE_FAN);
glVertex3f(0, -height / 2.0f, 0);
for (int i = 0; i < SIDENUMS + 1; i++)
{
    glVertex3f(radius * cos(i * 2 * PI / SIDENUMS), -height / 2.0f, radius * sin(i * 2 * PI / SIDENUMS));
}
glEnd();
And the output looks something like the image below:
Now, in the picture, please note point A.
This point has been drawn twice: once in the GL_TRIANGLE_FAN and a second time in the GL_QUADS. My question is: are these two distinct points welded together automatically to make one point, or are they on top of each other? In summary, how many points are there, one or two? And how can I fix it if they are not welded to each other?
Are these two distinct points welded together automatically to make one point?
Nope.
Or are they on top of each other?
Yep.
How many points are there, one or two?
Two.
You'll have to fuse vertices client-side if that's important to you. There's no magic coalescing in OpenGL.
The triangle fan primitives re-use "points" when creating the triangles, but for quads each call to glVertex3f (...) generates a single "point."
To put this another way:
When you use triangle fans, the first triangle requires 3 vertices to be transformed, each following triangle only requires 1 new vertex (and reuses 2). This matters for post-T&L cache efficiency, which is the concept you were ultimately getting at in your question.
When you use vertex arrays and glDrawElements (...) or primitive modes that share vertices (e.g. fans and strips) you can avoid having to transform the same point multiple times. If you were using more per-vertex attributes than you have in your example, however, this might not be practical. That is, if each of the sides of the object you are drawing had a different set of texture coordinates or normals for the same spatial location.
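A minimal sketch of the glDrawElements route, with illustrative data (not from the answer): each unique position is stored once and referenced by index, so a shared corner can be transformed once and reused.

// Two triangles forming a quad; the shared corners 0 and 2 appear once in
// verts but twice in the index list.
GLfloat verts[] = {
    0.0f, 0.0f, 0.0f,   // 0
    1.0f, 0.0f, 0.0f,   // 1
    1.0f, 1.0f, 0.0f,   // 2
    0.0f, 1.0f, 0.0f    // 3
};
GLuint idx[] = { 0, 1, 2,   0, 2, 3 };

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, idx);
glDisableClientState(GL_VERTEX_ARRAY);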
I need to write a function which takes a sub-rectangle from a 2D texture (non-power-of-2) and copies it to a destination sub-rectangle of an output 2D texture, using a shader (no glSubImage or similar).
Also, the source and the destination may not have the same size, so I need to use linear filtering (or even mipmapping).
void CopyToTex(GLuint dest_tex, GLuint src_tex,
               GLuint src_width, GLuint src_height,
               GLuint dest_width, GLuint dest_height,
               float srcRect[4],
               GLuint destRect[4]);
Here srcRect is in normalized [0,1] coordinates; that is, the rectangle [0,1]x[0,1] touches the centers of the border pixels of the input texture.
To achieve a good result when the source and destination dimensions don't match, I want to use GL_LINEAR filtering.
I want this function to behave coherently, i.e. calling it multiple times with many subrects should produce the same result as one invocation with the union of the subrects; that is, the linear sampler should sample the exact center of each input pixel.
Moreover, if the input rectangle fits the destination rectangle exactly, an exact copy should occur.
This seems to be particularly hard.
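For reference, with GL_LINEAR a texel is returned unblended only when the sample coordinate hits its center: texel i in a texture of width w has its center at u = (i + 0.5) / w.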
What I've got now is something like this:
// Set up RTT, filtering and program
float vertices[4] = {
    float(destRect[0]) / dest_width * 2.0 - 1.0,
    float(destRect[1]) / dest_height * 2.0 - 1.0,
    // etc...
};
float texcoords[4] = {
    (srcRect[0] * (src_width - 1) + 0.5) / src_width - 0.5 / dest_width,
    (srcRect[1] * (src_height - 1) + 0.5) / src_height - 0.5 / dest_height,
    (srcRect[2] * (src_width - 1) + 0.5) / src_width + 0.5 / dest_width,
    (srcRect[3] * (src_height - 1) + 0.5) / src_height + 0.5 / dest_height,
};
glBegin(GL_QUADS);
glTexCoord2f(texcoords[0], texcoords[1]);
glVertex2f(vertices[0], vertices[1]);
glTexCoord2f(texcoords[2], texcoords[1]);
glVertex2f(vertices[2], vertices[1]);
// etc...
glEnd();
To write this code I followed the information from this page.
This seems to work as intended in some corner cases (an exact copy, copying a row or a column of one pixel).
My hardest test case is performing an exact copy of a 2xN rectangle when both the input and output textures are bigger than 2xN.
I probably have some problem with offsets and scaling (the trivial ones don't work).
Solution:
The 0.5/tex_width part in the definition of the texcoords was wrong.
An easy workaround is to remove that part entirely:
float texcoords[4] = {
    (srcRect[0] * (src_width - 1) + 0.5) / src_width,
    (srcRect[1] * (src_height - 1) + 0.5) / src_height,
    (srcRect[2] * (src_width - 1) + 0.5) / src_width,
    (srcRect[3] * (src_height - 1) + 0.5) / src_height
};
Instead, we draw a smaller quad by offsetting the vertices by:
float dx = 1.0 / (dest_rect[2] - dest_rect[0]) - epsilon;
float dy = 1.0 / (dest_rect[3] - dest_rect[1]) - epsilon;
// assume glTexCoord is issued for every vertex
glVertex2f(vertices[0] + dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[3] - dy);
glVertex2f(vertices[0] + dx, vertices[3] - dy);
In this way we draw a quad which passes through the exact center of every border pixel.
Since OpenGL may or may not draw the border pixels in this case, we need the epsilons.
I believe my original solution (don't offset the vertex coords) can still work, but it needs a bit of extra math to compute the right offsets for the texcoords.
I have a 1024x1 gradient texture that I want to map to a quad. This gradient should be aligned along the line (p1,p2) inside that quad. The texture has the GL_CLAMP_TO_EDGE property, so it will fill the entire quad.
I now need to figure out the texture coordinates for the four corners (A,B,C,D) of the quad, but I can't wrap my head around the required math.
I tried to calculate the angle between (p1,p2) and then rotate the corner points around the center of the line between (p1,p2), but I couldn't get this to work right. It seems a bit excessive anyway; is there an easier solution?
Are you using shaders? If yes, then assign your quad the default UVs from 0 to 1. Then, based on the slope of the p1-p2 segment, calculate the rotation in degrees (don't forget to convert to radians). Then, in the vertex shader, construct a 2x2 rotation matrix and rotate the UVs by the amount defined by the segment. At the end, pass the rotated UVs to the fragment shader and use them with your gradient texture sampler.
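A minimal sketch of such a vertex shader, with the GLSL embedded as a C++ string; the uniform name angle and rotating around the quad center are my assumptions:

// Hypothetical vertex shader: rotate the default 0..1 UVs around the quad
// center by the angle of the (p1,p2) segment, computed on the CPU as
// atan2(p2.y - p1.y, p2.x - p1.x).
const char* vertexShaderSrc = R"GLSL(
    #version 120
    uniform float angle;    // segment angle in radians
    varying vec2 uv;
    void main() {
        vec2 centered = gl_MultiTexCoord0.xy - vec2(0.5);
        mat2 rot = mat2(cos(angle), sin(angle),
                       -sin(angle), cos(angle));   // column-major 2x2 rotation
        uv = rot * centered + vec2(0.5);
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
)GLSL";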
I found another approach that actually works as I want it to, using only additions, multiplications and one division, sparing the expensive sqrt.
I first calculate the slope of the (p1,p2) line and the one orthogonal to it. I then work out the intersection points of the slope line (starting at p1) and the orthogonal ones starting at each corner.
Vector2 slope = {p2.x - p1.x, p2.y - p1.y};
Vector2 ortho = {-slope.y, slope.x};
float div = 1/(slope.y * ortho.x - slope.x * ortho.y);
Vector2 A = {
(ortho.x * -p1.y + ortho.y * p1.x) * div,
(slope.x * -p1.y + slope.y * p1.x) * div
};
Vector2 B = {
(ortho.x * -p1.y + ortho.y * (p1.x - 1)) * div,
(slope.x * -p1.y + slope.y * (p1.x - 1)) * div
};
Vector2 C = {
(ortho.x * (1 - p1.y) + ortho.y * p1.x) * div,
(slope.x * (1 - p1.y) + slope.y * p1.x) * div
};
Vector2 D = {
(ortho.x * (1 - p1.y) + ortho.y * (p1.x - 1)) * div,
(slope.x * (1 - p1.y) + slope.y * (p1.x - 1)) * div
};
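As a quick sanity check, assuming A, B, C, D are the quad corners (0,0), (1,0), (0,1), (1,1) as the formulas suggest: for p1 = (0,0) and p2 = (1,0) we get slope = (1,0), ortho = (0,1) and div = -1, giving A = (0,0), B = (1,0), C = (0,-1), D = (1,-1). So u runs from 0 to 1 from left to right, the horizontal gradient you'd expect, and v is irrelevant for a 1024x1 texture with GL_CLAMP_TO_EDGE.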