Texture flipping behaviour - OpenGL

I was having a problem in OpenGL where all my textures were being rendered upside down. I did the most logical thing I could think of and tried reversing the order of the mapping, and that fixed the issue. But why? I know that OpenGL's coordinate system is Cartesian and that it works by mapping each UV to a specific vertex. I had something like [0,0], [0,1], [1,1], [1,0], which, as far as I understand it, should go:
top-left -> bottom-left -> bottom-right -> top-right.
I changed this to [1,1], [1,0], [0,0], [0,1], which would represent:
bottom-right -> top-right -> top-left -> bottom-left.
In my understanding it should be quite the opposite: a quick sketch on paper shows me that the initial order should theoretically render my texture correctly, not upside down. Am I correct, and is there something else messing with my rendering, like my perspective matrix?
This is my orthographic matrix:
Matrix4f ortho = new Matrix4f();
ortho.setIdentity();
float zNear = 0.01f;
float zFar = 100f;
ortho.m00 = 2 / (float) RenderManager.getWindowWidth();   // scale x from pixels
ortho.m11 = 2 / -(float) RenderManager.getWindowHeight(); // scale y from pixels, flipping the axis
ortho.m22 = -2 / (zFar - zNear);                          // scale z into the depth range
return ortho;
I can't really say I understand it, though; I had quite a hard time with it. Looking through YouTube tutorials and articles, you can see most people don't really understand it either; they just use it. I do have a good linear algebra background, but I still can't wrap my head around how this matrix normalizes my coordinates (screen coordinates to OpenGL's [-1,1] range). Anyway, I digress; any help is appreciated.
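For what it's worth, here is a minimal sketch of the normalization using GLM (glm::ortho builds the classic glOrtho matrix; the 800x600 window is an assumed example). The scale terms are the same 2/width and -2/height as above; note that a full orthographic matrix additionally carries translation terms in its last column, which shift the scaled coordinates into the [-1,1] range:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    float width = 800.0f, height = 600.0f;
    // Top-left origin with y pointing down, like typical window coordinates.
    glm::mat4 ortho = glm::ortho(0.0f, width, height, 0.0f, 0.01f, 100.0f);
    // A pixel at the top-left corner and one at the bottom-right corner:
    glm::vec4 tl = ortho * glm::vec4(0.0f, 0.0f, -1.0f, 1.0f);
    glm::vec4 br = ortho * glm::vec4(width, height, -1.0f, 1.0f);
    std::printf("top-left     -> (%.0f, %.0f)\n", tl.x, tl.y); // (-1, 1)
    std::printf("bottom-right -> (%.0f, %.0f)\n", br.x, br.y); // (1, -1)
}

In other words, x is scaled by 2/width and then shifted by -1, mapping [0, width] onto [-1, 1]; y does the same with a sign flip.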

Related

Artefacts when billboarding at positions just behind camera

I am drawing a bunch of billboards/triangles in OpenGL/Rust. I get artefacts where half my screen has blurry color. After debugging, I found that after transformation some of the vertices become extremely large. I will put a minimal reproducible example below, in Rust with the glam library for the matrices.
use glam::{vec3, vec4, Mat4, Vec4Swizzles};

let camera_position = [-696.2638, 31.615665, 346.61575];
let camera_front = [0.72691596, -0.2045585, 0.65555245];
let vtx_pos = [-714.08, 21.8639, 363.706];
let fov: f32 = 1.012;
let width = 1920_f32;
let height = 1080_f32;
let up = vec3(0.0, 1.0, 0.0).normalize();
let camera_position = vec3(camera_position[0], camera_position[1], camera_position[2]);
let camera_front = vec3(camera_front[0], camera_front[1], camera_front[2]);
// the point the camera is looking at
let camera_dvec = camera_position + camera_front;
dbg!(camera_dvec);
// make the view and projection matrices
let view = Mat4::look_at_lh(camera_position, camera_dvec, up);
dbg!(view);
let projection = Mat4::perspective_lh(fov, width / height, 1.0, 5000.0);
dbg!(projection);
let vp = projection * view;
dbg!(vp);
// left bottom vertex of the billboard
let world_vtx = vec4(vtx_pos[0], vtx_pos[1], vtx_pos[2], 1.0);
dbg!(world_vtx);
let clip_vtx = vp * world_vtx;
dbg!(clip_vtx);
// manual perspective division -> normalized device coordinates
let ndc_vtx = clip_vtx.xyz() / clip_vtx.w;
dbg!(ndc_vtx);
From the above code we get:
[src/main.rs:29] world_vtx = Vec4(
    -714.08,
    21.8639,
    363.706,
    1.0,
)
[src/main.rs:31] clip_vtx = Vec4(
    -24.995728,
    -17.885654,
    -0.7529907,
    0.24725342,
)
[src/main.rs:33] ndc_vtx = Vec3(
    -101.09356,
    -72.33734,
    -3.045421,
)
I think the w component of clip_vtx should be around the value of -z, but its smaller value is causing the normalized device coordinates to become huge after the perspective division. This is making the triangles obnoxiously large, and whenever part of such a triangle enters my view frustum I get weird color artefacts.
Can someone help me find out what I am doing wrong? Or is this a normal part of matrix transformations that tutorials don't teach?
From what I learnt asking around in gamedev Discords:
I create a billboard at a position just behind the camera. This causes a kind of discontinuity where the left vertices stay on the extreme left while the right vertices, due to the changing signs, jump to the extreme right; they become HUGE in NDC and create issues if I don't clip them manually. I still don't know how to deal with that discontinuity. Right now I just calculate the distance between the left and right vertices and, if it is bigger than 2.0 in NDC, I skip rendering that billboard.
Is there a better way of determining which positions will suffer from billboards becoming huge?
It took a few hours of asking and poking other experienced people on graphics programming Discords, but I now have a better idea of what's happening.
And now that I had rephrased my question, I could find similar questions from the past on Stack Overflow. I checked a few of them and found a good answer:
https://stackoverflow.com/a/65168709/7484780
The whole clipping issue arises because I transform the vertices myself and send them as a vec3, and in the vertex shader I just do gl_Position = vec4(pos, 1.0), so OpenGL doesn't do any meaningful clipping. At the same time, I don't do any clipping myself before the perspective division either. The unclipped billboards, especially the ones right behind the camera, are the artefacts I am seeing.
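To make the fix concrete, here is a rough sketch of the clip-space test (written with C++/GLM here; glam's math is identical). The camera and point values are made up; the key is that a vertex behind or very near the camera fails the -w <= x,y,z <= w test and must be clipped (or its triangle dropped) before the divide:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    // Camera at the origin looking down -z; a point slightly behind it.
    glm::mat4 proj = glm::perspective(1.012f, 1920.0f / 1080.0f, 1.0f, 5000.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f), glm::vec3(0.0f, 0.0f, -1.0f),
                                 glm::vec3(0.0f, 1.0f, 0.0f));
    glm::vec4 clip = proj * view * glm::vec4(0.5f, 0.5f, 0.25f, 1.0f);

    // GL-style clip test: inside the frustum only if -w <= x,y,z <= w
    // (which also implies w > 0). OpenGL performs this on gl_Position
    // *before* the perspective divide; dividing first skips it.
    bool inside = glm::abs(clip.x) <= clip.w &&
                  glm::abs(clip.y) <= clip.w &&
                  glm::abs(clip.z) <= clip.w;
    if (!inside)
        std::printf("clip against the near plane before dividing (w = %f)\n", clip.w);
    else
        std::printf("safe to divide: ndc = (%f, %f, %f)\n",
                    clip.x / clip.w, clip.y / clip.w, clip.z / clip.w);
}

Per-vertex rejection (like the distance-greater-than-2.0 heuristic above) throws away billboards that merely touch the near plane; proper near-plane clipping splits the triangle at the plane instead, which is what the hardware does for you when gl_Position carries the real clip-space position.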

OpenGL/GLM - Switching coordinate system direction

I am converting some old software to support OpenGL. DirectX and OpenGL have different coordinate systems (OpenGL is right-handed, DirectX is left-handed). I know that with the old fixed-pipeline functionality I would use:
glScalef(1.0f, 1.0f, -1.0f);
This time around, I am working with GLM and shaders and need a compatible solution. I have tried multiplying my camera matrix by a scaling vector with no luck.
Here is my camera set up:
// Calculate the direction, right and up vectors
direction = glm::vec3(cos(anglePitch) * sin(angleYaw), sin(anglePitch), cos(anglePitch) * cos(angleYaw));
right = glm::vec3(sin(angleYaw - 3.14f/2.0f), 0, cos(angleYaw - 3.14f/2.0f));
up = glm::cross(right, direction);
// Update our camera matrix, projection matrix and combine them into my view matrix
cameraMatrix = glm::lookAt(position, position+direction, up);
projectionMatrix = glm::perspective(50.0f, 4.0f / 3.0f, 0.1f, 1000.f);
viewMatrix = projectionMatrix * cameraMatrix;
I have tried a number of things, including reversing the vectors and negating the z coordinate in the shader. I have also tried multiplying by the inverse of the various matrices and vectors, and multiplying the camera matrix by a scaling vector.
Don't think about the handedness that much. It's true that the two APIs use different conventions, but you can simply choose not to rely on them, and then it boils down to almost the same thing in both. My advice is to use the exact same matrices and setups in both APIs.
All you should need to do to port from DX to GL is:
Reverse the cull-face winding: DX treats clockwise as front-facing and culls counter-clockwise triangles by default, while GL treats counter-clockwise as front-facing.
Adjust for the different depth range: DX uses a clip-space depth range of 0 (near) to 1 (far), while GL uses a signed range from -1 (near) to 1 (far). You can apply this as a last step in the projection matrix (see the sketch after this answer).
DX9 also has issues with pixel-coordinate offsets, but that's something else entirely, and it's no longer an issue from DX10 onward.
From what you describe, the winding is probably your problem, since you are using the GLM functions to generate matrices, and those should be fine for OpenGL.
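For the depth-range point, a minimal sketch of the "last step" fix-up (assuming a D3D-style projection whose clip-space z lands in [0,1] and a GL target that expects [-1,1]; the function name is illustrative):

#include <glm/glm.hpp>

// z_new = 2*z - w, so after the perspective divide
// z_ndc_new = 2*z_ndc - 1, mapping [0,1] onto [-1,1].
glm::mat4 dxDepthToGl(const glm::mat4& dxProj) {
    glm::mat4 fix(1.0f);   // identity; GLM indexing is [column][row]
    fix[2][2] =  2.0f;     // scale clip-space z
    fix[3][2] = -1.0f;     // bias clip-space z by -w
    return fix * dxProj;   // applied after the projection
}

Going the other way (GL-style [-1,1] into D3D's [0,1]) is the analogous z' = (z + w) / 2.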

Texture mapping a trapezoid with a square texture in OpenGL

I've been trying to render a GL_QUAD (which is shaped as a trapezoid) with a square texture. I'd like to pull this off using OpenGL alone. Right now the texture is getting heavily distorted, and it's really annoying.
Normally I would load the texture and compute a homography, but that means a lot of work and an additional linear algebra library / direct linear transform function. I'm under the impression OpenGL can simplify this process for me.
I've looked around the web and have seen "Perspective-Correct Texturing, Q Coordinates, and GLSL" and "Skewed/Sheared Texture Mapping in OpenGL".
These all seem to assume you'll do some type of homography computation, or use some parts of OpenGL I'm ignorant of... any advice?
Update:
I've been reading "Navigating Static Environments Using Image-Space Simplification and Morphing" [PDF] - page 9 appendix A.
It looks like they disable perspective correction by multiplying the (s,t,r,q) texture coordinates by the z component of the model's world-space vertex.
So for a given texture coordinate (s, r, t, q) on a quad that's shaped as a trapezoid, where the four components are:
(0.0f, 0.0f, 0.0f, 1.0f),
(0.0f, 1.0f, 0.0f, 1.0f),
(1.0f, 1.0f, 0.0f, 1.0f),
(1.0f, 0.0f, 0.0f, 1.0f)
This is as easy as glTexCoord4f(s*vert.z, r*vert.z, t, q*vert.z)? Or am I missing some step, like messing with the GL_TEXTURE glMatrixMode?
Update #2:
That did the trick! Keep it in mind, folks: this problem is all over the web and there weren't any easy answers. Most involved directly recalculating the texture with a homography between the original shape and the transformed shape, i.e. lots of linear algebra and an external BLAS library dependency.
Here is a good explanation of the issue & solution.
http://www.xyzw.us/~cass/qcoord/
working link: http://replay.web.archive.org/20080209130648/http://www.r3.nu/~cass/qcoord/
Partly copied and adapted from the link above; written by Cass:
One of the more interesting aspects of texture mapping is the space that texture coordinates live in. Most of us like to think of texture space as a simple 2D affine plane. In most cases this is perfectly acceptable, and very intuitive, but there are times when it becomes problematic.
For example, suppose you have a quad that is trapezoidal in its spatial coordinates but square in its texture coordinates.
OpenGL will divide the quad into triangles and compute the slopes of the texture coordinates (ds/dx, ds/dy, dt/dx, dt/dy) and use those to interpolate the texture coordinate over the interior of the polygon. For the lower left triangle, dx = 1 and ds = 1, but for the upper right triangle, dx < 1 while ds = 1. This makes ds/dx for the upper right triangle greater than ds/dx for the lower one. This produces an unpleasant image when texture mapped.
Texture space is not simply a 2D affine plane, even though we generally leave the r=0 and q=1 defaults alone. It's really a full-up projective space (P3)! This is good, because instead of specifying the texture coordinates for the upper vertices as (s,t) coordinates of (0, 1) and (1, 1), we can specify them as (s,t,r,q) coordinates of (0, width, 0, width) and (width, width, 0, width)! These coordinates correspond to the same location in the texture image, but LOOK at what happened to ds/dx - it's now the same for both triangles!! They both have the same dq/dx and dq/dy as well.
Note that it is still in the z=0 plane. It can become quite confusing when using this technique with a perspective camera projection because of the "false depth perception" that this produces. Still, it may be better than using only (s,t). That is for you to decide.
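Here is a minimal immediate-mode sketch of that trick, assuming a symmetric trapezoid whose top edge is k times as wide as its bottom edge (the function and its parameter are illustrative, and like all immediate-mode code it needs a current GL context):

#include <GL/gl.h>

// Scale s, t and q by k on the narrow edge: s/q and t/q still reach
// (0,1) and (1,1), but the interpolation is now projective, so the
// texture gradient matches across the quad's two triangles.
void drawTexturedTrapezoid(float k) {   // 0 < k <= 1
    glBegin(GL_QUADS);
    glTexCoord4f(0.0f, 0.0f, 0.0f, 1.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord4f(1.0f, 0.0f, 0.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord4f(k,    k,    0.0f, k   ); glVertex2f( k,    1.0f);
    glTexCoord4f(0.0f, k,    0.0f, k   ); glVertex2f(-k,    1.0f);
    glEnd();
}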
I would guess that most people wanting to fit a rectangular texture on a trapezoid are thinking of one of two results:
perspective projection: the trapezoid looks like a rectangle seen from an oblique angle.
"stretchy" transformation: the trapezoid looks like a rectangular piece of rubber that has been stretched/shrunk into shape.
Most solutions here on SO fall into the first group, whereas I recently found myself in the second.
The easiest way I found to achieve effect 2. was to split the trapezoid into a rectangle and right triangles. In my case the trapezoid was regular, so a quad and two triangles solved the problem.
Hope this can help:
Quoted from the paper:
"At each pixel, a division is performed using the interpolated values of (s/w, t/w, r/w, q/w), yielding (s/q, t/q), which are the final texture coordinates. To disable this effect is not possible in OpenGL directly."
In GLSL this is (now, at least) possible. You can add:
noperspective out vec4 v_TexCoord;
There's an explanation here:
https://www.geeks3d.com/20130514/opengl-interpolation-qualifiers-glsl-tutorial/

When loading an OBJ I get this

And this time I have loaded a model successfully! Yay!
But there's a slight problem, one that I had with another OBJ loader...
Here's what it looks like:
http://img132.imageshack.us/i/newglitch2.jpg/
Here's another angle if you can't see it right away:
http://img42.imageshack.us/i/newglitch3.jpg/
Now, this is supposed to look like a cube, but as you can see, the edges of the faces on the cube are very choppy.
Is anyone else having this problem? If anyone knows how to solve it, let me know.
Also, comment if there's any code that needs to be shown; I'll be happy to post it.
Hey, I played around with the code (changed some stuff) and this is what I have come up with.
ORIGINAL:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(50.f,(double)800 / (double)600,0.f,200.f);
glTranslatef(0.f, 0.f, -10.0f);
Result: choppy image (see the images above)
CURRENT:
glMatrixMode(GL_MODELVIEW);
gluPerspective(50.f,(double)800 / (double)600,0.f,200.f);
glTranslatef(0.f, 0.f, -50.0f);
glLoadIdentity();
Result: the model is not choppy, but I cannot move the camera (the model is right in front of me)
gluPerspective(50.f,(double)800 / (double)600,0.f,200.f);
                                              ^^^
That's your problem right there.
The near clip distance must be greater than 0 for a perspective projection. In fact, you should choose the near plane to be as far away as possible and the far clip plane to be as near as possible.
Say your depth buffer is 16 bits wide; then you slice the scene into 65536 slices, and the slice distribution follows a 1/z law. With a near distance of 0, you're technically dividing by zero.
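A quick numeric sketch of that 1/z behaviour (the formula is the z row of the standard gluPerspective matrix after the perspective divide; near = 0.1 and far = 200 are assumed example values):

#include <cstdio>

// NDC depth as a function of eye-space z (zEye < 0 is in front of the
// camera): z_ndc = (f+n)/(f-n) + 2*f*n / ((f-n)*zEye).
double depthNdc(double n, double f, double zEye) {
    return (f + n) / (f - n) + (2.0 * f * n) / ((f - n) * zEye);
}

int main() {
    for (double z : {-1.0, -10.0, -100.0})
        std::printf("zEye = %6.1f   near=0.1: z_ndc = %.4f   near=0: z_ndc = %.4f\n",
                    z, depthNdc(0.1, 200.0, z), depthNdc(0.0, 200.0, z));
}

With near = 0.1 the depths spread out (roughly 0.80, 0.98, 0.999), but with near = 0 the first term becomes f/f and the second vanishes, so every vertex lands on z_ndc = 1: the depth buffer can no longer separate any two surfaces, which matches the kind of depth fighting visible in the screenshots.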
Well, this looks like a projection setup issue: some parts of your cube, when transformed into clip space, exceed the near/far planes.
From what I see, you are using an orthographic projection matrix; that is standard for 2D UI. Please review the nearVal and farVal of your glOrtho call. For 2D UI they are usually set to -1 and 1 respectively (or 0 and 1), so you may want to either scale the cube down or increase the view-frustum depth by modifying those parameters.

OpenGL: scale then translate? and how?

I've got some 2D geometry. I want to take some bounding rect around my geometry, and then render a smaller version of it somewhere else on the plane. Here's more or less the code I have to do scaling and translation:
// source and dest are arbitrary rectangles.
float scaleX = dest.width / source.width;
float scaleY = dest.height / source.height;
float translateX = dest.x - source.x;
float translateY = dest.y - source.y;
glScalef(scaleX, scaleY, 0.0);
glTranslatef(translateX, translateY, 0.0);
// Draw geometry in question with its normal verts.
This works exactly as expected for a given dimension when the dest origin is 0. But if the origin for, say, x is nonzero, the result is still scaled correctly but looks like it's translated to somewhere near zero on that axis anyway; it turns out it's not exactly the same as if dest.x were zero.
Can someone point out something obvious I'm missing?
Thanks!
FINAL UPDATE Per Bahbar's and Marcus's answers below, I did some more experimentation and solved this. Adam Bowen's comment was the tip-off. I was missing two critical facts:
I needed to be scaling around the center of the geometry I cared about.
I needed to apply the transforms in the opposite order of the intuition (for me).
The first is kind of obvious in retrospect. But for the latter, for other good programmers / bad mathematicians like me: it turns out my intuition was operating in what the Red Book calls a "grand, fixed coordinate system", in which there is an absolute plane and your geometry moves around on that plane via transforms. This is fine, but given how the math stacks multiple transforms into one matrix, it's the opposite of how things really work (see the answers below or the Red Book for more). Basically, the transforms are "applied" in the reverse of the order in which they appear in the code. Here's the final working solution:
// source and dest are arbitrary rectangles.
float scaleX = dest.width / source.width;
float scaleY = dest.height / source.height;
Point sourceCenter = centerPointOfRect(source);
Point destCenter = centerPointOfRect(dest);
glTranslatef(destCenter.x, destCenter.y, 0.0);       // 3. move the scaled geometry to its destination
glScalef(scaleX, scaleY, 0.0);                       // 2. scale about the origin
glTranslatef(-sourceCenter.x, -sourceCenter.y, 0.0); // 1. bring the source center to the origin
// Draw geometry in question with its normal verts.
In OpenGL, matrices you specify are multiplied onto the right of the existing matrix, with the vertex at the far right of the expression.
Thus, the last operation you specify is in the coordinate system of the geometry itself.
(The first is usually the view transform, i.e. the inverse of your camera's to-world transform.)
Bahbar makes a good point that you need to consider the center point for scaling (or the pivot point for rotations). Usually you translate there, rotate/scale, then translate back (or, in general: apply the basis transform, the operation, then the inverse). This is called a change of basis, which you might want to read up on.
Anyway, to get some intuition about how it works, try some simple values (zero, etc.), then alter them slightly (perhaps with an animation), and watch what happens to the output. It then becomes much easier to see what your transforms are actually doing to your geometry.
Update
That the order seems "reversed" with respect to intuition is rather common among beginning OpenGL coders. I've been tutoring a computer-graphics course, and many react in a similar manner. It becomes easier to think about how OpenGL does it if you consider the use of glPushMatrix/glPopMatrix while rendering a tree (scene graph) of transforms and geometries. Then the current order of things becomes rather natural, and the opposite would make it rather difficult to get anything useful done.
Scale, just like Rotate, operates from the origin. So if you scale by one half an object that spans the segment [10:20] (on the x axis, for example), you get [5:10]. The object was therefore scaled, and also moved closer to the origin: exactly what you observed.
This is why you generally apply Scale first (because objects tend to be defined around 0).
So if you want to scale an object around a point Center, you can translate the object from Center to the origin, scale there, and then translate back.
Side note: if you translate first and then scale, the scale is applied on top of the previous translation, which is why you probably had issues with that method.
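A minimal sketch of the same change-of-basis pattern with GLM instead of the matrix stack (the names are illustrative; the z scale is kept at 1 rather than 0 so the matrix stays invertible):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Scale about srcCenter, then move the result to dstCenter. The calls
// read top to bottom, but they apply to each vertex bottom to top,
// exactly like the glTranslatef/glScalef sequence above.
glm::mat4 scaleAbout(glm::vec2 srcCenter, glm::vec2 dstCenter, glm::vec2 s) {
    glm::mat4 m(1.0f);
    m = glm::translate(m, glm::vec3(dstCenter, 0.0f));  // 3. move to destination
    m = glm::scale(m, glm::vec3(s, 1.0f));              // 2. scale around the origin
    m = glm::translate(m, glm::vec3(-srcCenter, 0.0f)); // 1. bring the pivot to the origin
    return m; // use as: m * vec4(vertex, 0.0, 1.0)
}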
I haven't played with OpenGL ES, just a bit with OpenGL.
It sounds like you want to transform from a different position as opposed to the origin. Not sure, but can you try doing the transforms and the drawing within glPushMatrix() and glPopMatrix()? E.g.:
// source and dest are arbitrary rectangles.
float scaleX = dest.width / source.width;
float scaleY = dest.height / source.height;
float translateX = dest.x - source.x;
float translateY = dest.y - source.y;
glPushMatrix();
glScalef(scaleX, scaleY, 0.0);
glTranslatef(translateX, translateY, 0.0);
// Draw geometry in question with its normal verts.
//as if it were drawn from 0,0
glPopMatrix();
Here's a simple Processing sketch I wrote to illustrate the point:
import processing.opengl.*;
import javax.media.opengl.*;

void setup() {
  size(500, 400, OPENGL);
}

void draw() {
  background(255);
  PGraphicsOpenGL pgl = (PGraphicsOpenGL) g;
  GL gl = pgl.beginGL();

  gl.glPushMatrix();
  // transform the 'pivot'
  gl.glTranslatef(100, 100, 0);
  gl.glScalef(10, 10, 10);
  // draw something from the 'pivot'
  gl.glColor3f(0, 0.77, 0);
  drawTriangle(gl);
  gl.glPopMatrix();

  // matrix popped, we're back at the origin (0,0,0), continue as normal
  gl.glColor3f(0.77, 0, 0);
  drawTriangle(gl);

  pgl.endGL();
}

void drawTriangle(GL gl) {
  gl.glBegin(GL.GL_TRIANGLES);
  gl.glVertex2i(10, 0);
  gl.glVertex2i(0, 20);
  gl.glVertex2i(20, 20);
  gl.glEnd();
}
Here is an image of the sketch running: the same green triangle is drawn with translation and scale applied, then the red one outside the push/pop 'block', so it is not affected by the transform.
HTH,
George