I am drawing a bunch of billboards/triangles in OpenGL/Rust. I get artefacts where half my screen is covered in blurry color. After debugging, I found that after the transformation some of the vertices become extremely large. I will put a minimal reproducible example below, in Rust with the glam library for matrices.
use glam::{vec3, vec4, Mat4, Vec4Swizzles};

let camera_position = [-696.2638, 31.615665, 346.61575];
let camera_front = [0.72691596, -0.2045585, 0.65555245];
let vtx_pos = [-714.08, 21.8639, 363.706];
let fov: f32 = 1.012;
let width = 1920_f32;
let height = 1080_f32;
let up = vec3(0.0, 1.0, 0.0).normalize();
let camera_position = vec3(camera_position[0], camera_position[1], camera_position[2]);
let camera_front = vec3(camera_front[0], camera_front[1], camera_front[2]);
let camera_dvec = camera_position + camera_front;
dbg!(camera_dvec);
// make matrix
let view = Mat4::look_at_lh(camera_position, camera_dvec, up);
dbg!(view);
let projection = Mat4::perspective_lh(fov, width / height, 1.0, 5000.0);
dbg!(projection);
let vp = projection * view;
dbg!(vp);
// left bottom vertex
let world_vtx = vec4(vtx_pos[0], vtx_pos[1], vtx_pos[2], 1.0);
dbg!(world_vtx);
let clip_vtx = vp * world_vtx;
dbg!(clip_vtx);
let ndc_vtx = clip_vtx.xyz() / clip_vtx.w;
dbg!(ndc_vtx);
From the above code, we get:
[src/main.rs:29] world_vtx = Vec4(
-714.08,
21.8639,
363.706,
1.0,
)
[src/main.rs:31] clip_vtx = Vec4(
-24.995728,
-17.885654,
-0.7529907,
0.24725342,
)
[src/main.rs:33] ndc_vtx = Vec3(
-101.09356,
-72.33734,
-3.045421,
)
I think the W component of clip_vtx should be around the value of -Z, but its small value is causing the normalized device coordinates after the perspective division to become huge. This makes the triangles obnoxiously large, and whenever part of such a triangle enters my view frustum I get weird color artefacts.
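To sanity-check that, here is a quick sketch (reusing the values from the snippet above, and assuming that for a standard perspective matrix the clip-space w is essentially the view-space depth) that measures how far the vertex actually sits along the view direction:
use glam::vec3;

// Sketch: how far is the vertex along the camera's view direction?
// Values copied from the snippet above.
let to_vertex = vec3(-714.08, 21.8639, 363.706)
    - vec3(-696.2638, 31.615665, 346.61575);
let depth_along_view = to_vertex.dot(vec3(0.72691596, -0.2045585, 0.65555245));
dbg!(depth_along_view);
// Prints roughly 0.25: the vertex is only about a quarter of a unit in front of
// the camera, well inside the 1.0 near plane, which matches the tiny clip-space
// w of ~0.247 in the output above and explains the huge values after dividing.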
Can someone help me find out what I am doing wrong? Or is this a normal part of matrix transformations that the tutorials don't teach?
From what I learnt asking around in gamedev Discords:
I create a billboard at a position that is right behind the camera. This causes a kind of discontinuity: the left vertices stay on the extreme left while the right vertices, because of the changing sign, jump to the extreme right, so they become HUGE in NDC and create issues if I don't clip them manually. I still don't know how to deal with that discontinuity. Right now I just calculate the distance between the left and right vertices in NDC, and if it is bigger than 2.0 I skip rendering that billboard.
Is there a better way of determining which positions will cause a billboard to become huge?
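One way I could do this (a sketch of the idea, not something I have battle-tested): test in clip space, before the division. For a standard perspective matrix, w is the view-space depth of the vertex, so any corner whose clip-space w is at or below zero sits at or behind the camera plane, and that is exactly the vertex that explodes after the divide:
use glam::{Mat4, Vec4};

// Sketch: flag billboards that have any corner at or behind the camera plane.
// `vp` is the same projection * view matrix as above; `corners` are the four
// billboard corners in world space with w = 1.0.
fn billboard_behind_camera(vp: Mat4, corners: &[Vec4; 4]) -> bool {
    corners.iter().any(|&c| (vp * c).w <= f32::EPSILON)
}
// The full hardware-style test would also check -w <= x <= w and -w <= y <= w
// (plus a depth bound that depends on the projection's depth range), but the
// w check alone catches the "huge NDC" case described here.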
It took a few hours of asking around and poking experienced people on a graphics programming Discord, but I now have a better idea of what's happening.
Now that I had reframed my question, I could also find similar questions asked on Stack Overflow in the past. I checked a few of them and found a good answer:
https://stackoverflow.com/a/65168709/7484780 .
The whole clipping issue was because I send the triangle/billboard vertices as a vec3 that has already been divided by w on the CPU, and in the vertex shader I do gl_Position = vec4(pos, 1.0), so OpenGL doesn't do any clipping. At the same time, I don't do any clipping myself before the perspective division. The unclipped billboards, especially the ones right behind the camera, are the artefacts I am getting.
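In terms of the first snippet, the fix is to stop doing the division on the CPU and hand OpenGL a position with the real w still attached, either by uploading the clip-space vec4 directly or by uploading world-space positions plus the view-projection matrix and doing the multiply in the vertex shader (a_clip_pos and u_view_proj below are hypothetical attribute/uniform names). A minimal sketch of the CPU side:
// Sketch of the fix: keep the vertex in homogeneous clip space and do NOT
// divide by w here. The vertex shader then just forwards it
// (gl_Position = a_clip_pos;), or, equivalently, receives the world-space
// position and does gl_Position = u_view_proj * vec4(a_pos, 1.0);. Either way
// the GPU clips against the real w before doing its own perspective divide.
let clip_vtx = vp * world_vtx; // upload this Vec4, not clip_vtx.xyz() / clip_vtx.w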
Related
I was having a problem with OpenGL where all my textures were being rendered upside down. I did the most logical thing I could think of and tried reversing the order of the mapping, and that fixed the issue. But why? I know that OpenGL's coordinate system is Cartesian and that texturing works by mapping each UV to a specific vertex. I had something like [0,0], [0,1], [1,1], [1,0], which, as far as I understand it, would theoretically go from:
top-left -> bottom-left -> bottom-right -> top-right.
I changed this to [1,1], [1,0], [0,0], [0,1], which would represent:
bottom-right -> top-right -> top-left -> bottom-left.
In my understanding it should be quite the opposite. Making a quick sketch on paper shows me that the initial order should theoretically render my texture correctly, not upside down. Am I correct, and is something else messing with my rendering, like my perspective matrix?
This is my orthographic matrix:
Matrix4f ortho = new Matrix4f();
ortho.setIdentity();
float zNear = 0.01f;
float zFar = 100f;
ortho.m00 = 2 / (float) RenderManager.getWindowWidth();
ortho.m11 = 2 / -(float) RenderManager.getWindowHeight();
ortho.m22 = -2 / (zFar - zNear);
return ortho;
I can't really say I understand it, though; I had quite a hard time with it. Looking through YouTube tutorials and articles, you can see that most people don't really understand it either; they just use it. I do have a good linear algebra background, but I still can't wrap my head around how this matrix normalizes my coordinates (screen coordinates to OpenGL coordinates in (-1, 1)). Anyway, I digress; any help is appreciated.
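For intuition, here is the same normalization written out by hand (a sketch in Rust rather than Java, independent of the matrix code above; the function name is made up):
// Sketch: what an orthographic projection does per axis, written out explicitly.
// Maps x in [0, width] to [-1, 1] and y in [0, height] to [1, -1] (top-left origin).
fn window_to_ndc(x: f32, y: f32, width: f32, height: f32) -> (f32, f32) {
    let ndc_x = 2.0 * x / width - 1.0;  // scale by 2/width, then shift left by 1
    let ndc_y = 1.0 - 2.0 * y / height; // scale by -2/height, then shift up by 1
    (ndc_x, ndc_y)
}
// The m00 = 2/width and m11 = -2/height entries are exactly these scales; a full
// ortho matrix also carries the -1/+1 offsets in its translation column, and m22
// squeezes the zNear..zFar range into the depth range in the same way.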
Hullo, I want to implement a simple 2D lighting technique in GLSL. My projection matrix is set up so that the top-left corner of the window is (0, 0) and the bottom-right is (window.width, window.height). I have one uniform variable in the fragment shader, uniform vec2 lightPosition;, which is currently set to the mouse position (again, in the same coordinate system). I have also calculated the distance from the light to the pixel.
I want to light up the pixel according to its distance from the light source. But here's the catch: I don't want to light it up more than its original color. For instance, if the color of the pixel is (1, 0, 0) (red), no matter how close the light gets to it, it should not change beyond that, which would add annoying specularity. And the farther the light source moves away from the pixel, the darker I want it to get.
I really feel that I'm close to getting what I want, but I just can't get it!
I would really appreciate some help. I feel that this is a rather simple code to implement (and I feel ashamed for not knowing it).
Why not scale the distance into the <0..1> range by clamping it to some maximum visibility distance vd and dividing by it:
d = min( length(fragment_pos-light_pos) , vd ) / vd;
That should give you the fragment-to-light distance in the <0..1> range. Now you can optionally apply a simple nonlinearization if you want (using pow, which does not change the range):
d = pow(d,0.5);
or
d = pow(d,2.0);
depending on what you think looks better (you can play with the exponent), and finally compute the color:
col = face_color * ((1.0-d)*0.8 + 0.2);
where 0.8 is your light source strength and 0.2 is the ambient lighting.
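Putting the pieces together, here is the same math as a plain function (a sketch in Rust rather than GLSL, with made-up parameter names; in the real thing this lives in the fragment shader):
// Sketch: distance-based attenuation that never exceeds the original color.
// `frag_pos` and `light_pos` are in the same window-space coordinates the
// question uses; `max_dist` is the chosen maximum visibility distance (vd).
fn lit_color(face_color: [f32; 3], frag_pos: [f32; 2], light_pos: [f32; 2], max_dist: f32) -> [f32; 3] {
    let dx = frag_pos[0] - light_pos[0];
    let dy = frag_pos[1] - light_pos[1];
    let d = (dx * dx + dy * dy).sqrt().min(max_dist) / max_dist; // 0..1
    let d = d.powf(2.0);                                         // optional shaping
    let intensity = (1.0 - d) * 0.8 + 0.2;                       // 0.8 strength, 0.2 ambient
    // The factor stays in 0.2..1.0, so it can only darken, never exceed face_color.
    [face_color[0] * intensity, face_color[1] * intensity, face_color[2] * intensity]
}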
The problem I have is that I can't get the "line of sight" vector in OpenGL. I've done some research and found that it should be the Z vector after the transformations, but it doesn't want to work. I've got this code to retrieve the velocity of the block (I want it to move forward, away from the "camera"), but the block always moves independently of the camera, in the same direction relative to the rendered world:
GLfloat matrix[16];
glGetFloatv (GL_MODELVIEW_MATRIX, matrix);
GLfloat d = sqrt( matrix[8]*matrix[8] + matrix[9]*matrix[9] + matrix[10]*matrix[10]);
xmov = matrix[8]/d;
ymov = matrix[9]/d;
zmov = matrix[10]/d;
What have I done wrong?
Okay, now that you have clarified what you really want to do, I'm pretty sure this is the correct answer:
You are probably used to something called the ModelView matrix. Didn't it seem strange to you that it's essentially a combination of two parts? Well, it did to me, and after I thought about it for a while it made sense. The final vertex position is calculated like this:
gl_Position = ProjectionMat * ModelViewMat * VertexPos;
You see, there's no difference for OpenGL whether you move the "camera" from the origin by [x,y,z] or move the objects by [-x,-y,-z]; you'll get the same results. It is, however, useful to treat the "camera" position as something that can be different from the origin:
gl_Position = ProjectionMat * ViewMat * ModelMat * VertexPos;
I think the most natural way to do this is, as I said, to split the calculation into two matrices: Model and View. Every object in the scene now changes the Model matrix, and the camera position is set by changing the View matrix. Makes sense?
I'll give you an example. If your camera is at [5,0,0] (which corresponds to Translate(-5,0,0)) and your object is at [-5,0,0], the object will end up 10 units from the camera. Now, when you move your camera further away from the origin (increasing that first translation distance), the distance between the "camera" and the object grows.
The object translation is the Model, the camera translation is the View.
So it's not hard to come to the conclusion that if you want to ignore the camera position, you just strip the View part from the equation; every object will then be drawn without taking the camera position into account, relative only to your viewport:
gl_Position = ProjectionMat * ModelMat * VertexPos;
Our hypothetical model will now land 5 units along the X axis from the viewport regardless of what you're currently "looking at", and that's, I think, pretty much what you wanted to achieve.
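The same decomposition can be written down with a math library to see the effect (a sketch using glam, as in the first question on this page; the numbers simply mirror the example above):
use glam::{vec3, Mat4, Vec4};

// Sketch: the View and Model parts kept separate. The projection would be
// applied on top in both cases.
let view = Mat4::from_translation(vec3(-5.0, 0.0, 0.0));  // camera at [5,0,0]
let model = Mat4::from_translation(vec3(-5.0, 0.0, 0.0)); // object at [-5,0,0]
let vertex = Vec4::new(0.0, 0.0, 0.0, 1.0);

let camera_relative = view * model * vertex; // (-10, 0, 0, 1): 10 units from the camera
let camera_ignored = model * vertex;         // (-5, 0, 0, 1): View stripped, always
                                             // 5 units along X from the viewport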
Anyway, you could probably use a nice tutorial about it.
I'm developing an OpenGL 2.1 application using shaders and I'm having a problem with my per-fragment lighting. The lighting is correct when my scene initially loads, but as I navigate around the scene the lighting moves around with the "camera" rather than staying in a static location. For example, if I place my light way off to the right, the right side of objects will be illuminated. If I then move the camera to the other side of the object and point it in the opposite direction, the lighting is still on the right side of the object (rather than on the left, as it should be now). I assume that I am calculating the lighting in the wrong coordinate system, and that this is causing the wrong behavior. I'm calculating the lighting the following way...
In vertex shader...
ECPosition = gl_ModelViewMatrix * gl_Vertex;
WCNormal = gl_NormalMatrix * vertexNormal;
where vertexNormal is the normal in object/model space.
In the fragment shader...
float lightIntensity = 0.2; //ambient
lightIntensity += max(dot(normalize(LightPosition - vec3(ECPosition)), WCNormal), 0.0) * 1.5; //diffuse
where LightPosition is, for example, (100.0, 10.0, 0.0), which would put the light on the right side of the world as described above. The part I'm unsure of is the gl_NormalMatrix part. I'm not exactly sure what this matrix is or what coordinate space it puts my normal into (I assume world space). If the normal is put into world space, then I figured the problem was that ECPosition is in eye space while LightPosition and WCNormal are in world space. Something about this doesn't seem right, but I can't figure it out. I also tried putting ECPosition into world space by multiplying it by my own modelMatrix, which only contains the transformations I do to get the coordinate into world space, but this didn't work. Let me know if I need to provide other information about my shaders or code.
gl_NormalMatrix transforms your normal into eye-space (see this tutorial for more details).
I think that in order for your light to have a static world position, you ought to transform your LightPosition into eye space as well, by multiplying it by your current view matrix.
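A sketch of that on the CPU side (using glam for illustration, with made-up names; in practice you would either upload the already-transformed position as the uniform or multiply by the view matrix in the shader):
use glam::{Mat4, Vec3};

// Sketch: bring the world-space light position into the same eye space that
// ECPosition (and the gl_NormalMatrix-transformed normal) live in.
fn light_to_eye_space(view: Mat4, light_world: Vec3) -> Vec3 {
    // w = 1.0 because the light position is a point, not a direction.
    (view * light_world.extend(1.0)).truncate()
}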
I've got some 2D geometry. I want to take a bounding rect around my geometry and then render a smaller version of it somewhere else on the plane. Here's more or less the code I have to do the scaling and translation:
// source and dest are arbitrary rectangles.
float scaleX = dest.width / source.width;
float scaleY = dest.height / source.height;
float translateX = dest.x - source.x;
float translateY = dest.y - source.y;
glScalef(scaleX, scaleY, 0.0);
glTranslatef(translateX, translateY, 0.0);
// Draw geometry in question with its normal verts.
This works exactly as expected for a given dimension when the dest origin is 0. But if the origin for, say, X is nonzero, the result is still scaled correctly but appears to be translated to something near zero on that axis anyway; it turns out it's not exactly the same as if dest.x were zero.
Can someone point out something obvious I'm missing?
Thanks!
FINAL UPDATE: Per Bahbar's and Marcus's answers below, I did some more experimentation and solved this. Adam Bowen's comment was the tip-off. I was missing two critical facts:
I needed to be scaling around the center of the geometry I cared about.
I needed to apply the transforms in the opposite order of the intuition (for me).
The first is kind of obvious in retrospect. But for the latter, for other good programmers/bad mathematicians like me: it turns out my intuition was operating in what the Red Book calls a "Grand, Fixed Coordinate System", in which there is an absolute plane and your geometry moves around on that plane using transforms. This is OK, but given the nature of the math behind stacking multiple transforms into one matrix, it's the opposite of how things really work (see the answers below or the Red Book for more). Basically, the transforms are "applied" in the "reverse order" of how they appear in the code. Here's the final working solution:
// source and dest are arbitrary rectangles.
float scaleX = dest.width / source.width;
float scaleY = dest.height / source.height;
Point sourceCenter = centerPointOfRect(source);
Point destCenter = centerPointOfRect(dest);
glTranslatef(destCenter.x, destCenter.y, 0.0);
glScalef(scaleX, scaleY, 0.0);
glTranslatef(sourceCenter.x * -1.0, sourceCenter.y * -1.0, 0.0);
// Draw geometry in question with its normal verts.
In OpenGL, the matrices you specify are multiplied onto the right of the existing matrix, and the vertex is on the far right of the expression.
Thus, the last operation you specify is in the coordinate system of the geometry itself.
(The first is usually the view transform, i.e. the inverse of your camera's to-world transform.)
Bahbar makes a good point that you need to consider the center point for scaling (or the pivot point for rotations). Usually you translate there, rotate/scale, then translate back (or, in general: apply the basis transform, then the operation, then the inverse). This is called a change of basis, which you might want to read up on.
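In matrix form, the translate-there / operate / translate-back pattern looks like this (a sketch using glam for brevity, using the [10:20] segment from Bahbar's answer below; with the fixed-function API it is the glTranslatef / glScalef / glTranslatef sequence from the question's final update):
use glam::{vec3, Mat4, Vec3};

// Sketch: scale by `scale` about the pivot `center` instead of about the origin.
// Reading right to left: move the pivot to the origin, scale, move it back.
fn scale_about(center: Vec3, scale: Vec3) -> Mat4 {
    Mat4::from_translation(center)
        * Mat4::from_scale(scale)
        * Mat4::from_translation(-center)
}

// Example: halving the segment [10, 20] on X about its center (15) gives
// [12.5, 17.5], instead of the [5, 10] you get when scaling about the origin.
let m = scale_about(vec3(15.0, 0.0, 0.0), vec3(0.5, 1.0, 1.0));
let left = m.transform_point3(vec3(10.0, 0.0, 0.0));  // -> (12.5, 0, 0)
let right = m.transform_point3(vec3(20.0, 0.0, 0.0)); // -> (17.5, 0, 0)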
Anyway, to get some intuition about how it works, try some simple values (zero, etc.), then alter them slightly (perhaps in an animation) and see what happens to the output. It then becomes much easier to see what your transforms are actually doing to your geometry.
Update
That the order is "reversed" with respect to intuition is rather common among beginner OpenGL coders. I've been tutoring a computer graphics course, and many students react in a similar manner. It becomes easier to think about how OpenGL does it if you consider the use of glPushMatrix/glPopMatrix while rendering a tree (scene graph) of transforms and geometries. Then the current order of things becomes rather natural, and the opposite would make it rather difficult to get anything useful done.
Scale, just like Rotate, operates from the origin. So if you scale by half an object that spans the segment [10:20] (on the X axis, for example), you get [5:10]. The object was therefore scaled, and also moved closer to the origin; exactly what you observed.
This is why you generally apply Scale first (because objects tend to be defined around 0).
So if you want to scale an object around point Center, you can translate the object from Center to the origin, scale there, and translate back.
Side note: if you translate first and then scale, the scale is applied to the previous translation as well, which is probably why you had issues with that approach.
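A tiny numeric check of that side note (a sketch with glam; the fixed-function calls compose the same way):
use glam::{vec3, Mat4};

// Apply both orders to the point (10, 0, 0): scale by 0.5 on X, translate by 100 on X.
let s = Mat4::from_scale(vec3(0.5, 1.0, 1.0));
let t = Mat4::from_translation(vec3(100.0, 0.0, 0.0));
let p = vec3(10.0, 0.0, 0.0);

let scale_then_translate = (t * s).transform_point3(p); // (105, 0, 0)
let translate_then_scale = (s * t).transform_point3(p); // (55, 0, 0): the 100-unit
                                                        // translation got scaled too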
I haven't played with OpenGL ES, just a bit with OpenGL.
It sounds like you want to transform from a position other than the origin. I'm not sure, but can you try doing the transforms and draws for that bit between glPushMatrix() and glPopMatrix()?
e.g.
// source and dest are arbitrary rectangles.
float scaleX = dest.width / source.width;
float scaleY = dest.height / source.height;
float translateX = dest.x - source.x;
float translateY = dest.y - source.y;
glPushMatrix();
glScalef(scaleX, scaleY, 0.0);
glTranslatef(translateX, translateY, 0.0);
// Draw geometry in question with its normal verts.
//as if it were drawn from 0,0
glPopMatrix();
Here's a simple Processing sketch I wrote to illustrate the point:
import processing.opengl.*;
import javax.media.opengl.*;
void setup() {
size(500, 400, OPENGL);
}
void draw() {
background(255);
PGraphicsOpenGL pgl = (PGraphicsOpenGL) g;
GL gl = pgl.beginGL();
gl.glPushMatrix();
//transform the 'pivot'
gl.glTranslatef(100,100,0);
gl.glScalef(10,10,10);
//draw something from the 'pivot'
gl.glColor3f(0, 0.77, 0);
drawTriangle(gl);
gl.glPopMatrix();
//matrix popped, we're back at the origin (0,0,0), continue as normal
gl.glColor3f(0.77, 0, 0);
drawTriangle(gl);
pgl.endGL();
}
void drawTriangle(GL gl){
gl.glBegin(GL.GL_TRIANGLES);
gl.glVertex2i(10, 0);
gl.glVertex2i(0, 20);
gl.glVertex2i(20, 20);
gl.glEnd();
}
Here is an image of the sketch running: the same green triangle is drawn with the translation and scale applied, then the red one outside the push/pop 'block', so it is not affected by the transform:
HTH,
George