Alright, so I started looking up tutorials on OpenGL for the sake of making a Minecraft mod. I still don't know too much about it, because I figured I really shouldn't have to for the small modification I want, but this is giving me such a headache. All I want to do is properly map a texture onto an irregular concave quad.
I went into OpenGL to figure out how to do this before trying to run code in the game. I've read that I need to do a perspective-correct transformation, and I've gotten it to work for trapezoids, but for the life of me I can't figure out how to do it when neither pair of edges is parallel. I've looked at this: http://www.imagemagick.org/Usage/scripts/perspective_transform, but I really don't have a clue where the "8 constants" the author talks about came from, or whether it will even help me. I've also been told to do calculations with matrices, but I've got no idea how much of that OpenGL does or doesn't take care of.
I've looked at several posts about this, and the answer I need is somewhere in them, but I can't make heads or tails of 'em. I can never find a post that tells me what arguments to pass to glTexCoord4f() to get the perspective-correct transform.
If you're thinking of the "Getting to know the Q coordinate" page as a solution to my problem, I'm afraid I've already looked at it, and it has failed me.
Is there something I'm missing? I feel like this should be a lot easier. I find it hard to believe that OpenGL, with all its bells and whistles, would have nothing for making something other than a rectangle.
So, I hope I haven't pissed you off too much with my cluelessness, and thanks in advance.
EDIT: I should make clear that I know OpenGL does the perspective transform for you when your view of the quad is not orthogonal; in that case I'd just change the z coordinates or my FOV. I'm looking to smoothly texture a non-rectangular quadrilateral, not to put a rectangular shape in a certain FOV.
OpenGL will do a perspective-correct transform for you. I believe you're simply facing the issue of quad vs. triangle interpolation. The difference between affine and perspective-correct transforms is related to the geometry being in 3D, where interpolation in image space is non-linear. Think of looking down a road: the evenly spaced lines appear closer together in the distance. Anyway, back to triangles vs. quads...
Here are some related posts:
How to do bilinear interpolation of normals over a quad?
Low polygon cone - smooth shading at the tip
https://gamedev.stackexchange.com/questions/66312/quads-vs-triangles
Applying color to single vertices in a quad in opengl
An answer to this one provides a possible solution, but it's not simple:
The usual approach to solving this is to perform the interpolation "manually" in a fragment shader that takes the target topology into account, in your case a quad. In short, you have to perform barycentric interpolation based not on a triangle but on a quad. You might also want to apply perspective correction.
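If you want to stay in the fixed-function pipeline and feed glTexCoord4f() directly, there is also the classic diagonal-intersection trick: scale each corner's (s, t) by a weight q derived from the distances of the corners to the point where the quad's diagonals cross, and pass q as the fourth texture coordinate. Below is a minimal sketch of that trick; the Vec2 type, the helper names, and the (0,0)-(1,1) texture corners are my own, and it only works for convex quads, since the diagonals of a concave quad don't cross inside it:

#include <math.h>
#include <GL/gl.h>

typedef struct { float x, y; } Vec2;

static float dist(Vec2 a, Vec2 b)
{
    return sqrtf((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

/* Draw a convex quad p[0..3] with texture corners (0,0), (1,0), (1,1), (0,1).
 * c must be the intersection of the diagonals p0-p2 and p1-p3. Passing
 * (s*q, t*q, 0, q) makes the fixed-function interpolation projective over
 * the whole quad instead of affine per triangle. */
void drawProjectiveQuad(const Vec2 p[4], Vec2 c)
{
    static const float uv[4][2] = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
    glBegin(GL_QUADS);
    for (int i = 0; i < 4; ++i)
    {
        float d    = dist(p[i], c);           /* this corner to the crossing */
        float dOpp = dist(p[(i + 2) % 4], c); /* opposite corner to the crossing */
        float q    = (d + dOpp) / dOpp;
        glTexCoord4f(uv[i][0] * q, uv[i][1] * q, 0.0f, q);
        glVertex2f(p[i].x, p[i].y);
    }
    glEnd();
}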
The first thing you should know is that nothing is easy with OpenGL. It's a very complex state machine with a lot of quirks and a poor interface for developers.
That said, I think you're confusing a lot of different things. To draw a textured rectangle with perspective correction, you simply draw a textured rectangle in 3D space after setting the projection matrix appropriately.
First, you need to set up the projection you want. From your description, you need a perspective projection. In OpenGL, you usually have two main matrices you're concerned with: projection and model-view. The projection matrix is sort of like your "camera".
How you do the above depends on whether you're using legacy OpenGL (before version 3.0) or modern, Core Profile OpenGL. Here are two ways to do it, depending on which you're using.
#include <math.h>

#define PI_OVER_360 0.0087266462599716f /* pi / 360: degrees to radians of the half-angle */

void BuildPerspProjMat(float *m, float fov, float aspect, float znear, float zfar)
{
    float xymax = znear * tan(fov * PI_OVER_360);
    float ymin = -xymax;
    float xmin = -xymax;

    float width  = xymax - xmin;
    float height = xymax - ymin;

    float depth = zfar - znear;
    float q  = -(zfar + znear) / depth;
    float qn = -2 * (zfar * znear) / depth;

    float w = 2 * znear / width;
    w = w / aspect;
    float h = 2 * znear / height;

    /* column-major order, as glLoadMatrixf expects */
    m[0]  = w;  m[1]  = 0;  m[2]  = 0;   m[3]  = 0;
    m[4]  = 0;  m[5]  = h;  m[6]  = 0;   m[7]  = 0;
    m[8]  = 0;  m[9]  = 0;  m[10] = q;   m[11] = -1;
    m[12] = 0;  m[13] = 0;  m[14] = qn;  m[15] = 0;
}
Here is how to use it in OpenGL 1.x / 2.x code:
float m[16] = {0};
float fov=60.0f; // in degrees
float aspect=1.3333f;
float znear=1.0f;
float zfar=1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(m);
// okay we can switch back to modelview mode
// for all other matrices
glMatrixMode(GL_MODELVIEW);
With modern OpenGL (3.0+) code, we must use GLSL shaders and uniform variables to pass and use the transformation matrices:
float m[16] = {0};
float fov = 60.0f; // in degrees
float aspect = 1.3333f;
float znear = 1.0f;
float zfar = 1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);

glUseProgram(shaderId);
// glUniformMatrix4fv takes a uniform location, not a name string:
GLint projMatLoc = glGetUniformLocation(shaderId, "projMat");
glUniformMatrix4fv(projMatLoc, 1, GL_FALSE, m);
RenderObject();
glUseProgram(0);
Since I've not used Minecraft, I don't know whether it gives you a projection matrix to use or if you have the other information to construct it.
Related
I am working on a tile-based rendering application in GLES. It works like tile-based maps (OSM, etc.): as the user zooms in, smaller and smaller tiles are displayed. For the projection, I set up an orthographic matrix (using the GLM library) that looks like this:
auto projMatrix = glm::ortho(-viewCenter_.x / scale_,
viewCenter_.x / scale_,
viewCenter_.y / scale_,
-viewCenter_.y / scale_,
-1.0f, 1.0f);
You can see the parameters are viewCenter_ (which doesn't change; it is half the screen resolution) and scale_, which changes as the user zooms in or out. All the numbers are of float type.
Then I multiply this projection with the translation and model matrices to get the final MVP, which is passed to GLSL. The problem I am seeing is that when the scene is zoomed in a lot (scale_ > 200000), the movement stops being smooth and the shapes start to jitter slightly.
Here is an example of a model matrix:
transform_ = glm::scale(glm::translate(glm::mat4(), { x * tileSize, y * tileSize, 0.f }), { tileSize, tileSize, 1.f });
I am guessing this is due to floating-point precision, but I have no idea how to fix it. I don't think simply replacing the variables with doubles would help.
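A common mitigation for this kind of jitter (a hedged suggestion on my part, not something from the post) is to do the large translation in double precision on the CPU, relative to the camera, so the values that reach the float matrices stay small. The function and variable names below are hypothetical:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the model matrix relative to the camera: the huge world-space
// offsets cancel while still in double precision, and only the small
// remainder is cast down to float.
glm::mat4 tileModelMatrix(double tileX, double tileY, double tileSize,
                          double camX, double camY)
{
    float rx = (float)(tileX * tileSize - camX);
    float ry = (float)(tileY * tileSize - camY);
    float ts = (float)tileSize;
    return glm::scale(glm::translate(glm::mat4(1.0f), glm::vec3(rx, ry, 0.0f)),
                      glm::vec3(ts, ts, 1.0f));
}

The view matrix then treats the camera as sitting at the origin, and the orthographic projection stays as it is.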
I'm experimenting with implementing "as simple as possible" SSR in GLSL. Any chance someone could help me set up an extremely basic SSR shader?
I do not need (for now) any roughness/metalness calculations, no Fresnel, no fade-in/out effect, nothing fancy. I just want the simplest setup that I can understand, learn from, and maybe later improve upon.
I have 4 source textures: Color, Position, Normal, and a copy of the previous final frame's image (Reflection)
attributes.position is the position texture (FLOAT16F) - WorldSpace
attributes.normal is the normal texture (FLOAT16F) - WorldSpace
sys_CameraPosition is the eye's position in WorldSpace
rd is supposed to be the reflection direction in WorldSpace
texture(SOURCE_2, projectedCoord.xy) is the position texture at the current reflection-dir endpoint
vec3 rd = normalize(reflect(attributes.position - sys_CameraPosition, attributes.normal));
vec2 uvs;
vec4 projectedCoord;

for (int i = 0; i < 10; i++)
{
    // Calculate screen space position from ray's current world position:
    projectedCoord = sys_ProjectionMatrix * sys_ViewMatrix * vec4(attributes.position + rd, 1.0);
    projectedCoord.xy /= projectedCoord.w;
    projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;

    // this bit is tripping me up
    if (distance(texture(SOURCE_2, projectedCoord.xy).xyz, attributes.position + rd) > 0.1)
    {
        rd += rd; // step further along the ray (the step doubles each iteration)
    }
    else
    {
        // hit found: remember the UVs and stop marching
        uvs = projectedCoord.xy;
        break;
    }
}

out_color += texture(SOURCE_REFL, uvs).rgb;
Is this even possible using world-space coordinates? When I multiply the first pass' outputs with the viewMatrix as well as the modelMatrix, my light calculations go tits up, because they are also in world space...
Unfortunately there are no basic SSR tutorials on the internet that explain just the SSR bit and nothing else, so I thought I'd give it a shot here. I really can't seem to get my head around this...
I have some scatter-graph data that I want to plot accurately. The actual data is laid out on a grid.
If I simply put all the graph data into a VBO and draw it as points, I see lines between my data points.
I believe this is to do with the conversion into screen space, so I need to apply a projection matrix.
I'm sure I want an orthographic projection, but currently I seem to have a bug in my ortho matrix generation:
void OrthoMatrix(float[] matrix, float left, float right, float top, float bottom, float near, float far)
{
    float r_l = right - left;
    float t_b = top - bottom;
    float f_n = far - near;

    float tx = -(right + left) / (right - left);
    float ty = -(top + bottom) / (top - bottom);
    float tz = -(far + near) / (far - near);

    matrix[0] = 2.0f / r_l;
    matrix[1] = 0.0f;
    matrix[2] = 0.0f;
    matrix[3] = tx;

    matrix[4] = 0.0f;
    matrix[5] = 2.0f / t_b;
    matrix[6] = 0.0f;
    matrix[7] = ty;

    matrix[8] = 0.0f;
    matrix[9] = 0.0f;
    matrix[10] = 2.0f / f_n;
    matrix[11] = tz;

    matrix[12] = 0.0f;
    matrix[13] = 0.0f;
    matrix[14] = 0.0f;
    matrix[15] = 1.0f;
}
But I can't work it out. My shader is:
gl_Position = projectionMatrix * vec4(position, 0, 1.0);
and my data is plotted like this (subset):
-15.9 -9.6
-15.8 -9.6
-15.7 -9.6
-15.6 -9.6
-16.4 -9.5
-16.3 -9.5
-16.2 -9.5
-16.1 -9.5
-16 -9.5
-15.9 -9.5
The image on the left is correct (but with lines) and the image on the right is with ortho:
A colleague has suggested first putting the data into a bitmap that conforms to the plot region and then loading that. To some degree that makes sense, but it seems like a step backwards, particularly as the image is still "stretched" and in reality we are just filling in those gaps.
EDIT
I tried a frustum projection using glm.net and that works exactly how I want it to.
The frustum function seems to take similar parameters to ortho... I think it's time I went and read up a bit more about projection matrices!
If anyone can explain the strange image I got with the ortho, that would be fantastic.
EDIT 2
Now that I am adding zooming, I am seeing the same lines. I will have to either draw the points as quads or map the point size to the zoom level.
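One hedged guess at the stretched ortho image, since the answer below only addresses the lines: glUniformMatrix4fv with the transpose argument set to GL_FALSE expects column-major data, but OrthoMatrix above fills the array in row-major order (tx, ty, tz at indices 3, 7, 11). A column-major layout of the same matrix, with the conventional -2/f_n sign for the z row, would look like this (both changes are my assumption about the bug, not something confirmed in the post):

matrix[0] = 2.0f / r_l;  matrix[4] = 0.0f;        matrix[8]  = 0.0f;          matrix[12] = tx;
matrix[1] = 0.0f;        matrix[5] = 2.0f / t_b;  matrix[9]  = 0.0f;          matrix[13] = ty;
matrix[2] = 0.0f;        matrix[6] = 0.0f;        matrix[10] = -2.0f / f_n;   matrix[14] = tz;
matrix[3] = 0.0f;        matrix[7] = 0.0f;        matrix[11] = 0.0f;          matrix[15] = 1.0f;

Alternatively, keep the row-major layout and pass GL_TRUE as the transpose argument to glUniformMatrix4fv.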
The "lines" you see are actually an artifact of your data samples locations becoming coerced into the pixel grid, which of course involves a rounding step. OpenGL assumes pixel centers to be at x+0.5 in viewport coordinates (i.e. after the NDC to viewport mapping).
The only way to get rid of these lines is to increase the resolution at which points are mapped into viewport space. But then your viewport contains only that much of pixels, so what can you do? Well the answer is "supersampling" or "multisampling", i.e. have several samples per pixels and the coverage of these sample points modulates the rasterization weighting. Now if you fear implementing this yourself, fear not. While implementing it "by hand" is certainly possible it's not efficient: Most (all) modern GPU come with some kind of support for multisampling, usually advertised as "antialiasing".
So that's what you should do: Create an OpenGL context with full screen antialiasing support, enable multisampling and see the grid frequency beat artifacts (aliasing) vanish.
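As a concrete example (assuming GLFW 3 as the windowing library, which the answer above doesn't specify), requesting and enabling a multisampled context looks roughly like this:

#include <GLFW/glfw3.h>

glfwInit();
glfwWindowHint(GLFW_SAMPLES, 4); /* request a 4x MSAA default framebuffer */
GLFWwindow *win = glfwCreateWindow(800, 600, "scatter plot", NULL, NULL);
glfwMakeContextCurrent(win);
glEnable(GL_MULTISAMPLE); /* usually enabled by default on MSAA contexts */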
I have a basic camera class, which has the following notable functions:
// Get near and far plane dimensions in view space coordinates.
float GetNearWindowWidth()const;
float GetNearWindowHeight()const;
float GetFarWindowWidth()const;
float GetFarWindowHeight()const;
// Set frustum.
void SetLens(float fovY, float aspect, float zn, float zf);
Where the params zn and zf in the SetLens function correspond to the near and far clip plane distance, respectively.
SetLens basically creates a perspective projection matrix, along with computing both the near and far clip planes' heights:
void Camera::SetLens(float fovY, float aspect, float zn, float zf)
{
    // cache properties
    mFovY = fovY;
    mAspect = aspect;
    mNearZ = zn;
    mFarZ = zf;

    float tanHalfFovy = tanf( 0.5f * glm::radians( fovY ) );
    mNearWindowHeight = 2.0f * mNearZ * tanHalfFovy;
    mFarWindowHeight  = 2.0f * mFarZ * tanHalfFovy;

    // Note: depending on the GLM version, glm::perspective expects fovY in
    // radians (newer versions) rather than degrees, so passing degrees here
    // may be inconsistent with the tanf() call above.
    mProj = glm::perspective( fovY, aspect, zn, zf );
}
So, GetFarWindowHeight() and GetNearWindowHeight() naturally return their respective height class member values. Their width counterparts, however, return the respective height value multiplied by the view aspect ratio. So, for GetNearWindowWidth():
float Camera::GetNearWindowWidth() const
{
    return mAspect * mNearWindowHeight;
}
Where GetFarWindowWidth() performs the same computation, of course replacing mNearWindowHeight with mFarWindowHeight.
Now that's all out of the way, something tells me that I'm computing the height and width of the near and far clip planes improperly. In particular, I think what causes this confusion is the fact that I'm specifying the field of view on the y axis in degrees, and then converting it to radians in the tangent function. Where I think this is causing problems is in my frustum culling function, which uses the width/height of the near and far planes to obtain points for the top, right, left and bottom planes as well.
So, am I correct in that I'm doing this completely wrong? If so, what should I do to fix it?
Disclaimer
This code originally stems from a D3D11 book, which I decided to quit reading and move back to OpenGL. In order to make the process less painful, I figured converting some of the original code to be more OpenGL compliant would be nice. So far, it's worked fairly well, with this one minor issue...
Edit
I should have originally mentioned a few things:
This is not my first time with OpenGL; I'm well aware of the transformation processes, as well as the coordinate system differences between GL and D3D.
This isn't my entire camera class, although the only other thing which I think may be questionable in this context is using my camera's mOrientation matrix to compute the right, up, and look direction vectors, by transforming the +x, +y, and -z basis vectors, respectively. So, as an example, to compute my look vector I would do: mOrientation * vec4(0.0f, 0.0f, -1.0f, 1.0f), and then convert that to a vec3. The context that I'm referring to here involves how these basis vectors would be used in conjunction with culling the frustum.
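For what it's worth, the width/height formulas themselves look standard: height = 2 * z * tan(fovY / 2) and width = aspect * height. A minimal standalone check with hypothetical values:

#include <cmath>
#include <cstdio>

int main()
{
    float fovY = 60.0f, aspect = 16.0f / 9.0f, zn = 1.0f, zf = 100.0f;
    float tanHalf = std::tan(0.5f * fovY * 3.14159265f / 180.0f); // tan(30 deg) ~ 0.5774
    // height = 2 * z * tan(fovY / 2); width = aspect * height
    std::printf("near window: %.4f x %.4f\n",
                aspect * 2.0f * zn * tanHalf, 2.0f * zn * tanHalf);
    std::printf("far  window: %.4f x %.4f\n",
                aspect * 2.0f * zf * tanHalf, 2.0f * zf * tanHalf);
    return 0;
}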
I'm having some trouble with the eyeZ value of gluLookAt.
The way I'd imagine it to work is like moving a camera further away, thus shrinking the object in your field of view.
I have a simple setup with a simple shape in 3D space, drawn via glDrawElements, with a 100x100x100 ortho where (0, 0, 0) is the center of the universe. The object is at (0, 0, 0).
I'm trying to make it so that when you scroll the mouse wheel you get further away from, or closer to, the object. Here's how gluLookAt is called:
float eyeX = 0;
float eyeY = 0;
float eyeZ = differenceInMouseWheel();
float centerX = 0;
float centerY = 0;
float centerZ = 0;
float upX = 0;
float upY = 1;
float upZ = 0;
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);
The only thing changing here is eyeZ.
The effect is strange: I scroll for about 10 seconds, and then suddenly half of the object disappears. From there, more and more of it disappears. This is probably because the camera is moving out past the z-distance limit of 50, but I can't understand why the object doesn't scale like it would in 3D space.
Maybe I'm misunderstanding how the center values work?
I've also tried applying differenceInMouseWheel() to centerZ, but that changed nothing; I'm going to assume the center values just give GLU a direction and nothing more.
Maybe the up vector should change? I don't know at this point.
You are using an orthographic projection. This means that no matter how great the distance, your objects will always appear to have the same size. Your object will disappear once it reaches the far clipping plane, however, which is what you are seeing when you scroll for a long time.
You have two options: Either you use a perspective projection or you implement a zoom by modifying the orthographic projection matrix like so:
Let zoom be in (0, 1], and let viewport be a rectangle that is set to your current viewport. Let near be your near clipping plane distance and far be your far clipping plane distance.
glOrtho(-zoom * viewport.width / 2, zoom * viewport.width / 2, -zoom * viewport.height / 2, zoom * viewport.height / 2, near, far);
Are you using a perspective projection matrix, or an orthographic one? If you don't use a perspective matrix, the objects won't appear to change in size as you move the camera around.
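For the legacy pipeline the question is using, switching to a perspective projection is only a few lines; the values below are hypothetical, and near must be greater than zero for gluPerspective:

#include <GL/glu.h>

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 4.0 / 3.0, 1.0, 200.0); /* fovY, aspect, near, far */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0, 0, eyeZ, 0, 0, 0, 0, 1, 0); /* now eyeZ visibly scales the object */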