Why does QMatrix4x4::lookAt() result in an upside-down camera? - opengl

I have a simple OpenGL program which sets up the camera as follows:
void SimRenderer::render() {
    glDepthMask(true);
    glClearColor(0.5f, 0.5f, 0.7f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glFrontFace(GL_CW);
    glCullFace(GL_FRONT);
    glEnable(GL_CULL_FACE);
    glEnable(GL_DEPTH_TEST);

    QMatrix4x4 mMatrix;
    QMatrix4x4 vMatrix;

    QMatrix4x4 cameraTransformation;
    cameraTransformation.rotate(mAlpha, 0, 1, 0); // mAlpha = 25
    cameraTransformation.rotate(mBeta, 1, 0, 0);  // mBeta = 25

    QVector3D cameraPosition = cameraTransformation * QVector3D(0, 0, mDistance);
    QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, 1, 0);

    vMatrix.lookAt(cameraPosition, QVector3D(0, 0, 0), cameraUpDirection);

    mProgram.bind();
    mProgram.setUniformValue(mMatrixUniformLoc, mProjMatrix * vMatrix * mMatrix);

    // render a grid....
}
But the result is an upside-down camera!
When I change the view matrix to be set up as:
QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, -1, 0);
It works! But why should I need to set my up direction to negative Y when my real up direction is positive Y?
Complete class here : https://code.google.com/p/rapid-concepts/source/browse/trunk/simviewer/simrenderer.cpp
Other info: I am rendering to a QQuickFramebufferObject, which binds an FBO to a widget's surface before calling the rendering function. I don't think that would be an issue, but anyway. And this is not a texturing issue at all; there aren't any textures to be flipped, etc. It seems the camera is interpreting the up direction the opposite way!
http://doc.qt.digia.com/qt-maemo/qmatrix4x4.html#lookAt
Update:
So, since using lookAt() and a camera transformation together may not work, I am trying:
QMatrix4x4 mMatrix;
QMatrix4x4 vMatrix;
QMatrix4x4 cameraTransformation;
cameraTransformation.rotate(mAlpha, 0, 1, 0); // 25
cameraTransformation.rotate(mBeta, 1, 0, 0); // 25
cameraTransformation.translate(0, 0, mDistance);
vMatrix = cameraTransformation.inverted();
That produces exactly the same result :)
I think the camera up axis needs to be accounted for in some way.

It is actually not the camera that is upside down; the texture is rendered to the QML surface upside down (an OpenGL FBO has its origin at the bottom-left, while the Qt Quick scene graph expects textures with a top-left origin). That is really confusing, because you do get the correct direction (Y up) if you are using the widget-based stacks (QOpenGLWidget) or simply QOpenGLWindow.
Basically the same as this question; some explanation can be found on the forum and in the bug tracker.
I think the best solution is the one from the bug tracker, which requires no additional transformation on either the QML item or in the matrices: override updatePaintNode() and use setTextureCoordinatesTransform() to mirror the texture vertically.
QSGNode *MyQQuickFramebufferObject::updatePaintNode(QSGNode *node, QQuickItem::UpdatePaintNodeData *nodeData)
{
    // On the first call the base class creates the texture node; flip its
    // texture coordinates vertically so the FBO contents appear right side up.
    if (!node) {
        node = QQuickFramebufferObject::updatePaintNode(node, nodeData);
        QSGSimpleTextureNode *n = static_cast<QSGSimpleTextureNode *>(node);
        if (n)
            n->setTextureCoordinatesTransform(QSGSimpleTextureNode::MirrorVertically);
        return node;
    }
    return QQuickFramebufferObject::updatePaintNode(node, nodeData);
}

Typically this effect is caused by one of several things.
Mixing up radians and degrees
Forgetting to set the modelview matrix to the inverse of the camera transform
Screwing up the inputs to lookAt
I suspect the issue here is the last one.
QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, 1, 0);
Why are you multiplying the up vector by this transformation? I can understand multiplying the distance vector, so that the camera position is transformed, but rotating the up axis that is then sent to a lookAt function will, I suspect, result in weirdness.
Generally, doing transforms with a camera matrix AND using lookAt is a little odd. If you already have a camera matrix with the proper rotation, you can just translate that matrix by the required distance, expressed as a Z vector of the appropriate length, probably QVector3D(0, 0, mDistance), and then set the view matrix to the inverse of the camera matrix:
vMatrix = cameraTransformation.inverted();

Related

OpenGL cubemap face order & sampling issue

I have a renderer based on SDL2 and OpenGL (3.3 core profile), which gives me the expected results with regard to transformations and 2D texturing.
However, when I try to display a skybox using a cubemap created from these textures (though I've tried others too), there are two steps in the process that no tutorial or example I have encountered seems to need, and that I cannot explain:
1. The top/bottom faces have to be swapped upon uploading, i.e. the top one is uploaded as GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, and the bottom one as GL_TEXTURE_CUBE_MAP_POSITIVE_Y;
2. When sampling the cubemap, I have to invert the vertex positions along Y, but also along Z.
Without this, I'm getting the following result:
(N.B. the left-bottom-far vertex was scaled by .8 to clarify that my coordinate system is the right way around)
The image files are named correctly.
The cube is the only draw I'm performing.
If I remove [the indices for] any of the sides, I get the expected results (i.e. no swapping / mirroring there).
I seem to be getting the same results with my integrated and dedicated GPUs.
My OpenGL constants, from a glLoadGen (originally) generated header:
#define GL_TEXTURE_CUBE_MAP_NEGATIVE_X 0x8516
#define GL_TEXTURE_CUBE_MAP_NEGATIVE_Y 0x8518
#define GL_TEXTURE_CUBE_MAP_NEGATIVE_Z 0x851A
#define GL_TEXTURE_CUBE_MAP_POSITIVE_X 0x8515
#define GL_TEXTURE_CUBE_MAP_POSITIVE_Y 0x8517
#define GL_TEXTURE_CUBE_MAP_POSITIVE_Z 0x8519
The texture uploading code (much the same as LearnOpenGL's tutorial):
GLuint name;
glGenTextures(1, &name);
glBindTexture(GL_TEXTURE_CUBE_MAP, name);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

GLenum target = GL_TEXTURE_CUBE_MAP_POSITIVE_X;
for (uint8_t i = 0; i < 6; ++i)
{
    glTexImage2D(target + i, 0, GL_RGB8, width, height, 0, GL_RGB,
                 GL_UNSIGNED_BYTE, pixelData[i]);
}
Vertex shader:
#version 330
precision mediump float;

uniform mat4 uModelViewProjection;

in vec3 aPosition;

out vec3 vTexCoord;

void main()
{
    vec4 position = uModelViewProjection * vec4(aPosition, 1.f);
    gl_Position = position.xyww; // z = w, so the skybox lands on the far plane
    vTexCoord = aPosition;
}
Fragment shader:
#version 330
precision mediump float;

uniform samplerCube uTexture0;

in vec3 vTexCoord;

out vec4 FragColor;

void main()
{
    FragColor = texture(uTexture0, vTexCoord);
    // Using textureCube() yields a compile error asking for
    // #extension GL_NV_shadow_samplers_cube : enable, but even with that,
    // the issue persists.
}
Mesh setup (semi-pseudo-code):
// 4----5
// /| /|
// 6----7 |
// | | | |
// | 0--|-1
// |/ |/
// 2----3
VertexType vertices[8] = {
    Vector3(-1.f, -1.f, -1.f) * .8f, // debug coordinate system
    Vector3( 1.f, -1.f, -1.f),
    Vector3(-1.f, -1.f,  1.f),
    Vector3( 1.f, -1.f,  1.f),
    Vector3(-1.f,  1.f, -1.f),
    Vector3( 1.f,  1.f, -1.f),
    Vector3(-1.f,  1.f,  1.f),
    Vector3( 1.f,  1.f,  1.f),
};

uint16_t indices[] = {
    4, 0, 5,    0, 1, 5,  // -Z face
    6, 2, 4,    2, 0, 4,  // -X face
    7, 3, 6,    3, 2, 6,  // +Z face
    5, 1, 7,    1, 3, 7,  // +X face
    0, 2, 1,    2, 3, 1,  // -Y face
    5, 7, 4,    7, 6, 4,  // +Y face
};
// create buffers & upload data
Rendering (pseudo-code):
// - clear color & depth buffers
// - set the model transform to a translation of -10 units along z;
//   view transform is identity; projection is perspective with .25
//   radians vertical FOV, zNear of .1, zFar of 100.; viewport is full screen
// - set shader program
// - bind texture (same name, same target as upon uploading)
// - enable backface culling only (no depth test / write)
// - draw the cube
// - glFlush() and swap buffers
What on earth can be causing the two issues described above?
The issue is caused by the way the (s, t, r) texture coordinates are mapped to the cubemap:
OpenGL 4.6 API Core Profile Specification, 8.13 Cube Map Texture Selection, page 253:
When a cube map texture is sampled, the (s, t, r) texture coordinates are treated as a direction vector (rx, ry, rz) emanating from the center of a cube. The q coordinate is ignored. At texture application time, the interpolated per-fragment direction vector selects one of the cube map face’s two-dimensional images based on the largest magnitude coordinate direction (the major axis direction). If two or more coordinates have the identical magnitude, the implementation may define the rule to disambiguate this situation. The rule must be deterministic and depend only on (rx, ry, rz). The target column in table 8.19 explains how the major axis direction maps to the two-dimensional image of a particular cube map target.
Using the sc, tc, and ma determined by the major axis direction as specified in table 8.19, an updated (s, t) is calculated as follows:
s = 1/2 * (sc / |ma| + 1)
t = 1/2 * (tc / |ma| + 1)
Major Axis Direction| Target |sc |tc |ma |
--------------------+---------------------------+---+---+---+
+rx |TEXTURE_CUBE_MAP_POSITIVE_X|−rz|−ry| rx|
−rx |TEXTURE_CUBE_MAP_NEGATIVE_X| rz|−ry| rx|
+ry |TEXTURE_CUBE_MAP_POSITIVE_Y| rx| rz| ry|
−ry |TEXTURE_CUBE_MAP_NEGATIVE_Y| rx|−rz| ry|
+rz |TEXTURE_CUBE_MAP_POSITIVE_Z| rx|−ry| rz|
−rz |TEXTURE_CUBE_MAP_NEGATIVE_Z|−rx|−ry| rz|
--------------------+---------------------------+---+---+---+
Table 8.19: Selection of cube map images based on major axis direction of texture
coordinates
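To make the table concrete, here is a small C++ sketch (the helper name and face numbering are mine, not from any GL header) that applies the major-axis selection and the s/t formulas above to a direction vector. Ties between equal-magnitude components are broken in X, Y, Z order, which the spec leaves implementation-defined:
#include <cmath>
#include <cstdio>

// Face indices follow the GL target order:
// 0:+X, 1:-X, 2:+Y, 3:-Y, 4:+Z, 5:-Z.
static void cubeMapFaceAndST(float rx, float ry, float rz,
                             int *face, float *s, float *t)
{
    float ax = std::fabs(rx), ay = std::fabs(ry), az = std::fabs(rz);
    float sc, tc, ma;
    if (ax >= ay && ax >= az) {          // major axis is X
        ma = ax;
        if (rx > 0) { *face = 0; sc = -rz; tc = -ry; }
        else        { *face = 1; sc =  rz; tc = -ry; }
    } else if (ay >= az) {               // major axis is Y
        ma = ay;
        if (ry > 0) { *face = 2; sc =  rx; tc =  rz; }
        else        { *face = 3; sc =  rx; tc = -rz; }
    } else {                             // major axis is Z
        ma = az;
        if (rz > 0) { *face = 4; sc =  rx; tc = -ry; }
        else        { *face = 5; sc = -rx; tc = -ry; }
    }
    *s = 0.5f * (sc / ma + 1.0f);        // s = 1/2 (sc/|ma| + 1)
    *t = 0.5f * (tc / ma + 1.0f);        // t = 1/2 (tc/|ma| + 1)
}

int main()
{
    int face; float s, t;
    cubeMapFaceAndST(0.2f, 1.0f, 0.3f, &face, &s, &t);
    std::printf("face=%d s=%.3f t=%.3f\n", face, s, t); // face=2 (+Y), s=0.600, t=0.650
}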
The rotation can be achieved by either rotating the 6 cubemap images before loading them to the cubemap sampler or by rotating the texture coordinates.
If the cubemap is used as an environment map in a scene and the texture coordinates come from a direction vector, then it makes sense to rotate the images. If the cubemap is wrapped onto a mesh, then the texture coordinates can simply be specified in the right manner.
The previous answer's reasoning from the quoted spec text is wrong.
What is going on is that the quoted text, if you look carefully at the math, requires the cubemap's images to have a top-down orientation and to be arranged in a left-handed coordinate system with +Y up. That means sky at +Y, and if you're facing +Z, -X should be on your left and +X on your right. This was apparently inherited from RenderMan, where cube maps first appeared.
The coordinates of the cube you are rendering as the skybox, which will be used to sample the cubemap, are in OpenGL's coordinate system, which is right-handed. These must be transformed to the cubemap's left-handed system before sampling, which is done by simply scaling the Z coordinate by -1. Failure to do that means the scene will be a mirror image of what it should be, a very common failing in the samples I've looked at.
The OP's upside-down images are because they had the standard OpenGL bottom-up row orientation.
If you're using Vulkan: that has a left-handed system, but Y is down. So to correctly render the cubemap on Vulkan you still need to transform the skybox cube's coordinates, in this case by rotating them 180° around the X axis. Fail to do that and you'll have upside-down images.
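As a minimal sketch of the two transforms described above (helper names are mine, not from any library), here is the handedness fix applied on the CPU side with GLM; in GLSL the same one-liners would be applied to the vector passed to texture():
#include <glm/glm.hpp>

// OpenGL (right-handed, +Y up): flip Z to enter the cubemap's
// left-handed, +Y-up space.
glm::vec3 glDirToCubeMap(const glm::vec3 &d)
{
    return glm::vec3(d.x, d.y, -d.z);
}

// Vulkan (left-handed, +Y down): rotate 180 degrees around X,
// i.e. negate Y and Z.
glm::vec3 vkDirToCubeMap(const glm::vec3 &d)
{
    return glm::vec3(d.x, -d.y, -d.z);
}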

Scene rendering wonky when camera transformations occur

I've recently switched over to GLM for managing my matrices and vectors; however, when I change variables such as camera angles or position, the whole rendered scene goes haywire.
I really don't know how to describe it other than stretching and moving all over the place.
Problem:
"Camera" transformations such as panning the camera result in strange atypical/unexpected changes. Typically, when the camera pan variables like X and Y deviate from "0"
Note:
I used to perform these very same transformations on Qt's QMatrix4x4 and QVector3D types, rather than glm::mat4x4 and glm::vec4, and it worked fine.
Here is the way I'm implementing the camera in my render function (alpha and beta are rotation vars, 0 by default; camX and camY are panning vars, also 0 by default):
glm::mat4x4 mMatrix;
glm::mat4x4 vMatrix;
glm::mat4x4 cameraTransformation;
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(alpha)/*alpha*(float)M_PI/180*/, glm::vec3(0, 1 ,0));
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(beta)/*beta*(float)M_PI/180*/, glm::vec3(1, 0, 0));
glm::vec4 cameraPosition = (cameraTransformation * glm::vec4(camX, camY, distance, 0));
glm::vec4 cameraUpDirection = cameraTransformation * glm::vec4(0, 1, 0, 0);
vMatrix = glm::lookAt(glm::vec3(cameraPosition[0],cameraPosition[1],cameraPosition[2]), glm::vec3(camX, camY, 0.0), glm::vec3(cameraUpDirection[0],cameraUpDirection[1],cameraUpDirection[2]));
glm::mat4x4 glmat = pMatrix * vMatrix * mMatrix;
QMatrix4x4 qmat = QMatrix4x4(glmat[0][0], glmat[0][1], glmat[0][2], glmat[0][3],
                             glmat[1][0], glmat[1][1], glmat[1][2], glmat[1][3],
                             glmat[2][0], glmat[2][1], glmat[2][2], glmat[2][3],
                             glmat[3][0], glmat[3][1], glmat[3][2], glmat[3][3]);
shaderProgram.bind();
shaderProgram.setUniformValue("mvpMatrix", qmat);
I set up my projection matrix like so (fov = 30 degrees):
pMatrix = glm::perspective( glm::radians(fov), (float)width/(float)height, (float)0.001, (float)10000 );
My matrices look like this at the time they are used:
Here's an example of how it looks
Before any changes, all values are at 0:
When camX changes to 14 (note, I didn't rotate my camera around!):
glm::mat4x4 cameraTransformation;
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(alpha)/*alpha*(float)M_PI/180*/, glm::vec3(0, 1 ,0));
cameraTransformation = glm::rotate(cameraTransformation, glm::radians(beta)/*beta*(float)M_PI/180*/, glm::vec3(1, 0, 0));
This can be simplified by using matrix multiplication and using a different glm call:
glm::mat4x4 cameraTransformation =
    glm::rotate(glm::radians(alpha), glm::vec3(0, 1, 0)) *
    glm::rotate(glm::radians(beta),  glm::vec3(1, 0, 0));
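One caveat, as a side note: the two-argument glm::rotate(angle, axis) overload used here comes from the GTX transform extension, so it needs its own header, and newer GLM releases also want the experimental-extensions define before including it:
#define GLM_ENABLE_EXPERIMENTAL  // newer GLM releases require this for GTX headers
#include <glm/gtx/transform.hpp> // provides glm::rotate(angle, axis)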
Next:
glm::vec4 cameraPosition = (cameraTransformation * glm::vec4(camX, camY, distance, 0));
glm::vec4 cameraUpDirection = cameraTransformation * glm::vec4(0, 1, 0, 0);
Having a zero in the w component of a vector indicates that the vector is a direction, not a position, yet you are obtaining a position vector as the output. This happens to work because cameraTransformation contains only rotations, not translations, but it's better to be explicit:
glm::vec3 cameraPosition = glm::vec3(cameraTransformation * glm::vec4(camX, camY, distance, 1));
Note: I use a vec3, not a vec4, simply because I prefer to.
For the next part you actually do want a direction vector, not a position vector, so you should have a zero in the w component. Still cast it to a vec3, because that's clearer in my opinion.
glm::vec3 cameraUpDirection = glm::vec3(cameraTransformation * glm::vec4(0, 1, 0, 0));
Next:
vMatrix =
    glm::lookAt(glm::vec3(cameraPosition[0], cameraPosition[1], cameraPosition[2]),
                glm::vec3(camX, camY, 0.0),
                glm::vec3(cameraUpDirection[0], cameraUpDirection[1], cameraUpDirection[2]));
GLM lets you pass a vec4 to a vec3 constructor, which drops the w component, so you can shorten your code like this:
vMatrix =
    glm::lookAt(glm::vec3(cameraPosition),
                glm::vec3(camX, camY, 0.0),
                glm::vec3(cameraUpDirection));
But we don't even need to do that, because I changed the variables into vec3s, not vec4s:
vMatrix= glm::lookAt(cameraPosition, glm::vec3(camX, camY, 0.0), cameraUpDirection);
And finally, you can access the components of a GLM vector using .x, .y, .z, .w instead of the [] operator, which I imagine is safer and easier to read.
I made a very stupid error!
In an attempt to convert my glm::mat4x4 to a QMatrix4x4, I accidentally swapped the rows and columns.
I needed to change:
QMatrix4x4 qmat = QMatrix4x4(glmat[0][0], glmat[0][1], glmat[0][2], glmat[0][3],
                             glmat[1][0], glmat[1][1], glmat[1][2], glmat[1][3],
                             glmat[2][0], glmat[2][1], glmat[2][2], glmat[2][3],
                             glmat[3][0], glmat[3][1], glmat[3][2], glmat[3][3]);
to:
QMatrix4x4 qmat = QMatrix4x4(glmat[0][0], glmat[1][0], glmat[2][0], glmat[3][0],
                             glmat[0][1], glmat[1][1], glmat[2][1], glmat[3][1],
                             glmat[0][2], glmat[1][2], glmat[2][2], glmat[3][2],
                             glmat[0][3], glmat[1][3], glmat[2][3], glmat[3][3]);
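For what it's worth, an equivalent and less typo-prone way to do the conversion is to hand GLM's column-major data to QMatrix4x4's float* constructor (which expects row-major values) and transpose once; this sketch assumes the <glm/gtc/type_ptr.hpp> header is available:
#include <glm/gtc/type_ptr.hpp>

// QMatrix4x4(const float *) reads row-major values; glm::value_ptr() hands
// over GLM's column-major storage, so a single transpose puts it right.
QMatrix4x4 qmat = QMatrix4x4(glm::value_ptr(glmat)).transposed();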

UnProjected mouse coordinates are between 0-1

I'm trying to create a ray from my mouse location out into 3D space, and apparently in order to do that I need to "UnProject()" it.
Doing so gives me a value between 0 and 1 for each axis.
That can't be right for drawing a "ray" or a line from the viewport, can it? All this is, essentially, is my mouse position expressed as a fraction of the viewport size.
If this is actually right, then I don't understand the following:
I draw triangles whose vertices are not constrained to 0-1; they are coordinates like (0,100,0), (100,100,0), (100,0,0), and these draw perfectly fine.
But drawing the vertices unprojected from my mouse coordinates as lines/points also draws perfectly fine.
How the heck would I then compare my mouse coordinates to the coordinates of my objects?
If this is actually wrong, then what can cause such an error?
I tried unprojecting my own object's vertices, and those aren't constrained to 0-1.
I don't know whether the way I handle my "projections" when rendering is even compatible with gluUnProject. I've just been doing it the way these tutorials show it (near the bottom): http://qt-project.org/wiki/Developer-Guides#28810c65dd0f273a567b83a48839d275
This is the way I try to get my mouse coordinates:
GLdouble modelViewMatrix[16];
GLdouble projectionMatrix[16];
GLint viewport[4];
GLfloat winX, winY, winZ;

glGetDoublev(GL_MODELVIEW_MATRIX, modelViewMatrix);
glGetDoublev(GL_PROJECTION_MATRIX, projectionMatrix);
glGetIntegerv(GL_VIEWPORT, viewport);

winX = (float)x;
winY = (float)viewport[3] - (float)y;
glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

GLdouble nearPlaneLocation[3];
gluUnProject(winX, winY, 0, modelViewMatrix, projectionMatrix,
             viewport, &nearPlaneLocation[0], &nearPlaneLocation[1],
             &nearPlaneLocation[2]);

GLdouble farPlaneLocation[3];
gluUnProject(winX, winY, 1, modelViewMatrix, projectionMatrix,
             viewport, &farPlaneLocation[0], &farPlaneLocation[1],
             &farPlaneLocation[2]);

QVector3D nearP = QVector3D(nearPlaneLocation[0], nearPlaneLocation[1],
                            nearPlaneLocation[2]);
QVector3D farP = QVector3D(farPlaneLocation[0], farPlaneLocation[1],
                           farPlaneLocation[2]);
Perhaps my actual projections are off?
void oglWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    QMatrix4x4 mMatrix;
    QMatrix4x4 vMatrix;

    QMatrix4x4 cameraTransformation;
    cameraTransformation.rotate(alpha, 0, 1, 0);
    cameraTransformation.rotate(beta, 1, 0, 0);

    QVector3D cameraPosition = cameraTransformation * QVector3D(camX, camY, distance);
    QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, 1, 0);

    vMatrix.lookAt(cameraPosition, QVector3D(camX, camY, 0), cameraUpDirection);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(cameraPosition.x(), cameraPosition.y(), cameraPosition.z(),
              camX, camY, 0,
              cameraUpDirection.x(), cameraUpDirection.y(), cameraUpDirection.z());

    shaderProgram.bind();
    shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);
    shaderProgram.setUniformValue("texture", 0);

    for (int x = 0; x < tileCount; x++)
    {
        shaderProgram.setAttributeArray("vertex", tiles[x]->vertices.constData());
        shaderProgram.enableAttributeArray("vertex");

        shaderProgram.setAttributeArray("textureCoordinate", textureCoordinates.constData());
        shaderProgram.enableAttributeArray("textureCoordinate");

        // Triangle drawing
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tiles[x]->image.width(), tiles[x]->image.height(),
                     0, GL_RGBA, GL_UNSIGNED_BYTE, tiles[x]->image.bits());
        glDrawArrays(GL_TRIANGLES, 0, tiles[x]->vertices.size());
    }
    shaderProgram.release();
}
pMatrix is a 4x4 matrix, set up during resize events like this:
pMatrix.setToIdentity();
pMatrix.perspective(fov, (float) width / (float) height, 0.001, 10000);
glViewport(0, 0, width, height);
and my vertex shader is set up like this:
uniform mat4 mvpMatrix;

in vec4 vertex;
in vec2 textureCoordinate;

out vec2 varyingTextureCoordinate;

void main(void)
{
    varyingTextureCoordinate = textureCoordinate;
    gl_Position = mvpMatrix * vertex;
}
glReadPixels takes integers for x and y, and you don't seem to be using winZ in your gluUnProject calls for some reason.
Try it like this:
gluUnProject(winX, winY, winZ, glView, glProjection, viewport, &posX, &posY, &posZ);
Also, if you want the ray to stop when it meets something in the depth buffer, then don't clear the depth buffer after rendering. If you do a glClear(GL_DEPTH_BUFFER_BIT), the ray should go as far as the far clip plane you set in your projection matrix.
I also have no idea why you need to call it more than once. The last three arguments receive the target vector, and you can just use your camera position as the source of the ray (depending on what you are doing).
Part of my problem here was describing it poorly. I had accidentally left in residual code from frantic testing, resulting in bits of "read pixel" functions and related noise that wasn't useful for solving the problem.
The rest of my problem was due to inconsistent data types for the matrices, and to trying to pull matrices from OpenGL when it never had them stored in the first place.
The problem was solved by:
Using GLM to hold all my matrices
Performing the calculation myself: multiplying the inverse of (projection matrix * view matrix * model matrix), i.e. inverse model matrix * inverse view matrix * inverse projection matrix, by a vector holding the NDC-converted screen-space coordinates (x or y divided by width or height, times 2, minus 1), with a z of -1 or 1 for the near or far plane, and a w of 1.
Dividing the result by its fourth (w) component.
I still do not know why unprojecting doesn't work for me, as I got the wrong results with GLU as well as with GLM's unProject function, but doing it manually worked.
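A minimal GLM sketch of that manual calculation, with my own names (the matrices are whatever was used for rendering; mouse coordinates are assumed to have a top-left origin, hence the Y flip). Note that including the model matrix, as described above, lands you in that model's object space; drop it (or pass identity) for a world-space ray:
#include <glm/glm.hpp>

// ndcZ is -1.0f for the near plane, +1.0f for the far plane.
glm::vec3 manualUnproject(float mouseX, float mouseY, float ndcZ,
                          float viewW, float viewH,
                          const glm::mat4 &model,
                          const glm::mat4 &view,
                          const glm::mat4 &projection)
{
    // Window -> NDC (flip Y so the origin matches OpenGL's bottom-left).
    float x = (mouseX / viewW) * 2.0f - 1.0f;
    float y = ((viewH - mouseY) / viewH) * 2.0f - 1.0f;

    glm::vec4 p = glm::inverse(projection * view * model) *
                  glm::vec4(x, y, ndcZ, 1.0f);
    return glm::vec3(p) / p.w; // divide by the fourth (w) component
}

// Usage: a pick ray runs from the near-plane point toward the far-plane point.
// glm::vec3 nearP = manualUnproject(mx, my, -1.0f, w, h, M, V, P);
// glm::vec3 farP  = manualUnproject(mx, my,  1.0f, w, h, M, V, P);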
Since my problem extended over quite a great length of time, and took up several questions, I owe credit to a few individuals who helped me along the way:
srobins of facepunch, in this thread
derhass from here, in this question, and this discussion

Opaque OpenGL textures have transparent border

My problem concerns rendering text with OpenGL -- the text is rendered into a texture, and then drawn onto a quad. The trouble is that the pixels on the edge of the texture are drawn partially transparent. The interior of the texture is fine.
I'm calculating the texture coordinates to hit the centers of my texels, using NEAREST (non-)interpolation, setting the texture wrap mode to CLAMP_TO_EDGE, and setting the projection matrix to place my vertices at the centers of the viewport pixels. I'm still seeing the issue.
I'm working with VTK and its texture utilities. These are the GL calls used to load the texture, as determined by stepping through with a debugger:
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Create and bind pixel buffer object here (not shown, lots of indirection in VTK)...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, xsize, ysize, 0, format, GL_UNSIGNED_BYTE, 0);
// Unbind PBO -- also omitted

glBindTexture(GL_TEXTURE_2D, id);
glAlphaFunc(GL_GREATER, static_cast<GLclampf>(0));
glEnable(GL_ALPHA_TEST);
// I've also tried doing this here for premultiplied alpha, but it made no difference:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
The rendering code:
float p[2] = ...;     // point to render text at
int imgDims[2] = ...; // actual dimensions of image
float width = ...;    // width of texture in image
float height = ...;   // height of texture in image

// Prepare the quad
float xmin = p[0];
float xmax = xmin + width - 1;
float ymin = p[1];
float ymax = ymin + height - 1;
float quad[] = { xmin, ymin,
                 xmax, ymin,
                 xmax, ymax,
                 xmin, ymax };

// Calculate the texture coordinates (half-texel insets so samples
// land on texel centers).
float smin = 1.0f / (2.0f * imgDims[0]);
float smax = (2.0f * width - 1.0f) / (2.0f * imgDims[0]);
float tmin = 1.0f / (2.0f * imgDims[1]);
float tmax = (2.0f * height - 1.0f) / (2.0f * imgDims[1]);
float texCoord[] = { smin, tmin,
                     smax, tmin,
                     smax, tmax,
                     smin, tmax };

// Set projection matrix to map object coords to pixel centers
// (modelview is identity)
GLint vp[4];
glGetIntegerv(GL_VIEWPORT, vp);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
float offset = 0.5;
glOrtho(offset, vp[2] + offset,
        offset, vp[3] + offset,
        -1, 1);

// Disable polygon smoothing. Why not, I've tried everything else?
glDisable(GL_POLYGON_SMOOTH);

// Draw the quad
glColor4ub(255, 255, 255, 255);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quad);
glTexCoordPointer(2, GL_FLOAT, 0, texCoord);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);

// Restore projection matrix
glMatrixMode(GL_PROJECTION);
glPopMatrix();
For debugging purposes, I've overwritten the outermost texels with red, and the next inner layer of texels with green (otherwise it's hard to see what's going on in the mostly-white text image).
I've inspected the texture in-memory using gDEBugger, and it looks as expected -- bright red and green borders around the texture area (the extra empty space is padding to make its size a power of two). For reference:
Here's what the final rendered image looks like (magnified 20x -- the black pixels are remnants of the text that was rendered under the debugging borders). Pale red border, but still a bold green inner border:
So it is just the outer edge of pixels that is affected. I'm not sure whether it's color blending or alpha blending that's screwing things up; I'm at a loss. I've noticed that the corner pixels are twice as pale as the edge pixels; perhaps that's significant... Maybe someone here can spot the error?
Could be a "pixel perfect" problem. OpenGL defines the center of a line to be the spot that gets rasterized into a pixel, and that center is exactly halfway between one integer and the next. To get pixel (x, y) to display "pixel perfect", fix up your coordinates to be:
x = (int)x + 0.5f; // x is a float: makes 0.0 into 0.5, 16.343 into 16.5, etc.
y = (int)y + 0.5f;
This is probably what is messing up the blending. I had the same issue with texture modulation: a single, somewhat dimmer line or series of pixels at the bottom and right edges.
Okay, I've worked on it for the last few days. A few ideas didn't work at all. The only one that worked is to accept that this "perfect pixel" behavior exists and to trick it. Too bad I can't upvote your answer, Cosmic Bacon. But your answer, even if it looks good, will slightly break things in certain programs, like games. My answer is an improved version of yours.
Here's the solution:
Step 1: Make a method that draws the texture you need, and use only it for drawing. And add 0.5f to every coordinate. Look:
public void render(Texture tex, float x1, float y1, float x2, float y2)
{
    tex.bind();
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(0, 0);
    GL11.glVertex2f(x1 + 0.5f, y1 + 0.5f);
    GL11.glTexCoord2f(1, 0);
    GL11.glVertex2f(x2 + 0.5f, y1 + 0.5f);
    GL11.glTexCoord2f(1, 1);
    GL11.glVertex2f(x2 + 0.5f, y2 + 0.5f);
    GL11.glTexCoord2f(0, 1);
    GL11.glVertex2f(x1 + 0.5f, y2 + 0.5f);
    GL11.glEnd();
}
Step 2: If you're going to use glTranslatef(something1, something2, 0), it is wise to wrap it in a method that never lets the camera move a fractional distance. If there is any chance the camera can move by, say, 0.3, sooner or later you'll see this issue again (multiple times, I suppose). The following code makes the camera follow an object that has an X and a Y, and the camera will never lose sight of the object:
public void LookFollow(Block AF)
{
    float some = 5; // changing me will cause the camera to move faster/slower
    float mx = 0, my = 0;

    // Right-Left
    if (LookCorX != AF.getX())
    {
        if (AF.getX() > LookCorX)
        {
            if (AF.getX() < LookCorX + 2)
                mx = AF.getX() - LookCorX;
            if (AF.getX() > LookCorX + 2)
                mx = (AF.getX() - LookCorX) / some;
        }
        if (AF.getX() < LookCorX)
        {
            if (2 + AF.getX() > LookCorX)
                mx = AF.getX() - LookCorX;
            if (2 + AF.getX() < LookCorX)
                mx = (AF.getX() - LookCorX) / some;
        }
    }

    // Up-Down
    if (LookCorY != AF.getY())
    {
        if (AF.getY() > LookCorY)
        {
            if (AF.getY() < LookCorY + 2)
                my = AF.getY() - LookCorY;
            if (AF.getY() > LookCorY + 2)
                my = (AF.getY() - LookCorY) / some;
        }
        if (AF.getY() < LookCorY)
        {
            if (2 + AF.getY() > LookCorY)
                my = AF.getY() - LookCorY;
            if (2 + AF.getY() < LookCorY)
                my = (AF.getY() - LookCorY) / some;
        }
    }

    // Evading the "perfect pixel": snap each step to whole pixels
    mx = (int)mx;
    my = (int)my;

    // Moving the camera
    GL11.glTranslatef(-mx, -my, 0);

    // Saving the position of the camera
    LookCorX += mx;
    LookCorY += my;
}

float LookCorX = 300, LookCorY = 200; // camera's starting position
As a result, we get a camera that moves a little more sharply, since steps can't be smaller than 1 pixel and sometimes a smaller step would be needed, but the textures look okay. That's great progress!
Sorry for the really big answer. I'm still working on a good solution; once I find something better and shorter, I'll replace this.

OpenGL rotating a 2D texture

UPDATE
See bottom for update.
I've been looking around the internet a lot, and I have found a few tutorials that explain what I'm trying to achieve, but I can't get it to work; either the tutorial is incomplete or not applicable to my code.
I'm trying something as simple as rotating a 2D image around its origin (center).
I use xStart, xEnd, yStart and yEnd to flip the texture; they are either 0 or 1.
This is what the code looks like:
GameRectangle dest = destination;
Vector2 position = dest.getPosition();

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, this->image);

// If the rotation isn't 0 we'll rotate it
if (rotation != 0)
{
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.5, 0.5, 0);
    glRotatef(rotation, 0, 0, 1);
    glMatrixMode(GL_PROJECTION);
}

glBegin(GL_QUADS);
glTexCoord2d(xStart, yStart);
glVertex2f(position.x, position.y);
glTexCoord2d(xEnd, yStart);
glVertex2f(position.x + this->bounds.getWidth(), position.y);
glTexCoord2d(xEnd, yEnd);
glVertex2f(position.x + this->bounds.getWidth(), position.y + this->bounds.getHeight());
glTexCoord2d(xStart, yEnd);
glVertex2f(position.x, position.y + this->bounds.getHeight());
glEnd();

glDisable(GL_TEXTURE_2D);

// Reset the rotation so the next object won't be rotated
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glRotatef(0, 0, 0, 1);
glMatrixMode(GL_PROJECTION);
This code draws the image at its original size and rotates it, but it rotates around the top-left corner, which crops the image a lot. By calling GameRectangle.getOrigin() I can easily get the center of the rectangle, but I don't know where to use it.
But if I put:
glTranslatef(-0.5, -0.5, 0);
after I call:
glRotatef(rotation, 0, 0, 1);
it will rotate from the center, but it will stretch the image if it's not a perfect 90-degree rotation.
UPDATE
After trying pretty much everything possible, I got the result I was looking for.
But I'm not sure if this is the best approach. Please tell me if there's something wrong with my code.
As I mentioned in a comment above, I use the same image multiple times and draw it with different values, so I can't save anything to the actual image. I must therefore reset the values every time after I have rendered it.
I changed my code to this:
// Store the position temporarily
GameRectangle dest = destination;
Vector2 position = dest.getPosition();

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, this->image);

glTranslatef(dest.getOrigin().x, dest.getOrigin().y, 0);
glRotatef(rotation, 0, 0, 1);

glBegin(GL_QUADS);
glTexCoord2d(xStart, yStart);
glVertex2f(-dest.getWidth() / 2, -dest.getHeight() / 2);
glTexCoord2d(xEnd, yStart);
glVertex2f(dest.getWidth() / 2, -dest.getHeight() / 2);
glTexCoord2d(xEnd, yEnd);
glVertex2f(dest.getWidth() / 2, dest.getHeight() / 2);
glTexCoord2d(xStart, yEnd);
glVertex2f(-dest.getWidth() / 2, dest.getHeight() / 2);
glEnd();

// Reset the rotation and translation
glRotatef(-rotation, 0, 0, 1);
glTranslatef(-dest.getOrigin().x, -dest.getOrigin().y, 0);

glDisable(GL_TEXTURE_2D);
This rotates the texture together with the quad it's drawn in; it doesn't stretch or crop. However, the edges are a bit jagged if the image is a filled square, but I guess I can't avoid that without antialiasing.
What you want is this: save the current matrix with glPushMatrix() and restore it with glPopMatrix(), instead of undoing the transformations by hand:
glPushMatrix(); // Save the current matrix.

// Change the current matrix.
glTranslatef(dest.getOrigin().x, dest.getOrigin().y, 0);
glRotatef(rotation, 0, 0, 1);

glBegin(GL_QUADS);
glTexCoord2d(xStart, yStart);
glVertex2f(-dest.getWidth() / 2, -dest.getHeight() / 2);
glTexCoord2d(xEnd, yStart);
glVertex2f(dest.getWidth() / 2, -dest.getHeight() / 2);
glTexCoord2d(xEnd, yEnd);
glVertex2f(dest.getWidth() / 2, dest.getHeight() / 2);
glTexCoord2d(xStart, yEnd);
glVertex2f(-dest.getWidth() / 2, dest.getHeight() / 2);
glEnd();

// Reset the current matrix to the one that was saved.
glPopMatrix();