How to scale pyopengl texture size to pygame window size?

Window size: 640 * 480
Texture size: 1280 * 720
How can I scale the texture to the pygame window?

You have set up an orthographic projection:
glOrtho(0, self.windowWidth, self.windowHeight, 0, -1, 1)
If you want the texture to fill the whole window, then you have to draw a quad that covers the orthographic projection, with texture coordinates from (0, 0) to (1, 1):
glEnable(GL_TEXTURE_2D)
glBegin(GL_QUADS)
glTexCoord2f(0, 0)
glVertex2f(0, 0)
glTexCoord2f(1, 0)
glVertex2f(self.windowWidth, 0)
glTexCoord2f(1, 1)
glVertex2f(self.windowWidth, self.windowHeight)
glTexCoord2f(0, 1)
glVertex2f(0, self.windowHeight)
glEnd()
If you want to keep the aspect ratio of the texture, then you have to scale the quad. For instance:
left = 0
top = 0
width = self.windowWidth
height = self.windowHeight
sx = self.windowWidth / textureWidth
sy = self.windowHeight / textureHeight
if sx > sy:
    # window is relatively wider than the texture: shrink the quad width (pillarbox)
    width *= sy / sx
    left = (self.windowWidth - width) / 2
else:
    # window is relatively taller than the texture: shrink the quad height (letterbox)
    height *= sx / sy
    top = (self.windowHeight - height) / 2
glEnable(GL_TEXTURE_2D)
glBegin(GL_QUADS)
glTexCoord2f(0, 0)
glVertex2f(left, top)
glTexCoord2f(1, 0)
glVertex2f(left + width, top)
glTexCoord2f(1, 1)
glVertex2f(left + width, top + height)
glTexCoord2f(0, 1)
glVertex2f(left, top + height)
glEnd()
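For completeness, here is a minimal sketch of the aspect-ratio fit as a single function (assuming a current GL context created by pygame and an already created and bound 2D texture; draw_texture_fit and its parameter names are illustrative, not part of the original code):

# Minimal sketch (assumptions: valid GL context from pygame, a bound 2D texture)
from OpenGL.GL import *

def draw_texture_fit(window_w, window_h, tex_w, tex_h):
    # Orthographic projection matching the window, top-left origin (as above)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glOrtho(0, window_w, window_h, 0, -1, 1)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()

    # Fit the quad to the window while keeping the texture's aspect ratio
    left, top, width, height = 0, 0, window_w, window_h
    sx, sy = window_w / tex_w, window_h / tex_h
    if sx > sy:
        width *= sy / sx
        left = (window_w - width) / 2
    else:
        height *= sx / sy
        top = (window_h - height) / 2

    glEnable(GL_TEXTURE_2D)
    glBegin(GL_QUADS)
    glTexCoord2f(0, 0); glVertex2f(left, top)
    glTexCoord2f(1, 0); glVertex2f(left + width, top)
    glTexCoord2f(1, 1); glVertex2f(left + width, top + height)
    glTexCoord2f(0, 1); glVertex2f(left, top + height)
    glEnd()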

Related

OpenGL Drawing 2 vectors with a given angle between them

I'm trying to draw 2 vectors in OpenGL with a given angle (in radians) between them, something like this:
I managed to draw the vectors but I'm not sure how to place them at the specific angle:
glBegin(GL_LINES); // Vx
glColor4f(1, .5, 0, 1);
glVertex3f(0, 0, 0);
glVertex3f(0, vectorYRScalingValue, 0); // vectorYRScalingValue is 5.0
glEnd();
glBegin(GL_LINES); // Vy
glColor4f(1, .5, 0, 1);
glVertex3f(0, 0, 0);
glVertex3f(0, vectorYRScalingValue, 0);
glEnd();
If β is the angle to rotate by, in radians, then rotating the vector (x, y) anticlockwise around the origin gives:
float c = cos(beta);
float s = sin(beta);
NewX = x * c - y * s;
NewY = x * s + y * c;
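Applied to the snippet above, a minimal sketch (shown in PyOpenGL syntax, which mirrors the C calls one-to-one; beta, length and draw_vectors are illustrative names) that draws the first vector along +Y and the second one rotated anticlockwise by β:

# Sketch (assumptions: beta in radians, length taken from vectorYRScalingValue)
import math
from OpenGL.GL import *

def draw_vectors(beta, length):
    # First vector: straight up the y-axis
    x, y = 0.0, length
    # Second vector: the first one rotated anticlockwise by beta around the origin
    c, s = math.cos(beta), math.sin(beta)
    rx, ry = x * c - y * s, x * s + y * c

    glBegin(GL_LINES)
    glColor4f(1, 0.5, 0, 1)
    glVertex3f(0, 0, 0); glVertex3f(x, y, 0)    # Vx
    glVertex3f(0, 0, 0); glVertex3f(rx, ry, 0)  # Vy, at angle beta from Vx
    glEnd()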

How do I bring a polygon to the foreground in OpenGL?

The code below creates 2 square polygons, red and green.
I'm trying to place a red square on top of the green, but I can't.
The depth buffer is declared and cleared when necessary, and an orthographic projection is configured correctly.
If I specify a value outside the range (2;-2), the polygon disappears as it should.
#include <...>
constexpr auto FPS_RATE = 120;
int windowHeight = 600, windowWidth = 600, windowDepth = 600;
void init();
void idleFunction();
void displayFunction();
double getTime();
double getTime()
{
    using Duration = std::chrono::duration<double>;
    return std::chrono::duration_cast<Duration>(
        std::chrono::high_resolution_clock::now().time_since_epoch()
    ).count();
}

const double frame_delay = 1.0 / FPS_RATE;
double last_render = 0;

void init()
{
    glutDisplayFunc(displayFunction);
    glutIdleFunc(idleFunction);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-windowWidth / 2, windowWidth / 2, -windowHeight / 2, windowHeight / 2, 2, -2);
    glClearColor(0.0, 0.0, 0.0, 0.0);
}

void idleFunction()
{
    const double current_time = getTime();
    if ((current_time - last_render) > frame_delay)
    {
        last_render = current_time;
        glutPostRedisplay();
    }
}

void displayFunction()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glPushMatrix();
    // move the red square to the foreground
    glTranslatef(-32.5, -32.5, 2);
    glColor3f(1, 0, 0);
    glBegin(GL_POLYGON);
    glVertex3i(-150, 150, 0);
    glVertex3i(150, 150, 0);
    glVertex3i(150, -150, 0);
    glVertex3i(-150, -150, 0);
    glEnd();
    glPopMatrix();

    glPushMatrix();
    // move the green square to the background
    glTranslatef(32.5, 32.5, -2);
    glColor3f(0, 1, 0);
    glBegin(GL_POLYGON);
    glVertex3i(-150, 150, 0);
    glVertex3i(150, 150, 0);
    glVertex3i(150, -150, 0);
    glVertex3i(-150, -150, 0);
    glEnd();
    glPopMatrix();

    glutSwapBuffers();
}

int main(int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(windowWidth, windowHeight);
    glutInitWindowPosition((GetSystemMetrics(SM_CXSCREEN) - windowWidth) / 2, (GetSystemMetrics(SM_CYSCREEN) - windowHeight) / 2);
    glutCreateWindow("Window");
    init();
    glutMainLoop();
    return 0;
}
You have to enable the depth test:
glEnable( GL_DEPTH_TEST );
The default depth test function (glDepthFunc) is < (GL_LESS).
If the distance to the far plane is 2.0 and the geometry is drawn with a z coordinate of 2.0, then the geometry is clipped by the far plane, because its depth is not less than the initial value of the depth buffer.
Change the depth function to <= (GL_LEQUAL):
glDepthFunc( GL_LEQUAL );
In a right-handed system the view-space z-axis points out of the viewport.
So if one object's z coordinate is less than another's, it is "behind" that other object.
The projection matrix transforms from view space to normalized device space. Compared to view space, normalized device space is a left-handed system, where the z-axis points into the viewport. The normalized device z coordinate, in the range [-1, 1] (front to back), is mapped to the depth value (generally in the range [0, 1]), which is used for the depth test.
To deal with that, glOrtho inverts the z-axis if the near parameter is set less than the far parameter (this is how the function is intended to be used).
This ensures that the depth (z) order doesn't change when the geometry is transformed from view space to normalized device space.
Note that glOrtho(-w, w, -h, h, -z, z) is the same as glScaled(1.0/w, 1.0/h, -1.0/z).
Since near > far in your example,
glOrtho(-windowWidth / 2, windowWidth / 2, -windowHeight / 2, windowHeight / 2, 2, -2);
the orthographic projection does not invert the z-axis, so the z coordinate has to be greater for an object to be "behind".
If the green rectangle should be behind the red one, then you have to change the orthographic projection so that near < far, e.g.:
glOrtho(-windowWidth / 2, windowWidth / 2, -windowHeight / 2, windowHeight / 2, -2, 2);
If you don't want to change the projection, then you have to swap the z coordinates of the geometry:
glPushMatrix();
//move the red square to the foreground
glTranslatef(-32.5, -32.5, -2.0); // foreground because near > far
// ...
glPopMatrix();
glPushMatrix();
//move the green square to the background
glTranslatef(32.5, 32.5, 2.0); // background because near > far
// ...
glPopMatrix();
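Putting the pieces together, here is a hedged sketch of the corrected initialization (written in PyOpenGL syntax, but the calls map one-to-one to the C code above; also make sure the display mode actually requests a depth buffer, e.g. GLUT_DEPTH, if your framework requires it):

# Sketch of the corrected setup (assumption: 600 x 600 window as in the question)
from OpenGL.GL import *

def init_projection(window_w=600, window_h=600):
    glEnable(GL_DEPTH_TEST)   # without this, draw order decides visibility, not depth
    glDepthFunc(GL_LEQUAL)    # as suggested above, geometry exactly on the far plane still passes
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    # near < far: the viewer looks along -z, so the square translated to z = +2 (red)
    # ends up in front of the square translated to z = -2 (green)
    glOrtho(-window_w / 2, window_w / 2, -window_h / 2, window_h / 2, -2, 2)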

OpenGL 2d rectangle not being rendered

I am trying to render a rectangle onto the screen. When the program is run, only the clear color shows up, and no rectangle.
Here's the code:
glClearColor(0.0, 0.0, 0.0, 0.0);
glViewport(0, 0, 1280, 720);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 1280, 720, 0, -10, 10);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glClear(GL_COLOR_BUFFER_BIT || GL_DEPTH_BUFFER_BIT); //Clear the screen and depth buffer
int x = 100;
int y = 100;
while (!glfwWindowShouldClose(window)) {
    glfwPollEvents();
    glBegin(GL_QUADS);
    glVertex2f(x, y);
    glVertex2f(x + 10, y);
    glVertex2f(x + 10, y + 10);
    glVertex2f(x, y + 10);
    glEnd();
    gsm->update();
    gsm->render();
    glfwSwapBuffers(window);
}
It got culled. You inverted the Y axis with your projection by supplying bottom = 720, which is larger than top = 0. Your quad is counterclockwise in your local coordinates, but in normalized device coordinates it becomes clockwise. Remember, the projection matrix is part of the global transform! In the default state GL_CCW is the winding direction considered "front", the cull mode is glCullFace(GL_BACK), and a quad is internally treated as a pair of triangles.
Either change the order of the vertices:
glBegin(GL_QUADS);
glVertex2f(x, y);
glVertex2f(x, y + 10);
glVertex2f(x + 10, y + 10);
glVertex2f(x + 10, y);
glEnd();
or change the culling mode to match the left-handedness of your coordinate system, or disable culling altogether.
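If you'd rather keep the vertex order, here is a hedged sketch of those other two options (PyOpenGL syntax; the equivalent C calls are identical):

# Sketch: either declare clockwise as front-facing, or turn culling off entirely
from OpenGL.GL import *

# Option A: with the y-inverted (left-handed) projection, clockwise winding is what
# actually faces the viewer, so tell OpenGL that clockwise means "front"
glFrontFace(GL_CW)

# Option B: simply don't cull back faces at all
glDisable(GL_CULL_FACE)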
See also:
1. https://www.khronos.org/opengl/wiki/Viewing_and_Transformations
2. The answer to Is OpenGL coordinate system left-handed or right-handed?

OpenGL: Trouble setting up viewport and glOrtho

I am trying to setup my OpenGL views for some texture rendering. Following some advice on the forum, I set up my viewport and ortho matrix as follows:
First I try to compute the screen width and height that I can use while maintaining the aspect ratio of my image:
void resize(int w, int h)
{
    float target_aspect_ratio = image_width / image_height;
    width = w;
    height = (int)(width / target_aspect_ratio + 0.5f);
    if (height > h) {
        height = h;
        width = (int)(height * target_aspect_ratio + 0.5f);
    }
    off_x = (w - width) / 2.f;
    off_y = (h - height) / 2;
    // I want to center my image, so I have these offsets

    glViewport(off_x, off_y, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, 0.0f, 1.0f);
}
Now when I want to render my texture I do:
void paint()
{
    // texture binding etc.
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(width, 0);
    glTexCoord2f(1, 1); glVertex2f(width, height);
    glTexCoord2f(0, 1); glVertex2f(0, height);
}
However, this does not show the image as expected. It does not maintain the aspect ratio as I resize the screen. It is almost as if glViewport has no effect, even though I can verify this function gets called every time my window is resized.
Update:
It is strange, almost as if these calls have no effect. I even tried something like:
_off_x = _off_y = 0;
_width = 500;
_height = 500;
So I expected the viewport to be the lower-left box of my screen, but the image is drawn as before, basically using the whole screen as the viewport.
Update 2:
Ok, so if I call
glViewport(_off_x, _off_y, _width, _height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, _width, 0, _height, 0, 1);
in my paint events, it works as expected! However, I thought it was enough to put this in the resize event handler.
Before you start drawing, you need to switch the matrix mode to GL_MODELVIEW. You don't need to set your projection matrix inside your render function each frame.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Here is a detailed analysis that I wrote about the glMatrixMode() function modes:
OpenGL glMatrixMode(GL_PROJECTION) vs glMatrixMode(GL_MODELVIEW)
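Putting the advice together, a hedged sketch of how the two handlers can be organized (PyOpenGL syntax; image_width, image_height and the function names are illustrative): the viewport and projection are set once per resize, and the paint handler only switches to GL_MODELVIEW and draws:

# Sketch (assumptions: a current GL context, image_width/image_height known)
from OpenGL.GL import *

def resize(w, h, image_width, image_height):
    # Fit a viewport with the image's aspect ratio, centered in the window
    target_aspect = image_width / image_height
    width, height = w, int(w / target_aspect + 0.5)
    if height > h:
        height, width = h, int(h * target_aspect + 0.5)
    off_x, off_y = (w - width) // 2, (h - height) // 2

    glViewport(off_x, off_y, width, height)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glOrtho(0, width, 0, height, 0, 1)
    return width, height

def paint(width, height):
    # width/height are the values returned by resize()
    glMatrixMode(GL_MODELVIEW)   # leave GL_PROJECTION alone while drawing
    glLoadIdentity()
    glBegin(GL_QUADS)
    glTexCoord2f(0, 0); glVertex2f(0, 0)
    glTexCoord2f(1, 0); glVertex2f(width, 0)
    glTexCoord2f(1, 1); glVertex2f(width, height)
    glTexCoord2f(0, 1); glVertex2f(0, height)
    glEnd()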

Texture Coordinates for OpenGL

I have a tilesheet loaded as a texture and I am trying to render a single tile from it:
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, TileSheet);
float TY = (Tile / 16) * 0.0625;
float TX = (Tile % 16) * 0.0625;
glBegin(GL_QUADS);
glTexCoord2f(TX, TY); glTexCoord2f(0, 0); glVertex3f(X, Y, 0);
glTexCoord2f(TX + 0.0625, TY); glVertex3f(X + TILE_SIZE, Y, 0);
glTexCoord2f(TX + 0.0625, TY + 0.0625); glVertex3f(X + TILE_SIZE, Y + TILE_SIZE, 0);
glTexCoord2f(TX, TY + 0.0625); glVertex3f(X, Y + TILE_SIZE, 0);
glEnd();
Tile is an int and represents which tile of the tilesheet to render.
0.0625 is the size of one tile relative to the tilesheet (the sheet is 512 px with 16 tiles per line, so each tile covers 1 / 16 = 0.0625 of it).
The tilesheet is 16 tiles wide, so if Tile were 16 it should be the leftmost tile, one row down from the top of the tilesheet. But this is what I get:
The top left is [0][0] and the bottom right is [1][0]. I don't see what I'm doing wrong; any help would be greatly appreciated.
Thanks.
I should also mention that if Tile == 0 it renders the correct tile, but as soon as Tile > 0 the tile ends up as above.
You call glTexCoord2f twice before you call glVertex3f. That does not look right.
It looks like this line:
glTexCoord2f(TX, TY); glTexCoord2f(0, 0); glVertex3f(X, Y, 0);
Should be:
glTexCoord2f(TX, TY); glVertex3f(X, Y, 0);
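For reference, a small hedged helper (PyOpenGL syntax, assuming a sheet with 16 tiles per row as in the question; draw_tile and its parameters are illustrative names) that computes the four texture coordinates for a tile index and emits the quad:

# Sketch: one tile spans 1/16 = 0.0625 of the sheet in each direction
from OpenGL.GL import *

TILES_PER_ROW = 16
TILE_UV = 1.0 / TILES_PER_ROW   # 0.0625

def draw_tile(tile, x, y, tile_size):
    tx = (tile % TILES_PER_ROW) * TILE_UV    # column of the tile, as a texture coordinate
    ty = (tile // TILES_PER_ROW) * TILE_UV   # row of the tile, as a texture coordinate
    glBegin(GL_QUADS)
    glTexCoord2f(tx, ty);                     glVertex3f(x, y, 0)
    glTexCoord2f(tx + TILE_UV, ty);           glVertex3f(x + tile_size, y, 0)
    glTexCoord2f(tx + TILE_UV, ty + TILE_UV); glVertex3f(x + tile_size, y + tile_size, 0)
    glTexCoord2f(tx, ty + TILE_UV);           glVertex3f(x, y + tile_size, 0)
    glEnd()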