Java lwjgl glVertex2f sizes - opengl

I'm trying to use LWJGL to draw a 64x64-pixel square on a window with the following code:
glColor3f(1f,1f,1f);
glBegin(GL_QUADS);
glVertex2f(0, 0);
glVertex2f(0, 64);
glVertex2f(64, 64);
glVertex2f(64, 0);
glEnd();
I noticed that the size of the square isn't 64x64 pixels, but 100x100. I used the following code to track the mouse:
int pX = Mouse.getX();
int pY = Mouse.getY();
System.out.println(pX + " " + pY);
I tried dividing the pointer's coordinates by 1.6 and 1.53, and now they're right for some reason:
int pX = (int)(Mouse.getX()/1.6);
int pY = (int)(Mouse.getY()/1.53);
What unit of measure does glVertex2f use, and why does it use a different one from Mouse.getX() and Mouse.getY()? How can I fix the issue? Thanks
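glVertex2f has no unit of its own: its coordinates are interpreted through the current projection matrix, while Mouse.getX() and Mouse.getY() always report window pixels. If the projection spans fewer units than the window has pixels, each vertex unit covers more than one pixel. A minimal sketch of that scaling (the 512-unit and 800-pixel sizes are assumed values chosen so the ratio matches the question, not taken from the original code):

```java
public class UnitScale {
    // Pixels covered on screen by `units` vertex units, given the width of
    // the glOrtho projection and the width of the window in pixels.
    static double unitsToPixels(double units, double orthoWidth, double windowWidth) {
        return units * windowWidth / orthoWidth;
    }

    public static void main(String[] args) {
        // A 64-unit quad in a 512-unit-wide ortho shown in an 800-pixel
        // window lands on 100 pixels, like the question's 64 -> 100 case.
        assert unitsToPixels(64, 512, 800) == 100.0;
        // With the ortho width equal to the window width, one unit is one
        // pixel, which is the usual fix.
        assert unitsToPixels(64, 800, 800) == 64.0;
        System.out.println("ok");
    }
}
```

Setting the projection to the window's pixel size, e.g. glOrtho(0, Display.getWidth(), Display.getHeight(), 0, 1, -1), makes glVertex2f and Mouse.getX()/getY() agree (note that LWJGL's Mouse has its origin at the bottom-left, so the y axis may still need flipping).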

Related

OpenGL fixed spot map glitch

I developed a game using OpenGL and C++. It all works fine except for one glitch that I need to fix: when I move my camera around (using the mouse), my map does not stay in a fixed spot on screen.
The map is basically a square (GL_QUADS) that I draw in front of me.
This is an example of the glitch:
Video
This is the drawing code of the square, in case it's needed:
texture = scene->getTexture("map_papyrus.png");
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glUseProgram(this->shader->getRes());
glActiveTexture(GL_TEXTURE0);
GLint texture_location = glGetUniformLocation(this->shader->getFragment(), "color_texture");
glUniform1i(texture_location, 0); // the value is the texture unit (0 here), not the location
glBindTexture(GL_TEXTURE_2D, this->texture->getRes());
glBegin(GL_QUADS);
float size = .5f;
float offsetx = 0.0f;
float offsety = 0.0f;
if (set->easymode) { size = .2f; offsetx = 0.8f; offsety = 0.35f; }
glTexCoord2f(0,0); glVertex2f(-size + offsetx, -size + offsety);
glTexCoord2f(1, 0); glVertex2f(size + offsetx, -size +offsety);
glTexCoord2f(1,1); glVertex2f(size + offsetx, size + offsety);
glTexCoord2f(0, 1); glVertex2f(-size + offsetx, size + offsety);
glEnd();
glTranslated(0, 0, 0.00001);
Instead of rendering GUI elements as part of the 3D world, I'd rather finish drawing the world and then overlay a GUI on top of it (without clearing the buffer).
That way the HUD doesn't need to move based on the camera's position and rotation.
Also, HUD elements won't clip into the world.
glClear(...)
// Scene:
glPushMatrix()
gluPerspective(...) // perspective projection (a GLU call, applied to GL_PROJECTION)
glTranslatef(...)
glRotatef(...)
glBegin(...)
// ...
glEnd()
glPopMatrix()
// UI:
glPushMatrix()
glOrtho(...)
glBegin(...)
// ...
glEnd()
glPopMatrix()
// Swap buffers

Keeping the geometry intact on screen size change

I have made some shapes like this:
// Triangle
glBegin(GL_TRIANGLES);
glVertex3f(0.0,0.0,0);
glVertex3f(1.0,0.0,0);
glVertex3f(0.5,1.0,0);
glEnd();
// Cube using GLUT
glColor3f(0.0,0.0,1.0);
glutSolidCube(.5);
// Circle
glPointSize(2);
glColor3f(1.0,0.0,1.0);
glBegin(GL_POINTS);
float radius = .75;
for( float theta = 0 ; theta < 360 ; theta+=.01 )
glVertex3f( radius * cos(theta), radius * sin(theta), 0 );
glEnd();
Initially I keep my window size at 500x500 and the output is as shown:
However, if I change the width and height of my widget (not in proportion), the shapes get distorted (the circle looks oval, the equilateral triangle looks isosceles):
This is the widget update code :
void DrawingSurface::resizeGL(int width, int height)
{
// Update the drawable area in case the widget area changes
glViewport(0, 0, (GLint)width, (GLint)height);
}
I understand that I could keep the viewport itself at the same width and height, but then a lot of space would be wasted at the sides.
Q. Any solution for this?
Q. How do game developers handle this in general when designing an OpenGL game for different resolutions?
P.S. : I do understand that this isn't modern OpenGL and also there are better ways of making a circle.
They solve it by using the projection matrix: both the perspective and orthographic projections traditionally take the aspect ratio (width/height) and use it to correct the result on screen.
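A minimal sketch of that idea (orthoBounds and the halfHeight parameter are illustrative names, not from the question's code): keep a fixed vertical extent in world units and widen the horizontal extent with the window's aspect ratio, then pass the result to glOrtho next to the glViewport call in resizeGL.

```java
public class AspectOrtho {
    // Returns {left, right, bottom, top} for glOrtho: the vertical extent is
    // fixed at +/- halfHeight world units, and the horizontal extent grows
    // with the window's aspect ratio so shapes keep their proportions.
    static float[] orthoBounds(int width, int height, float halfHeight) {
        float aspect = (float) width / height;
        float halfWidth = halfHeight * aspect;
        return new float[] { -halfWidth, halfWidth, -halfHeight, halfHeight };
    }

    public static void main(String[] args) {
        // An 800x400 window (aspect 2) keeps y in [-1, 1] and widens x to [-2, 2].
        float[] b = orthoBounds(800, 400, 1f);
        assert b[0] == -2f && b[1] == 2f && b[2] == -1f && b[3] == 1f;
        // A square window stays [-1, 1] in both axes.
        assert orthoBounds(500, 500, 1f)[1] == 1f;
        System.out.println("ok");
    }
}
```

The trade-off is the one the question anticipates: a circle stays round, and the extra window space shows more of the scene instead of stretching it.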

Why is my Texture Quad positioned incorrectly with glTexCoord2f using LWJGL?

I am going to make an adventure game in 2D with grass, trees and other things, if I can make these. My problem is that when I use glTexCoord2f to map the texture onto a quad, the quads end up separated from each other by about 25 pixels. These quads are supposed to be connected, like in any 2D game.
I'm loading the textures with SlickUtil, and the size of the texture is 100x100.
Here's my source code for rendering quads and InitGL
public static void Render(){
example--;
GL11.glLoadIdentity();
Color.white.bind();
system.Game.ground.bind();
GL11.glTranslatef(example, 0, 0);
//I used for loop for cloning quad at each side.
for(int x = 0; x <= width; x++){
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(x * 100, 0);
GL11.glTexCoord2f(1, 0);
GL11.glVertex2f(x * 100 + 100, 0);
GL11.glTexCoord2f(1, 1);
GL11.glVertex2f(x * 100 + 100, 100);
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(x * 100, 100);
GL11.glEnd();
}
GL11.glLoadIdentity();
}
Here's my InitGL code:
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
GL11.glViewport(0,0,width,height);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, width, height, 0, 1, -1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
SlickUtil only reads textures with power-of-two dimensions. The reason is that OpenGL only gained support for non-power-of-two textures in OpenGL 2.0. Try loading a texture with dimensions of 128 by 128 pixels, then display it at 100 by 100 pixels (like you did in your example).
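The padding step suggested above (100x100 up to 128x128) can be computed with a small helper; nextPowerOfTwo is a hypothetical name, not a SlickUtil API:

```java
public class NextPow2 {
    // Smallest power of two >= n (for n >= 1): the padded texture size that
    // pre-2.0 OpenGL (and hence SlickUtil) requires.
    static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    public static void main(String[] args) {
        assert nextPowerOfTwo(100) == 128; // a 100px source image needs a 128px texture
        assert nextPowerOfTwo(128) == 128; // already a power of two
        assert nextPowerOfTwo(1) == 1;
        System.out.println("ok");
    }
}
```

When drawing, either pad the image to 128x128 and adjust the texture coordinates to 100/128, or simply author the art at a power-of-two size.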

OpenGL screen layout

I have some questions about the screen setup. Originally, when I would draw a triangle, x = 1 was all the way to the right and x = -1 all the way to the left. Now I have adjusted it to account for the aspect ratio of the window. My new question is: how do I make the numbers used to render a 2D triangle correspond to pixel values? If my window is 480 pixels wide and 320 tall, I want to be able to enter this to span the screen with a triangle:
glBegin(GL_TRIANGLES);
glVertex2f(240, 320);
glVertex2f(480, 0);
glVertex2f(0, 0);
glEnd();
but instead it currently looks like this
glBegin(GL_TRIANGLES);
glVertex2f(0, 1);
glVertex2f(1, -1);
glVertex2f(-1, -1);
glEnd();
Any ideas?
You need to use the functions glViewport and glOrtho with the correct values. Basically, glViewport sets the part of your window that OpenGL renders into. glOrtho establishes the coordinate system within that part of the window, in OpenGL's coordinates.
So for your task you need to know the exact width and height of your window. If they are 480 and 320 respectively, then you need to call
glViewport(0, 0, 480, 320)
// or: glViewport ( 0,0,w,h)
somewhere, for example in your resize handler (if you are using the WinAPI, that is the WM_SIZE message).
Next, when setting up the scene, you need to specify OpenGL's coordinates. For an orthographic projection they can match the dimensions of the window, so
glOrtho(-240, 240, -160, 160, -100, 100)
// or: glOrtho ( -w/2, w/2, -h/2, h/2, -100, 100 );
is suitable for your purpose. Note that here I'm using a depth of 200 (z goes from -100 to 100).
Then, in your rendering routine, you can draw your triangle.
Since the second piece of code works for you, I assume your transformation matrices are all identity or you have a shader that bypasses them, and that your viewport spans the whole window.
In general if your viewport starts at (x0,y0) and has WxH size, the normalized coordinates (x,y) you feed to glVertex2f will be transformed to (vx,vy) as follows:
vx = x0 + (x * .5f + .5f) * W
vy = y0 + (y * .5f + .5f) * H
If you want to use pixel coordinates you can use a function like
void vertex2(int x, int y)
{
    // Map pixel centers to normalized device coordinates in [-1, 1].
    float vx = (float(x) + .5f) / 240.f - 1.f;
    float vy = (float(y) + .5f) / 160.f - 1.f;
    glVertex3f(vx, vy, -1.f);
}
The z value of -1 is the closest depth to the viewer. It's negative because z is assumed to be flipped by the transformation (which is the identity in your case).
The addition of .5f is because the rasterizer treats a pixel as a 1x1 quad and evaluates your triangle's coverage at the middle of that quad.
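The two viewport formulas above can be checked in plain Java (ndcToWindow is a made-up helper that mirrors them exactly):

```java
public class ViewportTransform {
    // Normalized device coordinates in [-1, 1] -> window pixels, for a
    // viewport at (x0, y0) with size w x h, per vx = x0 + (x*.5+.5)*W.
    static float[] ndcToWindow(float x, float y, int x0, int y0, int w, int h) {
        return new float[] { x0 + (x * .5f + .5f) * w,
                             y0 + (y * .5f + .5f) * h };
    }

    public static void main(String[] args) {
        // NDC origin lands at the centre of a 480x320 viewport.
        float[] p = ndcToWindow(0f, 0f, 0, 0, 480, 320);
        assert p[0] == 240f && p[1] == 160f;
        // NDC (-1, -1) lands at the viewport's lower-left corner.
        float[] c = ndcToWindow(-1f, -1f, 0, 0, 480, 320);
        assert c[0] == 0f && c[1] == 0f;
        System.out.println("ok");
    }
}
```

Inverting this mapping is exactly what the vertex2 helper above does, which is why pixel inputs come out at the right screen positions.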

OpenGL rotating a 2D texture

UPDATE
See bottom for update.
I've been looking around the internet a lot and I have found a few tutorials that explain what I'm trying to achieve, but I can't get it to work; either the tutorial is incomplete or it is not applicable to my code.
I'm trying something as simple as rotating a 2D image around its origin (center).
I use xStart, xEnd, yStart and yEnd to flip the texture which are either 0 or 1.
This is what the code looks like
GameRectangle dest = destination;
Vector2 position = dest.getPosition();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, this->image);
//If the rotation isn't 0 we'll rotate it
if (rotation != 0)
{
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.5, 0.5, 0);
glRotatef(rotation, 0, 0, 1);
glMatrixMode(GL_PROJECTION);
}
glBegin(GL_QUADS);
glTexCoord2d(xStart,yStart);
glVertex2f(position.x, position.y);
glTexCoord2d(xEnd,yStart);
glVertex2f(position.x + this->bounds.getWidth(), position.y);
glTexCoord2d(xEnd,yEnd);
glVertex2f(position.x + this->bounds.getWidth(), position.y + this->bounds.getHeight());
glTexCoord2d(xStart,yEnd);
glVertex2f(position.x, position.y + this->bounds.getHeight());
glEnd();
glDisable(GL_TEXTURE_2D);
//Reset the rotation so next object won't be rotated
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glRotatef(0, 0, 0, 1);
glMatrixMode(GL_PROJECTION);
This code draws the image at its original size and it does rotate it, but it rotates around the top-left corner, which crops the image a lot. By calling GameRectangle.getOrigin() I can easily get the center of the rectangle, but I don't know where to use it.
But if I put:
glTranslatef(-0.5, -0.5, 0);
after I call:
glRotatef(rotation, 0, 0, 1);
it will rotate from the center, but it will stretch the image if the rotation isn't a perfect 90 degrees.
UPDATE
After trying pretty much everything possible, I got the result I was looking for.
But I'm not sure if this is the best approach. Please tell me if there's something wrong with my code.
As I mentioned in a comment above, I use the same image multiple times and draw it with different values, so I can't save anything to the actual image. So I must reset the values every time after I have rendered it.
I changed my code to this:
//Store the position temporary
GameRectangle dest = destination;
Vector2 position = dest.getPosition();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, this->image);
glTranslatef(dest.getOrigin().x, dest.getOrigin().y, 0);
glRotatef(rotation, 0, 0, 1);
glBegin(GL_QUADS);
glTexCoord2d(xStart,yStart);
glVertex2f(-dest.getWidth()/2, -dest.getHeight()/2);
glTexCoord2d(xEnd,yStart);
glVertex2f(dest.getWidth()/2, -dest.getHeight()/2);
glTexCoord2d(xEnd,yEnd);
glVertex2f(dest.getWidth()/2, dest.getHeight()/2);
glTexCoord2d(xStart,yEnd);
glVertex2f(-dest.getWidth()/2, dest.getHeight()/2);
glEnd();
//Reset the rotation and translation
glRotatef(-rotation,0,0,1);
glTranslatef(-dest.getOrigin().x, -dest.getOrigin().y, 0);
glDisable(GL_TEXTURE_2D);
This rotates the texture together with the quad it's drawn in; it doesn't stretch or crop. However, the edges are a bit jagged if the image is a filled square, but I guess I can't avoid that without antialiasing.
What you want is this:
glPushMatrix(); //Save the current matrix.
//Change the current matrix.
glTranslatef(dest.getOrigin().x, dest.getOrigin().y, 0);
glRotatef(rotation, 0, 0, 1);
glBegin(GL_QUADS);
glTexCoord2d(xStart,yStart);
glVertex2f(-dest.getWidth()/2, -dest.getHeight()/2);
glTexCoord2d(xEnd,yStart);
glVertex2f(dest.getWidth()/2, -dest.getHeight()/2);
glTexCoord2d(xEnd,yEnd);
glVertex2f(dest.getWidth()/2, dest.getHeight()/2);
glTexCoord2d(xStart,yEnd);
glVertex2f(-dest.getWidth()/2, dest.getHeight()/2);
glEnd();
//Reset the current matrix to the one that was saved.
glPopMatrix();
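The glTranslatef/glRotatef pattern above boils down to rotating each corner about the quad's centre. A small sketch of that math (rotateAround is a hypothetical helper; the point is given relative to the centre, just like the +/- width/2 vertex calls above):

```java
public class RotateAboutCenter {
    // Rotate a point (x, y), given RELATIVE to the centre (cx, cy), by `deg`
    // degrees counter-clockwise, then translate it back into world space.
    static float[] rotateAround(float x, float y, float cx, float cy, float deg) {
        double r = Math.toRadians(deg);
        float rx = (float) (x * Math.cos(r) - y * Math.sin(r));
        float ry = (float) (x * Math.sin(r) + y * Math.cos(r));
        return new float[] { rx + cx, ry + cy };
    }

    public static void main(String[] args) {
        // For a quad centred at (300, 200), the midpoint of its right edge
        // (100 units right of the centre) moves to 100 units above the
        // centre after a 90-degree rotation.
        float[] p = rotateAround(100f, 0f, 300f, 200f, 90f);
        assert Math.abs(p[0] - 300f) < 1e-3 && Math.abs(p[1] - 300f) < 1e-3;
        System.out.println("ok");
    }
}
```

Because the vertices are specified relative to the centre, the matrix stack only needs the translate-then-rotate pair (saved and restored with glPushMatrix/glPopMatrix), which is why this version neither crops nor stretches.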