LWJGL OpenGL texturing not correct - opengl

I'm trying to make a simple button in OpenGL/LWJGL.
I can render my 2D quad correctly, but when I apply the texture, only about three quarters of the quad gets textured, like this: https://dl.dropboxusercontent.com/u/60223805/glerror1.png
If I remove the texture coords I get this: https://dl.dropboxusercontent.com/u/60223805/glerror2.png
none.bind();
co.Enable2D_GUI();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(co.width/2-200, co.height/2);
GL11.glTexCoord2f(1, 0);
GL11.glVertex2f(co.width/2+none.getTextureWidth(),co.height/2);
GL11.glTexCoord2f(1, 1);
GL11.glVertex2f(co.width/2+none.getTextureWidth(), co.height/2+none.getTextureHeight());
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(co.width/2-200, co.height/2+none.getTextureHeight());
GL11.glEnd();
co.Disable2D_GUI();
where none is a Texture (from the slick-util library) and the methods Enable2D_GUI and Disable2D_GUI just enable and disable the ortho projection and related state.
What could be wrong? I'm very new to OpenGL, so I'm sorry if my question is a bit nooby.
These are my Enable2D_GUI and Disable2D_GUI functions:
public void Enable2D_GUI() {
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glPushMatrix();
    GL11.glLoadIdentity();
    GL11.glOrtho(0, width, height, 0, 1, -1);
    GL11.glDisable(GL11.GL_DEPTH_TEST);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glPushMatrix();
    GL11.glLoadIdentity();
}
public void Disable2D_GUI() {
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glPopMatrix();
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glPopMatrix();
    GL11.glDisable(GL11.GL_BLEND);
    GL11.glEnable(GL11.GL_DEPTH_TEST);
}
Now when I test it with a 3D quad it doesn't work either, same result. This is my OpenGL init code:
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glShadeModel(GL11.GL_SMOOTH);
GL11.glClearColor(0f, 0.0f, 0.0f, 0.0f);
GL11.glClearDepth(1.0);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
GL11.glViewport(0, 0, width, height);
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GLU.gluPerspective(45.0f, (float) width / (float) height, 0.5f, 50.0f);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);

You're misusing glPushMatrix and glPopMatrix, and you're also not performing the state switches in the correct order.
In your init code, change it to the following:
GL11.glViewport(0, 0, width, height);
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GLU.gluPerspective(45.0f, (float) width / (float) height, 0.5f, 50.0f);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glShadeModel(GL11.GL_SMOOTH);
GL11.glClearColor(0f, 0.0f, 0.0f, 0.0f);
GL11.glClearDepth(1.0);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
You don't strictly have to perform the state switches after the glLoadMatrix and glLoadIdentity calls, though it is better to do so: 1) the code is more readable, and 2) some state gets reset by glLoadIdentity, although only the state of the matrix itself.
Then you also need to fix your Enable2D_GUI and Disable2D_GUI to the following.
Enable2D_GUI
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, width, height, 0f, 1f, -1f);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glDisable(GL11.GL_DEPTH_TEST);
Disable2D_GUI
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glDisable(GL11.GL_BLEND);
GL11.glEnable(GL11.GL_DEPTH_TEST);
At least as far as I know, you cannot use glPushMatrix and glPopMatrix between the glMatrixMode and glLoadIdentity calls, though I could be mistaken. I also added glLoadIdentity calls instead, to reset the matrices.
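Putting the pieces together, here is a minimal sketch of how the corrected helpers above might be used to draw the textured button each frame. It assumes the same co and none objects from the question, and makes the quad exactly as wide and tall as the texture so the texture coordinates cover the whole quad:
// Sketch only: per-frame GUI pass built from the corrected Enable2D_GUI/Disable2D_GUI.
float x = co.width / 2f - 200f;          // left edge of the button
float y = co.height / 2f;                // top edge of the button
float w = none.getTextureWidth();        // quad width matches the texture width
float h = none.getTextureHeight();       // quad height matches the texture height

co.Enable2D_GUI();                       // switch to the 2D ortho projection
none.bind();                             // bind the button texture

GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0); GL11.glVertex2f(x, y);
GL11.glTexCoord2f(1, 0); GL11.glVertex2f(x + w, y);
GL11.glTexCoord2f(1, 1); GL11.glVertex2f(x + w, y + h);
GL11.glTexCoord2f(0, 1); GL11.glVertex2f(x, y + h);
GL11.glEnd();

co.Disable2D_GUI();                      // restore the 3D state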

Related

OpenGL4: Rotation looks all wrong

Here's the vertex buffer information of the quad I'm drawing:
static const GLfloat pv_quad[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f,
     1.0f,  1.0f, 0.0f,
};
This quad is used to draw 2D frames on the screen as part of the graphical user interface. The class I use to do this is Mage::Interface::Frame. I'll spare you the header definition and instead give you the class's implementation, as it's small. There's some test code in here, so ignore the fact the shader is part of the class. I know it shouldn't be there.
#include <Mage/Root.h>
#include <Mage/Interface/Frame.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/transform.hpp>
using Mage::Interface::Frame;
Frame::Frame()
    : width(300), height(200), position(0, 0), color(1.0, 1.0, 1.0), model(1.0), rotation(0) {
    prog.compileFile("Data/Shaders/FrameVertex.glsl", Mage::ShaderType::VERTEX);
    prog.compileFile("Data/Shaders/FrameFragment.glsl", Mage::ShaderType::FRAGMENT);
    prog.link();
    this->calcTransform();
}

void Frame::setSize(int w, int h) {
    this->width = w;
    this->height = h;
    this->calcTransform();
}

void Frame::setColor(int r, int g, int b) {
    this->color = glm::vec3(float(r) / 256, float(g) / 256, float(b) / 256);
}

void Frame::setRotation(float degrees) {
    this->rotation = glm::radians(degrees);
    this->calcTransform();
}

void Frame::calcTransform() {
    this->model = glm::mat4(1.0f); // reset model to origin.
    // 1280 and 720 are the viewport's size. This is only hard coded for tests.
    this->model = glm::scale(this->model, glm::vec3(float(width) / 1280, float(height) / 720, 1.0f));
    this->model = glm::rotate(this->model, this->rotation, glm::vec3(0.0f, 0.0f, 1.0f));
    this->model = glm::translate(this->model, glm::vec3(position.x, position.y, 0.0f));
}

void Frame::draw() {
    Mage::VertexObject obj = ROOT.getRenderWindow()->getVertexBufferObject()->getObject("PrimitiveQuad");
    prog.use();
    prog.setUniform("mvp", this->model);
    prog.setUniform("fColor", this->color);
    glEnableVertexAttribArray(0);
    ROOT.getRenderWindow()->getVertexBufferObject()->bind();
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)obj.begin);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, obj.size);
    glDisableVertexAttribArray(0);
}
Here's the drawing function that's called every frame:
void RenderWindow::render() {
    Mage::Interface::Frame F;
    F.setSize(400, 200);
    F.setRotation(0);

    while (glfwWindowShouldClose(this->win) == 0) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        F.draw();
        glfwSwapBuffers(this->win);
        glfwPollEvents();
    }
}
When I have setRotation(0), the resulting quad is indeed 400 pixels wide and 200 pixels high, right in the centre of my screen, as you would expect.
However, if I set the rotation to 90, well, this happens:
As you can see, that's not at all close to a 90-degree turn. It should be 400px high and 200px wide.
Anyone care to explain what's going on here?
EDIT: Some playing around has shown me that the problem is with the scale, not the rotation. When I comment out the scale, the rotation appears to be correct.
The angle argument to glm::rotate() is in radians, not degrees:
m: Input matrix multiplied by this rotation matrix.
angle: Rotation angle expressed in radians.
axis: Rotation axis, recommanded [sic] to be normalized.
Use this:
void Frame::setRotation(float degrees) {
    this->rotation = glm::radians(degrees);
    this->calcTransform();
}
I am assuming that this game is supposed to be a 3D game with a 2D GUI. That wasn't specified in the question, but it doesn't really matter, as my answer is the same either way.
When rendering with a perspective projection (where the field of view is taken into account), as opposed to an orthographic one, shapes are distorted depending on their position and the FOV.
With that in mind, I propose a simple solution: initialize a 2D viewing (orthographic) matrix for your 2D interface. If you are just looking for a simple way to render a 2D quad onto the screen, freeGLUT (the free OpenGL Utility Toolkit) is there for you. There are plenty of docs out there to help you install freeglut; once you have it set up, initialize a 2D rendering matrix, then render the quad using glVertex2i/f or glVertex3i/f, like so:
void setView2d()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, *SCREEN_WIDTH, *SCREEN_HEIGHT, 0);
    glMatrixMode(GL_MODELVIEW);
    glDisable(GL_DEPTH_TEST);
    glLoadIdentity();
}

void setView3d()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(70, (float) *SCREEN_WIDTH / *SCREEN_HEIGHT, 0.1, 100);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_MODELVIEW); // switch back so the final glLoadIdentity doesn't wipe the projection
    glLoadIdentity();
}
void render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // note: GL_DEPTH_BUFFER_BIT, not GL_DEPTH_TEST

    setView2d(); // Render 2D objects
    glPushMatrix();
    {
        // glTranslatef() and glRotatef() still work for 2D.
        // If using rotate, rotate on the z axis, like so:
        glRotatef(90, 0, 0, 1);
        glBegin(GL_TRIANGLES);
        {
            glVertex2i(0, 0);
            glVertex2i(100, 0);
            glVertex2i(0, 100);
            /*
               glVertex2i is replaceable with glVertex2f, glVertex3i, and glVertex3f;
               if using a glVertex3 variant, set the z value to 0
            */
        }
        glEnd();
    }
    glPopMatrix();

    setView3d(); // Render 3D objects
    glPushMatrix();
    {
        // render 3D stuff
    }
    glPopMatrix();

    glutSwapBuffers();
}
I should also mention that when using gluOrtho2D, the x and y coordinates passed to the vertex calls are in pixels, rather than in 3D world units.
Hope this helped,
-Nick

opengl: Trouble setting up viewport and glortho

I am trying to set up my OpenGL views for some texture rendering. Following some advice on the forum, I set up my viewport and ortho matrix as follows:
First I try to compute the screen width and height that I can use while maintaining the aspect ratio of my image:
void resize(int w, int h)
{
    float target_aspect_ratio = image_width / image_height;
    width = w;
    height = (int)(width / target_aspect_ratio + 0.5f);
    if (height > h) {
        height = h;
        width = (int)(height * target_aspect_ratio + 0.5f);
    }
    off_x = (w - width)/2.f;
    off_y = (h - height)/2;
    // I want to center my image. So I have these offsets
    glViewport(off_x, off_y, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, 0.0f, 1.0f);
}
Now when I want to render my texture I do:
void paint()
{
    // texture binding etc.
    glBegin(GL_QUADS); // the quad must be wrapped in glBegin/glEnd
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(width, 0);
    glTexCoord2f(1, 1); glVertex2f(width, height);
    glTexCoord2f(0, 1); glVertex2f(0, height);
    glEnd();
}
However, this does not show the image as expected. It does not maintain the aspect ratio as I resize the window. It is almost as if glViewport has no effect, and I can verify that this function gets called every time my window is resized.
Update:
It is strange, almost as if these calls have no effect. I even tried something like:
_off_x = _off_y = 0;
_width = 500;
_height = 500;
So I expected the viewport to be a box in the lower left of my screen, but the image is drawn as before, basically using the whole screen as the viewport.
Update 2:
Ok, so if I call
glViewport(_off_x, _off_y, _width, _height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, _width, 0, _height, 0, 1);
in my paint events, it works as expected! However, I thought it was enough to put this in the resize event handler.
Before you start drawing, you need to switch your matrix mode to GL_MODELVIEW. You don't need to set your projection matrix inside your render function at each frame.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Here is a detailed analysis that I wrote about the glMatrixMode() modes:
OpenGL glMatrixMode(GL_PROJECTION) vs glMatrixMode(GL_MODELVIEW)
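To illustrate where that switch belongs, here is a minimal sketch (written with the LWJGL GL11 bindings used in the first question of this thread, with made-up handler names): set the viewport and projection once in the resize handler, leave GL_MODELVIEW as the active matrix mode, and only reset the modelview matrix in the paint handler.
// Sketch: projection set up once on resize, modelview reset per frame.
void onResize(int offX, int offY, int viewW, int viewH) {
    GL11.glViewport(offX, offY, viewW, viewH);
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    GL11.glOrtho(0, viewW, 0, viewH, 0, 1);
    GL11.glMatrixMode(GL11.GL_MODELVIEW); // leave modelview active for drawing
}

void onPaint() {
    GL11.glLoadIdentity();                // only the modelview matrix changes here
    // bind the texture and draw the quad ...
}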

Why is my Texture Quad positioned incorrectly with glTexCoord2f using LWJGL?

I am going to make a 2D adventure game with grass, trees and other things, if I can get these working. My problem is that when I use glTexCoord2f to map the texture onto a quad, the quads end up separated from each other by about 25 pixels. These quads are supposed to connect to each other, like in any 2D game.
I'm loading the textures with Slick-Util and the size of the texture is 100x100.
Here's my source code for rendering quads and InitGL
public static void Render(){
    example--;
    GL11.glLoadIdentity();
    Color.white.bind();
    system.Game.ground.bind();
    GL11.glTranslatef(example, 0, 0);
    // I use a for loop to repeat the quad across the screen.
    for(int x = 0; x <= width; x++){
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0, 0);
        GL11.glVertex2f(x * 100, 0);
        GL11.glTexCoord2f(1, 0);
        GL11.glVertex2f(x * 100 + 100, 0);
        GL11.glTexCoord2f(1, 1);
        GL11.glVertex2f(x * 100 + 100, 100);
        GL11.glTexCoord2f(0, 1);
        GL11.glVertex2f(x * 100, 100);
        GL11.glEnd();
    }
    GL11.glLoadIdentity();
}
Here's my InitGL code:
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
GL11.glViewport(0,0,width,height);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, width, height, 0, 1, -1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
Slick-Util only reads textures whose dimensions are powers of two. The reason is that only since OpenGL 2.0 has OpenGL been able to read textures with non-power-of-two dimensions. Try loading a texture with dimensions of 128 by 128 pixels, then display it at 100 by 100 pixels (like you did in your example).
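Alternatively, if you would rather keep the 100x100 image: as far as I recall, Slick-Util pads a non-power-of-two image up to the next power-of-two texture, and Texture.getWidth()/getHeight() return the fraction of the padded texture that the image actually occupies. A rough sketch, assuming that behaviour, of using those values as the upper texture coordinates inside the loop from the question:
// Hypothetical variant of the loop body: use the image-to-texture ratio
// returned by Slick-Util instead of hard-coded 1.0 texture coordinates.
float u = system.Game.ground.getWidth();   // e.g. 100/128 = 0.78125 for a 100x100 image
float v = system.Game.ground.getHeight();

GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0); GL11.glVertex2f(x * 100, 0);
GL11.glTexCoord2f(u, 0); GL11.glVertex2f(x * 100 + 100, 0);
GL11.glTexCoord2f(u, v); GL11.glVertex2f(x * 100 + 100, 100);
GL11.glTexCoord2f(0, v); GL11.glVertex2f(x * 100, 100);
GL11.glEnd();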

How do I get the whole scene to rotate around itself? (My code has a little bug which just lets the objects rotate around themselves.)

What must be changed so that I get the impression of flying around the whole fixed scene? My current code just lets me look from a fixed viewpoint at objects that each rotate around themselves. Enabling glLoadIdentity() just stops their rotation. Note that 3dWidget::paintGL() is called continuously by a timer, every 20 ms.
void 3dWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glTranslatef(0.5f, 0.5f, 0.5f);
    glRotatef(3.0f, 1.0f, 1.0f, 1.0f);
    glTranslatef(-0.5f, -0.5f, -0.5f);
    glPushMatrix();
    //glLoadIdentity();
    for (int i = 0; i < m_cubes.count(); i++) {
        m_cubes[i]->render();
    }
    glPopMatrix();
}

void Cube::render() {
    glTranslatef(m_x, m_y, m_z); // local position of this object
    glCallList(m_cubeId); // render code is in createRenderCode()
    glTranslatef(-m_x, -m_y, -m_z);
}

void Cube::createRenderCode(int cubeId) {
    m_cubeId = cubeId;
    glVertexPointer(3, GL_FLOAT, 0, m_pCubePoints);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, m_pCubeColors);
    glNewList(m_cubeId, GL_COMPILE);
    {
        glEnableClientState(GL_COLOR_ARRAY);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, m_numPoints);
        glDisableClientState(GL_COLOR_ARRAY);
    }
    glEndList();
}

void 3dWidget::init(int w, int h)
{
    ...
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float aspect = w/(float)(h ? h : 1);
    glFrustum(-aspect, aspect, -1, 1, 10, 100);
    glTranslatef(0., 0., -12);
    glMatrixMode(GL_MODELVIEW);
}
EDIT: It seems it's important to know that 2 cubes are created with the following 3D position coordinates (m_x, m_y, m_z):
void 3dWidget::createScene()
{
    Cube* pCube = new Cube;
    pCube->create(0.5 /*size*/, -0.5 /*m_x*/, -0.5 /*m_y*/, -0.5 /*m_z*/);

    pCube = new Cube;
    pCube->create(0.5 /*size*/, +0.5 /*m_x*/, +0.5 /*m_y*/, +0.5 /*m_z*/);
}
Use gluLookAt to position the camera. You apply it to the modelview matrix before any object transforms.
Obviously, you'll have to figure out a path for the camera to follow. That's up to you and how you want the "flight" to proceed.
EDIT: Just to be clear, there's no camera concept, as such, in OpenGL. gluLookAt is just another transform that (when applied to the modelview matrix) has the effect of placing a camera at the prescribed location.
If you really are just trying to rotate the world, your code seems to perform the transforms in a reasonable order. I can't see why your objects rotate around themselves rather than as a group. It might help to present an SSCCE using GLUT.
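As a rough illustration of such a camera path, here is a hedged sketch (in LWJGL/Java syntax for consistency with the first question in this thread; the radius and speed values are arbitrary) that orbits a fixed scene by applying gluLookAt to the modelview matrix before drawing the cubes:
// Sketch: orbit the camera around the origin; `angle` advances a little each frame.
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
float radius = 12.0f;                            // arbitrary orbit radius
float eyeX = radius * (float) Math.sin(angle);
float eyeZ = radius * (float) Math.cos(angle);
GLU.gluLookAt(eyeX, 0f, eyeZ,   // eye position on a circle around the scene
              0f, 0f, 0f,       // look at the centre of the scene
              0f, 1f, 0f);      // world up vector
// ... draw the cubes here ...
angle += 0.01f;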
Now I've found the reason myself. It works as soon as I change paintGL() to:
void 3dWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
#if 0 // not working
    glTranslatef(0.5f, 0.5f, 0.5f);
    glRotatef(3.0f, 1.0f, 1.0f, 1.0f);
    glTranslatef(-0.5f, -0.5f, -0.5f);
#else // this works properly, they rotate horizontally around (0,0,0)
    glRotatef(3.0f, 0.0f, 1.0f, 0.0f);
#endif
    for (int i = 0; i < m_cubes.count(); i++) {
        m_cubes[i]->render();
    }
}
I don't know exactly why, but apparently some of the transformations compensated for each other in such a way that the objects just rotated around themselves. Thanks for your help anyway.
I think it's always better to let the scene rotate than to move the camera with gluLookAt (besides the issue that finding the right formula for the viewing angle is more difficult).

OpenGL Making a Room in a given space and camera

This is the setting I have to work with (I cannot change any of these values)
#include <stdlib.h>
#include <GL/glut.h>
const GLdouble FRUSTDIM = 100.0f;
void reshape(int w, int h) // Resize the GL Window. w=width, h=height
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-FRUSTDIM, FRUSTDIM, -FRUSTDIM, FRUSTDIM, 320., 640.);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
I want to build a wall, but something is wrong and I don't quite understand what. If I'm not mistaken, the current space is (-100 to 100) x (-100 to 100) x (320 to 640) and the camera is currently at (0, 0, 320).
I want to make a room, but I can't even set up a wall :(....
I tried using QUADS and QUAD_STRIP, but it still won't show up when I run it D:
My code:
void display(void)
{
    glBegin(GL_QUADS);
    glColor3f(1,1,1);
    glVertex3f(50,50,420);
    glVertex3f(50,-50,420);
    glVertex3f(-50,-50,420);
    glVertex3f(-50,50,420);
    glEnd();

    glutSwapBuffers();
    glFlush();
}
I just need to draw a wall to get myself going. If there is any code you think is needed to solve my problem, comment and I will edit my question. (FYI, the rest of the code works fine, because the skeleton was given to me to get me started.)
It needs to include this:
glClear(GL_DEPTH_BUFFER_BIT|GL_COLOR_BUFFER_BIT); // clearing window
glDisable(GL_LIGHTING);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
so that the window is cleared and you can see the object. Additionally:
glTranslatef(-100,-100,-630.0);
glBegin(GL_QUADS);
/* Back Wall */
glColor3f(1.0f, 0.0f, 0.0f);
glNormal3f(0,0,1);
glVertex3f(0,0,0);
glVertex3f(200,0,0);
glVertex3f(200,200,0);
glVertex3f(0,200,0);
glEnd();
This would make the wall (pretty much the same as your quad, just with a translate added before building it).