In OpenGL's fixed pipeline, by default, specifying vertex coordinates with glVertex3f is equivalent to specifying a location between -1.0 and +1.0 in screen space. Therefore, given four adjacent screen-space vertices drawn with GL_TRIANGLE_STRIP (or even GL_QUADS), you will always render a rectangle instead of a perfect square unless your window happens to be perfectly square...
Knowing the width, height and aspect ratio of a window, is there some way to correct this?
I have tried multiplying the vertex coordinates by the aspect ratio, which unfortunately seemed to achieve the same visual effect.
Here's the full source code I'm currently using:
#include "main.h"
#pragma comment(lib, "glut32.lib")
int g_width = 800;
int g_height = 600;
int g_aspectRatio = double(g_width) / double(g_height);
bool g_bInitialized = false;
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(0, 0);
glutInitWindowSize(g_width, g_height);
glutCreateWindow("OpenGL Test App");
glutDisplayFunc(onRender);
glutReshapeFunc(onSize);
glutIdleFunc(onRender);
glutMainLoop();
return 0;
}
void onInit()
{
glFrontFace(GL_CW);
g_bInitialized = true;
}
void onRender()
{
if(!g_bInitialized)
onInit();
static float angle = 0.0f;
const float p = 0.5f * g_aspectRatio;
glLoadIdentity();
gluLookAt(
0.0f, 0.0f, 10.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f
);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glScalef(1, -1, 1); // Flip the Y-axis
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glBegin(GL_TRIANGLE_STRIP);
{
glColor4f(1.0, 0.0, 0.0, 1.0); // Red
glVertex3f(-p, -p, 0.0); // Top-Left
glColor4f(0.0, 1.0, 0.0, 1.0); // Green
glVertex3f(p, -p, 0.0); // Top-Right
glColor4f(0.0, 0.0, 1.0, 1.0); // Blue
glVertex3f(-p, p, 0.0); // Bottom-Left
glColor4f(1.0, 1.0, 0.0, 1.0); // Yellow
glVertex3f(p, p, 0.0); // Bottom-Right
}
glEnd();
angle += 0.6f;
glutSwapBuffers();
}
void onSize(int w, int h)
{
g_width = max(w, 1);
g_height = max(h, 1);
g_aspectRatio = double(g_width) / double(g_height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glViewport(0, 0, w, h);
gluPerspective(45, g_aspectRatio, 1, 1000);
glMatrixMode(GL_MODELVIEW);
}
EDIT:
This has been solved... In the above code, I had defined g_aspectRatio as an int instead of a floating-point value. Therefore, its value was always 1...
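For reference, the one-line fix is just to declare the variable with a floating-point type (a minimal sketch of the change described above):
double g_aspectRatio = double(g_width) / double(g_height); // was 'int', which truncated the ratio to 1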
In my (old) experience, that's just why you have an aspect ratio argument to gluPerspective().
The manual page says:
In general, the aspect ratio in gluPerspective should match
the aspect ratio of the associated viewport. For example, aspect = 2.0
means the viewer's angle of view is twice as wide in x as it is in y.
If the viewport is twice as wide as it is tall, it displays the image
without distortion.
Check your g_aspectRatio value.
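For example, the reshape handler from the question could keep the ratio in floating point all the way through (a sketch based on the question's code, using the same variable names):
void onSize(int w, int h)
{
    g_width = max(w, 1);
    g_height = max(h, 1);
    double aspect = double(g_width) / double(g_height); // floating-point, not int

    glViewport(0, 0, g_width, g_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, aspect, 1.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);
}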
by default, specifying vertex coordinates using glVertex3f is equivalent to specifying a location between -1.0 and +1.0 in screen space
Wrong. Coordinates passed to OpenGL through glVertex or a glVertexPointer vertex array are in model space. The transformation to screen space happens by transforming into view space by the modelview matrix and from view space to clip space by the projection matrix. Then clipping is applied, and the perspective divide is applied to reach normalized device coordinates.
Hence the value range for glVertex can be whatever you like it to be. By applying the right projection matrix you can get your view space to be [-aspect, aspect] × [-1, 1], if you like.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-aspect, aspect, -1, 1, -1, 1);
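Under that projection, geometry keeps its proportions; for example, a small square specified around the origin stays square in any window (a quick sketch):
glBegin(GL_TRIANGLE_STRIP);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f(-0.5f,  0.5f);
    glVertex2f( 0.5f,  0.5f);   // renders as a square for any window aspect ratio
glEnd();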
I am trying to draw two 2D diamonds facing each other.
So I drew the first diamond, then drew the second diamond after using:
glTranslated(0, -150, 0);
so that it appears exactly under my first diamond.
However, I ran into a problem: I couldn't flip the second diamond so that it looks like a mirror image.
Here is what I am trying to do:
I searched online for solutions, and they all mentioned that I should use
glScalef(1.0f,-1.0f,1.0f);
but each time I use it the drawing disappears.
The function
glRotatef(angle,x,y,z);
caught my attention, but I couldn't use it properly and the result faced the wrong direction.
Here is what my image looks like right now without glRotate():
So I think I need the proper technique to use any of these functions.
Note: I am using many line loops and vertices to draw.
#include <windows.h> // For MS Windows
#include <GL/glut.h> // (or others, depending on the system in use)
void init(void)
{
glClearColor(1.0, 1.0, 1.0, 0.0); // Set display-window color to white.
glMatrixMode(GL_PROJECTION); // Set projection parameters.
gluOrtho2D(0.0, 400.0, 0.0, 400.0);
}
void drawDiamond()
{
glBegin(GL_LINE_LOOP);
glVertex2f(125, 350);
glVertex2f(245, 350);
glVertex2f(290, 300);
glVertex2f(182, 200);
glVertex2f(75, 300);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(109, 333);
glVertex2f(138, 350);
glVertex2f(159, 337);
glVertex2f(123, 300);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(109, 333);
glVertex2f(123, 300);
glVertex2f(154, 225);
glVertex2f(92, 300);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(290, 300);
glVertex2f(75, 300);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(123, 300);
glVertex2f(159, 337);
glVertex2f(154, 300);
glVertex2f(171, 225);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(181, 300);
glVertex2f(159, 337);
glVertex2f(181, 350);
glVertex2f(209, 337);
glVertex2f(181, 300);
glVertex2f(171, 225);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(181, 300);
glVertex2f(209, 337);
glVertex2f(219, 300);
glVertex2f(195, 225);
glVertex2f(181, 300);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(209, 337);
glVertex2f(243, 300);
glVertex2f(195, 225);
glVertex2f(219, 300);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(209, 337);
glVertex2f(229, 350);
glVertex2f(260, 333);
glVertex2f(243, 300);
glVertex2f(209, 337);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(260, 333);
glVertex2f(278, 300);
glVertex2f(210, 225);
glVertex2f(243, 300);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(195, 225);
glVertex2f(182, 200);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex2f(171, 225);
glVertex2f(182, 200);
glEnd();
}
void display()
{
glClear(GL_COLOR_BUFFER_BIT); // Clear display window.
glColor3f(0.0, 0.0, 0.0); // Set line segment color to black.
// your code goes here
drawDiamond();
glTranslatef(0.0f, -150, 0.0f);
drawDiamond();
glFlush(); // Process all OpenGL routines as quickly as possible.
}
int main(int argc, char** argv)
{
glutInit(&argc, argv); // Initialize GLUT.
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); // Set display mode.
glutInitWindowPosition(50, 100); // Set top-left display-window position.
glutInitWindowSize(400, 400); // Set display-window width and height.
glutCreateWindow("Diamond Project"); // Create display window.
init(); // Execute initialization procedure.
glutDisplayFunc(display); // Send graphics to display window.
glutMainLoop(); // Display everything and wait.
}
What you want to do is mirror (flip) the object around an axis that is parallel to the X axis and goes through the bottom edge of the object (the diamond).
To do so, the bottom Y coordinate (bottomY) of the object has to be found, and the object has to be translated by that amount in the opposite direction. Note that this is the bottom of the model coordinates (the vertices), not the bottom of the final coordinates on the viewport:
float bottomY = 200.0f;
glTranslatef( 0.0f, -bottomY , 0.0f );
Next, the object has to be flipped. This can either be done by
glRotatef(180.0f, 1.0f, 0.0f, 0.0f);
or by
glScalef(1.0f, -1.0f, 1.0f);
Note that for flat geometry lying in the z = 0 plane, both operations produce the same result: the 180° rotation about the X axis maps (y, z) to (-y, -z), while the scale maps y to -y and leaves z unchanged, and the extra negation of z makes no visible difference when z is 0.
Then the bottomY translation has to be reversed:
glTranslatef( 0.0f, bottomY, 0.0f );
But note that the OpenGL fixed-function matrix stack applies these operations in reverse order, because the current matrix is multiplied by the matrix specified by each new operation.
Translation: See the documentation of glTranslate:
glTranslate produces a translation by x y z . The current matrix (see glMatrixMode) is multiplied by this translation matrix, with the product replacing the current matrix.
Rotation: See the documentation of glRotate:
glRotate produces a rotation of angle degrees around the vector x y z . The current matrix (see glMatrixMode) is multiplied by a rotation matrix with the product replacing the current matrix.
Scaling: See the documentation of glScale:
glScale produces a nonuniform scaling along the x, y, and z axes. The three parameters indicate the desired scale factor along each of the three axes.
The current matrix (see glMatrixMode) is multiplied by this scale matrix.
This means that the following should do what you want:
float bottomY = 200.0f;
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
drawDiamond();
glTranslatef( 0.0f, bottomY , 0.0f );
glRotatef( 180.0f, 1.0f, 0.0f, 0.0f );
glTranslatef( 0.0f, -bottomY , 0.0f );
drawDiamond();
and is the same as:
glTranslatef( 0.0f, bottomY , 0.0f );
glScalef( 1.0f, -1.0f, 1.0f );
glTranslatef( 0.0f, -bottomY , 0.0f );
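Dropped into the question's display() function, the whole sequence could look something like this (a sketch; 200 is the bottom Y of the diamond's vertices, and the push/pop keeps the transform from leaking into later drawing):
void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0.0, 0.0, 0.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    drawDiamond();                        // upright diamond

    glPushMatrix();
    // Read bottom-up in terms of what happens to the vertices:
    // shift the mirror line (y = 200) to y = 0, flip, then shift it back.
    glTranslatef(0.0f, 200.0f, 0.0f);
    glScalef(1.0f, -1.0f, 1.0f);
    glTranslatef(0.0f, -200.0f, 0.0f);
    drawDiamond();                        // mirrored copy, directly below the first
    glPopMatrix();

    glFlush();
}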
Depending on how your vertices are set up, and on whether you have back-face culling enabled, you might have to change your diamond's (or model's) center point to be the bottom tip. From there you can simply rotate about the X axis, provided you have declared that as the horizontal axis. Doing so shouldn't be all that hard. It would look something like:
glRotatef( 180.0f, 1, 0, 0 );
provided you are rotating in degrees as opposed to radians.
For each vertex (I'm assuming that you use immediate mode for specifying vertices), you can call glVertex2i(myVertex.x, symmetryLine - myVertex.y), where myVertex holds the x and y values you previously used and symmetryLine is the value you wish to mirror against. The best way would be to use a negative glScale, though. Since your diamond is rotationally symmetric, glRotate also works, but it is not a very elegant way to do it.
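A quick sketch of that per-vertex idea (the helper name is made up for illustration; to mirror exactly about the line y = mirrorY, the reflected coordinate is 2 * mirrorY - y):
// Emit a vertex reflected across the horizontal line y = mirrorY.
void mirroredVertex2f(float x, float y, float mirrorY)
{
    glVertex2f(x, 2.0f * mirrorY - y);
}

// e.g. mirroring about the diamond's bottom tip (y = 200):
// mirroredVertex2f(125.0f, 350.0f, 200.0f);   // emits (125, 50)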
I am a beginner in OpenGL and I am struggling a bit with setting up the glOrtho camera to match the window size so that I can draw a line using the window's coordinates, for example a line from (0, 10) to (600, 10). I managed to draw the line (which will be a "separator" between the viewport and a toolbar with buttons) in my current setup, but it was by trial and error, and the coordinates I needed to use don't make any sense to me. When I tried to draw a line using the above-mentioned coordinates, the line simply did not show up. What do I need to change in the glOrtho setup so that it works with this (1000x600) screen size and I can draw my vertices in window coordinates instead of these:
glVertex3f(-2.0, 11.0, 0.0);
glVertex3f(20.0, 11.0, 0.0);
Note, my current window size is 1000x600 (width/height)
This is the line (on the top that crosses the whole screen):
This is my OGWindow class that handles all of the drawing:
void OGWindow::MyReSizeGLScene(int fwidth, int fheight)
{
// Store window size in class variables so it can be accessed in myDrawGLScene() if necessary
wWidth = fwidth;
wHeight = fheight;
// Calculate aspect ratio of the OpenGL window
aspect_ratio = (float) fwidth / fheight;
// Set camera so it can see a square area of space running from 0 to 10
// in both X and Y directions, plus a bit of space around it.
Ymin = -1;
Ymax = 12;
Xmin = -1;
// Choose Xmax so that the aspect ratio of the projection
// = the aspect ratio of the viewport
Xmax = (aspect_ratio * (Ymax -Ymin)) + Xmin;
glMatrixMode(GL_PROJECTION); // Select The Projection Stack
glLoadIdentity();
glOrtho(Xmin, Xmax, Ymin, Ymax, -1.0, 1.0);
glViewport(0, 0, wWidth, wHeight); // Viewport fills the window
}
void OGWindow::myDrawGLScene(GLvoid) // Here's Where We Do All The Drawing
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the drawing area
OGWindow::myDrawModel();
drawToolbar();
glutSwapBuffers(); // Needed if we're running an animation
glFlush();
}
void OGWindow::myDrawModel(GLvoid)
{
switch ( squareColour ) {
case RED:
glColor3f(1.0, 0.0, 0.0);
break;
case BLUE:
glColor3f(0.0, 0.0, 1.0);
break;
}
glBegin( GL_QUADS );
glVertex3f( squareX, squareY, 0.0 ); // Coordinates of bottom-left corner of square
glVertex3f( squareX + squareWidth, squareY, 0.0 );
glVertex3f( squareX + squareWidth, squareY + squareHeight, 0.0 );
glVertex3f( squareX, squareY + squareHeight, 0.0 );
glEnd();
}
// Convert from screen coords returned by mouse
// to world coordinates.
// Return result in worldX, worldY
void OGWindow::screen2World(int screenX, int screenY, double & worldX, double & worldY)
{
// Dimensions of rectangle viewed by camera projection
double projWidth = Xmax -Xmin;
double projHeight = Ymax - Ymin;
// Screen coords with origin at bottom left
int screenLeft = screenX;
int screenUp = wHeight - screenY;
worldX = Xmin + screenLeft * projWidth / wWidth ;
worldY = Ymin + screenUp * projHeight / wHeight ;
}
//Method to draw the toolbar separator line
void OGWindow::drawToolbar(GLvoid) {
//draw toolbar line separator
glColor3f(0.0,0.0,0.0);
glBegin(GL_LINES);
glVertex3f(-2.0, 11.0, 0.0);
glVertex3f(20.0, 11.0, 0.0);
glEnd();
//draw create button
glPushMatrix();
glTranslatef(2.0, 10.0, 0.0);
glutSolidCube(2.0);
glPopMatrix();
}
This is my main class where I am invoking the methods from OGWindow:
int main(int argc, char **argv) {
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutInitWindowSize( 1000, 600 );
glutInitWindowPosition(0, 0);
glutCreateWindow("OpenGL Demo");
glEnable(GL_DEPTH_TEST); // enable the depth buffer test
glutDisplayFunc(DrawGLScene);
glutReshapeFunc(ReSizeGLScene);
glutMouseFunc(mouseClick);
glutMotionFunc(mouseMotion);
glutPassiveMotionFunc(mousePassiveMotion);
glutIdleFunc(Idle);
theWindow.initGL();
glutMainLoop();
}
Check out the documentation of the glOrtho function. As you can see, there are six parameters: left, right, bottom, top, near, far. To match window coordinates (origin in the top-left corner, with y growing downward), the window height goes into the bottom parameter and 0 into the top parameter. Here's the proper call:
glOrtho (0, 1000, 600, 0, -1.0, 1.0)
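Put into the question's reshape handler so that it tracks the actual window size, that might look like this (a sketch; wWidth and wHeight are the class members from the question):
void OGWindow::MyReSizeGLScene(int fwidth, int fheight)
{
    wWidth = fwidth;
    wHeight = fheight;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // One unit = one pixel, origin at the top-left corner, y growing downward.
    glOrtho(0.0, wWidth, wHeight, 0.0, -1.0, 1.0);
    glViewport(0, 0, wWidth, wHeight);

    glMatrixMode(GL_MODELVIEW); // leave the modelview matrix current for drawing
    glLoadIdentity();
}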
So, first your ortho settings. If you want your camera to match the screen dimensions, glOrtho has to use the same dimensions.
// This will anchor the camera to the center of the screen
// Camera will be centered on (0,0)
glOrtho( -screenWidth/2.f, screenWidth/2.f, -screenHeight/2.f, screenHeight/2.f, -1, 1 );
// This will anchor the camera to the lower left corner of the screen
// Camera will be centered on (screenWidth/2, screenHeight/2)
glOrtho( 0, screenWidth, 0, screenHeight, -1, 1 );
Try both and see the difference. Although if you are making some sort of editor, where your camera doesn't move, you may be looking for the second ortho setup.
Second, you only ever use (apparently) the GL_PROJECTION matrix mode. You must use this mode to set the camera projection and GL_MODELVIEW to apply transforms to the camera or the objects.
So when you call resize and don't change the matrix mode back to GL_MODELVIEW, you'll be applying translations to the projection matrix.
If you forget to initialize the modelview matrix, it may contain garbage values and yield unexpected results.
My program refuses to do depth testing. The two sphere objects are always drawn in the order they are created, not according to their position. Sphere alpha is positioned at (0, 0, 1) and Sphere beta is positioned at (0, 0, -10), yet OpenGL still draws beta on top of alpha. I enabled the depth test in my program.
Nothing appears to work. I want OpenGL to do depth testing automatically on any objects drawn in the window. Any help or advice would be greatly appreciated. Here is the full code.
#include "GL/freeglut.h"
#include "GL/gl.h"
#include "GL/glu.h"
const int SPHERE_RES = 200;
double Z_INIT = -28.0;
double RADIUS = 2;
double Red[3] = {1, 0, 0};
double Blue[3] = {0, 0, 1};
using namespace std;
/*
* Method handles resize of the window
*/
void handleResize (int w, int h) {
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
double ratio = (float)w/ (float)h;
gluPerspective(45.0, ratio, 1.0, 100.0);
}
/*
* Color and depth is enabled and in this method
*/
void configureColor(void)
{
glClearColor(1.0f, 1.0f, 1.0f, 0.0f); //Set background to white
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);// Clear window.
glDepthFunc(GL_ALWAYS);
glEnable(GL_DEPTH_TEST);
glEnable(GL_COLOR_MATERIAL);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
}
void display (void) {
configureColor();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
GLfloat sun_direction[] = { 0.0, 0.0, -1.0, 0.0};
glLightfv(GL_LIGHT0, GL_POSITION, sun_direction);
GLUquadric* quad = gluNewQuadric();
//first sphere is drawn
glColor3f(Red[0], Red[1], Red[2]);
glPushMatrix();
glLoadIdentity();
glTranslatef(0, 0, Z_INIT);
glTranslatef(0, 0, 1.0);
gluSphere(quad, RADIUS, SPHERE_RES, SPHERE_RES);
glPopMatrix();
//second sphere is supposed to be drawn behind it,
//but it is drawn on top.
glColor3f(Blue[0], Blue[1], Blue[2]);
glPushMatrix();
glLoadIdentity();
glTranslatef(0, 0, Z_INIT);
glTranslatef(0, 0, -10.0);
gluSphere(quad, RADIUS, SPHERE_RES, SPHERE_RES);
glPopMatrix();
gluDeleteQuadric(quad);
glFlush();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv); //initializes the GLUT
glutInitDisplayMode(GLUT_SINGLE);
glutInitWindowSize(600,600);
glutInitWindowPosition(100,100);
glutCreateWindow("OpenGL - First window demo");
glutReshapeFunc(handleResize);
glutDisplayFunc(display);
glutMainLoop();
return 0;
}
I am using Ubuntu 14.04 operating system.
glDepthFunc(GL_ALWAYS);
This is the reason you see the spheres in the order they are drawn. Setting the depth function to GL_ALWAYS simply means all depth tests always pass, for any fragment, be it closer or farther.
You need GL_LESS for the result you want. A fragment having depth lesser than the one in the frame buffer wins; the closer (lesser z) one wins over the farther (greater z) one.
You can either call glDepthFunc(GL_LESS) or comment out glDepthFunc(GL_ALWAYS) since GL_LESS is the default.
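A minimal sketch of that change, applied to the question's configureColor():
glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);          // closer fragments win; this is also the default
glEnable(GL_COLOR_MATERIAL);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
Note also that the window needs a depth buffer to test against; with GLUT that means requesting one in glutInitDisplayMode, e.g. GLUT_SINGLE | GLUT_DEPTH.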
I want to clarify things about the gluPerspective near and far parameters. I know that they define the visible range along the z axis, so objects closer than near or farther than far will be clipped. And when, let's say, near = 0.1 and far = 100*winWid, we don't see anything at first because the objects are behind the viewer (the camera is at (0.0, 0.0, 0.0) by default, and OpenGL's user coordinate system is right-handed), so we then call (see the code below) glTranslatef(0.0, 0.0, -winWid) to move the objects along the -z axis and place them in front of the camera.
But if we set far = -100*winWid, everything works the same as with the positive far value.
So what changes when far is negative?
Why is nothing clipped in that case either?
#include <GL/glut.h>
#include <math.h>
const float winWid = 1000.0f;
const float winHei = 800.0f;
GLfloat cube_side = 200.0f;
GLfloat ALPHA = 0.7f;
void render();
void updateDisplay()
{
render();
}
void drawCube(const GLfloat& a)
{
glBegin(GL_QUADS);
// back face
glColor4f(0.0f, 1.0f, 0.0f, ALPHA);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(0.0f, a, 0.0f);
glVertex3f(a, a, 0.0f);
glVertex3f(a, 0.0f, 0.0f);
// and other cube faces here ...
glEnd();
}
void render()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
drawCube(cube_side);
glPopMatrix();
glutSwapBuffers();
}
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_ALPHA);
glutInitWindowSize(winWid, winHei);
glutInitWindowPosition(100, 100);
glutCreateWindow("window");
glutDisplayFunc(updateDisplay);
glEnable(GL_DEPTH_TEST); // depth buffer setup
glEnable(GL_BLEND); // transparency setup
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(55.0f, winWid/winHei, 0.1f, 100*winWid);
glMatrixMode(GL_MODELVIEW);
glTranslatef(0.0f, 0.0f, -winWid); // move back to see drawing objects
glRotatef(75.0f, -1.0f, 0.0f, 0.0f); // make z+ axis point up to emphasize 3D (wihout this rotate z+ points towards the viewer)
glutMainLoop();
return 0;
}
Negative far-plane values are not supported by gluPerspective. The documentation states:
zFar: Specifies the distance from the viewer to the far clipping plane (always positive). (source)
By default, the camera in OpenGL looks along the negative z-axis. So the visible area is [-near, -far] in world coordinates. In your code example, the object is located at about z = -1000, while the visible range is [-0.1, -100*1000], which means that the object is clearly in view.
One additional thing to mention is depth-buffer precision: this is mainly determined by the range between nearPlane and farPlane. Assuming you have 16 bits of precision (it can be more or less depending on the setup), only 2^16 different depth values can be stored. With your setup this means that objects can be relatively far away from each other and still be treated as being at the same depth. You may want to consider whether such a huge depth range is really necessary for the application.
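To get a feel for how that precision is distributed, the window-space depth of a point d units in front of the camera can be computed with the standard perspective depth mapping (a standalone illustration using the near/far values from the question; not part of the original code):
#include <cstdio>

// Window-space depth in [0, 1] for a point d units in front of the camera,
// given the near and far planes passed to gluPerspective.
double windowDepth(double d, double n, double f)
{
    double zNdc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d);
    return 0.5 * (zNdc + 1.0);
}

int main()
{
    double n = 0.1, f = 100000.0;   // near/far as in the question (far = 100 * winWid)
    std::printf("d = 10    -> depth %.6f\n", windowDepth(10.0, n, f));     // ~0.990
    std::printf("d = 100   -> depth %.6f\n", windowDepth(100.0, n, f));    // ~0.999
    std::printf("d = 10000 -> depth %.6f\n", windowDepth(10000.0, n, f));  // ~0.99999
    return 0;
}
With these planes, the first ten units in front of the camera already consume about 99% of the depth range, which is why bringing the far plane in (or, even more effectively, pushing the near plane out) helps when depth precision becomes an issue.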
Here's the vertex buffer information of the quad I'm drawing:
static const GLfloat pv_quad[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
};
This quad is used to draw 2D frames on the screen as part of the graphical user interface. The class I use to do this is Mage::Interface::Frame. I'll spare you the header definition and instead give you the class's implementation, as it's small. There's some test code in here, so ignore the fact the shader is part of the class. I know it shouldn't be there.
#include <Mage/Root.h>
#include <Mage/Interface/Frame.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/transform.hpp>
using Mage::Interface::Frame;
Frame::Frame()
: width(300), height(200), position(0, 0), color(1.0, 1.0, 1.0), model(1.0), rotation(0) {
prog.compileFile("Data/Shaders/FrameVertex.glsl", Mage::ShaderType::VERTEX);
prog.compileFile("Data/Shaders/FrameFragment.glsl", Mage::ShaderType::FRAGMENT);
prog.link();
this->calcTransform();
}
void Frame::setSize(int w, int h) {
this->width = w;
this->height = h;
this->calcTransform();
}
void Frame::setColor(int r, int g, int b) {
this->color = glm::vec3(float(r) / 256, float(g) / 256, float(b) / 256);
}
void Frame::setRotation(float degrees) {
this->rotation = glm::radians(degrees);
this->calcTransform();
}
void Frame::calcTransform() {
this->model = glm::mat4(1.0f); // reset model to origin.
// 1280 and 720 are the viewport's size. This is only hard coded for tests.
this->model = glm::scale(this->model, glm::vec3(float(width) / 1280, float(height) / 720, 1.0f));
this->model = glm::rotate(this->model, this->rotation, glm::vec3(0.0f, 0.0f, 1.0f));
this->model = glm::translate(this->model, glm::vec3(position.x, position.y, 0.0f));
}
void Frame::draw() {
Mage::VertexObject obj = ROOT.getRenderWindow()->getVertexBufferObject()->getObject("PrimitiveQuad");
prog.use();
prog.setUniform("mvp", this->model);
prog.setUniform("fColor", this->color);
glEnableVertexAttribArray(0);
ROOT.getRenderWindow()->getVertexBufferObject()->bind();
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)obj.begin);
glDrawArrays(GL_TRIANGLE_STRIP, 0, obj.size);
glDisableVertexAttribArray(0);
}
Here's the drawing function that's called every frame:
void RenderWindow::render() {
Mage::Interface::Frame F;
F.setSize(400, 200);
F.setRotation(0);
while (glfwWindowShouldClose(this->win) == 0) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
F.draw();
glfwSwapBuffers(this->win);
glfwPollEvents();
}
}
When I have setRotation(0), the resulting quad is indeed 400 pixels wide and 200 pixels high, right in the centre of my screen, as you would expect.
However, if I set the rotation to (90), well, this happens:
As you can see, that's not at all close to a 90 degrees turn. It should be 400px high and 200px wide.
Anyone care to explain what's going on here?
EDIT: Some playing around has shown me that the problem is with the scale, not the rotation. When I comment out the scale, the rotation appears to be correct.
The angle argument to glm::rotate() is in radians, not degrees:
m: Input matrix multiplied by this rotation matrix.
angle: Rotation angle expressed in radians.
axis: Rotation axis, recommanded [sic] to be normalized.
Use this:
void Frame::setRotation(float degrees) {
this->rotation = glm::radians( degrees );
this->calcTransform();
}
I am assuming that this game is supposed to be a 3D game with a 2D GUI, although this was not specified in the question; it isn't strictly necessary either, as my answer would be the same.
When rendering with a perspective projection (where the field of view is taken into account), as opposed to an orthographic projection, shapes are distorted according to their position and the FOV.
So with that, I propose a simple solution: set up a 2D (orthographic) projection matrix for your 2D interface. If you are just looking for a simple way to render a 2D quad onto the screen, freeGLUT (the free OpenGL Utility Toolkit) is there for you. There are plenty of docs out there to help you install freeglut; once that is done, set up a 2D rendering matrix and render the quad using glVertex2i/f or glVertex3i/f, like so:
void setView2d()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, *SCREEN_WIDTH, *SCREEN_HEIGHT, 0);
glMatrixMode( GL_MODELVIEW );
glDisable(GL_DEPTH_TEST);
glLoadIdentity();
}
void setView3d()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(70, (GLfloat)*SCREEN_WIDTH / *SCREEN_HEIGHT, 0.1, 100);
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void render()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
setView2d(); //Render 2D objects
glPushMatrix();
{
//glTranslatef() and glRotatef() still work for 2D
//if using rotate, rotate on z axis, like so:
glRotatef(90, 0, 0, 1);
glBegin(GL_TRIANGLES);
{
glVertex2i(0, 0);
glVertex2i(100, 0);
glVertex2i(0, 100);
/*
glVertex2i is replaceable with glVertex2f, glVertex3i, and glVertex3f
if using a glVertex3, set the z value to 0
*/
}
glEnd();
}
glPopMatrix();
setView3d(); //Render 3D objects
glPushMatrix();
{
//render 3D stuff
}
glPopMatrix();
glutSwapBuffers();
}
I should also mention that when using gluOrtho2D, the x and y vertex coordinates are given in pixels rather than in 3D world units.
Hope this helped,
-Nick