OpenGL 4: Rotation looks all wrong - C++

Here's the vertex buffer information of the quad I'm drawing:
static const GLfloat pv_quad[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
};
This quad is used to draw 2D frames on the screen as part of the graphical user interface. The class I use to do this is Mage::Interface::Frame. I'll spare you the header definition and instead give you the class's implementation, as it's small. There's some test code in here, so ignore the fact the shader is part of the class. I know it shouldn't be there.
#include <Mage/Root.h>
#include <Mage/Interface/Frame.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/transform.hpp>
using Mage::Interface::Frame;
Frame::Frame()
: width(300), height(200), position(0, 0), color(1.0, 1.0, 1.0), model(1.0), rotation(0) {
prog.compileFile("Data/Shaders/FrameVertex.glsl", Mage::ShaderType::VERTEX);
prog.compileFile("Data/Shaders/FrameFragment.glsl", Mage::ShaderType::FRAGMENT);
prog.link();
this->calcTransform();
}
void Frame::setSize(int w, int h) {
this->width = w;
this->height = h;
this->calcTransform();
}
void Frame::setColor(int r, int g, int b) {
this->color = glm::vec3(float(r) / 256, float(g) / 256, float(b) / 256);
}
void Frame::setRotation(float degrees) {
this->rotation = glm::radians(degrees);
this->calcTransform();
}
void Frame::calcTransform() {
this->model = glm::mat4(1.0f); // reset model to origin.
// 1280 and 720 are the viewport's size. This is only hard coded for tests.
this->model = glm::scale(this->model, glm::vec3(float(width) / 1280, float(height) / 720, 1.0f));
this->model = glm::rotate(this->model, this->rotation, glm::vec3(0.0f, 0.0f, 1.0f));
this->model = glm::translate(this->model, glm::vec3(position.x, position.y, 0.0f));
}
void Frame::draw() {
Mage::VertexObject obj = ROOT.getRenderWindow()->getVertexBufferObject()->getObject("PrimitiveQuad");
prog.use();
prog.setUniform("mvp", this->model);
prog.setUniform("fColor", this->color);
glEnableVertexAttribArray(0);
ROOT.getRenderWindow()->getVertexBufferObject()->bind();
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)obj.begin);
glDrawArrays(GL_TRIANGLE_STRIP, 0, obj.size);
glDisableVertexAttribArray(0);
}
Here's the drawing function that's called every frame:
void RenderWindow::render() {
Mage::Interface::Frame F;
F.setSize(400, 200);
F.setRotation(0);
while (glfwWindowShouldClose(this->win) == 0) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
F.draw();
glfwSwapBuffers(this->win);
glfwPollEvents();
}
}
When I have setRotation(0), the resulting quad is indeed 400 pixels wide and 200 pixels high, right in the centre of my screen, as you would expect.
However, if I set the rotation to (90), well, this happens:
As you can see, that's not at all close to a 90-degree turn. It should be 400px high and 200px wide.
Anyone care to explain what's going on here?
EDIT: Some playing around has shown me that the problem is with the scale, not the rotation. When I comment out the scale, the rotation appears to be correct.
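For reference, here is a minimal standalone sketch (not part of the original post) showing that the non-uniform aspect scale does not commute with the rotation, which would explain the observation above: calcTransform() builds model = S * R * T, so the already-rotated quad gets squashed by the 1280×720 aspect scale afterwards. The values mirror the 400×200 frame and viewport from the question.
// Standalone illustration (assumption: GLM available as in the question).
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    // aspect scale for a 400x200 frame on a 1280x720 viewport, as in calcTransform()
    glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(400.0f / 1280.0f, 200.0f / 720.0f, 1.0f));
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0.0f, 0.0f, 1.0f));

    glm::vec4 corner(1.0f, 1.0f, 0.0f, 1.0f);  // top-right corner of pv_quad
    glm::vec4 scaleLast  = S * R * corner;     // what calcTransform() effectively applies (scale last)
    glm::vec4 rotateLast = R * S * corner;     // rotation applied to the already-sized quad

    std::printf("scale last : (%.3f, %.3f)\n", scaleLast.x, scaleLast.y);
    std::printf("rotate last: (%.3f, %.3f)\n", rotateLast.x, rotateLast.y);
    return 0;
}
The two results differ: with the non-uniform scale applied last, the rotated quad is distorted, which is why commenting out the scale makes the rotation look correct.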

The angle argument to glm::rotate() is in radians, not degrees:
m: Input matrix multiplied by this rotation matrix.
angle: Rotation angle expressed in radians.
axis: Rotation axis, recommanded [sic] to be normalized.
Use this:
void Frame::setRotation(float degrees) {
this->rotation = glm::radians( degrees );
this->calcTransform();
}

I am assuming this is meant to be a 3D game with a 2D GUI. That wasn't specified in the question, but it doesn't really matter, as my answer is the same either way.
When rendering with a perspective projection (where the field of view is taken into account), as opposed to an orthographic projection, shapes are distorted depending on their position and the FOV.
So I propose a simple solution: initialize a 2D viewing matrix (an orthographic matrix) for your 2D interface. If you are just looking for a simple way to render a 2D quad onto the screen, freeGLUT (the free OpenGL Utility Toolkit) is there for you. There are plenty of docs out there to help you install freeglut; once that's done, initialize a 2D rendering matrix, then render the quad using glVertex2i/f or glVertex3i/f, like so:
void setView2d()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, *SCREEN_WIDTH, *SCREEN_HEIGHT, 0);
glMatrixMode( GL_MODELVIEW );
glDisable(GL_DEPTH_TEST);
glLoadIdentity();
}
void setView3d()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(70, (GLfloat)*SCREEN_WIDTH / *SCREEN_HEIGHT, 0.1, 100);
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW); // switch back before resetting the modelview matrix
glLoadIdentity();
}
void render()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
setView2d(); //Render 2D objects
glPushMatrix();
{
//glTranslatef() and glRotatef() still work for 2D
//if using rotate, rotate on z axis, like so:
glRotatef(90, 0, 0, 1);
glBegin(GL_TRIANGLES);
{
glVertex2i(0, 0);
glVertex2i(100, 0);
glVertex2i(0, 100);
/*
glVertex2i is replaceable with glVertex2f, glVertex3i, and glVertex3f
if using a glVertex3, set the z value to 0
*/
}
glEnd();
}
glPopMatrix();
setView3d(); //Render 3D objects
glPushMatrix();
{
//render 3D stuff
}
glPopMatrix();
glutSwapBuffers();
}
I should also mention that when using gluOrtho2D this way, the x,y vertex coordinates are given in pixels rather than in 3D world units.
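As a rough illustration of that, here is a hypothetical snippet reusing setView2d() from above; the frame size and position are placeholders, not values from the question:
// Hypothetical example: a 400x200 px frame drawn at pixel position (100, 50).
setView2d();
glPushMatrix();
{
    glTranslatef(100.0f, 50.0f, 0.0f); // the position is given in pixels as well
    glBegin(GL_TRIANGLE_STRIP);
    {
        glVertex2i(0, 0);
        glVertex2i(400, 0);
        glVertex2i(0, 200);
        glVertex2i(400, 200);
    }
    glEnd();
}
glPopMatrix();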
Hope this helped,
-Nick

Related

2D Hud over 3D scene (OpenGL, SDL, C++)

My 3D world draws perfectly every time, but the 2D text never draws. The code below features my latest effort, using a tutorial from Lighthouse3D. I get the feeling it's something stupidly simple and I'm just not seeing it.
Rendering code :
void ScreenGame::draw(SDL_Window * window)
{
glClearColor(0.5f,0.5f,0.5f,1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Set up projection matrix
glm::mat4 projection(1.0);
projection = glm::perspective(60.0f,800.0f/600.0f,1.0f,150.0f);
rt3d::setUniformMatrix4fv(shaderProgram, "projection", glm::value_ptr(projection));
GLfloat scale(1.0f); // just to allow easy scaling of complete scene
glm::mat4 modelview(1.0); // set base position for scene
mvStack.push(modelview);
mvStack.top() = glm::lookAt(camera->getEye(),camera->getAt(),camera->getUp());
glm::vec4 tmp = mvStack.top()*lightPos;
light0.position[0] = tmp.x;
light0.position[1] = tmp.y;
light0.position[2] = tmp.z;
rt3d::setLightPos(shaderProgram, glm::value_ptr(tmp));
glUseProgram(skyBoxShader); // Switch shaders, reset uniforms for skybox
rt3d::setUniformMatrix4fv(skyBoxShader, "projection", glm::value_ptr(projection));
glDepthMask(GL_FALSE); // make sure depth test is off
glm::mat3 mvRotOnlyMat3 = glm::mat3(mvStack.top());
mvStack.push( glm::mat4(mvRotOnlyMat3) );
skyBox->draw(mvStack); // drawing skybox
mvStack.pop();
glDepthMask(GL_TRUE); // make sure depth test is on
mvStack.top() = glm::lookAt(camera->getEye(),camera->getAt(),camera->getUp());
glUseProgram(shaderProgram); // Switch back to normal shader program
rt3d::setUniformMatrix4fv(shaderProgram, "projection", glm::value_ptr(projection));
rt3d::setLightPos(shaderProgram, glm::value_ptr(tmp));
rt3d::setLight(shaderProgram, light0);
// Draw all visible objects...
Ball->draw(mvStack);
ground->draw(mvStack);
building1->draw(mvStack);
building2->draw(mvStack);
setOrthographicProjection();
glPushMatrix();
glLoadIdentity();
renderBitmapString(5,30,1,GLUT_BITMAP_HELVETICA_18,"Text Test");
glPopMatrix();
restorePerspectiveProjection();
SDL_GL_SwapWindow(window); // swap buffers
}
using the following methods :
void setOrthographicProjection() {
// switch to projection mode
glMatrixMode(GL_PROJECTION);
// save previous matrix which contains the
//settings for the perspective projection
glPushMatrix();
// reset matrix
glLoadIdentity();
// set a 2D orthographic projection
glOrtho(0.0F, 800, 600, 0.0F, -1.0F, 1.0F);
// switch back to modelview mode
glMatrixMode(GL_MODELVIEW);
}
void restorePerspectiveProjection() {
glMatrixMode(GL_PROJECTION);
// restore previous projection matrix
glPopMatrix();
// get back to modelview mode
glMatrixMode(GL_MODELVIEW);
}
void renderBitmapString(
float x,
float y,
int spacing,
void *font,
char *string) {
char *c;
int x1=x;
for (c=string; *c != '\0'; c++) {
glRasterPos2f(x1,y);
glutBitmapCharacter(font, *c);
x1 = x1 + glutBitmapWidth(font,*c) + spacing;
}
}

From gluOrtho2D to 3D

I followed a guide to draw a Lorenz system in 2D.
I want now to extend my project and switch from 2D to 3D. As far as I know I have to substitute the gluOrtho2D call with either gluPerspective or glFrustum. Unfortunately whatever I try is useless.
This is my initialization code:
// set the background color
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
/*// set the foreground (pen) color
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);*/
// set the foreground (pen) color
glColor4f(1.0f, 1.0f, 1.0f, 0.02f);
// enable blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// enable point smoothing
glEnable(GL_POINT_SMOOTH);
glPointSize(1.0f);
// set up the viewport
glViewport(0, 0, 400, 400);
// set up the projection matrix (the camera)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
//gluOrtho2D(-2.0f, 2.0f, -2.0f, 2.0f);
gluPerspective(45.0f, 1.0f, 0.1f, 100.0f); //Sets the frustum to perspective mode
// set up the modelview matrix (the objects)
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
and to draw I do this:
glClear(GL_COLOR_BUFFER_BIT);
// draw some points
glBegin(GL_POINTS);
// go through the equations many times, drawing a point for each iteration
for (int i = 0; i < iterations; i++) {
// compute a new point using the strange attractor equations
float xnew=z*sin(a*x)+cos(b*y);
float ynew=x*sin(c*y)+cos(d*z);
float znew=y*sin(e*z)+cos(f*x);
// save the new point
x = xnew;
y = ynew;
z = znew;
// draw the new point
glVertex3f(x, y, z);
}
glEnd();
// swap the buffers
glutSwapBuffers();
The problem is that I don't see anything in my window. It's all black. What am I doing wrong?
The name "gluOrtho2D" is a bit misleading. In fact gluOrtho2D is probably the most useless function ever. The definition of gluOrtho2D is
void gluOrtho2D(
GLdouble left,
GLdouble right,
GLdouble bottom,
GLdouble top )
{
glOrtho(left, right, bottom, top, -1, 1);
}
i.e. the only thing it does is call glOrtho with default values for near and far. Wow, how complicated and ingenious </sarcasm>.
Anyway, even if it's called ...2D, there's nothing 2-dimensional about it. The projection volume still has a depth range of [-1 ; 1] which is perfectly 3-dimensional.
Most likely the generated points lie outside the projection volume, which in your case covers a depth range of [0.1; 100] in front of the camera, while your points are confined roughly to [-1; 1] on each axis (and IIRC the Z range of the strange attractor is entirely positive). So you have to apply some translation to see something. I suggest you choose
near = 1
far = 10
and apply a translation of Z: -5.5 to move things into the center of the viewing volume.
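As a sketch of what that could look like in the initialization code above (the FOV and aspect are kept from the question; near, far and the translation are just the values suggested here, untested against the attractor):
// projection: perspective with the suggested depth range
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, 1.0f, 1.0f, 10.0f); // near = 1, far = 10
// modelview: push the attractor towards the middle of the viewing volume
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.5f);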

Rendering visually perfect squares in OpenGL?

In OpenGL's fixed pipeline, by default, specifying vertex coordinates using glVertex3f is equivalent to specifying a location between -1.0 and +1.0 in screen space. Therefore, given a set of 4 perfectly adjacent screen-space vertices using GL_TRIANGLE_STRIP (or even GL_QUADS), and unless your window is already perfectly square, you will always render a rectangle instead of a perfect square...
Knowing the width, height and aspect ratio of a window, is there some way to correct this?
I have tried multiplying the vertex coordinates by the aspect ratio, which unfortunately seemed to achieve the same visual effect.
Here's the full source code I'm currently using:
#include "main.h"
#pragma comment(lib, "glut32.lib")
int g_width = 800;
int g_height = 600;
int g_aspectRatio = double(g_width) / double(g_height);
bool g_bInitialized = false;
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(0, 0);
glutInitWindowSize(g_width, g_height);
glutCreateWindow("OpenGL Test App");
glutDisplayFunc(onRender);
glutReshapeFunc(onSize);
glutIdleFunc(onRender);
glutMainLoop();
return 0;
}
void onInit()
{
glFrontFace(GL_CW);
}
void onRender()
{
if(!g_bInitialized)
onInit();
static float angle = 0.0f;
const float p = 0.5f * g_aspectRatio;
glLoadIdentity();
gluLookAt(
0.0f, 0.0f, 10.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f
);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glScalef(1, -1, 1); // Flip the Y-axis
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glBegin(GL_TRIANGLE_STRIP);
{
glColor4f(1.0, 0.0, 0.0, 1.0); // Red
glVertex3f(-p, -p, 0.0); // Top-Left
glColor4f(0.0, 1.0, 0.0, 1.0); // Green
glVertex3f(p, -p, 0.0); // Top-Right
glColor4f(0.0, 0.0, 1.0, 1.0); // Blue
glVertex3f(-p, p, 0.0); // Bottom-Left
glColor4f(1.0, 1.0, 0.0, 1.0); // Yellow
glVertex3f(p, p, 0.0); // Bottom-Right
}
glEnd();
angle += 0.6f;
glutSwapBuffers();
}
void onSize(int w, int h)
{
g_width = max(w, 1);
g_height = max(h, 1);
g_aspectRatio = double(g_width) / double(g_height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glViewport(0, 0, w, h);
gluPerspective(45, g_aspectRatio, 1, 1000);
glMatrixMode(GL_MODELVIEW);
}
EDIT:
This has been solved... In the above code, I had defined g_aspectRatio as an int instead of a floating-point value. Therefore, its value was always 1...
In my (old) experience, that's exactly what the aspect ratio argument to gluPerspective() is for.
The manual page says:
In general, the aspect ratio in gluPerspective should match
the aspect ratio of the associated viewport. For example, aspect = 2.0
means the viewer's angle of view is twice as wide in x as it is in y.
If the viewport is twice as wide as it is tall, it displays the image
without distortion.
Check your g_aspectRatio value.
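As the question's EDIT above confirms, the culprit is the declaration itself; a one-line sketch of the fix:
// g_aspectRatio was declared as int, so the division truncated to 1;
// a floating-point type preserves the real ratio (e.g. 800/600 = 1.333...).
double g_aspectRatio = double(g_width) / double(g_height);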
by default, specifying vertex coordinates using glVertex3f is equivalent to specifying a location between -1.0 and +1.0 in screen space
Wrong. Coordinates passed to OpenGL through glVertex or a glVertexPointer vertex array are in model space. The transformation to screen space happens by transforming into view space with the modelview matrix and from view space into clip space with the projection matrix. Then clipping is applied, and the perspective divide is performed to reach normalized device coordinates.
Hence the value range for glVertex can be whatever you like it to be. By applying the right projection matrix you can make your view space span [-aspect; aspect]×[-1; 1], if you like:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-aspect, aspect, -1, 1, -1, 1);
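With that projection, a quad with equal extents in model space comes out visually square regardless of the window's proportions; a small illustrative snippet (not from the original answer):
// The ortho volume spans [-aspect, aspect] in x and [-1, 1] in y, so equal
// model-space extents map to equal pixel extents on screen.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_TRIANGLE_STRIP);
glVertex2f(-0.5f, -0.5f);
glVertex2f( 0.5f, -0.5f);
glVertex2f(-0.5f,  0.5f);
glVertex2f( 0.5f,  0.5f);
glEnd();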

How to get the whole scene to rotate around itself? (My code has a little bug which just lets the objects rotate around themselves.)

What must be changed so that I get the impression of flying around the whole fixed scene? My current code just lets me look from a fixed viewpoint at objects that each rotate around themselves. Enabling glLoadIdentity() just stops their rotation. Note that 3dWidget::paintGL() is called permanently by a timer every 20 ms.
void 3dWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glTranslatef(0.5f, 0.5f, 0.5f);
glRotatef(3.0f, 1.0f, 1.0f, 1.0f);
glTranslatef(-0.5f, -0.5f, -0.5f);
glPushMatrix();
//glLoadIdentity();
for (int i = 0; i < m_cubes.count(); i++) {
m_cubes[i]->render();
}
glPopMatrix();
}
void Cube::render() {
glTranslatef(m_x, m_y, m_z); // local position of this object
glCallList(m_cubeId); // render code is in createRenderCode()
glTranslatef(-m_x, -m_y, -m_z);
}
void Cube::createRenderCode(int cubeId) {
m_cubeId = cubeId;
glVertexPointer(3, GL_FLOAT, 0, m_pCubePoints);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, m_pCubeColors);
glNewList(m_cubeId, GL_COMPILE);
{
glEnableClientState(GL_COLOR_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP, 0, m_numPoints);
glDisableClientState(GL_COLOR_ARRAY);
}
glEndList();
}
void 3dWidget::init(int w, int h)
{
...
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float aspect = w/(float)(h ? h : 1);
glFrustum(-aspect, aspect, -1, 1, 10, 100);
glTranslatef(0., 0., -12);
glMatrixMode(GL_MODELVIEW);
}
EDIT: It seems it's important to know that 2 cubes are created with the following 3D position coordinates (m_x, m_y, m_z):
void 3dWidget::createScene()
{
Cube* pCube = new Cube;
pCube->create(0.5 /*size*/, -0.5 /*m_x*/, -0.5 /*m_y*/, -0.5 /*m_z*/);
pCube = new Cube;
pCube->create(0.5 /*size*/, +0.5 /*m_x*/, +0.5 /*m_y*/, +0.5 /*m_z*/);
}
Use gluLookAt to position the camera. You apply it to the modelview matrix before any object transforms.
Obviously, you'll have to figure out a path for the camera to follow. That's up you and how you want the "flight" to proceed.
EDIT: Just to be clear, there's no camera concept, as such, in OpenGL. gluLookAt is just another transform that (when applied to the modelview matrix) has the effect of placing a camera at the prescribed location.
If you really are just trying to rotate the world, your code seems to perform the transforms in a reasonable order. I can't see why your objects rotate around themselves rather than as a group. It might help to post an SSCCE using GLUT.
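A minimal sketch (not from the original answer) of driving the camera with gluLookAt, assuming the glTranslatef(0, 0, -12) in init() is moved out of the projection matrix, and that m_angle is a hypothetical float member advanced on each 20 ms timer tick (sinf/cosf come from <cmath>):
// inside 3dWidget::paintGL()
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
float eyeX = 12.0f * sinf(m_angle);
float eyeZ = 12.0f * cosf(m_angle);
gluLookAt(eyeX, 3.0f, eyeZ,   // eye flies in a circle around the scene
          0.0f, 0.0f, 0.0f,   // always looking at the origin
          0.0f, 1.0f, 0.0f);  // up vector
for (int i = 0; i < m_cubes.count(); i++)
    m_cubes[i]->render();     // per-object transforms are applied after the camera transform
m_angle += 0.02f;             // advance the flight path each frame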
Now I've found the reason myself. It works as soon as I change the method paintGL() to
void 3dWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
#if 0 // not working
glTranslatef(0.5f, 0.5f, 0.5f);
glRotatef(3.0f, 1.0f, 1.0f, 1.0f);
glTranslatef(-0.5f, -0.5f, -0.5f);
#else // this works properly, they rotate horizontally around (0,0,0)
glRotatef(3.0f, 0.0f, 1.0f, 0.0f);
#endif
for (int i = 0; i < m_cubes.count(); i++) {
m_cubes[i]->render();
}
}
I don't know exactly why, but apparently some of the transformations compensated for each other in a way that made the objects rotate only around themselves. Thanks for your help anyway.
I think it's always better to let the scene rotate than to move the camera with gluLookAt (besides the fact that finding the right formula for the angle of view is more difficult).

glutBitmapString shows nothing

I want to show the FPS on the screen with the freeglut function glutBitmapString, but it shows nothing. Here is my code. Can anyone figure out where the problem is?
void PrintFPS()
{
frame++;
time=glutGet(GLUT_ELAPSED_TIME);
if (time - timebase > 100) {
cout << "FPS:\t"<<frame*1000.0/(time-timebase)<<endl;
char* out = new char[30];
sprintf(out,"FPS:%4.2f",frame*1000.0f/(time-timebase));
glColor3f(1.0f,1.0f,1.0f);
glRasterPos2f(20,20);
glutBitmapString(GLUT_BITMAP_TIMES_ROMAN_24,(unsigned char* )out);
timebase = time;
frame = 0;
}
}
void RenderScene(void)
{
// Clear the window with current clearing color
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
GLfloat vRed[] = { 1.0f, 0.0f, 0.0f, 0.5f };
GLfloat vYellow[] = {1.0f,1.0f,0.0f,1.0f};
shaderManager.UseStockShader(GLT_SHADER_IDENTITY, vYellow);
//triangleBatch.Draw();
squareBatch.Draw();
PrintFPS();
glutSwapBuffers();
}
It is supposed to show the FPS at the top left of the screen.
The position that's provided by glRasterPos is treated just like a vertex and transformed by the current model-view and projection matrices. In your example, you specify the text to be positioned at (20,20), which I'm guessing is supposed to be in screen (viewport, really) coordinates.
If it's the case that you're rendering 3D geometry, particularly with a perspective projection, your text may be clipped out. However, there are (at least) two simple solutions (presented in order of code simplicity):
use one of the glWindowPos functions instead of glRasterPos. These bypass the model-view and projection transformations (a short sketch of this option follows the code below).
use glMatrixMode, glPushMatrix, and glPopMatrix to temporarily switch to window coordinates for rendering:
// Switch to window coordinates to render
glMatrixMode( GL_MODELVIEW );
glPushMatrix();
glLoadIdentity();
glMatrixMode( GL_PROJECTION );
glPushMatrix();
glLoadIdentity();
gluOrtho2D( 0, windowWidth, 0, windowHeight );
glRasterPos2i( 20, 20 ); // or wherever in window coordinates
glutBitmapString( ... );
glPopMatrix();
glMatrixMode( GL_MODELVIEW );
glPopMatrix();
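For completeness, a minimal sketch of the first option (glWindowPos is core since OpenGL 1.4; the font is the one from the question, and the string is just a placeholder):
// glWindowPos takes window coordinates directly, ignoring the
// model-view and projection matrices, so no matrix juggling is needed.
glColor3f(1.0f, 1.0f, 1.0f);
glWindowPos2i(20, 20); // 20 px from the left, 20 px up from the bottom
glutBitmapString(GLUT_BITMAP_TIMES_ROMAN_24, (const unsigned char*)"FPS: 0.00");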