I'm learning OpenGL, and I am reading a book to help me along. I've followed it about halfway through and have decided to go off on my own path now that I'm familiar with the basics. I've started to develop an application whose only intention is to show a grid.
I've pretty much nailed it, but when I run my application, part of the grid is cut off. I've attached a condensed version of the code (which has the same result) - does anyone know what I am doing wrong that makes it cut off part of the screen? I've tinkered about a lot and run through a few values, but I just cannot get this thing sorted. Any help or a nudge in the right direction is much appreciated.
Code:
#include <Windows.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include "deps\glut\glut.h"
void display();
int main(int argc, char **argv) {
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(800, 600);
glutCreateWindow("");
glutDisplayFunc(display);
glClearColor(1.0, 1.0, 1.0, 1.0);
glColor3f(0.0, 0.0, 0.0);
// Init
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-8.0, 8.0, -8.0, 8.0, -8.0, 8.0);
glutMainLoop();
return 0;
}
void display() {
float f;
// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, -1.0, 0.0);
// This next bit of code just creates the grid
glColor3f(0.0f, 0.0f, 0.0f);
glPointSize(1.0);
for(f=-10.0f;f<10.0f;f++) {
glBegin(GL_LINES);
glVertex3f(f, 0.0f, -10.0f);
glVertex3f(f, 0.0f, 10.0f);
glEnd();
}
glRotatef(90, 0.0, 1.0, 0.0);
for(f=-10.0f;f<10.0f;f++) {
glBegin(GL_LINES);
glVertex3f(f, 0.0f, -10.0f);
glVertex3f(f, 0.0f, 10.0f);
glEnd();
}
// Swap the buffers
glutSwapBuffers();
}
You can't draw outside the viewing volume.
Your glOrtho call scales the coordinates so that you can use the range -8...8 for the x, y, and z coordinates, but your for loops then use -10...10, which exceeds that range and gets clipped.
You're also hitting the z limits because the gluLookAt view looks along a cube diagonal, and a cube's diagonal is sqrt(3) times as long as its side. So I suggest you use -14...14 ( = ±round(8*sqrt(3)) ) as the limits for the near and far planes.
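For example, a minimal sketch of an adjusted projection for the code above (the exact numbers are a judgment call; this just gives the ±10 grid headroom on every axis):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* The grid spans -10..10 and the gluLookAt(1,1,1, ...) view looks along a
   diagonal, so give the orthographic volume some slack in every direction. */
glOrtho(-14.0, 14.0, -14.0, 14.0, -14.0, 14.0);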
I'm working on a project, which is one of my homework assignments. I need to rotate a car (not exactly a car, but something like one; that's not important, I think). My car is 2D. I create it with the glRectf() function, because with this function I can create a rectangle pixel by pixel. Like:
// Create a rectangle from (0,0) to (30,30) :) Like a square, yep :) I use it
// because I am working with a lot of rectangles and, you know, squares are rectangles in math :)
glRectf(0, 0, 30, 30);
But I have some code that works, but it is 3D. I can't turn it into 2D. Can you help me? (This isn't critical - I am working in 2D.) By "I can't turn it into 2D" I mean I don't get the algorithm and logic in this code. This is the 3D quad code, and I can rotate it with the glRotatef() function:
#include <stdio.h>
#include <GL/glut.h>
double rotate_y = 0;
void display() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glRotatef(rotate_y, 0.0, 1.0, 0.0);
glBegin(GL_POLYGON);
glColor3f(1.0, 1.0, 1.0);
glVertex3f(0.5, -0.5, 0.5);
glColor3f(1.0, 1.0, 0.0);
glVertex3f(0.5, 0.5, 0.5);
glColor3f(1.0, 0.0, 1.0);
glVertex3f(-0.5, 0.5, 0.5);
glColor3f(1.0, 1.0, 1.0);
glVertex3f(-0.5, -0.5, 0.5);
glEnd();
glFlush();
glutSwapBuffers();
}
void keyboard(int key, int x, int y) {
if (key == GLUT_KEY_RIGHT) {rotate_y += 45;}
else if (key == GLUT_KEY_LEFT) {rotate_y -= 45;}
glutPostRedisplay();
}
int main(int argc, char* argv[]) {
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowPosition(0, 0);
glutInitWindowSize(800, 800);
glutCreateWindow("Rotating Test");
glutDisplayFunc(display);
glutSpecialFunc(keyboard);
glutMainLoop();
return 0;
}
I need a car (like a 30x30 px quad in 2D) and I need to turn it 180 degrees about the y axis. I want to create it the way I wrote it in my first code.
I solved it. How I did it I don't know - I'm serious - but it's done. This is my display function:
void display() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glRotatef(rotate_y, 0.0, 1.0, 0.0);
glColor3f(1.0, 1.0, 1.0);
glRectf(1, 1, 11, 11);
glFlush();
glutSwapBuffers();
}
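A possible refinement (just a sketch, not part of the original solution; it assumes your projection covers these coordinates, e.g. a gluOrtho2D set up elsewhere): glRectf draws in the z = 0 plane with a corner at (1, 1), so glRotatef about the world y axis swings the whole rectangle around the origin rather than spinning it in place. To spin the "car" about its own centre, move the centre onto the rotation axis first:
void display() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// Put the car's centre at (6, 6), rotate about a vertical axis through that
// centre, then draw a 10x10 rectangle centred on the origin.
glTranslatef(6.0f, 6.0f, 0.0f);
glRotatef(rotate_y, 0.0f, 1.0f, 0.0f);
glColor3f(1.0f, 1.0f, 1.0f);
glRectf(-5.0f, -5.0f, 5.0f, 5.0f);
glFlush();
glutSwapBuffers();
}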
I tried implementing a simple zoom in/out using glutMouseWheelFunc in VC++. I am able to achieve the zooming in/out, but while doing so the axes (GL_LINES) tend to disappear after zooming past a certain level.
I am using glutMouseWheelFunc to increase/decrease the z-axis value passed to glTranslatef.
I have defined the 'z' of glTranslatef as the camera distance:
float cDist= 30.0f; // camera distance
Then used it in glTranslatef as,
glTranslatef(0.0f, 0.0f, -cDist);
in display function below.
void display(void) {
glClearColor(0.0, 0.0, 0.0, 1.0); //clear the screen to black
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);//clear the color buffer and the depth buffer
enable();
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -cDist);
glRotatef(xrot, 1.0, 0.0, 0.0);
// ...
glBegin(GL_LINES);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex3f(-500, 0, 0);
glVertex3f(500, 0, 0);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3f(0, -500, 0);
glVertex3f(0, 500, 0);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex3f(0, 0, -500);
glVertex3f(0, 0, 500);
glEnd();
glTranslated(-xpos, 0.0f, -zpos); //translate the screen to the position of our camera
glColor3f(1.0f, 1.0f, 1.0f);
glutSwapBuffers();
}
Afterwards, I defined the wheel function as
void mouseWheel(int button, int dir, int x, int y)
{
if (dir > 0)
{
cDist++;
}
else
{
cDist--;
}
return;
}
and called it in main function with glutMouseWheelFunc(mouseWheel);
int main(int argc, char **argv) {
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_DEPTH | GLUT_MULTISAMPLE);
glutInitWindowSize(500, 500);
glutInitWindowPosition(100, 100);
glutCreateWindow("Window");
glutDisplayFunc(display);
glutIdleFunc(display);
glutReshapeFunc(reshape);
glutMotionFunc(mouseMovement);
glutMouseWheelFunc(mouseWheel); // <-- here
// ...
glutMainLoop();
return 0;
}
Is this approach to zooming proper? If not, what could be an alternative? Also, how can I make the axes (the lines I drew) extend to the full extent of the screen?
Thanks for the help. It seems I messed up setting a proper depth (far-plane) value for gluPerspective. Once I increased the depth value, zooming in/out worked fine.
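For reference, a sketch of what that reshape callback might look like (the 0.1 and 2000.0 near/far values are assumptions; the far plane just has to comfortably exceed cDist plus the ±500 extent of the axis lines):
void reshape(int w, int h) {
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Large far plane so the +-500 axis lines and a zoomed-out camera are not clipped.
gluPerspective(45.0, (double)w / (double)(h ? h : 1), 0.1, 2000.0);
glMatrixMode(GL_MODELVIEW);
}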
I have an OpenGL project which uses GLUT (not freeglut) wherein I would like to display 2D text on the viewport at a fixed location. The rest of my objects are in 3D world co-ordinates.
This answer to a related old question says that,
the bitmap fonts which ship with GLUT are simple 2D fonts and are not suitable for display inside your 3D environment. However, they're perfect for text that needs to be overlayed on the display window.
I've tried the approach outlined in the accepted answer, but it does not give me the desired output. Following is a code snippet of the display function:
void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0.0, 0.0, -(dis+ddis)); //Translate the camera
glRotated(elev+delev, 1.0, 0.0, 0.0); //Rotate the camera
glRotated(azim+dazim, 0.0, 1.0, 0.0); //Rotate the camera
// ... draw the 3D scene ...
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0.0, win_width, 0.0, win_height);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glRasterPos2i(10, 10);
string s = "Some text";
void * font = GLUT_BITMAP_9_BY_15;
for (string::iterator i = s.begin(); i != s.end(); ++i)
{
char c = *i;
glColor3d(1.0, 0.0, 0.0);
glutBitmapCharacter(font, c);
}
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glFlush();
glutSwapBuffers();
}
A similar question to mine seems to have been asked, but it has no answer accepted by the author.
Answers in general seem to suggest using other libraries (mostly OS-specific) to achieve the overlay. However, there is no clear indication of whether this can be achieved with GLUT alone. Can it be done?
I ran into this same problem using GLUT to create a model of the solar system. Here's how I solved it (using your code from above):
glDisable(GL_TEXTURE_2D); //added this
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0.0, win_width, 0.0, win_height);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glRasterPos2i(10, 10);
string s = "Some text";
void * font = GLUT_BITMAP_9_BY_15;
for (string::iterator i = s.begin(); i != s.end(); ++i)
{
char c = *i;
glColor3d(1.0, 0.0, 0.0);
glutBitmapCharacter(font, c);
}
glMatrixMode(GL_PROJECTION); //swapped this with...
glPopMatrix();
glMatrixMode(GL_MODELVIEW); //...this
glPopMatrix();
//added this
glEnable(GL_TEXTURE_2D);
You'll note I switched glMatrixMode(GL_PROJECTION); and glMatrixMode(GL_MODELVIEW); near the end. I decided to do this because of this website. I also had to surround the code section with glDisable(GL_TEXTURE_2D) and glEnable(GL_TEXTURE_2D).
Hope that helps - it worked for me. My solar system uses classes to perform all of this; I'd be happy to share more details upon request.
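If it helps, here's the same overlay sequence pulled out into a small helper (just a sketch; the name drawOverlayText and the win_width/win_height globals are assumptions matching the snippet above):
// Draw a 2D string at window coordinates (x, y) on top of the 3D scene.
void drawOverlayText(int x, int y, const std::string &s)
{
glDisable(GL_TEXTURE_2D);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0.0, win_width, 0.0, win_height);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor3d(1.0, 0.0, 0.0); // set the colour before glRasterPos
glRasterPos2i(x, y);
for (std::string::const_iterator i = s.begin(); i != s.end(); ++i)
glutBitmapCharacter(GLUT_BITMAP_9_BY_15, *i);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glEnable(GL_TEXTURE_2D);
}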
As I said, the code you posted works perfectly fine. Here's the code I used to test it:
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
#include "GL/glu.h"
//glm not required
//#include <glm.hpp>
//#include <gtc/matrix_transform.hpp>
#include <string>
int win_width = 500, win_height = 500;
void renderScene(void) {
static float dis=0, ddis=0, elev=0, delev=0, azim=0, dazim=0;
azim += 0.5f;
if (azim >= 360.0f){
azim -= 360.0f;
}
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -(dis + ddis));
glRotated(elev + delev, 1.0, 0.0, 0.0);
glRotated(azim + dazim, 0.0, 1.0, 0.0);
glScalef(2.0f,2.0f,2.0f);
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 1.0f, 0.0f);
glVertex3f(-0.5f, -0.5f, 0.0f);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex3f(0.5f, 0.0f, 0.0f);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex3f(0.0f, 0.5f, 0.0f);
glEnd();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
//I like to use glm because glu is deprecated
//glm::mat4 orth= glm::ortho(0.0f, (float)win_width, 0.0f, (float)win_height);
//glMultMatrixf(&(orth[0][0]));
gluOrtho2D(0.0, win_width, 0.0, win_height);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor3f(1.0f, 0.0f, 0.0f);//needs to be called before RasterPos
glRasterPos2i(10, 10);
std::string s = "Some text";
void * font = GLUT_BITMAP_9_BY_15;
for (std::string::iterator i = s.begin(); i != s.end(); ++i)
{
char c = *i;
//this does nothing, color is fixed for Bitmaps when calling glRasterPos
//glColor3f(1.0, 0.0, 1.0);
glutBitmapCharacter(font, c);
}
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glEnable(GL_TEXTURE_2D);
glutSwapBuffers();
glutPostRedisplay();
}
int main(int argc, char **argv) {
// init GLUT and create Window
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(100, 100);
glutInitWindowSize(win_width, win_height);
glutCreateWindow("GLUT Test");
// register callbacks
glutDisplayFunc(renderScene);
// enter GLUT event processing cycle
glutMainLoop();
return 1;
}
The usual disclaimers about the deprecated fixed-function pipeline and the like apply.
There is a tricky way to solve your problem.
Add the line below once before you call the function (or whatever it is):
glutBitmapCharacter(font, ' ');
Do that, and it should work.
void init (void)
{
glClearColor (1.0, 1.0, 1.0, 0.0);
//glMatrixMode(GL_MODELVIEW);
//gluLookAt(x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);
glMatrixMode(GL_PROJECTION);
gluPerspective(45, 2, -1, 1);
//glFrustum(xwMin, xwMax, ywMin, ywMax, dnear, dfar);
//gluPerspective(45.0, 45, -1, 1);
}
void displayFcn (void)
{
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(0.0, 1.0, 0.0);
glPolygonMode(GL_FRONT, GL_FILL);
glPolygonMode(GL_BACK, GL_LINE);
glBegin(GL_TRIANGLES);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.1, 0.0, 0.0);
glVertex3f(0.50, 0.866025, 0.0);
glEnd();
glFlush();
}
void reshapeFcn(GLint newWidth, GLint newHeight)
{
glViewport(0,0,newWidth, newHeight);
winWidth = newWidth;
winHeight = newHeight;
}
void main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutInitWindowPosition(400,200);
glutInitWindowSize(winWidth, winHeight);
glutCreateWindow("Test");
init();
glutDisplayFunc(displayFcn);
glutReshapeFunc(reshapeFcn);
glutMainLoop();
}
Could someone explain a little and give suggestions on how to make the triangle visible?
I don't think you're allowed to set zNear negative. That will do weird things to the depth buffer and mean that you can see things behind the eye. The docs say it must always be positive. You're also liable to get somewhat strange results if you fix the aspect ratio to 2 regardless of the aspect ratio of the window.
You also need to set the current matrix back to GL_MODELVIEW, as datenwolf has shown. You don't need to use gluLookAt, but you do need to do something to move the triangle away from the origin, where the eye is located. You could do that by setting the z component of the vertices to some negative value, or by applying a translation before the vertices:
glPushMatrix();
glTranslated(0,0,-5);
glBegin(GL_TRIANGLES);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.1, 0.0, 0.0);
glVertex3f(0.50, 0.866025, 0.0);
glEnd();
glPopMatrix();
gluPerspective and gluLookAt multiply on top of the current matrix on the selected stack, so you need to load an identity first for them to make sense. You also need to set a viewport before rendering. Best practice is to set all matrices and the viewport in the display function, and nowhere else; sticking to that rule will make your life a lot easier. Also, in OpenGL one usually doesn't have a dedicated initialization phase - resources are loaded on demand.
void displayFcn (void)
{
glViewport(0,0,winWidth, winHeight);
glClearColor (1.0, 1.0, 1.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45, 2, 1, 10);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);
glColor3f(0.0, 1.0, 0.0);
glPolygonMode(GL_FRONT, GL_FILL);
glPolygonMode(GL_BACK, GL_LINE);
glBegin(GL_TRIANGLES);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.1, 0.0, 0.0);
glVertex3f(0.50, 0.866025, 0.0);
glEnd();
glFinish();
}
void reshapeFcn(GLint newWidth, GLint newHeight)
{
winWidth = newWidth;
winHeight = newHeight;
glutPostRedisplay();
}
void main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutInitWindowPosition(400,200);
glutInitWindowSize(winWidth, winHeight);
glutCreateWindow("Test");
glutDisplayFunc(displayFcn);
glutReshapeFunc(reshapeFcn);
glutMainLoop();
}
I need to show the same object in OpenGL in two different viewports - for instance, one using orthographic projection and the other using perspective. In order to do this, do I need to draw the object again after each call to glViewport()?
NeHe has a good tutorial on how to do this, and his site is generally a good resource for OpenGL questions.
// normal mode
if(!divided_view_port)
glViewport(0, 0, w, h);
else
{
// top right
glViewport(w/2, h/2, w/2, h/2);
glLoadIdentity ();
gluLookAt(5.0f, 5.0f, 5.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f);
display();
// top left
glViewport(0, h/2, w/2, h/2);
glLoadIdentity();
gluLookAt (5.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f);
display();
// bottom right
glViewport(w/2, 0, w/2, h/2);
glLoadIdentity();
gluLookAt(0.0f, 0.0f, 5.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f);
display();
// bottom left
glViewport(0, 0, w/2, h/2);
glLoadIdentity();
gluLookAt(0.0f, 5.0f, 0.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f);
display();
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if (w <= h)
glOrtho(-2.0, 2.0,
-2.0 * (GLfloat) h / (GLfloat) w, 2.0 * (GLfloat) h / (GLfloat) w,
-10.0, 100.0);
else
glOrtho(-2.0 * (GLfloat) w / (GLfloat) h, 2.0 * (GLfloat) w / (GLfloat) h,
-2.0, 2.0,
-10.0, 100.0);
glMatrixMode(GL_MODELVIEW);
Minimal runnable example
Similar to this answer, but more direct and compilable.
main.c
#include <stdlib.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
static int width;
static int height;
static void display(void) {
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1.0f, 0.0f, 0.0f);
glViewport(0, 0, width/2, height/2);
glLoadIdentity();
gluLookAt(0.0, 0.0, -3.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
glutWireTeapot(1);
glViewport(width/2, 0, width/2, height/2);
glLoadIdentity();
gluLookAt(0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
glutWireTeapot(1);
glViewport(0, height/2, width/2, height/2);
glLoadIdentity();
gluLookAt(0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0);
glutWireTeapot(1);
glViewport(width/2, height/2, width/2, height/2);
glLoadIdentity();
gluLookAt(0.0, -3.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0);
glutWireTeapot(1);
glFlush();
}
static void reshape(int w, int h) {
width = w;
height = h;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);
glMatrixMode(GL_MODELVIEW);
}
int main(int argc, char** argv) {
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(500, 500);
glutInitWindowPosition(100, 100);
glutCreateWindow(argv[0]);
glClearColor(0.0, 0.0, 0.0, 0.0);
glShadeModel(GL_FLAT);
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop();
return EXIT_SUCCESS;
}
Compile and run:
gcc -o main.out main.c -lGL -lGLU -lglut
./main.out
I think that in modern OpenGL 4 you should just render to textures and then place those textures orthogonally on the screen; see this as a starting point: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
Tested on OpenGL 4.5.0 NVIDIA 352.63, Ubuntu 15.10.
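A bare-bones sketch of the render-to-texture setup that tutorial walks through (this assumes a GL 3+ context with these entry points already loaded, e.g. via GLEW; the 512x512 size is arbitrary):
// Create a colour texture and attach it to a framebuffer object (FBO).
GLuint fbo, colorTex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
/* handle the error */
}
// Render each view with the FBO bound, then bind framebuffer 0 again and
// draw textured quads on the default framebuffer to compose the screen.
glBindFramebuffer(GL_FRAMEBUFFER, 0);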
Yes,
and you should also change the scissor settings to get a clean separation between the two views if they are in the same window.
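For example, a minimal sketch of that idea (the drawView helper and its parameters are made up for illustration):
/* Clear and draw one view without touching the rest of the window:
   the scissor box limits glClear to the given rectangle. */
void drawView(int x, int y, int w, int h, float r, float g, float b)
{
glEnable(GL_SCISSOR_TEST);
glViewport(x, y, w, h);
glScissor(x, y, w, h);
glClearColor(r, g, b, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* ... set this view's projection/camera and draw the scene ... */
glDisable(GL_SCISSOR_TEST);
}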
In GL 4 you can render to many viewports in one rendering pass. See ARB_viewport_array and related concepts.
Think of OpenGL as being nothing more than commands that prepare output for the window you're currently working with.
There are two commands in OpenGL on Windows whose importance even NeHe's tutorials don't explain:
wglCreateContext - which takes a window device context (DC) that can be obtained from ANY window - whether it's a user control, a Windows Form, a GL window, or another application's window (like Notepad). This creates an OpenGL rendering context - they refer to it as a resource context - which you later use with ...
wglMakeCurrent - which takes two parameters: the device context you're dealing with (the same one passed to wglCreateContext) and the rendering context that wglCreateContext returned.
Leveraging ONLY these two things - here's my advice:
NeHe's tutorial provides a solution that leverages the existing window ONLY and segments the screen for drawing. Here's the tutorial: http://nehe.gamedev.net/tutorial/multiple_viewports/20002/
Leveraging glViewport, you'll need to re-draw on every update.
That's one method.
But there's another, less graphics- and processor-intensive method:
Create a window for each view by leveraging a user control.
Each window has its own hWnd.
Get the DC, call wglCreateContext, and then, on a timer (mine runs at 30 frames a second), if you detect a state change, call wglMakeCurrent for that view and redraw. Otherwise, skip that view entirely.
This conserves valuable processing power and also spares the code from having to manage the window and viewport calculations manually.
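A condensed sketch of that per-window setup (the function names are made up, error handling is trimmed, and note that wglCreateContext needs a pixel format set on the DC first):
#include <Windows.h>
#include <GL/gl.h>
// Give one child window (user control) its own GL rendering context.
HGLRC setupViewContext(HWND hwnd, HDC *outDc)
{
HDC hdc = GetDC(hwnd);
PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd, sizeof(pfd));
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);
*outDc = hdc;
return wglCreateContext(hdc);
}
// On the timer (e.g. ~30 fps), and only for views whose state changed:
void redrawView(HDC hdc, HGLRC hglrc)
{
wglMakeCurrent(hdc, hglrc);
/* ... set the viewport/projection for this view and draw ... */
SwapBuffers(hdc);
}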