How to map object coordinates to the screen in OpenGL? - opengl

I am not able to understand the correct way of transforming primitive coordinate values to screen coordinates.
If I use the following code (where w and h are the width and height of my window, 640 x 480)
glViewport(0,0,w,h);
// set up the projection matrix
glMatrixMode(GL_PROJECTION);
// clear any previous transform and set to the identity matrix
glLoadIdentity();
// just use an orthographic projection
glOrtho(0,w,h,0,1,-1);
and my primitives are
glBegin(GL_TRIANGLES);
glColor3f(1,0,0);
glVertex3f(-10,-10,0);
glColor3f(0,1,0);
glVertex3f(10,-10,0);
glColor3f(0,0,1);
glVertex3f(0,10,0);
glEnd();
The triangle becomes too big to fit the window. Most of the tutorials have the primitives in the range [-1,1] and their ortho projection between [-1,1], so the triangle appears correctly at the centre.
So, if the coordinates are generated by third-party software (or lie outside the range [-1,1]), how would I transform them correctly so that they fit the screen?
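One common approach, sketched here under the assumption that the bounding box of the incoming data is known (xmin/xmax/ymin/ymax are hypothetical variables, not from the question), is to pass that bounding box to glOrtho so exactly that region maps to the viewport:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Map the data's bounding box to the whole viewport; anything outside is clipped.
glOrtho(xmin, xmax, ymin, ymax, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();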

Related

Text won't output on screen C++ / OpenGL

I am currently trying to output a string onto the screen in OpenGL. My relevant code is as follows:
void drawBitmapText(char *string, float x, float y)
{
    char *c;
    glRasterPos2f(x, y);
    for (c = string; *c != '\0'; c++)
    {
        glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_10, *c);
    }
}
Where my display function looks like so:
void display(void)
{
    int speed = frame / 20;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    frame++;
    if ((frame >= 0) && (frame < 1000)) // Scene 1.
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glLoadIdentity();
        drawBitmapText("Hello World", 200, 200);
        glutSwapBuffers();
    }
}
I believe I have implemented it correctly but apparently not. Any ideas?
If the xy position passed to glRasterPos2f(x, y); lies outside the viewport after transformation, the following raster drawing operations will be omitted until a new raster position that transforms to within the viewport is specified.
If you don't know what "transformation" and "viewport" mean, you should read up on them. In short: the coordinates given to glRasterPos are not pixel coordinates.
The position passed to glRasterPos2f() is processed just like vertices you pass to draw calls. This means that the current transformations (modelview and projection) are applied to the position.
If you don't specify any transformations, the default for both of these is the identity transformation. This means that the resulting coordinates are the same as the input coordinates.
Once all transformations are applied, OpenGL expects that the resulting coordinates are in a coordinate system called Normalized Device Coordinates (NDC). In this coordinate system, the range [-1.0, 1.0] for each coordinate direction maps to the window size. Which means that if no transformations are applied, your input coordinates should already be within that range.
I can think of at least 3 options to get this working for you:
Specify coordinates in the range [-1.0, 1.0]. For example:
glRasterPos2f(-1.0f, -1.0f);
would place text at the bottom left corner, and:
glRasterPos2f(0.0f, 0.0f);
places it at the center of the window.
If you want to specify the position in pixels, set up a corresponding projection transformation:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, width, 0.0, height, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
Use glWindowPos2f() instead of glRasterPos2f(). This function also sets the raster position, but takes input in window coordinates (which are in pixel units), and does not apply the current transformations.
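As a minimal sketch of this third option (glWindowPos2f() requires OpenGL 1.4 or newer), the coordinates are pixels with the origin at the bottom-left corner of the window:
// Place the text 10 pixels in from the bottom-left corner, in window coordinates.
// glWindowPos2f() ignores the current modelview and projection matrices.
glWindowPos2f(10.0f, 10.0f);
for (const char *c = "Hello World"; *c != '\0'; c++)
    glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_10, *c);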

Draw oval with sphere in Opengl

I want to draw an oval by projecting the sphere onto the screen (like rasterizing it). Here is my code, but it doesn't show anything on the screen. Should I use more functions to initialize the projection? Is it possible to draw an oval on screen this way, using a sphere?
GLfloat xRotated, yRotated, zRotated;
GLdouble radius = 1;

void display(void);
void reshape(int x, int y);

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitWindowSize(800, 800);
    glutCreateWindow("OVAL");
    zRotated = 30.0;
    xRotated = 43;
    yRotated = 50;
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}

void display(void)
{
    glMatrixMode(GL_PROJECTION);
    glOrtho(0.1, 1.0, 0.1, 1.0, -1.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -5.0);
    glColor3f(0.9, 0.3, 0.2);
    glRotatef(xRotated, 1.0, 0.0, 0.0);
    glRotatef(yRotated, 0.0, 1.0, 0.0);
    glRotatef(zRotated, 0.0, 0.0, 1.0);
    glScalef(1.0, 1.0, 1.0);
    glutSolidSphere(radius, 20, 20);
    glFlush();
}

void reshape(int x, int y)
{
    if (y == 0 || x == 0) return;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(39.0, (GLdouble)x / (GLdouble)y, 0.6, 21.0);
    glMatrixMode(GL_MODELVIEW);
    glViewport(0, 0, x, y);
}
You are drawing a sphere completely outside of the viewing volume, so it should be no surprise that it can't be seen.
There are a couple of issues with your code:
All OpenGL matrix functions besides glLoadIdentity and glLoadMatrix post-multiply a matrix onto the current top element of the current matrix stack. In your display function, you call glOrtho without resetting the projection matrix to identity first. This will produce totally weird - and different - results if the display callback is called more than once.
You should add a call to glLoadIdentity() right before calling glOrtho.
You set up the model view transformations so that the sphere's center will always end up at (0,0,-5) in eye space. However, you set a projection matrix which defines a viewing volume going from z=1 (near plane) to z=-1 (far plane) in eye space, so your sphere is actually behind the far plane.
There are several ways this could be fixed. Changing the viewing volume by modifying the parameters of glOrtho might be the easiest. You could for example try (-2, 2, -2, 2, 1, 10) to be able to see the sphere.
It is not really clear what
I want to draw an oval by projecting the sphere onto the screen (like rasterizing it).
exactly means. If you just want the sphere to be distorted into an ellipsoid, you could apply some non-uniform scaling. In principle this could be done in the projection matrix (if no other objects are to be shown), but it would make much more sense to apply it to the model matrix of the sphere - you already have the glScalef call there, so you could try something like glScalef(1.0f, 0.5f, 1.0f);.
Also note that the ortho parameters I suggested previously will result in some distortion if your viewport is not exactly square. In real-world code, one wants to incorporate the aspect ratio of the viewport into the projection matrix.
If you want to see the sphere deformed as by a perspective projection, you would have to skip the glOrtho altogether and switch to a perspective projection matrix.
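Putting the first two fixes together (plus the optional non-uniform scale), display() might look like the following sketch; the glOrtho bounds and scale factors are just the example values suggested above:
void display(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();                         // reset before applying the new projection
    glOrtho(-2.0, 2.0, -2.0, 2.0, 1.0, 10.0); // viewing volume now contains z = -5
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT);
    glTranslatef(0.0f, 0.0f, -5.0f);          // inside the [1, 10] depth range
    glColor3f(0.9f, 0.3f, 0.2f);
    glRotatef(xRotated, 1.0f, 0.0f, 0.0f);
    glRotatef(yRotated, 0.0f, 1.0f, 0.0f);
    glRotatef(zRotated, 0.0f, 0.0f, 1.0f);
    glScalef(1.0f, 0.5f, 1.0f);               // non-uniform scale: sphere -> ellipsoid
    glutSolidSphere(radius, 20, 20);
    glFlush();
}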
The code you are using is totally outdated. The OpenGL matrix stack was deprecated in OpenGL 3.0 (2008) and is not available in core profiles of modern OpenGL. The same applies to built-in vertex attributes like glColor, immediate mode drawing, and client-side vertex arrays. As a result, GLUT's drawing functions can no longer be used with modern GL either.
If you really intend to learn OpenGL nowadays, I strongly advise you to ignore this old cruft and start learning the modern way.

Understanding the window coordinates' interpretation in OpenGL

I was trying to understand OpenGL a bit more deeply and I got stuck on the issue below.
This first snippet describes my understanding, and the outputs are as I expected.
glViewport(0, 0, 800, 480);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-400.0, 400.0, -240.0, 240.0, 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -1);
glRotatef(0, 0, 0, 1);
glBegin(GL_QUADS);
glVertex3f(-128, -128, 0.0f);
glVertex3f(128, -128, 0.0f);
glVertex3f(128, 128, 0.0f);
glVertex3f(-128, 128, 0.0f);
glEnd();
The window coordinates (Wx, Wy, Wz) for the above snippet are
(272.00000286102295, 111.99999332427979, 5.9604644775390625e-008)
(527.99999713897705, 111.99999332427979, 5.9604644775390625e-008)
(527.99999713897705, 368.00000667572021, 5.9604644775390625e-008)
(272.00000286102295, 368.00000667572021, 5.9604644775390625e-008)
I did a glReadPixels() and dumped the result to a bmp file. In the image I get a quad, as expected, at the (Wx, Wy) mentioned above (since images have their origin at the top left, while verifying the bmp image I took care of subtracting the y coordinate from the window height, i.e. 480). This output was as per my understanding - (Wx, Wy) is used as a 2D coordinate and Wz is used for depth purposes.
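For reference, a minimal sketch of that readback (my illustration, not the original code; the 800x480 size comes from the glViewport call above):
// glReadPixels() returns rows bottom-up, so window row wy corresponds to
// image row (480 - 1 - wy) in a top-left-origin format such as BMP.
unsigned char *pixels = (unsigned char *)malloc(800 * 480 * 3);
glReadPixels(0, 0, 800, 480, GL_RGB, GL_UNSIGNED_BYTE, pixels);
/* ... write the rows to the BMP in reverse order, then free(pixels) ... */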
Now comes the issue. I tried the below code snippet.
glViewport(0, 0, 800, 480);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-400.0, 400.0, -240.0, 240.0, 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(100, 0, -1);
glRotatef(30, 0, 1, 0);
glBegin(GL_QUADS);
glVertex3f(-128, -128, 0.0f);
glVertex3f(128, -128, 0.0f);
glVertex3f(128, 128, 0.0f);
glVertex3f(-128, 128, 0.0f);
glEnd();
The window coordinates for the above snippet are
(400.17224205479812, 242.03174613770986, 1.0261343689191909)
(403.24386530741430, 238.03076912806583, 0.99456100555566640)
(403.24386530741430, 241.96923087193414, 0.99456100555566640)
(400.17224205479812, 237.96825386229017, 1.0261343689191909)
When I dumped the output to a bmp file, I expected to see a very small parallelogram (approximately a 4 x 4 square transformed into a parallelogram), based on the above (Wx, Wy). But this was not the case. The image had a different set of coordinates, as below:
(403, 238)
(499, 113)
(499, 366)
(403, 241)
I have listed the coordinates in clockwise order, as seen in the image.
I got lost here. Can anyone please help me understand what is happening in the 2nd case, and why?
How come I got a point (499, 113) on the screen when it appears nowhere in the calculated window coordinates?
I used gluProject() to compute the window coordinates.
Note : I'm using OpenGL 2.0. I'm just trying to understand the concepts here, so please don't suggest to use versions > OpenGL 3.0.
Edit
This is an update in response to the answer posted by derhass.
The homogeneous coordinates after the projection matrix for the 2nd case are as follows:
(-0.027128123630699719, -0.53333336114883423, -66.292930483818054, -63.000000000000000)
(0.52712811245482882, -0.53333336114883423, 64.292930722236633, 65.00000000000000)
(0.52712811245482882, 0.53333336114883423, 64.292930722236633, 65.000000000000000)
(-0.027128123630699719, 0.53333336114883423, -66.292930483818054, 63.000000000000000)
So here, the vertices with z > -1 will get clipped at the near plane. When this is the case, shouldn't GL use the projected point on the z = -1 plane?
The thing you are missing here is clipping.
After this
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-400.0, 400.0, -240.0, 240.0, 1.0, 100.0);
you basically have a camera at the origin, looking along the -z direction, with the near plane at z=-1 and the far plane at z=-100. Now you draw a 256x256 square (spanning -128 to 128), rotated 30 degrees about the y (up) axis, and shifted by -1 along z (and 100 along x, but that is not the crucial point here). Since you rotated the square around its center point, the z values of two of the points lie well in front of the near plane, while the other two fall into the frustum. (You can also see this: those two points match your expectations.)
Now directly projecting all 4 points to window space is not what GL does. It transforms the points to clip space, intersects the primitives with all 6 sides of the viewing frustum and finally projects the clipped primitives into window space for rasterization.
The projection you did is actually only meaningful for points which lie inside the frustum. Two of your points lie behind the camera, and projecting points behind the camera creates a mirrored image of those points in front of the camera.
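To make the clipping step concrete, here is a minimal sketch (my illustration, not GL's actual implementation) of intersecting one edge with the near plane in clip space, where a vertex (x, y, z, w) is inside the near plane when z >= -w:
typedef struct { float x, y, z, w; } Vec4;

/* New vertex where the segment a->b crosses the near plane z = -w.
   Valid when exactly one endpoint is inside (da and db have opposite signs). */
static Vec4 clipNear(Vec4 a, Vec4 b)
{
    float da = a.z + a.w;          /* signed distance of a from the plane */
    float db = b.z + b.w;
    float t = da / (da - db);      /* interpolation parameter in [0, 1] */
    Vec4 p = {
        a.x + t * (b.x - a.x),
        a.y + t * (b.y - a.y),
        a.z + t * (b.z - a.z),
        a.w + t * (b.w - a.w)
    };
    return p;                      /* lies exactly on the near plane */
}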

Setting the coordinate system for drawing in OpenGL

I just started reading the initial chapters of the Blue Book and came to understand that the projection matrix can be used to modify the mapping of our desired coordinate system to real screen coordinates. It can be used to reset the coordinate system and change it to [-1, 1] on the left, right, top and bottom with the following (as an example):
glMatrixMode(GL_PROJECTION);
glLoadIdentity(); // With 1's in the diagonal of the identity matrix, the coordinate system is reset to [-1, 1] (the drawing should then happen inside those coordinates, which are mapped to the screen later)
Another example (width: 1024, height: 768, aspect ratio: 1.33), to change the coordinate system:
glOrtho(-100.0 * aspectRatio, 100.0 * aspectRatio, -100.0, 100.0, 100.0, 1000.0);
I expected the coordinate system for OpenGL to change to -133 on the left, 133 on the right, -100 on the bottom and 100 on the top. With these settings, I understand that drawing will be done inside these coordinates, and anything outside them will be clipped.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-100 * aspectRatio, 100 * aspectRatio, -100, 100, 100, 1000);
glMatrixMode(GL_MODELVIEW);
glRectf(-50.0, 50.0, 200, 100);
However, the above command doesn't give me any output on the screen. What am I missing here?
I see two problems here:
The rect should not show up at all, since glRectf() draws at depth z=0, but you set up your orthographic projection to cover the z range [100, 1000], so the object lies in front of the near plane and should be clipped away.
You do not specify what MODELVIEW matrix you use. In the comments, you mention that the object does show up, but not in the place where you expect it. This also contradicts my first point, but could be explained if the ModelView matrix is not identity.
So I suggest first using a different projection matrix, like glOrtho(..., -1.0, 1.0), so that z=0 is actually covered, and second, inserting a glLoadIdentity() call after glMatrixMode(GL_MODELVIEW) in the above code.
Another approach would be to keep the glOrtho() as it is and to specify a translation matrix which moves the rect somewhere between z=-100 and z=-1000 in eye space.
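A minimal sketch of the first suggestion, keeping the question's example values:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-100.0 * aspectRatio, 100.0 * aspectRatio, -100.0, 100.0, -1.0, 1.0); // z = 0 is now inside the volume
glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); // make sure no stale transform moves the rect
glRectf(-50.0f, 50.0f, 200.0f, 100.0f);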

Rotation of camera used for perspective projection

I've just started playing with OpenGL to render a number of structures, each comprising a number of polygons.
Basically I want to perform the equivalent of setting a camera at (0,0,z) in world (structure) coordinates and rotating it about the x, y and z axes of the world (in that order!) to render a view of each structure. As I understand it, it is common practice to use the inverse camera matrix, so I need: translate (to the world origin, i.e. (0,0,-z)) * rotateZ * rotateY * rotateX * translate (re-define world origin, see below).
So I think I need something like:
//Called when the window is resized
void handleResize(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(9.148, (double)w / (double)h, 800.0, 1500.0);
}

float _Zangle = 10.0f;
float _cameraAngle = 90.0f;

//Draws the 3D scene
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -z); //Move forward Z (mm) units
    glRotatef(-_Zangle, 0.0f, 0.0f, 1.0f); //Rotate "camera" about the z-axis
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); //Rotate the "camera" by camera_angle about the y-axis
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f); //Rotate "camera" by 90 degrees about the x-axis
    glTranslatef(-11.0f, 189.0f, 51.0f); //Re-define origin of world coordinates to be (11,-189,-51) - applied to all polygon vertices
    glPushMatrix(); //Save the transformations performed thus far
    glBegin(GL_POLYGON);
    glVertex3f(4.91892, -225.978, -50.0009);
    glVertex3f(5.73534, -225.978, -50.0009);
    glVertex3f(6.55174, -225.978, -50.0009);
    glVertex3f(7.36816, -225.978, -50.0009);
    .......// etc
    glEnd();
    glPopMatrix();
}
However, when I compile and run this, _Zangle and _cameraAngle seem to be swapped: _Zangle seems to rotate about the y-axis (vertical in the viewport) and _cameraAngle about the z-axis (into the plane of the viewport). What am I doing wrong?
Thanks for taking the time to read this
The short answer is: Use gluLookAt(). This utility function creates the proper viewing matrix.
The longer answer is that each OpenGL transformation call takes the current matrix and multiplies it by a matrix built to accomplish the transformation. By calling a series of OpenGL transformation functions you build one transformation matrix that applies the combination of transformations. Effectively, the matrix will be M = M1 * M2 * M3 ... Mathematically, the transformations are applied from right to left in the above equation.
Your code doesn't move the camera. It stays at the origin and looks down the negative z-axis. Your transformations move everything in model space so that (11,-189,-51) becomes the origin, rotate everything 90 degrees about the x-axis, rotate everything 90 degrees about the y-axis, rotate everything 10 degrees about the z-axis, then translate everything by -z along the z-axis.
EDIT: More information
I'm a little confused about what you want to accomplish, but I think you want to have elements at the origin, and have the camera look at those elements. The eye coordinates would be where you want the camera, and the center coordinates would be where you want the objects to be. I'd use a little trigonometry to calculate the position of the camera, and point it at the origin.
In this type of situation I usually keep track of the camera position using longitude, latitude, and elevation centered on the origin. Calculating x, y, z for the eye coordinates is simply x = elv * cos(lat) * sin(lon), y = elv * sin(lat), z = elv * cos(lat) * cos(lon).
My gluLookAt call would be gluLookAt(x, y, z, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
You could change the camera's up vector by changing the last three arguments of gluLookAt.
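A minimal sketch of that calculation (lon and lat in radians, elv being the distance from the origin; the variable names are mine):
#include <math.h>
// Convert longitude/latitude/elevation to an eye position, then look at the origin.
float x = elv * cosf(lat) * sinf(lon);
float y = elv * sinf(lat);
float z = elv * cosf(lat) * cosf(lon);
gluLookAt(x, y, z,        /* eye */
          0.0, 0.0, 0.0,  /* center: the origin */
          0.0, 1.0, 0.0); /* up */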
The z-axis comes from the center of the monitor toward you. So rotating around the z-axis should make the camera spin in place (like a 2D rotation on the xy plane). I can't tell - is that what's happening here?
It's possible that you are encountering Gimbal Lock. Try removing one of the rotations and see if things work the way they should.
While it's true that you can't actually move the camera in OpenGL, you can simulate camera motion by moving everything else. This is why you hear about the inverse camera matrix: instead of moving the camera by (0, 0, 10), we can move everything in the world by (0, 0, -10). If you expand those out into matrices, you will find that they are inverses of each other.
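As a small illustration of that equivalence (a sketch, not code from the question):
// "Moving the camera" forward by 10 units along +z is expressed by moving
// the whole world 10 units along -z; the two matrices are mutual inverses.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -10.0f);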
I also noticed that, given the code presented, you don't need the glPushMatrix()/glPopMatrix() calls. Perhaps there is code that you haven't shown that requires them.
Finally, can you provide an idea of what it is you are trying to render? Debugging rotations can be hard without some context.
Short answer: Good tip.
Longer answer: Yes, the order of matrix multiplication is clear. That's what I meant by the inverse camera matrix: moving all the world coordinates of the structures into camera coordinates (hence the use of "camera" in my comments ;-)) instead of actually translating and rotating the camera into world coordinates.
So if I read between the lines correctly you suggest something like:
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    gluLookAt(0.0, 0.0, z, 11.0, -189.0, -51.0, 0.0, 1.0, 0.0); //eye (0,0,z), look at re-defined world origin (11,-189,-51), up (0,1,0)
    glRotatef(-_Zangle, 0.0f, 0.0f, 1.0f); //Rotate "camera" (actually structures) about the z-axis
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); //Rotate the "camera" (actually structures!) by camera_angle about the y-axis
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f); //Rotate "camera" (actually structures) by 90 degrees about the x-axis
    glPushMatrix();
Or am I still missing something?
I think you are mixing the axes of your world with the axes of the camera. glRotatef only uses the camera's axes, and they are not the same as the world axes once the camera has been rotated.