OpenGL C++ - Solar System

I've finished building a solar system using OpenGL and C++. One of its features is a set of camera positions on each planet, pointing north, that move with the planet's transformation. The camera positions are: one at the top, one a little behind the planet, and one far away from the planet. There are some other features, but I don't have any issues with them.
The issue I am having is that some planets seem to tremble while rotating around their own centers. If I increase the spin speed, the planet stops trembling, or the trembling becomes unnoticeable. The entire solar system is based on real textures and proportional space calculations, and it has multiple camera positions as mentioned earlier.
Here is some code that might help explain what I am trying to achieve:
// Calculate the Uranus position
GLfloat UranusPos[3] = {Uranus_distance*DistanceScaler * cos(-uranus * M_PI / 180), 0, Uranus_distance*DistanceScaler * sin(-uranus * M_PI / 180)};
// Calculate the camera position
GLfloat cameraPos[3] = {Uranus_distance*DistanceScaler * cos(-uranus * M_PI / 180), (5*SizeScaler), Uranus_distance*DistanceScaler * sin(-uranus * M_PI / 180)};
// Set up the camera on top of the planet, pointing north
gluLookAt(cameraPos[0], cameraPos[1], cameraPos[2], UranusPos[0], UranusPos[1], UranusPos[2]-(6*SizeScaler), 0, 0, -1);
SetPointLight(GL_LIGHT1,0.0,0.0,0.0,1,1,.9);
//SetMaterial(1,1,1,.2);
// Uranus planet
UranusObject( UranusSize * SizeScaler, Uranus_distance*DistanceScaler, uranusumbrielmoonSize*SizeScaler, uranusumbrielmoonDistance*DistanceScaler, uranustitaniamoonSize*SizeScaler, uranustitaniamoonDistance*DistanceScaler, uranusoberonmoonSize*SizeScaler, uranusoberonmoonDistance*DistanceScaler);
The following is the planet function I am calling to draw the object inside the display function:
void UranusObject(float UranusSize, float UranusLocation, float UmbrielSize, float UmbrielLocation, float TitaniaSize, float TitaniaLocation, float OberonSize, float OberonLocation)
{
glEnable(GL_TEXTURE_2D);
glPushMatrix();
glBindTexture( GL_TEXTURE_2D, Uranus_Tex);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glRotatef( uranus, 0.0, 1.0, 0.0 );
glTranslatef( UranusLocation, 0.0, 0.0 );
glDisable( GL_LIGHTING );
glColor3f( 0.58, 0.29, 0.04 );
DoRasterString( 0., 5., 0., " Uranus" );
glEnable( GL_LIGHTING );
glPushMatrix();
// Uranus spinning
glRotatef( uranusSpin, 0., 1.0, 0.0 );
MjbSphere(UranusSize,50,50);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_2D);
glPushMatrix();
glBindTexture( GL_TEXTURE_2D, Umbriel_Tex);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glDisable(GL_LIGHTING);
if (LinesEnabled)
{
glPushMatrix();
gluLookAt( 0.0000001, 0., 0., 0., 0., 0., 0., 0., .000000001 );
DrawCircle(0.0, 0.0, UmbrielLocation, 1000);
glPopMatrix();
}
glEnable( GL_LIGHTING );
glColor3f(1.,1.,1.);
glRotatef( uranusumbrielmoon, 0.0, 1.0, 0.0 );
glTranslatef( UmbrielLocation, 0.0, 0.0 );
MjbSphere(UmbrielSize,50,50);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_2D);
glPushMatrix();
glBindTexture( GL_TEXTURE_2D, Titania_Tex);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glDisable(GL_LIGHTING);
if (LinesEnabled)
{
glPushMatrix();
gluLookAt( 0.0000001, 0., 0., 0., 0., 0., 0., 0., .000000001 );
DrawCircle(0.0, 0.0, TitaniaLocation, 1000);
glPopMatrix();
}
glEnable( GL_LIGHTING );
glColor3f(1.,1.,1.);
glRotatef( uranustitaniamoon, 0.0, 1.0, 0.0 );
glTranslatef( TitaniaLocation, 0.0, 0.0 );
MjbSphere(TitaniaSize,50,50);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_2D);
glPushMatrix();
glBindTexture( GL_TEXTURE_2D, Oberon_Tex);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glDisable(GL_LIGHTING);
if (LinesEnabled)
{
glPushMatrix();
gluLookAt( 0.0000001, 0., 0., 0., 0., 0., 0., 0., .000000001 );
DrawCircle(0.0, 0.0, OberonLocation, 1000);
glPopMatrix();
}
glEnable( GL_LIGHTING );
glColor3f(1.,1.,1.);
glRotatef( uranusoberonmoon, 0.0, 1.0, 0.0 );
glTranslatef( OberonLocation, 0.0, 0.0 );
MjbSphere(OberonSize,50,50);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
}
Finally, the following code performs the transformation calculations used for the solar animation:
uranus += 0.0119 * TimeControl;
if( uranus > 360.0 )
uranus -= 360.0;
// Clockwise Rotation
uranusSpin -= 2.39 * TimeControl;
if( uranusSpin <= -360.0 )
uranusSpin = 0.0;
Note: the problem happens with only 4 of the planets.
I really appreciate any idea that could solve the problem.

First of all, take a look at:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
Now to your problem. I am too lazy to go through all of your code, but you most likely hit the floating-point accuracy barrier and/or have accumulating errors from the rotations. My bet is that the error is bigger farther away from the Sun (outer planets) and more in the ecliptic plane. If that is so, then the cause is obvious. So how do you remedy that?
1. floating point accuracy
While rendering, you are transforming vertices by a transform matrix. For reasonable ranges this is OK, but if your vertex is very far from (0,0,0) then the matrix multiplication is not that precise. This means that when you convert back to camera space, the vertex coordinate jumps around. To avoid this you need to translate your vertices to the camera origin before feeding them to OpenGL.
So just subtract the camera position from each vertex (before loading them into OpenGL!) and then render with the camera at position (0,0,0). This way you get rid of the jumping and even of wrong interpolation of primitives. See:
ray and ellipsoid intersection accuracy improvement
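A minimal sketch of this camera-relative idea, applied to the question's setup (the double-precision world positions and names here are illustrative, not from the original code):
// keep absolute positions in doubles on the CPU
double planetWorld[3] = { 2870480859811.71, 0.0, 0.0 };   // e.g. Uranus, in meters
double cameraWorld[3] = { 2870480859811.71, 5.0e7, 0.0 }; // hypothetical camera
// subtract with full double precision before anything reaches OpenGL ...
double rel[3];
for (int i = 0; i < 3; i++) rel[i] = planetWorld[i] - cameraWorld[i];
// ... so the GL pipeline only ever sees small, camera-local values
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();                     // camera sits at (0,0,0)
glTranslated(rel[0], rel[1], rel[2]); // small offset, far below the precision barrier
MjbSphere(UranusSize, 50, 50);        // the question's sphere helper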
If your object is still too big (not the case for a solar system), you can stack more frustums together and render each with a separate camera shifted by some step so you stay in range.
Use 64-bit floats where you can, but be aware that GPU implementations do not support 64-bit interpolators, so the fragment shader is fed 32-bit floats instead.
2. accumulating errors in transform matrix
If you have some "static" matrix and apply countless operations to it (rotation, translation), it will lose precision after a while. This shows up as a changing scale (the axes are not unit size anymore) and added skew (the axes are not perpendicular to each other anymore), and with time it gets worse and worse.
To remedy that, keep a counter of operations per matrix and, once it hits some threshold, perform matrix normalization. This is simple: extract the axis vectors, make them perpendicular again, set them back to their original size, and write them back into your matrix. With basic vector math this is easy; just exploit the cross product (which gives you a perpendicular vector). I use the Z axis as the view direction, so I keep the Z axis direction as-is and correct the X and Y axis directions. The size is easy: divide each vector by its length and it is unit again (or very close to it). For more info see:
Understanding 4x4 homogenous transform matrices
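A minimal sketch of such a normalization for a column-major 4x4 matrix, under my convention above of keeping the Z axis (view direction) fixed; the helper names are illustrative:
#include <cmath>
static void unitize(double *v) // rescale a 3D axis vector to unit length
{
    double len = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= len; v[1] /= len; v[2] /= len;
}
static void cross(const double *a, const double *b, double *out)
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}
// column-major 4x4: X axis = m[0..2], Y axis = m[4..6], Z axis = m[8..10]
void normalizeMatrix(double m[16])
{
    double *X = &m[0], *Y = &m[4], *Z = &m[8];
    unitize(Z);     // keep Z direction, just restore unit length
    cross(Y, Z, X); // rebuild X perpendicular to Y and Z
    unitize(X);
    cross(Z, X, Y); // rebuild Y perpendicular to Z and X
    unitize(Y);
}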
[Edit1] What is going on
Have a look at your code for a single planet, without the rendering stuff:
// you are missing glMatrixMode(GL_????) here !!! what if it has been changed?
glPushMatrix();
// rotate to match daily rotation axis?
glRotatef( uranus, 0.0, 1.0, 0.0 );
// translate to Uranus avg year rotation radius
glTranslatef( UranusLocation, 0.0, 0.0 );
glPushMatrix();
// rotate Uranus to actual position (year rotation)
glRotatef( uranusSpin, 0., 1.0, 0.0 );
// render sphere
MjbSphere(UranusSize,50,50);
glPopMatrix();
// moons
glPopMatrix();
So what you are doing is this. Let's assume you are using the ModelView matrix; then you are instructing OpenGL to do this operation on it:
ModelView = ModelView * glRotatef(uranus,0.0,1.0,0.0) * glTranslatef(UranusLocation,0.0,0.0) * glRotatef(uranusSpin,0.,1.0, 0.0);
So what is wrong with this? For small scenes nothing, but you are using proportional sizes, so:
UranusLocation=2870480859811.71 [m]
UranusSize = 25559000 [m]
That means the glVertex magnitudes are ~25559000 and, after applying the transforms, ~2870480859811.71+25559000. Now there are a few problems with these values.
First, any glRotate call applies sin and cos coefficients to the 2870480859811.71. Let's assume the sin,cos error is around 0.000001; that means the final position has this error in it:
error=2870480859811.71*0.000001=2870480.85981171
The OpenGL sin,cos implementation probably has higher precision, but not by much. Anyway, comparing it to the planet radius:
2870480.85981171/25559000=0.112308 -> 11%
So the jumping error is around 11% of the planet size. That is huge. The implication is that the jumping gets bigger the farther you are from the Sun, and is more visible for smaller planets (as our perception is usually relative, not absolute).
You can try to improve this by using double precision (glRotated), but that does not mean it will solve the problem (some drivers do not have a double-precision implementation of sin,cos).
If you want to get rid of these problems, you have to follow bullet #1, or do the rotations on your own in at least doubles and feed only the final matrix to OpenGL. So first the #1 approach. A matrix translation is just a +/- operation (also encoded as a multiplication), but no imprecise coefficients are present, so you use the full precision of the variable; anyway, I would use glTranslated just to be sure. So we need to make sure the rotations do not use big values inside OpenGL. Try this:
// compute planet position
double x,y,z;
x=UranusLocation*cos(uranusSpin*M_PI/180.0);
y=0.0;
z=UranusLocation*sin(uranusSpin*M_PI/180.0);
// rotate to match daily rotation axis?
glRotated( uranus, 0.0, 1.0, 0.0 );
// translate Uranus to its actual position (year rotation)
glTranslated(x,y,z);
// render sphere
MjbSphere(UranusSize,50,50);
This affects the daily rotation speed, as the daily and yearly rotation angles no longer add up, but you are not implementing the daily rotation yet anyway. If this does not help, then we need to use camera-local coordinates to avoid sending big values to OpenGL:
// compute planet position
double x,y,z;
x=UranusLocation*cos(uranusSpin*M_PI/180.0);
y=0.0;
z=UranusLocation*sin(uranusSpin*M_PI/180.0);
// here change/compute camera position to (original_camera_position-(x,y,z))
// rotate to match daily rotation axis?
glRotated( uranus, 0.0, 1.0, 0.0 );
// render sphere
MjbSphere(UranusSize,50,50);
I hope I matched your coordinate system; if not, just swap or negate the axes (x,y,z).
It is much better to have your own precise matrix math at your disposal, compute the glTranslate/glRotate on the CPU side with high precision, and use only the resulting matrix in OpenGL. See the Understanding 4x4 homogenous transform matrices link above for how to do that.
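For illustration, a minimal sketch of computing rotation-about-Y times translation in doubles on the CPU and handing only the result to OpenGL (glLoadMatrixd replaces the whole ModelView, so any camera transform would have to be multiplied in the same way):
#include <cmath>
// ModelView = rotY(angleDeg) * translate(tx,ty,tz), column-major as OpenGL expects
void loadRotYTranslate(double angleDeg, double tx, double ty, double tz)
{
    double a = angleDeg * M_PI / 180.0;
    double c = cos(a), s = sin(a);
    double m[16] = {
        c,   0.0, -s,  0.0,                // column 0: rotated X axis
        0.0, 1.0, 0.0, 0.0,                // column 1: Y axis
        s,   0.0,  c,  0.0,                // column 2: rotated Z axis
        c*tx + s*tz, ty, -s*tx + c*tz, 1.0 // column 3: rotated translation
    };
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixd(m); // only the finished, full-precision matrix enters OpenGL
}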

I cannot give you an exact answer based on what you have shown; I cannot compile and run your current code and test it with numbers and the debugger. What I can do is give you some advice that ought to help you out.
From what I can tell from the code you supplied, you are creating your planet objects via a function call, so I assume you are doing this for every planet. If that is the case, then if you look at your full code base you will see that every one of these functions, apart from its name and the numbers used, is basically a copy of the others: duplicate code. This is where you need a more versatile structure.
Here are some important things to consider in your construct.
Create a full 3D Motion camera class & a separate Player Class
The Camera Class will allow you to look up, down, left & right by restrained angles - via mouse controls.
The Player Class will have the camera object attached to it at camera_eye_level, with a lookAtDirection that is perpendicular to the upVector. The player class will be able to move freely forward, backward, up and down, and turn left & right via keyboard controls. This gives you flexibility.
Note: if you are working to scale and your planets are considerably far apart, make your player's linear speed higher so that you move toward objects faster, covering more distance.
Create a base Geometry class
Derived Geometry Classes: Box, Flat Grid, Cylinder, Sphere, Pyramid, Cone
These classes will hold a finite container of Vector3s (vertices), a container of Vector3s (indices), a container of Vector2s (texture coords), a container of Vector4s (material: color with alpha), and a container of Vector3s (normals).
The base class will probably only hold an unsigned int (ID value) and/or a std::string (name identifier), and an enum value for the type of geometry being created.
Each derived class will have a constructor taking the required parameters, such as the sizes of its dimensions (no need to worry about texture & color information here; that will come later).
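A minimal sketch of that hierarchy (the Vector types and member names are illustrative; index triples are stored flattened here):
#include <string>
#include <vector>
struct Vector2 { float x, y; };
struct Vector3 { float x, y, z; };
struct Vector4 { float x, y, z, w; };
enum class GeometryType { Box, Grid, Cylinder, Sphere, Pyramid, Cone };
class Geometry { // base: identity only; mesh data shared by all shapes
public:
    Geometry(unsigned id, std::string name, GeometryType type)
        : id_(id), name_(std::move(name)), type_(type) {}
    virtual ~Geometry() = default;
protected:
    unsigned id_;
    std::string name_;
    GeometryType type_;
    std::vector<Vector3> vertices_;
    std::vector<unsigned> indices_;   // index triples, flattened
    std::vector<Vector2> texCoords_;
    std::vector<Vector4> colors_;     // material color with alpha
    std::vector<Vector3> normals_;
};
class Sphere : public Geometry {
public:
    Sphere(unsigned id, std::string name, float radius)
        : Geometry(id, std::move(name), GeometryType::Sphere), radius_(radius)
    { /* tessellate vertices_/indices_/normals_ from radius_ here */ }
private:
    float radius_;
};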
Create a Material Class
Create a base Light class
Derived Types: Directional, Point & Spot
Create A Texture Class
Create a Texture Transform Class - this will allow you to apply transformations directly to the texture that is attached to your geometry, to give the effect that an object is moving when only the texture transform is.
Example: a Box(2,1,4) geometry with the texture "conveyor_belt1.png" applied. This way you don't have to change the box's transform and move all the vertices, texcoords, and normals on each render pass; you can simply make the texture move in place, which is less computationally expensive.
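In the legacy OpenGL pipeline the question uses, such a texture transform maps naturally onto the GL_TEXTURE matrix stack; a minimal sketch (beltOffset is a hypothetical per-frame animation value):
// scroll the texture instead of moving the geometry
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(beltOffset, 0.0f, 0.0f); // slide texture coordinates along S
glMatrixMode(GL_MODELVIEW);           // back to the usual stack
// ... now draw the box with its unchanged vertices and texcoords ...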
Create a Node Class - the base class for all nodes that will belong to a scene graph.
Types of nodes for the scene graph: ShapeNode (geometry), LightNode, TransformNode (contains vector & matrix information for translation, rotation & scaling).
The combination of the node classes and the scene graph will create a tree-like structure such as this:
// This is an example of a text file that you would read in to parse the data;
// it will construct the scene graph for you, as well as render it.
// Amb = Ambient, Dif = Diffuse - the materials work with lighting & shading and will blend with whichever texture is applied
// Items needed to construct
// Objects
Grid geoid_1 Name_plane Wid_5 Dep_5 divx_10 divz_10
Sphere geoid_2 name_uranus radius_20
Sphere geoid_3 name_earth radius_1
Sphere geoid_4 name_earth_moon radius_0.3
// Materials
Material matID_1 Amb_1,1,1,1 Dif_1,1,1,1 // (white fully opaque)
// Textures
Texture texID_1 fileFromTexture_"assets\textures\Uranus.png"
Texture texID_2 fileFromTexture_"assets\textures\Earth.png"
Texture texID_3 fileFromTexture_"assets\textures\EarthMoon.png"
// Lights (Directional, Point, & Spot Types)
Transform Trans_0,0,0 // This is the center of the world space and has to be the root node
+Shape geoID_1 matID_1 // Applies flat grid to root node to create your horizontal plane
+Transform Trans_10,2,10000 // Nest a transform node that is relative to the root; this will allow you to rotate, translate and scale every object that is attached and nested below this node. Any Nodes Higher in the branch will not be affected
++Shape geoID_2 matID_1 texID_1
+Transform Trans_10,1.5,200 // This node has 1 '+' so it is nested under the root and not Uranus
++Shape geoID_3 matID_1 texID_2
++Transform Trans_10,1.5,201 // This node is nested under the Earth's transform node and belongs to the Earth's moon.
+++Shape geoID_4 matID_1 texID_3
END // End Of File To Parse
With this kind of construct you would be able to translate, rotate, and scale objects independently of each other, or by hierarchy. As for the moons of Uranus, you can apply the same technique I showed with the Earth and its moon: each moon would have its own transform, but those transforms would be nested under the planet's transform, and the planets' transforms would be nested under the root, or even under the Sun's transform if you added one (Sun = light source; it would have several light nodes attached). I've already done this before with the Sun, Earth & Moon and had the objects rotate accordingly.
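A minimal sketch of such nesting in the legacy GL the question uses (a TransformNode that pushes its matrix and recurses into its children; all names are illustrative):
#include <vector>
#include <GL/gl.h>
struct Node {
    virtual ~Node() {}
    virtual void render() = 0;
};
struct TransformNode : Node {
    float tx = 0, ty = 0, tz = 0;            // translation relative to the parent
    float angle = 0, ax = 0, ay = 1, az = 0; // rotation relative to the parent
    std::vector<Node*> children;             // ShapeNodes, LightNodes, TransformNodes
    void render() override {
        glPushMatrix();              // isolate this subtree's transform
        glTranslatef(tx, ty, tz);
        glRotatef(angle, ax, ay, az);
        for (Node* c : children)     // a moon nested here inherits its planet's motion
            c->render();
        glPopMatrix();               // siblings are unaffected
    }
};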
Another example: you have a model of a Jeep, but it takes 4 different models to render the full object in your game: 1 body, 2 front wheels, 3 back wheels & 4 steering wheel. With this construct a graph may look like this:
Model modID_1 modelFromFile_"Assets/Models/jeep_base.mod"
Model modID_2 modelFromFile_"Assets/Models/jeep_front_wheel.mod"
Model modID_3 modelFromFile_"Assets/Models/jeep_rear_wheel.mod"
Model modID_4 modelFromFile_"Assets/Models/jeep_steering.mod"
Material matID_1 Amb_1_1_1_1 Diff_1_1_1_1
Texture texID_1 textureFromFile_"Assets/Textures/jeep1_.png"
TextureTransform texTransID_1 name_front
TextureTransform texTransID_2 name_back
TextureTransform texTransID_3 name_steering
+Transform_0,0,0 name_root
++Transform_0,0.1,1 name_jeep
+++Shape modID_1 matID_1 texID_1 texCoord_0,0, size_75,40
+++Transform_0.1,0.101,1 name_jeepFrontWheel
++++Shape modID_2 matID_1 texID_1 texCoord_80,45 size_8,8
+++Transform_-0.1,-0.101,1 name_jeepBackWheel
++++Shape modID_3 matID_1 texID_1 texCoord_80,45 size_8,8
+++Transform_0.07,0.05,-0.02 name_jeepSteering
++++Shape modID_4 matID_1 texID_1 texCoord_80,55 size_10,10
END
Then in your code you can grab the transform node that belongs to the jeep, and when you translate it across the screen all of the jeep's parts move together as one object. Yet independently, all tires can roll forward & back using the texture transforms, the front wheels can turn left and right via their own node transform by a constrained degree, and the steering wheel can turn in the same direction as the wheels using its texture transform, though it may rotate more or less depending on the characteristics of this particular vehicle and how it handles within the scene or game.
This type of system is a bit harder to construct, but once it is working it allows for simple, automated generation of a scene graph. The downside is that while a text file is easy to read at the human level, parsing a text file is much harder than parsing a binary file, mainly because the function in your Scene class has to parse and create all of these different nodes (all geometry (shapes), all lights, and the material, texture, and transform nodes).
The reason for giving the geometries and lights a base class is that when you create a ShapeNode or a LightNode attached or nested under a TransformNode, it can accept any derived type of the base class. This way you don't have to code your scene-graph construction and parser to accept every specific type of geometry and light; you can have it accept any geometry or light and simply tell it which type it is.
You can make this a bit easier by creating a parser that reads binary files. The plus side is that it is easier to write the parser, as long as you know the structure of the file, how much data to read in, and the expected type of data at the current read location. The downside is that you cannot read or author the file manually as I demonstrated above with the human-readable text file. Going this route would require a decent hex editor that supports file templates, such as 010 Editor; that way you can add your own template for your file structure, and when you read in the binary file with the template applied you can check that the values are correct and the fields have the appropriate data types and values.
In truth, the best advice I can give you is this: the construct above is still good to follow, and while it may not be the best, it is a good starting point. But the OpenGL you are learning right now appears to be OpenGL v1.0, which is basically outdated and deprecated. You should set aside what you have learned from this relic of an API and begin to learn modern OpenGL, any version higher than 3.3. From there you can learn to build and write shaders in GLSL, which will simplify a lot of things for you once you have your framework up and working. Rendering from the GPU instead of driving everything from the CPU is much more efficient. Once you have that in place, look into the concept of batch rendering: it lets you control how many buckets there are and how many vertices each bucket contains, which prevents the bottleneck in the rate of I/O from the CPU over the bus to the GPU, since GPUs do computation much faster than your CPU. Then, with your scene graph, you will not have to worry about creating lights and materials and applying them, for all of that will be done in your fragment shader (GLSL), or what is called a pixel shader (DirectX). All of your vertex and index information will be passed into the vertex shader; then it is just a matter of linking your shaders into an OpenGL program.
If you would like to see and learn about everything I have described above, then visit www.MarekKnows.com, the community I've been a member of since 2007-08, and join us. There are several video tutorial series to learn from.

Related

OpenGL capture 6 cube face images

How do I move the camera in OpenGL to capture 6 cube face images and save them into files (like the image below)?
What does "plugin" means?
I'm not sure whether you need to know how to calculate the camera position & direction vectors for capturing each side of the dice, or how to implement the lookAt & perspective functions.
For the lookAt & perspective functions, there are many resources to refer to:
Can't understand gluLookAt arguments
gluPerspective parameters- what do they mean?
These functions are provided by many libraries, but if you need it, I will post my implementation.
The camera position and direction/up vectors are calculated so that each side of the dice is viewed squarely. To do this, you have to mind the perspective FOV (field of view) with respect to the distance between the camera and the dice.
If you read the above posts carefully, you can determine the arguments for these functions.
Once you can see each side on screen, you need a way to combine the resulting scene for each die face into one screen (or FBO) and save it.
Once you obtain the 6 sets of arguments for lookAt and perspective, you can use glViewport.
// if pixels per side: 10
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
//back side draw
glViewport(0, 10, 10, 10);
//call gluLookAt & gluPerspective (or equivalent functions) with arguments so that the back side of the dice is drawn fully on your viewport
gluLookAt(...);
gluPerspective(...);
glDrawElements(...);
//up side draw
glViewport(10, 0, 10, 10);
gluLookAt(...);
gluPerspective(...);
glDrawElements(...);
//left side draw
glViewport(10, 10, 10, 10);
gluLookAt(...);
gluPerspective(...);
glDrawElements(...);
...
The above code draws 6 times, once into each selected viewport of your result FBO.
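For illustration, a sketch of the six camera setups for a dice centered at the origin (the camera distance d, the 45-degree FOV, and the 10-pixel tiles are arbitrary example values):
double d = 3.0; // hypothetical camera distance from the dice center
struct View { double ex, ey, ez, ux, uy, uz; } views[6] = {
    { +d, 0, 0,   0, 1, 0 },  // +X face
    { -d, 0, 0,   0, 1, 0 },  // -X face
    { 0, +d, 0,   0, 0, -1 }, // +Y face (up must not be parallel to the view direction)
    { 0, -d, 0,   0, 0, 1 },  // -Y face
    { 0, 0, +d,   0, 1, 0 },  // +Z face
    { 0, 0, -d,   0, 1, 0 },  // -Z face
};
for (int i = 0; i < 6; i++) {
    glViewport((i % 3) * 10, (i / 3) * 10, 10, 10); // 10-pixel tiles as above
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, 1.0, 0.1, 100.0); // square aspect for square faces
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(views[i].ex, views[i].ey, views[i].ez,  0, 0, 0,
              views[i].ux, views[i].uy, views[i].uz);
    // glDrawElements(...); // draw the dice here
}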
Here is an example using PyQt5 for making an image of a plane with size X, Y in the z=0 plane:
Xt = X/2 #center of your camera in X
Yt = Y/2 #center of your camera in Y
dist = math.tan(math.radians(60))*Y/2 #Compute the distance of the camera from the plane
#assuming a 60 degree projection
aspect = float(self.width)/float(self.height) #aspect ratio of display
center = QtGui.QVector3D(Xt, Yt, 0) #look at this point
eye = QtGui.QVector3D(Xt, Yt, dist) #Point of Camera in space
up = QtGui.QVector3D(0, 1, 0)
self.modelview = QtGui.QMatrix4x4()
self.modelview.lookAt(eye,center,up) #build modelview matrix with given parameters
self.projection = QtGui.QMatrix4x4()
self.projection.perspective(60.0, aspect, dist*0.0001, dist*10000.0) #build projection matrix
Repeating this process for each side, adjusting the z distance to your cube, should yield the desired result. Then you can just write your results to a framebuffer and read that buffer into an array.

Drawing a quarter circle in c++ using GLU

// baseballField
glColor3f(0.22, 0.36, 0.20);
GLUquadricObj *myobject;
myobject = gluNewQuadric();
glTranslatef(120.0, 655.0, 0.0);
gluDisk(myobject, 0.0, 40.0, 60, 4);
I'm trying to simulate the shape of a baseball field by creating a quarter circle (preferably the top right quarter). The code above successfully draws the circle with the correct size, location, and color. However, it is the whole circle. If anyone has any insights, please let me know. Thanks in advance!
Sticking with the GLU call, the one you're looking for is, quite intuitively, gluPartialDisk(). For a quarter circle:
gluPartialDisk(myobject, 0.0, 40.0, 60, 4, 0.0, 90.0);
The last two arguments specify the starting angle and the sweep angle, both in degrees.
Note that GLU is very deprecated, and only works with legacy versions of OpenGL. For sample code that shows how to draw a circle with a more current version of OpenGL, see for example my answer here: How to draw a circle using VBO in ES2.0.
void gluDisk(
GLUquadric* quad,
GLdouble inner,
GLdouble outer,
GLint slices,
GLint loops);
The disk is subdivided around the z axis into slices (like pizza slices).
Try playing with the slices parameter; 2 means a half circle, I think.
[edit] Don't forget about the loops too: "and also about the z axis into rings (as specified by slices and loops, respectively)."

Draw oval with sphere in Opengl

I want to draw an oval by projecting a sphere onto the screen (like rasterizing). Here is my code, but it doesn't show anything on the screen. Should I use more functions to initialize the projection? Is it possible to draw an oval on screen this way, using a sphere?
GLfloat xRotated, yRotated, zRotated;
GLdouble radius=1;
void display(void);
void reshape(int x, int y);
int main (int argc, char **argv)
{
glutInit(&argc, argv);
glutInitWindowSize(800,800);
glutCreateWindow("OVAL");
zRotated = 30.0;
xRotated=43;
yRotated=50;
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop();
return 0;
}
void display(void)
{
glMatrixMode(GL_PROJECTION);
glOrtho(0.1, 1.0, 0.1, 1.0, -1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0.0,0.0,-5.0);
glColor3f(0.9, 0.3, 0.2);
glRotatef(xRotated,1.0,0.0,0.0);
glRotatef(yRotated,0.0,1.0,0.0);
glRotatef(zRotated,0.0,0.0,1.0);
glScalef(1.0,1.0,1.0);
glutSolidSphere(radius,20,20);
glFlush();
}
void reshape(int x, int y)
{
if (y == 0 || x == 0) return;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(39.0,(GLdouble)x/(GLdouble)y,0.6,21.0);
glMatrixMode(GL_MODELVIEW);
glViewport(0,0,x,y);
}
You are drawing a sphere completely outside of the viewing volume, so it should be no surprise that it can't be seen.
There are a couple of issues with your code:
All OpenGL matrix functions besides glLoadIdentity and glLoadMatrix post-multiply a matrix onto the current top element of the current matrix stack. In your display function, you call glOrtho without first resetting the projection matrix to identity. This will produce totally weird, and different, results if the display callback is called more than once.
You should add a call to glLoadIdentity() right before calling glOrtho.
You set up the modelview transformations so that the sphere's center always ends up at (0,0,-5) in eye space. However, you set a projection matrix which defines a viewing volume that goes from z=1 (near plane) to z=-1 (far plane) in eye space, so your sphere is actually behind the far plane.
There are several ways this could be fixed. Changing the viewing frustum by modifying the parameters of glOrtho might be the easiest. You could for example try (-2, 2, -2, 2, 1, 10) to be able to see the sphere.
It is not really clear what
I want to draw an oval by projection the sphere on the screen (like rasterize).
exactly means. If you just want the sphere to be distorted into an ellipsoid, you could just apply some non-uniform scaling. In principle this could be done in the projection matrix (if no other objects are to be shown), but it would make much more sense to apply it to the model matrix of the sphere - you already have the glScale call there; you could try something like glScalef(1.0f, 0.5f, 1.0f);.
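Putting those fixes together, a corrected display function might look like this (a sketch; the glOrtho parameters are the ones suggested above, and the non-uniform scale is the ellipsoid option):
void display(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); // reset before glOrtho, every frame
    glOrtho(-2.0, 2.0, -2.0, 2.0, 1.0, 10.0); // viewing volume now contains z=-5
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); // be explicit about which stack we modify
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -5.0); // now inside the [near, far] range
    glColor3f(0.9, 0.3, 0.2);
    glRotatef(xRotated, 1.0, 0.0, 0.0);
    glRotatef(yRotated, 0.0, 1.0, 0.0);
    glRotatef(zRotated, 0.0, 0.0, 1.0);
    glScalef(1.0f, 0.5f, 1.0f); // non-uniform scale squashes the sphere into an oval
    glutSolidSphere(radius, 20, 20);
    glFlush();
}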
Also note that the ortho parameters I suggested previously will result in some distortion if your viewport is not exactly square. In the real world, one wants to incorporate the aspect ratio of the viewport into the projection matrix.
If you want to see the sphere deformed as by a perspective projection, you would have to skip the glOrtho altogether and switch to a perspective projection matrix.
The code you are using is totally outdated. The OpenGL matrix stack was deprecated in OpenGL 3.0 (2008) and is not available in core profiles of modern OpenGL. The same applies to built-in vertex attributes like glColor, immediate-mode drawing, and client-side vertex arrays. As a result, GLUT's drawing functions can no longer be used with modern GL either.
If you really intend to learn OpenGL nowadays, I strongly advise you to ignore this old cruft and start learning the modern way.

Setting the coordinate system for drawing in OpenGL

I just started reading the initial chapters of the Blue Book and came to understand that the projection matrix can be used to modify the mapping of our desired coordinate system to real screen coordinates. It can be used to reset the coordinate system so it runs from -1 to 1 on the left, right, top, and bottom, as in the following example:
glMatrixMode(GL_PROJECTION);
glLoadIdentity(); // With 1s on the diagonal of the identity matrix, the coordinate system is reset to -1..1 (and the drawing should then happen inside those coordinates, which are mapped later to the screen)
Another example (width: 1024, height: 768, aspect ratio: 1.33), changing the coordinate system with:
glOrtho (-100.0 * aspectRatio, 100.0 * aspectRatio, -100.0, 100.0, 100.0, 1000.0);
I expected the coordinate system for OpenGL to change to -133 on the left, 133 on the right, -100 on the bottom, and 100 on the top. Using these coordinates, I understand that the drawing will be done inside them and anything outside will be clipped.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-100 * aspectRatio, 100 * aspectRatio, -100, 100, 100, 1000);
glMatrixMode(GL_MODELVIEW);
glRectf(-50.0, 50.0, 200, 100);
However, the above command doesn't give me any output on the screen. What am I missing here?
I see two problems here:
The rect should not show at all, since glRectf() draws at depth z=0, but you set up your orthographic projection to cover the z range [100,1000], so the object lies in front of the near plane and should be clipped away.
You do not specify what MODELVIEW matrix you use. In the comments, you mention that the object does show up, just not in the place where you expect it. That contradicts my first point, but could be explained if the ModelView matrix is not identity.
So I suggest first using a different projection matrix, e.g. glOrtho(..., -1.0f, 1.0f), so that z=0 is actually covered, and second, inserting a glLoadIdentity() call after the glMatrixMode(GL_MODELVIEW) in the above code.
Another approach would be to keep the glOrtho() as it is and to specify a translation matrix which moves the rect into the covered depth range, e.g. glTranslatef(0, 0, -500) (with default conventions, glOrtho(..., 100, 1000) covers eye-space z from -100 to -1000).
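Applying the first suggestion, a minimal corrected sequence would be (a sketch; the -1..1 z range is just the simplest choice that covers z=0):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-100 * aspectRatio, 100 * aspectRatio, -100, 100, -1, 1); // now covers z=0
glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); // make sure no stale transform moves the rect
glRectf(-50.0, 50.0, 200, 100);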

Giving 2D structures 3D depth [duplicate]

Possible Duplicate:
How to give a 2D structure 3D depth
Hello everyone,
I posted this same question yesterday. I would have liked to upload images showing my program's output, but due to spam protection I am informed I need 10 reputation points. I could send images of my output under different projection matrices to anyone willing.
I am beginning to learn OpenGL as part of a molecular modeling project, and currently I am trying to render 7 helices that will be arranged spatially close to each other and will move, tilt, rotate and interact with each other in certain ways.
My question is: how do I give the 2D scene three-dimensional depth so that the geometric structures look like true helices in three dimensions?
I have tried playing around with projection matrices (gluPerspective, glFrustum) without much luck, as well as using the glDepthRange function. As I understand from textbook/website references, when rendering a 3D scene it is appropriate to use a perspective projection matrix that has a vanishing point (either gluPerspective or glFrustum) to create the illusion of 3 dimensions on a 2D surface (the screen).
I include my code for rendering the helices, but for simplicity I insert the code for rendering one helix (the other 6 helices are exactly the same except for their translation matrix and the color function parameters) as well as the reshape handler.
This is the output I get when I run my program with an orthographic projection (glOrtho): it looks like a 2D projection of helices (curved lines drawn in three dimensions). [image 1] This is my output when I use a perspective projection (glFrustum in my case): [image 2] It does not appear as if I am looking at my helices in 3D!
Perhaps the glFrustum parameters are wrong?
//GLOBALS
GLfloat x, y, z;
GLfloat c = 1.5f; //helical pitch
GLfloat theta; //constant angle between tangent and x-axis
GLfloat thetarad = theta*(Pi/180.0); //angle converted from degrees to radians
GLfloat r = 7.0f; //radius
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST); /* enable depth testing */
glDepthFunc(GL_LESS); /* make sure the right depth function is used */
/*CALLED TO DRAW HELICES*/
void RenderHelix() {
/**** WHITE HELIX ****/
glColor3f(1.0,1.0,1.0);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(-30.f, 100.f, 0.f); //Move Position
glRotatef(90.0, 0.0, 0.0, 0.0);
glBegin(GL_LINE_STRIP);
for(theta = 0; theta <= 360; ++theta) { /* Also can use: for(theta = 0; theta <= 2*Pi; ++rad) */
x = r*(cosf(theta));
y = r*(sinf(theta));
z = c*theta;
glVertex3f(x,y,z);
}
glEnd();
glScalef(1.0,1.0,12.0); //Stretch or contract the helix
glPopMatrix();
/* Code for Other 6 Helices */
.............
glutSwapBuffers();
}
void Reshape(GLint w, GLint h) {
if(h==0)
h=1;
glViewport(0,0,w,h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
GLfloat aspectratio = (GLfloat)w/(GLfloat)h;
if(w<=h)
//glOrtho(-100,100,-100/aspectratio,100/aspectratio, -50.0,310.0);
//glOrtho(-100,100,-100/aspectratio,100/aspectratio, 0.0001,1000000.0); //CLIPPING FAILSAFE TEST
//gluPerspective(122.0,(GLfloat)w/(GLfloat)h,10.0,50.0);
glFrustum(-10.f,10.f, -100.f/aspectratio, 100.f/aspectratio, 1.0f, 15.0f);
else
//glOrtho(-100*aspectratio,100*aspectratio,-100,100,-50.0,310.0);
//glOrtho(-100*aspectratio,100*aspectratio,-100,100,0.0001,1000000.0); //CLIPPING FAILSAFE TEST
//gluPerspective(122.0,(GLfloat)w/(GLfloat)h,10.0,50.0);
glFrustum(-10.f*aspectratio,10.f*aspectratio,-10.f,10.f, 1.0f,15.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
The usual reason simple 3D applications don't "look 3D" is that no lighting system has been set up. Lighting is a major source of depth cues for your brain.
Here's a good tutorial on adding lighting to an OpenGL program:
http://www.cse.msu.edu/~cse872/tutorial3.html
EDIT: For more context, here's the relevant chapter from the classic OpenGL Red Book:
http://fly.cc.fer.hr/~unreal/theredbook/chapter06.html
Notice the screenshot near the top, showing the same sphere render both with and without lighting.
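For reference, a minimal fixed-function lighting setup in the legacy GL style the question already uses (the light position and color are arbitrary example values):
// call once after the GL context exists
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_DEPTH_TEST);
glEnable(GL_COLOR_MATERIAL); // let glColor3f drive the material
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
GLfloat lightPos[4] = { 50.0f, 100.0f, 100.0f, 1.0f }; // w=1: positional light
GLfloat lightDif[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightDif);
// note: lighting needs per-vertex normals, so a GL_LINE_STRIP helix will only
// benefit once it is drawn as real geometry (e.g. a tube) with normals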
I agree it's difficult if you can't post your images (and it may not be trivial then).
If your code is open source you are likely to get help for molecular modelling from the Blue Obelisk Community (http://blueobelisk.shapado.com/ is a SE-type site for answering these questions).
There used to be a lot of GL used in our community, but I'm not sure I know of a good codebase which you could hack to get some idea of the best things to do. The leading graphics tools are Jmol (where the gfx were largely handwritten and very good) and Avogadro, which uses Qt.
But if you ask for examples of open-source GL molecular graphics you'll probably get help.
And of course you'll probably get complementary help here.