Drawing a quarter circle in C++ using GLU

// baseballField
glColor3f(0.22, 0.36, 0.20);
GLUquadricObj *myobject;
myobject = gluNewQuadric();
glTranslatef(120.0, 655.0, 0.0);
gluDisk(myobject, 0.0, 40.0, 60, 4);
I'm trying to simulate the shape of a baseball field by creating a quarter circle (preferably the top right quarter). The code above successfully draws the circle with the correct size, location, and color. However, it is the whole circle. If anyone has any insights, please let me know. Thanks in advance!

Sticking with the GLU call, the one you're looking for is, quite intuitively, gluPartialDisk(). For a quarter circle:
gluPartialDisk(myobject, 0.0, 40.0, 60, 4, 0.0, 90.0);
The last two arguments specify the starting angle and the sweep angle, both in degrees. Note that GLU measures the start angle from the positive y axis, sweeping clockwise, so a start of 0.0 with a 90.0 sweep gives exactly the top-right quarter.
Note that GLU is long deprecated and only works with legacy (fixed-function) versions of OpenGL. For sample code that shows how to draw a circle with a more current version of OpenGL, see for example my answer here: How to draw a circle using VBO in ES2.0.

void gluDisk(
GLUquadric* quad,
GLdouble inner,
GLdouble outer,
GLint slices,
GLint loops);
The disk is subdivided around the z axis into slices (like pizza slices).
Try playing with the slices value; 2 should give a half circle, I think.
[edit] Don't forget about the loops too: "and also about the z axis into rings (as specified by slices and loops, respectively)."

Related

OpenGL C++ - Solar System

I've finished building a solar system using OpenGL and C++. One of the features is a set of camera positions for each planet, pointing north and moving with the planet's transformation. The camera positions are: one at the top, one a little behind the planet, and one far away from the planet. There are some other features, but I don't have any issues with them.
The issue I am having is that some planets seem to be trembling for some reason while rotating around their centers. If I increase the spinning speed, the planet stops trembling, or the trembling becomes unnoticeable. The entire solar system is fully based on real textures and proportional space calculations, and it has multiple camera positions as mentioned earlier.
Here is some code that might help understanding what I am trying to achieve:
// Calculate the Uranus position
GLfloat UranusPos[3] = {Uranus_distance*DistanceScaler * cos(-uranus * M_PI / 180), 0, Uranus_distance*DistanceScaler * sin(-uranus * M_PI / 180)};
// Calculate the camera position
GLfloat cameraPos[3] = {Uranus_distance*DistanceScaler * cos(-uranus * M_PI / 180), (5*SizeScaler), Uranus_distance*DistanceScaler * sin(-uranus * M_PI / 180)};
// Set up the camera on top of the planet, pointing north
gluLookAt(cameraPos[0], cameraPos[1], cameraPos[2], UranusPos[0], UranusPos[1], UranusPos[2]-(6*SizeScaler), 0, 0, -1);
SetPointLight(GL_LIGHT1,0.0,0.0,0.0,1,1,.9);
//SetMaterial(1,1,1,.2);
//Saturn Object
// Uranus Planet
UranusObject( UranusSize * SizeScaler, Uranus_distance*DistanceScaler, uranusumbrielmoonSize*SizeScaler, uranusumbrielmoonDistance*DistanceScaler, uranustitaniamoonSize*SizeScaler, uranustitaniamoonDistance*DistanceScaler, uranusoberonmoonSize*SizeScaler, uranusoberonmoonDistance*DistanceScaler);
The following is the planet function I am calling to draw the object inside the display function:
void UranusObject(float UranusSize, float UranusLocation, float UmbrielSize, float UmbrielLocation, float TitaniaSize, float TitaniaLocation, float OberonSize, float OberonLocation)
{
glEnable(GL_TEXTURE_2D);
glPushMatrix();
glBindTexture( GL_TEXTURE_2D, Uranus_Tex);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glRotatef( uranus, 0.0, 1.0, 0.0 );
glTranslatef( UranusLocation, 0.0, 0.0 );
glDisable( GL_LIGHTING );
glColor3f( 0.58, 0.29, 0.04 );
DoRasterString( 0., 5., 0., " Uranus" );
glEnable( GL_LIGHTING );
glPushMatrix();
// Uranus spinning
glRotatef( uranusSpin, 0., 1.0, 0.0 );
MjbSphere(UranusSize,50,50);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_2D);
glPushMatrix();
glBindTexture( GL_TEXTURE_2D, Umbriel_Tex);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glDisable(GL_LIGHTING);
if (LinesEnabled)
{
glPushMatrix();
gluLookAt( 0.0000001, 0., 0., 0., 0., 0., 0., 0., .000000001 );
DrawCircle(0.0, 0.0, UmbrielLocation, 1000);
glPopMatrix();
}
glEnable( GL_LIGHTING );
glColor3f(1.,1.,1.);
glRotatef( uranusumbrielmoon, 0.0, 1.0, 0.0 );
glTranslatef( UmbrielLocation, 0.0, 0.0 );
MjbSphere(UmbrielSize,50,50);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_2D);
glPushMatrix();
glBindTexture( GL_TEXTURE_2D, Titania_Tex);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glDisable(GL_LIGHTING);
if (LinesEnabled)
{
glPushMatrix();
gluLookAt( 0.0000001, 0., 0., 0., 0., 0., 0., 0., .000000001 );
DrawCircle(0.0, 0.0, TitaniaLocation, 1000);
glPopMatrix();
}
glEnable( GL_LIGHTING );
glColor3f(1.,1.,1.);
glRotatef( uranustitaniamoon, 0.0, 1.0, 0.0 );
glTranslatef( TitaniaLocation, 0.0, 0.0 );
MjbSphere(TitaniaSize,50,50);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_2D);
glPushMatrix();
glBindTexture( GL_TEXTURE_2D, Oberon_Tex);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glDisable(GL_LIGHTING);
if (LinesEnabled)
{
glPushMatrix();
gluLookAt( 0.0000001, 0., 0., 0., 0., 0., 0., 0., .000000001 );
DrawCircle(0.0, 0.0, OberonLocation, 1000);
glPopMatrix();
}
glEnable( GL_LIGHTING );
glColor3f(1.,1.,1.);
glRotatef( uranusoberonmoon, 0.0, 1.0, 0.0 );
glTranslatef( OberonLocation, 0.0, 0.0 );
MjbSphere(OberonSize,50,50);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
}
Finally, the following code is for the transformation calculations used for solar animation:
uranus += 0.0119 * TimeControl;
if( uranus > 360.0 )
uranus -= 360.0;
// Clockwise Rotation
uranusSpin -= 2.39 * TimeControl;
if( uranusSpin <= -360.0 )
uranusSpin = 0.0;
Note: the problem happens with only 4 of the planets.
I really appreciate any idea that could solve the problem.
First of all, take a look at:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
Now to your problem. I am too lazy to go through all of your code, but you most likely hit the floating-point accuracy barrier and/or have accumulating errors from repeated rotations. My bet is that the error gets bigger farther away from the Sun (outer planets) and mostly in the ecliptic plane. If that is so, the cause is clear. So how to remedy it?
floating point accuracy
While rendering, you are transforming vertices by the transform matrix. For reasonable ranges this is OK, but if your vertex is very far from (0,0,0), the matrix multiplication is not that precise. This means that when you convert back to camera space, the vertex coordinate jumps around. To avoid this, you need to translate your vertices to the camera origin before feeding them to OpenGL.
So just subtract the camera position from each vertex (before loading them into OpenGL!!!) and then render with a camera positioned at (0,0,0). This way you get rid of the jumping, and even of wrong interpolation of primitives. See
ray and ellipsoid intersection accuracy improvement
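As a minimal sketch of the idea (Vec3d and drawPlanetRelative are made-up names; MjbSphere is the sphere helper from the question):
#include <GL/gl.h>
struct Vec3d { double x, y, z; };
// Keep positions in double precision on the CPU and subtract the camera
// position before anything reaches OpenGL, so the GPU only ever sees
// small-magnitude coordinates; the camera itself then sits at (0,0,0).
void drawPlanetRelative(const Vec3d &planet, const Vec3d &camera, double radius)
{
    glPushMatrix();
    glTranslated(planet.x - camera.x,
                 planet.y - camera.y,
                 planet.z - camera.z);
    MjbSphere(radius, 50, 50);   // the question's sphere helper
    glPopMatrix();
}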
If your object is still too big (not the case for a solar system), you can stack several frustums together and render each with a separate camera shifted by some step, so everything stays in range.
Use 64-bit floats where you can, but be aware that GPU implementations do not support 64-bit interpolators, so the fragment shader is fed 32-bit floats instead.
accumulating errors in transform matrix
If you have some "static" matrix and apply countless operations to it, like rotation and translation, it will lose precision after a while. This is recognizable as a changing scale (the axes are no longer unit size) and added skew (the axes are no longer perpendicular to each other); with time it gets worse and worse.
To remedy that, you can keep a counter of operations per matrix, and when it hits some threshold, perform matrix normalization. This is simple: extract the axis vectors, make them perpendicular again, set them back to their original size, and write them back into your matrix. With basic vector math knowledge this is easy; just exploit the cross product (which gives you a perpendicular vector). I use the Z axis as the view direction, so I keep the Z axis direction as-is and correct the X and Y axis directions. The size is easy: divide each vector by its length and it is unit again (or very close to it). For more info see:
Understanding 4x4 homogenous transform matrices
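A hedged sketch of such a normalization for a column-major 4x4 OpenGL matrix (the function and helper names are made up; Z is kept as the reference axis as described above):
#include <cmath>
// a points at 3 doubles; scale the vector back to unit length
static void normalizeAxis(double *a)
{
    double s = std::sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
    a[0] /= s; a[1] /= s; a[2] /= s;
}
// r = a x b (the cross product gives a vector perpendicular to both)
static void cross(const double *a, const double *b, double *r)
{
    r[0] = a[1]*b[2] - a[2]*b[1];
    r[1] = a[2]*b[0] - a[0]*b[2];
    r[2] = a[0]*b[1] - a[1]*b[0];
}
// m is laid out as used by glLoadMatrixd/glMultMatrixd (column-major)
void orthonormalize(double m[16])
{
    double *x = &m[0], *y = &m[4], *z = &m[8];  // the three axis columns
    normalizeAxis(z);                           // keep the view direction
    cross(y, z, x); normalizeAxis(x);           // rebuild X perpendicular to Y and Z
    cross(z, x, y); normalizeAxis(y);           // rebuild Y perpendicular to Z and X
}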
[Edit1] What is going on
Have a look at your code for a single planet, without the rendering stuff:
// you are missing glMatrixMode(GL_????) here !!! what if it has been changed?
glPushMatrix();
// rotate to match daily rotation axis?
glRotatef( uranus, 0.0, 1.0, 0.0 );
// translate to Uranus avg year rotation radius
glTranslatef( UranusLocation, 0.0, 0.0 );
glPushMatrix();
// rotate Uranus to actual position (year rotation)
glRotatef( uranusSpin, 0., 1.0, 0.0 );
// render sphere
MjbSphere(UranusSize,50,50);
glPopMatrix();
// moons
glPopMatrix();
So what you are doing is this. Let's assume you are using the ModelView matrix; you are instructing OpenGL to perform this operation on it:
ModelView = ModelView * glRotatef(uranus,0.0,1.0,0.0) * glTranslatef(UranusLocation,0.0,0.0) * glRotatef(uranusSpin,0.,1.0, 0.0);
So what is wrong with this? For small scenes nothing, but you are using proportional (real-scale) sizes:
UranusLocation=2870480859811.71 [m]
UranusSize = 25559000 [m]
So the glVertex magnitudes are ~25559000, and after applying the transforms ~2870480859811.71+25559000. Now there are a few problems with these values.
First, any glRotate call applies sin and cos coefficients to the 2870480859811.71. Let's assume the sin/cos error is around 0.000001; that means the final position carries an error of:
error=2870480859811.71*0.000001=2870480.85981171
The OpenGL sin/cos implementation probably has higher precision, but not by much. Anyway, compare it to the planet radius:
2870480.85981171/25559000=0.112308 -> 11%
So the jumping error is around 11% of the planet size. That is huge. The implication is that the jumping gets bigger the farther you are from the Sun, and is more visible for smaller planets (our perception is usually relative, not absolute).
You can try to improve this by using double precision (glRotated), but that does not necessarily solve the problem (some drivers do not have a double-precision implementation of sin/cos).
If you want to get rid of these problems, you have to follow bullet #1, or do the rotations yourself in at least double precision and feed only the final matrix to OpenGL. So first the #1 approach. Translation of a matrix is just a +/- operation (also encoded as a multiplication), but no imprecise coefficients are involved, so you use the full precision of the variable type. I would use glTranslated anyway, just to be sure. So we need to make sure the rotations do not see big values inside OpenGL. Try this:
// compute planet position
double x,y,z;
x=UranusLocation*cos(uranusSpin*M_PI/180.0);
y=0.0;
z=UranusLocation*sin(uranusSpin*M_PI/180.0);
// rotate to match daily rotation axis?
glRotated( uranus, 0.0, 1.0, 0.0 );
// translate Uranus to its actual position (year rotation)
glTranslated(x,y,z);
// render sphere
MjbSphere(UranusSize,50,50);
This affects the daily rotation speed, as the daily and year rotation angles no longer add up, but you are not implementing the daily rotation yet anyway. If this does not help, then we need to use camera-local coordinates to avoid sending big values to OpenGL:
// compute planet position
double x,y,z;
x=UranusLocation*cos(uranusSpin*M_PI/180.0);
y=0.0;
z=UranusLocation*sin(uranusSpin*M_PI/180.0);
// here change/compute camera position to (original_camera_position-(x,y,z))
// rotate to match daily rotation axis?
glRotated( uranus, 0.0, 1.0, 0.0 );
// render sphere
MjbSphere(UranusSize,50,50);
Hopefully I matched your coordinate system; if not, just swap the axes or negate them (x,y,z).
It is much better to have your own precise matrix math at your disposal, compute the glTranslate/glRotate equivalents on the CPU side in high precision, and use only the resulting matrix in OpenGL. See the Understanding 4x4 homogenous transform matrices link above for how to do that.
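For illustration, a hedged sketch of that approach (the function is made up): build rotation-about-Y times translation in doubles on the CPU and hand OpenGL only the finished column-major matrix, equivalent to glRotated(angleDeg,0,1,0) followed by glTranslated(tx,ty,tz):
#include <cmath>
#include <GL/gl.h>
void multPlanetMatrix(double angleDeg, double tx, double ty, double tz)
{
    const double a = angleDeg * M_PI / 180.0;
    const double c = std::cos(a), s = std::sin(a);
    const double m[16] = {
        c,   0.0, -s,  0.0,                 // column 0: rotated X axis
        0.0, 1.0,  0.0, 0.0,                // column 1: Y axis
        s,   0.0,  c,  0.0,                 // column 2: rotated Z axis
        c*tx + s*tz, ty, -s*tx + c*tz, 1.0  // column 3: R * translation
    };
    glMultMatrixd(m);
}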
I cannot give you an exact answer based on what you have shown; I cannot compile and run your current code or test it with numbers and the debugger. What I can do is give you some advice that ought to help you out.
From what I can tell from the code you supplied, you are creating your planet objects via a function call, so presumably you are doing this for every planet. If that is the case, then if you look at your full code base you will notice that every one of these functions is, apart from its name and the numbers used, basically duplicated code. This is where you need a more versatile structure.
Here are some important things to consider in your construct.
Create a full 3D Motion camera class & a separate Player Class
The Camera Class will allow you to look up, down, left & right (by restrained angles) - via mouse controls.
The Player Class will have the camera object attached to it at camera_eye_level, with a lookAtDirection that is perpendicular to the upVector. The player class will be able to move freely forward, backward, up and down, and turn left and right via keyboard controls. This gives you flexibility.
Note: if you are working to scale and your planets are considerably far apart, make your player's linear speed higher so that you move toward objects faster, covering more distance.
Create a base Geometry class
Derived Geometry Classes: Box, Flat Grid, Cylinder, Sphere, Pyramid, Cone
These classes will hold a finite container of Vector3s (vertices), a container of Vector3s (indices), a container of Vector2s (texture coords), a container of Vector4s (material color with alpha), and a container of Vector3s (normals).
The base class will probably only hold an unsigned int (ID value) and/or a std::string (name identifier), plus an enum value for the type of Geometry being created.
Each derived class will have a constructor with its required parameters, such as the sizes of its dimensions (no need to worry about texture & color information here; that comes later).
Create a Material Class
Create a base Light class
Derived Types: Directional, Point & Spot
Create A Texture Class
Create a Texture Transform Class - this will allow you to apply transformations directly to the texture attached to your Geometry, giving the effect that an object is moving when only the texture transform is.
Example: a Geometry of type Box(2,1,4) with the texture "conveyor_belt1.png" applied. This way you don't have to change the box's transform and move all the vertices, texcoords and normals on each render pass; you can simply make the texture move in place. Less computationally expensive.
Create Node Class - Base class for all nodes that will belong to a Scene Graph
Types of Nodes for the SceneGraph - ShapeNode (geometry), LightNode, TransformNode (contains vector & matrix information for translation, rotation & scaling).
The combination of the Node classes and the SceneGraph will create a tree-like structure such as this:
// This would be an example of text file that you would read in to parse the data
// and it will construct the scene graph for you, as well as rending it.
// Amb = Ambient, Dif = Diffuse - the materials work with lighting & shading and will blend with whichever texture is applied
// Items needed to construct
// Objects
Grid geoid_1 Name_plane Wid_5 Dep_5 divx_10 divz_10
Sphere geoid_2 name_uranus radius_20
Sphere geoid_3 name_earth radius_1
Sphere geoid_4 name_earth_moon radius_0.3
// Materials
Material matID_1 Amb_1,1,1,1 Dif_1,1,1,1 // (white fully opaque)
// Textures
Texture texID_1 fileFromTexture_"assets\textures\Uranus.png"
Texture texID_2 fileFromTexture_"assets\textures\Earth.png"
Texture texID_3 fileFromTexture_"assets\textures\EarthMoon.png"
// Lights (Directional, Point, & Spot Types)
Transform Trans_0,0,0 // This is the center of the world space and has to be the root node
+Shape geoID_1 matID_1 // Applies flat grid to root node to create your horizontal plane
+Transform Trans_10,2,10000 // Nest a transform node that is relative to the root; this will allow you to rotate, translate and scale every object that is attached and nested below this node. Any Nodes Higher in the branch will not be affected
++Shape geoID_2 matID_1 texID_1
+Transform Trans_10,1.5,200 // This node has 1 '+' so it is nested under the root and not Uranus
++Shape geoID_3 matID_1 texID_2
+++Transform Trans_10,1.5,201 // This node is nested under the Earth's Transform Node and will belong to the Earth's Moon.
+++Shape geoID_4 matID_1 texID_3
END // End Of File To Parse
With this kind of construct, you are able to translate, rotate and scale objects independently of each other, or by hierarchy. As for the moons of Uranus, you can apply the same technique as shown with the Earth and its moon: each moon has its own transform, nested under its planet's transform, while the planets' transforms are nested under the root, or even under the Sun's transform if you added one (Sun = light source; it would have several light nodes attached). I've done this before with the Sun, Earth & Moon and had the objects rotate accordingly.
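A hedged sketch of the nested-transform idea in code (all names are made up): each node applies its own transform, draws itself, then recurses into its children, so every child transform stays relative to its parent:
#include <vector>
#include <GL/gl.h>
struct Node {
    float tx = 0, ty = 0, tz = 0;             // translation
    float angle = 0, ax = 0, ay = 1, az = 0;  // rotation in degrees about an axis
    std::vector<Node*> children;
    virtual ~Node() {}
    virtual void drawSelf() {}                // a ShapeNode would override this
    void render() {
        glPushMatrix();
        glTranslatef(tx, ty, tz);
        glRotatef(angle, ax, ay, az);
        drawSelf();
        for (Node *c : children)
            c->render();                      // moons inherit their planet's motion
        glPopMatrix();
    }
};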
Another example: you have a model of a Jeep, but four different models to render to make the full object in your game: 1 body, 2 front wheels, 3 back wheels & 4 steering wheel. With this construct, a graph may look like this:
Model modID_1 modelFromFile_"Assets/Models/jeep_base.mod"
Model modID_2 modelFromFile_"Assets/Models/jeep_front_wheel.mod"
Model modID_3 modelFromFile_"Assets/Models/jeep_rear_wheel.mod"
Model modID_4 modelFromFile_"Assets/Models/jeep_steering.mod"
Material matID_1 Amb_1_1_1_1 Diff_1_1_1_1
Texture texID_1 textureFromFile_"Assets/Textures/jeep1_.png"
TextureTransform texTransID_1 name_front
TextureTransform texTransID_2 name_back
TextureTransform texTransID_3 name_steering
+Transform_0,0,0 name_root
++Transform_0,0.1,1 name_jeep
+++Shape modID_1 matID_1 texID_1 texCoord_0,0, size_75,40
+++Transform_0.1,0.101,1 name_jeepFrontWheel
+++Shape modID_2 matID_1 texID_1 texCoord_80,45 size_8,8
+++Transform_-0.1,-0.101,1 name_jeepBackWheel
+++Shape modID_3 matID_1 texID_2 texCoord_80,45 size_8,8
+++Transform_0.07,0.05,-0.02 name_jeepSteering
+++Shape modID_4 matID_1 texID_2 texCoord_80,55 size_10,10
END
Then in your code you can grab the transform node that belongs to the Jeep, and when you translate it across the screen all the Jeep's parts move together as one object. Yet, independently, all tires can rotate front & back using the texture transforms; the front wheels can turn left and right via their own node transform by a constrained degree; and the steering wheel can turn in the same direction as the wheels using its texture transform, but may rotate more or less, depending on the characteristics of the particular vehicle and how it handles within the scene or game.
With this type of system the process is a bit harder to construct, but once it is working it allows for simple automation of generating a scene graph. The downside is that while the text file is easy to read at the human level, parsing a text file is much harder than parsing a binary file; most of the work lands in the function of your Scene class that parses and creates all of these different nodes (all geometry/shape nodes, all lights, and the material, texture and transform nodes).
The reason for having the geometry and lights derive from a base class is that when you create a ShapeNode or LightNode attached or nested to a TransformNode, it can accept any derived type of the base class. This way you don't have to code your scene graph construction and parser to accept every specific type of geometry and light; it accepts any geometry or light and is simply told which type it is.
Now you can make this a bit easier by writing a parser that reads binary files. The plus side is that the parser is easier to write, as long as you know the structure of the file, how much data to read, and the expected type of data at the current read location. The downside is that you cannot read or set the file up manually as I demonstrated above with a human-readable text file. Going this route, you would want a decent hex editor that supports file templates, such as 010 Editor; that way you can add a template for your file structure, and when you read the binary file with the template applied you can check that the fields have the appropriate data types and values.
In truth, the best advice I can give you is this: the construct above is still good to follow; it may not be the best, but it is a good starting point. What you are learning right now appears to be OpenGL 1.0, which is outdated and deprecated. You should set aside this relic of an API and learn modern OpenGL, any version higher than 3.3. From there you can learn to build and write shaders in GLSL, which will simplify a lot of things once you have your framework up and working. Rendering from the GPU instead of pushing everything from the CPU each frame is much more efficient. Once that is in place, look into the concept of batch rendering: it lets you control how many buckets there are and how many vertices each bucket contains, and it prevents the bottleneck caused by the rate of I/O from the CPU over the bus to the GPU, since GPUs do computation much faster than CPUs. With your scene graph you then no longer have to worry about creating lights and materials and applying them, for all of that is done in your fragment shader (GLSL), or what DirectX calls a pixel shader. All of your vertex and index information is passed into the vertex shader; then it is just a matter of linking your shaders into an OpenGL program.
If you would like to see and learn about everything I have described above then visit the community that I've been a member of since 2007-08 at www.MarekKnows.com and join our community. He has several video tutorial series to learn from.

Proper gluLookAt for gluCylinder

I'm trying to draw a cylinder in a specific direction with gluCylinder. To specify the direction I use gluLookAt, however, as so many before me, I am not sure about the "up" vector and thus can't get the cylinder to point to the correct direction.
I've read from another SO answer that
The intuition behind the "up" vector in gluLookAt is simple: Look at anything. Now tilt your head 90 degrees. Where you are hasn't changed, the direction you're looking at hasn't changed, but the image in your retina clearly has. What's the difference? Where the top of your head is pointing to. That's the up vector.
It is a simple explanation, but in the case of my cylinder I feel like the up vector is totally unimportant. Since a cylinder can be rotated around its axis and still look the same, a different up vector wouldn't change anything. So there should be infinitely many valid up vectors for my problem: all vectors orthogonal to the vector from the start point to the end point.
So this is what I do:
I have the world coordinates of where the start-point and end-point of the cylinder should be, A_world and B_world.
I project them to viewport coordinates A_vp and B_vp with gluProject:
GLdouble A_vp[3], B_vp[3], up[3], model[16], projection[16];
GLint gl_viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, &model[0]);
glGetDoublev(GL_PROJECTION_MATRIX, &projection[0]);
glGetIntegerv(GL_VIEWPORT, gl_viewport);
gluProject(A_world[0], A_world[1], A_world[2], &model[0], &projection[0], &gl_viewport[0], &A_vp[0], &A_vp[1], &A_vp[2]);
gluProject(B_world[0], B_world[1], B_world[2], &model[0], &projection[0], &gl_viewport[0], &B_vp[0], &B_vp[1], &B_vp[2]);
I call glOrtho to reset the camera to its default position (negative z into the picture, x to the right, y up):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, vp_edgelen, vp_edgelen, 0, 25, -25);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
I translate to coordinate A_vp, calculate the up vector as a vector normal to A_vp - B_vp, and specify the view with gluLookAt:
glTranslatef(A_vp[0], gl_viewport[2] - A_vp[1], A_vp[2]);
glMatrixMode(GL_MODELVIEW);
GLdouble up[3] = {A_vp[1] * B_vp[2] - A_vp[2] * B_vp[1],
A_vp[2] * B_vp[0] - A_vp[0] * B_vp[2],
A_vp[0] * B_vp[1] - A_vp[1] * B_vp[0]};
gluLookAt(0, 0, 0,
B_vp[0], gl_viewport[2] - B_vp[1], B_vp[2],
up[0], up[1], up[2]);
I draw the cylinder with gluCylinder:
GLUquadricObj *gluCylObj = gluNewQuadric();
gluQuadricNormals(gluCylObj, GLU_SMOOTH);
gluQuadricOrientation(gluCylObj, GLU_OUTSIDE);
gluCylinder(gluCylObj, 10, 10, 50, 10, 10);
Here is the unexpected result:
Since the cylinder starts at the correct position and since I was able to draw a circle at position B_vp, the only thing that must be wrong is the "up" vector in gluLookAt, right?
gluLookAt() is not necessary to achieve the proper result; it is a camera call. It is enough to rotate the current z axis to point in the direction the cylinder should point, since gluCylinder extrudes along +z.
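A hedged sketch of that (the function is an assumption, not the asker's code): rotate +z onto the direction d = B - A via the axis z x d and the angle between them:
#include <cmath>
#include <GL/glu.h>
void drawCylinderBetween(const double A[3], const double B[3], double r)
{
    double d[3] = { B[0]-A[0], B[1]-A[1], B[2]-A[2] };
    double len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
    double ax = -d[1], ay = d[0];               // cross((0,0,1), d); its z part is 0
    double angle = std::acos(d[2] / len) * 180.0 / M_PI;
    glPushMatrix();
    glTranslated(A[0], A[1], A[2]);
    if (ax*ax + ay*ay > 1e-12)
        glRotated(angle, ax, ay, 0.0);
    else if (d[2] < 0.0)
        glRotated(180.0, 1.0, 0.0, 0.0);        // d points exactly along -z
    GLUquadric *q = gluNewQuadric();
    gluCylinder(q, r, r, len, 16, 1);           // base r, top r, height len
    gluDeleteQuadric(q);
    glPopMatrix();
}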

OpenGL: why are spheres with the same X coordinate not in a straight line in the output?

I am working on a C++ project written with MFC templates.
Using the OpenGL library I am drawing spheres at specific coordinates. I move to each coordinate with the glTranslatef function, but when I draw two spheres with the same X coordinate, they look as if their x differs.
For example, when I draw two spheres at (x,y,z):(1,1,0) and (x,y,z):(1,2,0), the output is this:
this view is from above:
This is my function for drawing the spheres:
void MYGLView::DrawSphere(double X_position, double Y_Position, double Z_Position,
GLdouble radius, int longitudeSubdiv, int latitudeSubdiv,
double Red, double Green,double Blue)
{
gluQuadricDrawStyle(m_quadrObj, GLU_FILL);
float shininess = 64.0f;
glPushMatrix();
glTranslatef(X_position,Y_Position,Z_Position);
glColor3f(Red,Green,Blue);
gluSphere(m_quadrObj,radius,longitudeSubdiv,latitudeSubdiv);
//glTranslatef(-3,0,0);
glFlush();
glPopMatrix();
}
Can you tell me where I made a mistake?
Your camera is slightly tilted downwards; therefore you have a vanishing point for all vertical lines. If you want all vertical lines to remain parallel on the screen, your camera must not tilt downwards. Alternatively, you can use a parallel (orthographic) projection, where all lines that are parallel in the world remain parallel in the image.
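For the parallel-projection route, a minimal sketch (the bounds are made up; pick them to match your scene):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-10.0, 10.0, -10.0, 10.0, 0.1, 100.0); // left, right, bottom, top, near, far
glMatrixMode(GL_MODELVIEW);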

Light and shadow not working in OpenGL and C++

I am creating the solar system and I keep running into problems with the lighting. The first problem is that the moon casts no shadow on the earth, and the earth casts no shadow on the moon.
The other problem is that the light shining on the earth and the moon is not coming from my sun, but from the center point of the orbit. I added the red lines in the picture below to show what I mean.
The picture below should illustrate my two problems.
Here is the code that is dealing with the lights and the planets.
glDisable(GL_LIGHTING);
drawCircle(800, 720, 1, 50);
//SUN
//Picture location, major radius, minor radius, major orbit, minor orbit, angle
Planet Sun ("/home/rodrtu/Desktop/SolarSystem/images/Sun.png",
100, 99, 200.0, 0.0, 0.0);
double sunOrbS = 0;
double sunRotS = rotatSpeed/10;
cout << sunRotS << " Sun Rotation" << endl;
//orbit speed, rotation speed, moon reference coordinates (Parent planet's major and minor Axis)
Sun.displayPlanet(sunOrbS, sunRotS, 0.0, 0.0);
//Orbit path
//EARTH
GLfloat light_diffuse[] = { 1.5, 1.5, 1.5, 1.5 };
GLfloat pos[] = { 0.0, 0.0, 0.0, 200.0 };
glEnable(GL_LIGHTING);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_POSITION, pos);
Planet Earth ("/home/rodrtu/Desktop/SolarSystem/images/EarthTopography.png",
50, 49, 500.0, 450.0, 23.5);
double eaOrbS = orbitSpeed;
double eaRotS = rotatSpeed*3;
Earth.displayPlanet(eaOrbS, eaRotS, 0.0, 0.0);
//EARTH'S MOON
Planet Moon ("/home/rodrtu/Desktop/SolarSystem/images/moonTest.png",
25, 23, 100.0, 100.0, 15);
double moOrbS = rotatSpeed*4;
double moRotS = eaOrbS;
Moon.displayPlanet(moOrbS, moRotS, Earth.getMajorAxis(), Earth.getMinorAxis());
orbitSpeed+=.9;
if (orbitSpeed > 359.0)
orbitSpeed = 0.0;
rotatSpeed+=2.0;
if (rotatSpeed > 7190.0)
rotatSpeed = 0.0;
These next functions are used to determine the orbit coordinates and location of each planet:
void Planet::setOrbit(double orbitSpeed, double rotationSpeed,
double moonOrbitX, double moonOrbitY)
{
majorAxis = orbitSemiMajor * cos(orbitSpeed / 180.0 * Math::Constants<double>::pi);
minorAxis = orbitSemiMinor * sin(orbitSpeed / 180.0 * Math::Constants<double>::pi);
glTranslated(majorAxis+moonOrbitX, minorAxis+moonOrbitY, 0.0);
glRotatef(orbitAngle, 0.0, 1.0, 1.0);
glRotatef(rotationSpeed, 0.0, 0.0, 1.0);
}
void Planet::displayPlanet(double orbitSpeed,double rotationSpeed,
double moonOrbitX, double moonOrbitY)
{
GLuint surf;
Images::RGBImage surfaceImage;
surfaceImage=Images::readImageFile(texture);
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &surf);
glBindTexture(GL_TEXTURE_2D, surf);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
surfaceImage.glTexImage2D(GL_TEXTURE_2D,0,GL_RGB);
glPushMatrix();
setOrbit(orbitSpeed,rotationSpeed, moonOrbitX, moonOrbitY);
drawSolidPlanet(equatRadius, polarRadius, 1, 40, 40);
glPopMatrix();
}
What am I doing wrong? I read up on the w component of GL_POSITION and I changed my position to be 200 (where the sun is centered), but the light source is still coming from the center of the orbit.
To give a proper reply to the light position issue:
[X, Y, Z, W] are called homogeneous coordinates.
A coordinate [X, Y, Z, W] in homogeneous space will be [X/W, Y/W, Z/W] in 3D space.
Now, consider the following W values :
W=1.0 : [1.0, 1.0, 1.0, 1.0] is [1.0, 1.0, 1.0] in 3D space.
W=0.1 : [1.0, 1.0, 1.0, 0.1] is [10.0, 10.0, 10.0] in 3D space.
W=0.001 : [1.0, 1.0, 1.0, 0.001] is [1000.0, 1000.0, 1000.0] in 3D space.
As we keep moving towards W=0, the [X/W, Y/W, Z/W] values approach a point at infinity. It's actually no longer a point, but a direction from [0,0,0] towards [X,Y,Z].
So when defining the light position we need to make sure to get this right.
W=0 defines a directional light, so x,y,z is a direction vector.
W=1 defines a positional light, so x,y,z is a position in 3D space.
You'll get to play around with this a lot once you dig deeper into matrix math. If you try to transform a direction (W=0) with a translation matrix, for example, it will have no effect. This is very relevant here as well, since the light position is transformed by the modelview matrix.
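Applied to the question's code: pos = { 0.0, 0.0, 0.0, 200.0 } divides down to [0/200, 0/200, 0/200], i.e. the origin, which is exactly why the light still comes from the center of the orbit. A minimal sketch of the fix (the sun's coordinates here are an assumption):
GLfloat sunPos[4] = { 200.0f, 0.0f, 0.0f, 1.0f };  // w = 1: a point light at the sun
glLightfv(GL_LIGHT0, GL_POSITION, sunPos);         // set after the camera transform
// w = 0 would instead give a directional light with parallel rays from +x:
// GLfloat sunDir[4] = { 1.0f, 0.0f, 0.0f, 0.0f };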
Some easy to understand information here for further reading :
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
If OpenGL doesn't have a "cast shadow" function, how could I accomplish this then?
What you must understand is that OpenGL has no concept of a "scene". All OpenGL does is draw points, lines or triangles to the screen, one at a time. After something is drawn, it has no influence on the following drawing operations.
So to do something fancy like shadows, you must get, well, artistic. By that I mean: like an artist who paints a picture that appears to have depth with "just" a brush and a palette of colours, you must use OpenGL in an artistic way to recreate the effects you desire. Drawing a shadow can be done in various ways, but the most popular one is known by the term Shadow Mapping.
Shadow Mapping is a two-step process. In the first step the scene is rendered into a "grayscale" picture as "seen" from the point of view of the light, where the distance from the light is drawn as the "gray" value. This is called a Shadow Depth Map.
In the second step the scene is drawn as usual, with the light's shadow depth map(s) projected into the scene as if the light were a slide projector (everything receives that image, as OpenGL doesn't shadow). In a shader, the depth value in the shadow depth map is compared with the actual distance to the light source for each processed fragment. If the distance to the light is farther than the corresponding pixel in the shadow map, it means something got in front of the currently processed geometry fragment while rendering the shadow map; that fragment hence lies in shadow and is drawn in a shadow color (usually the ambient illumination color). You might want to combine this with an Ambient Occlusion effect to simulate soft, self-shadowing ambient illumination.

How to draw a filled envelope like a cone in OpenGL (using GLUT)?

I am using freeglut for OpenGL rendering...
I need to draw an envelope looking like a cone (2D) that has to be filled with some color, with some transparency applied.
Is the freeglut toolkit equipped with built-in functionality (or some trick) to draw filled geometry?
Or is there some other API that has built-in support for filled geometry?
Edit1:
Just to clarify the 2D cone thing... the envelope is the graphical representation of the coverage area of an aircraft during interception (of an enemy aircraft); it resembles a sector of a circle. I should have said sector instead.
And glutSolidCone does not help me, as I want to draw a filled sector of a circle, which I have already drawn. What remains is to fill it with some color.
How do I fill geometry with color in OpenGL?
Edit2:
All the answers posted to this question could work for my problem in some way,
but I would definitely like to know how to fill a geometry with some color.
Say I want to draw an envelope which is a parabola; in that case there would be no default glut function to draw a filled parabola (or is there?).
So, to generalise this question: how do I draw a custom geometry in some solid color?
Edit3:
The answer that mstrobl posted works for GL_TRIANGLES, but for code such as this:
glBegin(GL_LINE_STRIP);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 0.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(200.0, 0.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(200.0, 200.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 200.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 0.0, 0.0);
glEnd();
which draws a square, only a wireframe square is drawn; I need to fill it with blue color.
Is there any way to do it?
If I put in drawing commands for a closed curve, like a pie, is there a way to fill it with a color?
I don't know how it's possible even for GL_TRIANGLES... but how do I do it for an arbitrary closed curve?
On Edit3: the way I understand your question, you want OpenGL to draw the borders and fill everything between them with color.
The idea you had was right, but a line strip is just that - a strip of lines - and it does not have any area.
You can, however, have the lines connect to each other to define a polygon. That will fill the area of the polygon on a per-vertex basis. Adapting your code:
glBegin(GL_POLYGON);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 0.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(200.0, 0.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(200.0, 200.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 200.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 0.0, 0.0);
glEnd();
Please note, however, that drawing a polygon this way has two limitations:
The polygon must be convex.
This is a slow operation.
But I assume you just want to get the job done, and this will do it. In the future, you might consider triangulating your polygon.
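If your shape is a sector ("pie") of a circle, a GL_TRIANGLE_FAN avoids the convexity worry and is the usual approach; a hedged sketch (the function and its parameters are made up):
#include <cmath>
#include <GL/gl.h>
void drawFilledSector(float cx, float cy, float radius,
                      float startDeg, float endDeg, int segments)
{
    glColor4f(0.0f, 0.0f, 1.0f, 0.5f);   // blue, half transparent (needs blending)
    glBegin(GL_TRIANGLE_FAN);
    glVertex2f(cx, cy);                  // fan center
    for (int i = 0; i <= segments; ++i) {
        float a = (startDeg + (endDeg - startDeg) * i / segments)
                  * 3.14159265f / 180.0f;
        glVertex2f(cx + radius * std::cos(a), cy + radius * std::sin(a));
    }
    glEnd();
}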
I'm not sure what you mean by "an envelope", but a cone is a primitive that glut has:
glutSolidCone(radius, height, number_of_slices, number_of_stacks)
The easiest way to fill it with color is to draw it with color. Since you want to make it somewhat transparent, you need an alpha value too:
glColor4f(float red, float green, float blue, float alpha)
// rgb and alpha run from 0.0f to 1.0f; in the example here alpha of 1.0 will
// mean no transparency, 0.0 total transparency. Call before drawing.
To render translucently, blending has to be enabled, and you must set the blending function to use. What you want to do will probably be achieved with the following setup. If you want to learn more, drop me a comment and I will look for some good pointers:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Call that before doing any drawing operations, possibly at program initialization. :)
Since you reclarified your question to ask for a pie: there's an easy way to draw that too using OpenGL primitives.
You'd draw a solid sphere using glutSolidSphere(); since you only want to draw part of it, you just clip the unwanted parts away:
void glClipPlane(GLenum plane, const GLdouble * equation);
With plane being GL_CLIP_PLANE0 to GL_CLIP_PLANEn, and equation being a plane equation in normal form (a*x + b*y + c*z + d = 0 means equation holds the values { a, b, c, d }). Please note that those are doubles, not floats.
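A hedged usage sketch (plane and sphere values are made up): the clip plane keeps everything where a*x + b*y + c*z + d >= 0, so this keeps the y >= 0 half of the sphere:
GLdouble eq[4] = { 0.0, 1.0, 0.0, 0.0 };  // a=0, b=1, c=0, d=0
glClipPlane(GL_CLIP_PLANE0, eq);
glEnable(GL_CLIP_PLANE0);
glutSolidSphere(50.0, 32, 32);            // radius, slices, stacks
glDisable(GL_CLIP_PLANE0);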
I remember there was a subroutine for that, but it's not too hard to do yourself.
I don't quite understand the 2D thing, though. A cone in 2D? Isn't that just a triangle?
Anyway, here's an algorithm for drawing a cone in OpenGL:
First take a circle and subdivide it evenly so that you get a nice number of edges.
Now pick the center of the circle and make triangles from the edges to the center. Then select a point above the circle and make triangles from the edges to that point.
The size, shape and orientation depend on the values you use to generate the circle and the two points. Every step is rather simple and shouldn't cause you trouble.
First just subdivide a scalar range. Start from [0-2]. Take the midpoint ((start+end)/2) and split the range with it. Store the values as pairs. For instance, subdividing once should give you: [(0,1), (1,2)]. Do this recursively a couple of times, then calculate where those points land on the circle. Simple trigonometry; just remember to multiply the values by pi before proceeding. After this you have a certain number of edges, 2^n, where n is the number of subdivisions. You can then turn them into triangles by giving each one more vertex, so the number of triangles ends up being 2^(n+1). (The counts are useful to know if you are using fixed-size arrays.)
Edit: What you really want is a pie. (Sorry for the pun.)
It's equally simple to render. You can again use just triangles: select the scalar range [-0.25, 0.25], subdivide, project to the circle, and generate one set of triangles.
The scalar-to-circle projection is simply x = cos(v*pi)*r, y = sin(v*pi)*r, where (x,y) is the resulting vertex, r is the radius, and the trigonometric functions work in radians, not degrees (if they work in degrees, replace pi with 180).
Use vertex buffers or lists to render it yourself.
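A hedged sketch of exactly that (names are made up): generate the [-0.25, 0.25] range projected onto the circle into a vertex array and draw it as one fan:
#include <cmath>
#include <vector>
#include <GL/gl.h>
void drawPie(float r, int subdivisions)
{
    std::vector<float> verts;
    verts.push_back(0.0f); verts.push_back(0.0f);        // pie center
    int points = 1 << subdivisions;                      // 2^n edges
    for (int i = 0; i <= points; ++i) {
        float v = -0.25f + 0.5f * i / points;            // scalar in [-0.25, 0.25]
        verts.push_back(std::cos(v * 3.14159265f) * r);  // x = cos(v*pi)*r
        verts.push_back(std::sin(v * 3.14159265f) * r);  // y = sin(v*pi)*r
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts.data());
    glDrawArrays(GL_TRIANGLE_FAN, 0, (GLsizei)(verts.size() / 2));
    glDisableClientState(GL_VERTEX_ARRAY);
}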
Edit: About the coloring question: use glColor4f. If you want some parts of the geometry to differ in color, you can assign a color to each vertex in the vertex buffer itself. I don't know all the API calls for that off the top of my head, but the OpenGL API reference is quite understandable.
On the edit on colors:
OpenGL is actually a state machine. This means that the current material and/or color state is used when drawing. Since you probably won't be using materials, ignore those for now. You want colors.
glColor3f(float r, float g, float b) // draw with r/g/b color and alpha of 1
glColor4f(float r, float g, float b, float alpha)
This will affect the colors of any vertices you draw, of any geometry you render - be it GLU's or your own - after the glColorXX call has been executed. If you draw a face and change the color between the glVertex3f/glVertex2f calls, the colors are interpolated.
Try this:
glBegin(GL_TRIANGLES);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(-3.0, 0.0, 0.0);
glColor3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 3.0, 0.0);
glColor3f(1.0, 0.0, 0.0);
glVertex3f(3.0, 0.0, 0.0);
glEnd();
But I pointed at glColor4f already, so I assume you want to set the colors on a per-vertex basis, rendering with vertex arrays.
Just as you can supply an array of vertices, you can also supply an array of colors: all you need to do is enable the color array and tell OpenGL where it resides. Of course, it needs to match the layout of the vertex array (same order).
If you had
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices_);
glDisableClientState(GL_VERTEX_ARRAY);
you should add colors this way. They need not be floats; in fact, you tell OpenGL what format they are in. For a color array with 4 channels (R, G, B and A) at 1 byte per channel, use this:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices_);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors_);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
EDIT: Forgot to add that you then have to tell OpenGL which elements to draw by calling glDrawElements.
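A hedged, self-contained sketch tying it together (the triangle data is made up):
#include <GL/gl.h>
// one triangle with per-vertex colors, drawn from arrays via glDrawElements
static const GLfloat vertices_[] = {
    -3.0f, 0.0f, 0.0f,
     0.0f, 3.0f, 0.0f,
     3.0f, 0.0f, 0.0f,
};
static const GLubyte colors_[] = {   // 4 bytes per vertex: R, G, B, A
      0,   0, 255, 255,
      0, 255,   0, 255,
    255,   0,   0, 255,
};
static const GLushort indices_[] = { 0, 1, 2 };
void drawColoredTriangle()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices_);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors_);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, indices_);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}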