OpenGL: Deform (scale) stencil shadow from light source

I have a basic stencil shadow functioning in my game engine. I'm trying to deform the shadow based on the lighting direction, which I have:
/*
 * @brief Applies translation, rotation and scale for the shadow of the specified
 * entity. In order to reuse the vertex arrays from the primary rendering
 * pass, the shadow origin must be transformed into model-view space.
 */
static void R_RotateForMeshShadow_default(const r_entity_t *e) {
    vec3_t origin, delta;

    if (!e) {
        glPopMatrix();
        return;
    }

    R_TransformForEntity(e, e->lighting->shadow_origin, origin);

    VectorSubtract(e->lighting->shadow_origin, e->origin, delta);
    const vec_t scale = 1.0 + VectorLength(delta) / LIGHTING_MAX_SHADOW_DISTANCE;

    /*const vec_t dot = DotProduct(e->lighting->shadow_normal, e->lighting->dir);
    const vec_t sy = sin(Radians(e->angles[YAW]));
    const vec_t cy = cos(Radians(e->angles[YAW]));*/

    glPushMatrix();

    glTranslatef(origin[0], origin[1], origin[2] + 1.0);
    glRotatef(-e->angles[PITCH], 0.0, 1.0, 0.0);
    glScalef(scale, scale, 0.0);
}
I've commented out the dot product of the ground plane (shadow_normal) and the lighting direction, as well as the sine and cosine of the model's yaw, because while I'm fairly sure they're what I need to augment the scale of the shadow, I don't know the correct formula to yield a perspective-correct deformation. For someone who better understands projections, this is probably child's play; for me, I'm stabbing in the dark.
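For reference, the standard planar shadow projection from the SGI notes boils down to a single matrix built from the ground plane and the light position. A minimal sketch in the engine's vector types (the helper name and the homogeneous light convention here are illustrative, not engine code):
/*
 * Sketch only: the planar shadow matrix M = (n . l + d * lw) * I - l (x) (n, d),
 * laid out in OpenGL's column-major order. The plane is n . x + d = 0 (for a
 * Quake plane n . x = dist, that means d = -dist); l is a homogeneous light
 * position, with lw = 0.0 for a directional light.
 */
static void R_ShadowMatrix_sketch(GLfloat m[16], const vec3_t n, vec_t d, const vec4_t l) {

    const GLfloat dot = n[0] * l[0] + n[1] * l[1] + n[2] * l[2] + d * l[3];

    m[0]  = dot - l[0] * n[0]; m[4]  = 0.0 - l[0] * n[1]; m[8]  = 0.0 - l[0] * n[2]; m[12] = 0.0 - l[0] * d;
    m[1]  = 0.0 - l[1] * n[0]; m[5]  = dot - l[1] * n[1]; m[9]  = 0.0 - l[1] * n[2]; m[13] = 0.0 - l[1] * d;
    m[2]  = 0.0 - l[2] * n[0]; m[6]  = 0.0 - l[2] * n[1]; m[10] = dot - l[2] * n[2]; m[14] = 0.0 - l[2] * d;
    m[3]  = 0.0 - l[3] * n[0]; m[7]  = 0.0 - l[3] * n[1]; m[11] = 0.0 - l[3] * n[2]; m[15] = dot - l[3] * d;

    // glMultMatrixf(m) after positioning the plane flattens subsequent geometry onto it
}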

I was eventually able to achieve the desired effect by managing my own matrices and adapting code from SGI's OpenGL Cookbook. The code uses LordHavoc's matrix library from his DarkPlaces Quake engine. Inline comments call out the major steps. Here's the full code:
/*
 * @brief Projects the model-view matrix for the given entity onto the shadow
 * plane. A perspective shear is then applied using the standard planar shadow
 * deformation from SGI's cookbook, adjusted for Quake's negative planes:
 *
 * ftp://ftp.sgi.com/opengl/contrib/blythe/advanced99/notes/node192.html
 */
static void R_RotateForMeshShadow_default(const r_entity_t *e, r_shadow_t *s) {
    vec4_t pos, normal;
    matrix4x4_t proj, shear;
    vec_t dot;

    if (!e) {
        glPopMatrix();
        return;
    }

    const cm_bsp_plane_t *p = &s->plane;

    // project the entity onto the shadow plane
    vec3_t vx, vy, vz, t;
    Matrix4x4_ToVectors(&e->matrix, vx, vy, vz, t);

    dot = DotProduct(vx, p->normal);
    VectorMA(vx, -dot, p->normal, vx);

    dot = DotProduct(vy, p->normal);
    VectorMA(vy, -dot, p->normal, vy);

    dot = DotProduct(vz, p->normal);
    VectorMA(vz, -dot, p->normal, vz);

    dot = DotProduct(t, p->normal) - p->dist;
    VectorMA(t, -dot, p->normal, t);

    Matrix4x4_FromVectors(&proj, vx, vy, vz, t);

    glPushMatrix();

    glMultMatrixf((GLfloat *) proj.m);

    // transform the light position and shadow plane into model space
    Matrix4x4_Transform(&e->inverse_matrix, s->illumination->light.origin, pos);
    pos[3] = 1.0;

    const vec_t *n = p->normal;
    Matrix4x4_TransformPositivePlane(&e->inverse_matrix, n[0], n[1], n[2], p->dist, normal);

    // calculate shearing, accounting for Quake's negative plane equation
    normal[3] = -normal[3];
    dot = DotProduct(pos, normal) + pos[3] * normal[3];

    shear.m[0][0] = dot - pos[0] * normal[0];
    shear.m[1][0] = 0.0 - pos[0] * normal[1];
    shear.m[2][0] = 0.0 - pos[0] * normal[2];
    shear.m[3][0] = 0.0 - pos[0] * normal[3];
    shear.m[0][1] = 0.0 - pos[1] * normal[0];
    shear.m[1][1] = dot - pos[1] * normal[1];
    shear.m[2][1] = 0.0 - pos[1] * normal[2];
    shear.m[3][1] = 0.0 - pos[1] * normal[3];
    shear.m[0][2] = 0.0 - pos[2] * normal[0];
    shear.m[1][2] = 0.0 - pos[2] * normal[1];
    shear.m[2][2] = dot - pos[2] * normal[2];
    shear.m[3][2] = 0.0 - pos[2] * normal[3];
    shear.m[0][3] = 0.0 - pos[3] * normal[0];
    shear.m[1][3] = 0.0 - pos[3] * normal[1];
    shear.m[2][3] = 0.0 - pos[3] * normal[2];
    shear.m[3][3] = dot - pos[3] * normal[3];

    glMultMatrixf((GLfloat *) shear.m);

    Matrix4x4_Copy(&s->matrix, &proj);
}
The full implementation of this lives here:
https://github.com/jdolan/quake2world/blob/master/src/client/renderer/r_mesh_shadow.c


Issue with Picking (custom unProject() function)

I'm currently working on an STL file viewer. This one uses an Arcball camera.
To provide more features in this viewer (which can handle more than one object), I would like to implement click-to-select. To achieve it, I have used picking (pseudo code I have used).
At this time, my code that checks for any 3D object between two points works. However, the conversion of the mouse position into a correct set of vectors is far from working:
glm::vec3 range = transform.GetPosition() + ( transform.GetFront() * 1000.0f);
// x and y are cursor position on the screen
glm::vec3 start = UnProject(x,y, transform.GetPosition().z);
glm::vec3 end = UnProject(x,y,range.z);
/*
 * The code that iterates over all objects in the scene and checks for collision
 * between my start / end segment and the object hitbox
 */
As you can see, I have tried (maybe it is stupid) to set the z distance between my start and my end to 1000 * the front vector of my camera. But it's not working; the set of vectors I get is incoherent.
For example, placing the camera at (0, 0, 0) with a front of (0, 0, -1) gives me this set of vectors:
Start : 0.0000~ , 0.0000~ , 0.0000~
End : 0.0000~ , 0.0000~ , 0.0000~
which is (by my logic) incoherent; I would have expected something more like (Start: 0, 0, 0), (End: 0, 0, -1000).
I think there's an issue with my UnProject function:
glm::vec3 UnProject(float winX, float winY, float winZ)
{
    // Compute (projection x modelView) ^ -1:
    glm::mat4 modelView = GetViewMatrix() * glm::mat4(1.0f);
    glm::mat4 projection = GetProjectionMatrix(ScreenSize);
    const glm::mat4 m = glm::inverse(projection * modelView);

    // Need to invert Y since the screen Y-origin points down,
    // while the 3D Y-origin points up (this is an OpenGL-only requirement):
    winY = ScreenSize.cy - winY;

    // Transformation to normalized coordinates between -1 and 1:
    glm::vec4 in;
    in.x = winX / ScreenSize.cx * 2.0 - 1.0;
    in.y = winY / ScreenSize.cy * 2.0 - 1.0;
    in.z = 2.0 * winZ - 1.0;
    in.w = 1.0;

    // To world coordinates:
    glm::vec4 out(m * in);
    if (out.w == 0.0) // Avoid a division by zero
    {
        return glm::vec3(0.0f);
    }

    out.w = 1.0 / out.w;
    return glm::vec3(out.x * out.w, out.y * out.w, out.z * out.w);
}
Since this function is a basic rewrite of the pseudo code (from here), and I'm far from being good at mathematics, I don't really see what could go wrong...
PS: my view matrix (provided by GetViewMatrix()) is correct (since I use it to show my scene)
my projection matrix is also correct
the ScreenSize object carries my viewport size
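As a sanity check, GLM already ships an unProject that performs the same inverse transform; a minimal sketch reusing the GetViewMatrix / GetProjectionMatrix / ScreenSize names from above (the wrapper itself is illustrative, not part of the viewer):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Illustrative wrapper: compare its output with the hand-written UnProject().
glm::vec3 UnProjectWithGlm(float winX, float winY, float winZ)
{
    const glm::mat4 view = GetViewMatrix();
    const glm::mat4 proj = GetProjectionMatrix(ScreenSize);
    const glm::vec4 viewport(0.0f, 0.0f, float(ScreenSize.cx), float(ScreenSize.cy));

    // glm::unProject expects the window Y origin at the bottom, so flip it here too.
    const glm::vec3 win(winX, float(ScreenSize.cy) - winY, winZ);
    return glm::unProject(win, view, proj, viewport);
}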
I have found what's wrong: the returned vec3 should be made by dividing each component by the perspective component (w) instead of being multiplied by it. Here is the new UnProject function:
glm::vec3 UnProject2(float winX, float winY, float winZ)
{
    glm::mat4 View = GetViewMatrix() * glm::mat4(1.0f);
    glm::mat4 projection = GetProjectionMatrix(ScreenSize);
    glm::mat4 viewProjInv = glm::inverse(projection * View);

    winY = ScreenSize.cy - winY;

    glm::vec4 clickedPointOnScreen;
    clickedPointOnScreen.x = ((winX - 0.0f) / (ScreenSize.cx)) * 2.0f - 1.0f;
    clickedPointOnScreen.y = ((winY - 0.0f) / (ScreenSize.cy)) * 2.0f - 1.0f;
    clickedPointOnScreen.z = 2.0f * winZ - 1.0f;
    clickedPointOnScreen.w = 1.0f;

    glm::vec4 clickedPointOrigin = viewProjInv * clickedPointOnScreen;

    return glm::vec3(clickedPointOrigin.x / clickedPointOrigin.w,
                     clickedPointOrigin.y / clickedPointOrigin.w,
                     clickedPointOrigin.z / clickedPointOrigin.w);
}
I also changed the way start and end are calculated:
glm::vec3 start = UnProject2(x,y,0.0f);
glm::vec3 end = UnProject2(x,y,1.0f);
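With start and end computed this way, the per-object check can be a simple segment vs. axis-aligned box test; a minimal sketch (the Aabb type and its min/max members are hypothetical, not from the viewer):
#include <algorithm>
#include <cmath>
#include <glm/glm.hpp>

struct Aabb { glm::vec3 min, max; }; // hypothetical hitbox type

// Slab test: returns true if the segment [start, end] crosses the box.
bool SegmentHitsAabb(const glm::vec3& start, const glm::vec3& end, const Aabb& box)
{
    const glm::vec3 dir = end - start;
    float tMin = 0.0f, tMax = 1.0f;

    for (int axis = 0; axis < 3; ++axis) {
        if (std::abs(dir[axis]) < 1e-8f) {
            // Segment parallel to this slab: reject if it lies outside it.
            if (start[axis] < box.min[axis] || start[axis] > box.max[axis])
                return false;
        } else {
            float t0 = (box.min[axis] - start[axis]) / dir[axis];
            float t1 = (box.max[axis] - start[axis]) / dir[axis];
            if (t0 > t1) std::swap(t0, t1);
            tMin = std::max(tMin, t0);
            tMax = std::min(tMax, t1);
            if (tMin > tMax)
                return false;
        }
    }
    return true;
}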

Implement camera with off-axis projection

I'm trying to create a 3D viewer for a parallax barrier display, but I'm stuck with camera movements. You can see a parallax barrier display at: displayblocks.org
Multiple views are needed for this effect. This tutorial provides code for calculating the interViewpointDistance depending on the display properties, and thus for selecting the head position.
Here are the parts of the code involved in the matrix creation:
for (y = 0; y < viewsCountY; y++) {
    for (x = 0; x <= viewsCountX; x++) {
        viewMatrix = glm::mat4(1.0f);

        // selection of the head position
        float cameraX = (float(x - int(viewsCountX / 2))) * interViewpointDistance;
        float cameraY = (float(y - int(viewsCountY / 2))) * interViewpointDistance;
        camera.Position = glm::vec3(camera.Position.x + cameraX, camera.Position.y + cameraY, camera.Position.z);

        // Move the apex of the frustum to the origin.
        viewMatrix = glm::translate(viewMatrix, -camera.Position);

        projectionMatrix = get_off_Axis_Projection_Matrix();

        // render's stuff
        // (...)
        // glfwSwapBuffers();
    }
}
The following code is the projection matrix function. I use Robert Kooima's paper on generalized perspective projection.
glm::mat4 get_off_Axis_Projection_Matrix() {

    glm::vec3 Pe = camera.Position;

    // screen corner coordinates (world-space points)
    glm::vec3 Pa = glm::vec3(-screenSizeX, -screenSizeY, 0.0); // lower left
    glm::vec3 Pb = glm::vec3( screenSizeX, -screenSizeY, 0.0); // lower right
    glm::vec3 Pc = glm::vec3(-screenSizeX,  screenSizeY, 0.0); // upper left

    // Compute an orthonormal basis for the screen.
    glm::vec3 Vr = Pb - Pa;
    Vr = glm::normalize(Vr);
    glm::vec3 Vu = Pc - Pa;
    Vu = glm::normalize(Vu);
    glm::vec3 Vn = glm::cross(Vr, Vu);
    Vn = glm::normalize(Vn);

    // Compute the screen corner vectors.
    glm::vec3 Va = Pa - Pe;
    glm::vec3 Vb = Pb - Pe;
    glm::vec3 Vc = Pc - Pe;

    // Find the distance from the eye to the screen plane.
    float d = -glm::dot(Va, Vn);

    // Find the extent of the perpendicular projection.
    float left   = glm::dot(Vr, Va) * const_near / d;
    float right  = glm::dot(Vr, Vb) * const_near / d;
    float bottom = glm::dot(Vu, Va) * const_near / d;
    float top    = glm::dot(Vu, Vc) * const_near / d;

    // Load the perpendicular projection.
    return glm::frustum(left, right, bottom, top, const_near, const_far + d);
}
These two methods work, and I can see that my multiple views are projected correctly.
But I can't manage to make a camera that works normally, like in an FPS, with tilt and pan.
This code, for example, gives me the "head tracking" effect (but with the mouse); it was handy to test projections, but this is not what I'm looking for.
float cameraX = (mouseX - windowWidth / 2) / (windowWidth * headDisplacementFactor);
float cameraY = (mouseY - windowHeight / 2) / (windowHeight * headDisplacementFactor);
camera.Position = glm::vec3(cameraX, cameraY, 60.0f);
viewMatrix = glm::translate(viewMatrix, -camera.Position);
My camera class works if the view matrix is created with lookAt. But with the off-axis projection, using lookAt will rotate the scene, and the correspondence between the near plane and the screen plane will be lost.
I may need to translate/rotate the screen corner coordinates Pa, Pb, Pc used to create the frustum, but I don't know how.
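One rough, untested sketch of that idea: keep the screen corners in world space and rotate them with the camera's orientation before building the frustum. The screenCenter, camera.Yaw and camera.Pitch names below are assumptions, not from the code above:
// Rough sketch: rotate the physical screen corners with the camera's orientation.
glm::mat3 R = glm::mat3(
    glm::rotate(glm::mat4(1.0f), glm::radians(camera.Yaw),   glm::vec3(0.0f, 1.0f, 0.0f)) *
    glm::rotate(glm::mat4(1.0f), glm::radians(camera.Pitch), glm::vec3(1.0f, 0.0f, 0.0f)));

glm::vec3 Pa = screenCenter + R * glm::vec3(-screenSizeX, -screenSizeY, 0.0f); // lower left
glm::vec3 Pb = screenCenter + R * glm::vec3( screenSizeX, -screenSizeY, 0.0f); // lower right
glm::vec3 Pc = screenCenter + R * glm::vec3(-screenSizeX,  screenSizeY, 0.0f); // upper left
Note that Kooima's paper additionally post-multiplies the frustum by the transpose of the (Vr, Vu, Vn) basis and a translation by -Pe, so any rotation applied to the corners has to stay consistent with that step.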

opengl camera zoom to cursor, avoiding > 90deg fov

I'm trying to set up a google maps style zoom-to-cursor control for my opengl camera. I'm using a similar method to the one suggested here. Basically, I get the position of the cursor, and calculate the width/height of my perspective view at that depth using some trigonometry. I then change the field of view, and calculate how to much I need to translate in order to keep the point under the cursor in the same apparent position on the screen. That part works pretty well.
The issue is that I want to limit the fov to be less than 90 degrees. When it ends up >90, I cut it in half and then translate everything away from the camera so that the resulting scene looks the same as with the larger fov. The equation to find that necessary translation isn't working, which is strange because it comes from pretty simple algebra. I can't find my mistake. Here's the relevant code.
void Visual::scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
    glm::mat4 modelview = view * model;
    glm::vec4 viewport = { 0.0, 0.0, width, height };

    float winX = cursorPrevX;
    float winY = viewport[3] - cursorPrevY;
    float winZ;
    glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    glm::vec3 screenCoords = { winX, winY, winZ };
    glm::vec3 cursorPosition = glm::unProject(screenCoords, modelview, projection, viewport);

    if (isinf(cursorPosition[2]) || isnan(cursorPosition[2])) {
        cursorPosition[2] = 0.0;
    }

    float zoomFactor = 1.1;
    // = zooming in
    if (yoffset > 0.0)
        zoomFactor = 1 / 1.1;

    // the width and height of the perspective view, at the depth of the cursor position
    glm::vec2 fovXY = camera.getFovXY(cursorPosition[2] - zTranslate, width / height);
    camera.setZoomFromFov(fovXY.y * zoomFactor, cursorPosition[2] - zTranslate);

    // don't want fov to be greater than 90, so cut it in half and move the world farther away from the camera to compensate
    // not working...
    if (camera.Zoom > 90.0 && zTranslate * 2 > MAX_DEPTH) {
        float prevZoom = camera.Zoom;
        camera.Zoom *= .5;
        // need increased distance between camera and world origin, so that view does not appear to change when fov is reduced
        zTranslate = cursorPosition[2] - tan(glm::radians(prevZoom)) / tan(glm::radians(camera.Zoom) * (cursorPosition[2] - zTranslate));
    }
    else if (camera.Zoom > 90.0) {
        camera.Zoom = 90.0;
    }

    glm::vec2 newFovXY = camera.getFovXY(cursorPosition[2] - zTranslate, width / height);

    // translate so that position under the cursor does not appear to move.
    xTranslate += (newFovXY.x - fovXY.x) * (winX / width - .5);
    yTranslate += (newFovXY.y - fovXY.y) * (winY / height - .5);

    updateView = true;
}
The definition of my view matrix, called every iteration of the main loop:
void Visual::setView() {
    view = glm::mat4();
    view = glm::translate(view, { xTranslate, yTranslate, zTranslate });
    view = glm::rotate(view, glm::radians(camera.inclination), glm::vec3(1.f, 0.f, 0.f));
    view = glm::rotate(view, glm::radians(camera.azimuth), glm::vec3(0.f, 0.f, 1.f));

    camera.Right = glm::column(view, 0).xyz();
    camera.Up = glm::column(view, 1).xyz();
    camera.Front = -glm::column(view, 2).xyz(); // minus because OpenGL camera looks towards negative Z.
    camera.Position = glm::column(view, 3).xyz();

    updateView = false;
}
Field of view helper functions.
glm::vec2 getFovXY(float depth, float aspectRatio) {
    float fovY = tan(glm::radians(Zoom / 2)) * depth;
    float fovX = fovY * aspectRatio;
    return glm::vec2{ 2 * fovX, 2 * fovY };
}

// you have a desired fov, and you want to set the zoom to achieve that.
// factor of 1/2 inside the atan because we actually need the half-fov. Keep full-fov as input for consistency.
void setZoomFromFov(float fovY, float depth) {
    Zoom = glm::degrees(2 * atan(fovY / (2 * depth)));
}
The equations I'm using can be found from the diagram here. Since I want to have the same field of view dimensions before and after the angle is changed, I start with
fovY = tan(theta1) * d1 = tan(theta2) * d2
d2 = (tan(theta1) / tan(theta2)) * d1
d1 = distance between camera and cursor position, before fov change = cursorPosition[2] - zTranslate
d2 = distance after
theta1 = fov angle before
theta2 = fov angle after = theta1 * .5
Appreciate the help.
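For reference, a standalone sketch of the distance compensation those equations describe, using half-angles the same way getFovXY does (the helper name and the clamp-free usage comment are illustrative, not the question's code):
#include <cmath>
#include <glm/glm.hpp>

// d2 = tan(theta1) / tan(theta2) * d1, with theta = half of the full fov.
float CompensatedDistance(float prevZoomDeg, float newZoomDeg, float d1)
{
    const float t1 = std::tan(glm::radians(prevZoomDeg / 2.0f));
    const float t2 = std::tan(glm::radians(newZoomDeg / 2.0f));
    return (t1 / t2) * d1;
}

// Hypothetical usage, mirroring the variables in scroll_callback:
//   float d1 = cursorPosition[2] - zTranslate;
//   zTranslate = cursorPosition[2] - CompensatedDistance(prevZoom, camera.Zoom, d1);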

Stepping between spherical coords (OpenGL, C++, GLUT)

I have defined 2 points on the surface of a sphere using spherical coordinates.
// define end point positions
float theta_point_1 = (5/10.0)*M_PI;
float phi_point_1 = (5/10.0)*2*M_PI;
float x_point_1 = Radius * sin(theta_point_1) * cos(phi_point_1);
float y_point_1 = Radius * sin(theta_point_1) * sin(phi_point_1);
float z_point_1 = Radius * cos(theta_point_1);
float theta_point_2 = (7/10.0)*M_PI;
float phi_point_2 = (1/10.0)*2*M_PI;
float x_point_2 = Radius * sin(theta_point_2) * cos(phi_point_2);
float y_point_2 = Radius * sin(theta_point_2) * sin(phi_point_2);
float z_point_2 = Radius * cos(theta_point_2);
// draw end points
void end_points()
{
    glColor3f(1.0, 1.0, 1.0);
    glPointSize(25.0);

    glBegin(GL_POINTS);
    glVertex3f(x_point_1, y_point_1, z_point_1);
    glVertex3f(x_point_2, y_point_2, z_point_2);
    glEnd();
}
To step between the two points, I do the following:
find the difference between theta_points_1,2 and phi_points_1,2
divide the differences by 'n' (yielding 's')
redraw 'n' times, while stepping up the theta and phi by 's' each time
In the following, I've defined the differences between my theta and phi values, divided them, and then redrawn them.
// beginning spherical coords
float theta_point_1_value=5;
float phi_point_1_value=5;
// ending spherical coords
float theta_point_2_value=7;
float phi_point_2_value=1;
// dividing the difference evenly
float step_points=30;
float step_theta = 2/step_points;
float step_phi = 4/step_points;
// step between spherical coordinates
void stepping_points()
{
    glColor3f(1.0, 0.0, 0.0);

    for (int i = 1; i < step_points; i++)
    {
        float theta = (theta_point_1_value / 10.0) * M_PI;
        float phi = (phi_point_1_value / 10.0) * 2 * M_PI;

        float x = Radius * sin(theta) * cos(phi);
        float y = Radius * sin(theta) * sin(phi);
        float z = Radius * cos(theta);

        glPushMatrix();
        glTranslatef(x, y, z);
        glutSolidSphere(0.05, 10, 10);
        glPopMatrix();
    }
    glEnd();
}
Now, I understand this displays 30 solid spheres at the same position, because I have NOT included 'step_theta' or 'step_phi' in any of the redraws.
And that is the root of my question. How do I employ 'step_theta' and 'step_phi' in my redraws?
What I want to do is say something like this at the top of my 'for' loop:
for (int i = 1; i < step_points; i++)
{
    float theta_point_1_value = (theta_point_1_value + step_theta);
    float phi_point_1_value = (phi_point_1_value + step_phi);

    float theta = (theta_point_1_value / 10.0) * M_PI;
    float phi = (phi_point_1_value / 10.0) * 2 * M_PI;

    float x = Radius * sin(theta) * cos(phi);
    float y = Radius * sin(theta) * sin(phi);
    float z = Radius * cos(theta);

    glPushMatrix();
    glTranslatef(x, y, z);
    glutSolidSphere(0.05, 10, 10);
    glPopMatrix();
}
The above will redraw 30 solid spheres, but they don't show between my defined end points. It's pretty clear that either my math or syntax is screwy (or more than likely, both are).
Hint: What is the range of your loop variable, i? What do you want the range of your step_theta and step_phi to be?
When you declare a variable inside the loop, it goes out of scope and is destructed after every iteration. As such, only the value of i changes between your loop iterations.
Also: Consider using a vector/point class. (x_point_1, y_point_1) is not C++ :).
If you want consistent timing regardless of frame rate, you need to track the passage of time and use that to control how far you interpolate between the two points. Remember the start time and calculate the desired end time, then each frame, calculate (float)(now-start)/(end-start). This will give you a value between 0.0 and 1.0. Multiply that value by the delta of each spherical coordinate and add their start angles and you'll get what angles you need to be at now.
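A rough sketch of that time-based stepping, reusing the theta/phi end points defined at the top of the question (the now, start and end variables are assumed to be times in some consistent unit; with t = float(i) / step_points the same interpolation works for the fixed-step version):
#include <algorithm>

// t runs from 0.0 at the start time to 1.0 at the end time (clamped for safety).
float t = (float)(now - start) / (end - start);
t = std::min(std::max(t, 0.0f), 1.0f);

// Interpolate the spherical coordinates, then convert to Cartesian as before.
float theta = theta_point_1 + t * (theta_point_2 - theta_point_1);
float phi   = phi_point_1   + t * (phi_point_2   - phi_point_1);

float x = Radius * sin(theta) * cos(phi);
float y = Radius * sin(theta) * sin(phi);
float z = Radius * cos(theta);

glPushMatrix();
glTranslatef(x, y, z);
glutSolidSphere(0.05, 10, 10);
glPopMatrix();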

lighting the sun giving absurd results

I am trying to develop a space simulator. I am trying to use the sun as the light source. My problem is that the lighting doesn't work as expected. Maybe I am using the wrong calculation for the normals. I am using a single "CreateSphere" function to create a sphere, and then use different coordinates and sizes to display the spheres. The problem is that all the spheres on the screen show almost the same effect (i.e. I've applied only one light source but it seems to have been applied to all the spheres), and the light also rotates along with them. I am not sure where the problem is, so I am posting my code.
The code for displaying the sphere:
void DisplaySphere_sun(double R, GLuint texture)
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    int b, m = 0;
    glScalef(0.0125 * R, 0.0125 * R, 0.0125 * R);
    glBindTexture(GL_TEXTURE_2D, texture);

    glBegin(GL_TRIANGLE_STRIP);
    for (b = 0; b < VertexCount; b++)
    {
        /*if((b%3)==0)
        {
            glNormal3f(normal[m].x,normal[m].y,normal[m].z);
            m++;
        }*/
        glTexCoord2f(VERTEX[b].U, VERTEX[b].V);
        /*glNormal3f(-VERTEX[b].X, -VERTEX[b].Y, -VERTEX[b].Z);*/
        glVertex3f(VERTEX[b].Y, VERTEX[b].X, -VERTEX[b].Z);
    }

    m = 0;
    for (b = 0; b < VertexCount; b++)
    {
        /*if((b%3)==0)
        {
            glNormal3f(normal[m].x,normal[m].y,normal[m].z);
            m++;
        }*/
        glTexCoord2f(VERTEX[b].U, -VERTEX[b].V);
        /*glNormal3f(-VERTEX[b].X, -VERTEX[b].Y, -VERTEX[b].Z);*/
        glVertex3f(VERTEX[b].Y, VERTEX[b].X, VERTEX[b].Z);
    }
    glEnd();

    //glRotatef(120,0,0,0);
}
The code for creating a sphere:
void CreateSphere(double R, double X, double Y, double Z) {

    int n, m;
    double a;
    double b;

    n = 0;
    m = 0;

    for (b = 0; b <= 90 - space; b += space) {
        for (a = 0; a <= 360 - space; a += space)
        {
            VERTEX[n].X = R * sin((a) / 180 * PI) * sin((b) / 180 * PI) - X;
            VERTEX[n].Y = R * cos((a) / 180 * PI) * sin((b) / 180 * PI) + Y;
            VERTEX[n].Z = R * cos((b) / 180 * PI) - Z;
            VERTEX[n].V = (2 * b) / 360;
            VERTEX[n].U = (a) / 360;
            n++;

            VERTEX[n].X = R * sin((a) / 180 * PI) * sin((b + space) / 180 * PI) - X;
            VERTEX[n].Y = R * cos((a) / 180 * PI) * sin((b + space) / 180 * PI) + Y;
            VERTEX[n].Z = R * cos((b + space) / 180 * PI) - Z;
            VERTEX[n].V = (2 * (b + space)) / 360;
            VERTEX[n].U = (a) / 360;
            n++;

            VERTEX[n].X = R * sin((a + space) / 180 * PI) * sin((b) / 180 * PI) - X;
            VERTEX[n].Y = R * cos((a + space) / 180 * PI) * sin((b) / 180 * PI) + Y;
            VERTEX[n].Z = R * cos((b) / 180 * PI) - Z;
            VERTEX[n].V = (2 * b) / 360;
            VERTEX[n].U = (a + space) / 360;
            n++;

            VERTEX[n].X = R * sin((a + space) / 180 * PI) * sin((b + space) / 180 * PI) - X;
            VERTEX[n].Y = R * cos((a + space) / 180 * PI) * sin((b + space) / 180 * PI) + Y;
            VERTEX[n].Z = R * cos((b + space) / 180 * PI) - Z;
            VERTEX[n].V = (2 * (b + space)) / 360;
            VERTEX[n].U = (a + space) / 360;
            n++;
        }
    }
}
And the code for lighting the sun:
glPushMatrix();
gluLookAt(0.0, 10.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0); // defines a viewing transformation

// Now translate to the sun
glTranslatef(0.0, -7.0, 3.0);

/* For LIGHT0 */
GLfloat lightZeroPosition[] = {0.0f, 0.0f, 0.0f, 1.0f};
/*GLfloat lightvec[] = {0.5f, 0.2f, 0.0f, 1.0f};*/
GLfloat lightZeroColor[] = {0.5f, 0.5f, 0.5f, 1.0f};
GLfloat amb[] = {1, 1, 1, 1};
GLfloat spec[] = {0.3, 0.3, 0.3, 1};

glLightfv(GL_LIGHT0, GL_POSITION, lightZeroPosition);
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightZeroColor);
glLightfv(GL_LIGHT0, GL_SPECULAR, spec);
glEnable(GL_LIGHT0);

glRotatef(angle, 0, 0, 1);
DisplaySphere(5, textures); // function to display the sun

glPopMatrix();
I'm a bit puzzled why you don't draw the sun at the origin of the solar system. The sun is a star, and stars carry over 95% of their stellar system's mass, so the center of gravity of the whole thing is within the sun for most planets (only Jupiter has so much mass that it shifts the center of gravity slightly outside the sun's photosphere radius).
As for your lighting problem, one normally doesn't illuminate light sources. Just switch off lighting when drawing the sun. Then, when drawing the planets, place the light source within the sun. OpenGL is not a global renderer, i.e. after you've drawn something, it completely forgets about it, so you won't get any lighting interactions between the things you draw (which also means no shadows for free).
This is how I'd draw a solar system (pseudocode):
draw_solar_system():
    glPushMatrix()

    glDisable(GL_LIGHTING)
    draw_origin_sphere(sun_radius)
    glEnable(GL_LIGHTING)

    glLightfv(GL_LIGHT0, GL_POSITION, (0., 0., 0., 1.))
    glLightfv(GL_LIGHT0, GL_DIFFUSE, (1., 1., 1., 1.))
    glLightfv(GL_LIGHT0, GL_AMBIENT, (0., 0., 0., 1.))

    for p in planets:
        glPushMatrix()
        glRotatef(p.orbital_inclination, p.axis_of_orbital_inclination)
        glRotatef(p.orbital_angle, 0., 1., 0.)
        glTranslatef(p.orbit_radius, 0., 0.)
        glRotatef(p.axial_inclination, p.axis_of_axial_inclination)
        glRotatef(p.time_of_day, 0., 1., 0.)
        draw_origin_sphere(p.radius)
        glPopMatrix()

    glPopMatrix()
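For completeness, a minimal concrete version of that pseudocode in the question's own fixed-function style. The orbitRadius and planetTexture names are placeholders, and GL_POSITION is specified after the view transform so the light stays at the sun while the planets move:
glPushMatrix();
gluLookAt(0.0, 10.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0);

glDisable(GL_LIGHTING);
DisplaySphere(5, textures);                   /* the sun, drawn unlit at the origin */
glEnable(GL_LIGHTING);

GLfloat lightPos[]   = { 0.0f, 0.0f, 0.0f, 1.0f };  /* positional light at the sun */
GLfloat lightColor[] = { 1.0f, 1.0f, 1.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor);
glEnable(GL_LIGHT0);

glPushMatrix();
glRotatef(angle, 0.0f, 0.0f, 1.0f);           /* orbit around the sun */
glTranslatef(orbitRadius, 0.0f, 0.0f);        /* placeholder orbit radius */
DisplaySphere(1, planetTexture);              /* placeholder planet */
glPopMatrix();

glPopMatrix();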