Need help expanding particle system spread / divergence from 2 to 3 dimensions - C++

I need help. I've been struggling with this for a week now and getting nowhere. I am building a 3D particle system, mainly for learning, and I am currently working on particle spread / divergence. Specifically, I am introducing a random offset to each particle's direction so as to create something that looks more like a fountain as opposed to a solid stream.
I have been successful in getting this to work in one axis but no matter what I do, I cannot get it to work in 3 dimensions.
Here is what I am doing:
// Compute a random angle between -180 and +180 degrees (converted to radians) for the x, y and z velocity angles. spreadAmount is a float from 0.0 to 1.0 that controls the degree of spread.
float velangrndx = spreadAmount * ((((double)(rand() % RAND_MAX) / (RAND_MAX)) - 0.5) * 360.0 * 3.14159265359 / 180.0);
float velangrndy = spreadAmount * ((((double)(rand() % RAND_MAX) / (RAND_MAX)) - 0.5) * 360.0 * 3.14159265359 / 180.0);
float velangrndz = spreadAmount * ((((double)(rand() % RAND_MAX) / (RAND_MAX)) - 0.5) * 360.0 * 3.14159265359 / 180.0);
// Compute Angles
float vsin_anglex_dir = -PF_SIN(velangrndx);
float vcos_anglex_dir = -PF_COS(velangrndx);
float vsin_angley_dir = -PF_SIN(velangrndy);
float vcos_angley_dir = -PF_COS(velangrndy);
float vsin_anglez_dir = -PF_SIN(velangrndz);
float vcos_anglez_dir = -PF_COS(velangrndz);
// Assign initial velocity to velocity x, y, z. vel is a float ranging from 0.0 - 0.1 specified by user. velx, vely, and velz are also floats.
velx = vel; vely = vel; velz = vel;
And finally, we get to the particle spread / divergence code below. If I use only the first X-axis block (and comment out the Y and Z blocks) it works as it should (see images), but if I use the Y and Z blocks as well, the result is completely wrong. px0, py0, and pz0 are temporary float variables used to preserve the velocity variables.
// X Divergence
px0 = (velx * vsin_anglex_dir);
py0 = (velx * vcos_anglex_dir);
pz0 = velz;
velx = px0; vely = py0; velz = pz0;
// Y Divergence
py0 = (vely * vsin_angley_dir);
pz0 = (vely * vcos_angley_dir);
px0 = velx;
velx = px0; vely = py0; velz = pz0;
// Z Divergence
pz0 = (velz * vsin_anglez_dir);
px0 = (velz * vcos_anglez_dir);
py0 = vely;
velx = px0; vely = py0; velz = pz0;
The velx, vely, and velz are then used to calculate for particle screen position.
This is what the particle spread looks like at 25%, 75% and 100% for the X axis only (with the Y and Z code commented out). This works as it should, and in theory, if the rest of my code were working correctly, I should get the same result for the Y and Z axes. But I don't.
I could really use some help here. Any suggestions on what I am doing wrong and how to correctly expand the currently working spread function from 2 dimensions to 3?
Thanks,
-Richard

Likely it is because the values of velx, vely and velz are getting overwritten by the X divergence before the Y and Z divergences run, so the later steps operate on already-modified values. See whether the version below works the way you are expecting: each divergence is computed from the original velocities, and the results are combined at the end.
// X Divergence
float velxXD = (velx * vsin_anglex_dir);
float velyXD = (velx * vcos_anglex_dir);
float velzXD = velz;
// Y Divergence
float velxYD = velx;
float velyYD = (vely * vsin_angley_dir);
float velzYD = (vely * vcos_angley_dir);
// Z Divergence
float velxZD = (velz * vcos_anglez_dir);
float velyZD = vely;
float velzZD = (velz * vsin_anglez_dir);
velx=velxXD+velxYD+velxZD;
vely=velyXD+velyYD+velyZD;
velz=velzXD+velzYD+velzZD;
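For what it's worth, another way to get a fountain-like spread in 3D (this is only a sketch of an alternative approach, not the method above) is to skip the per-axis rotations entirely and pick a random direction inside a cone around the emission axis using spherical coordinates, then scale it by vel. A minimal, untested C++ sketch, assuming the fountain points along +Y:
#include <cmath>
#include <cstdlib>

// Picks a random direction inside a cone around +Y and scales it by vel.
// spreadAmount in [0, 1] widens the cone from 0 (straight up) to a full sphere.
void spreadVelocity(float vel, float spreadAmount, float& velx, float& vely, float& velz)
{
    const float PI = 3.14159265359f;
    float azimuth = 2.0f * PI * ((float)std::rand() / RAND_MAX);          // 0 .. 2*pi around the axis
    float polar   = spreadAmount * PI * ((float)std::rand() / RAND_MAX);  // tilt away from the axis

    velx = vel * std::sin(polar) * std::cos(azimuth);
    vely = vel * std::cos(polar);
    velz = vel * std::sin(polar) * std::sin(azimuth);
}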

Related

Quaternion calculation in "RosInertialUnit.cpp" of Webots ROS default controller

Today I was taking a closer look at the quaternion calculation used in the "RosInertialUnit.cpp" file as part of the default ROS controller.
I wanted to try out the InertialUnit using the "keyboard_teleop.wbt" world and added the sensor to the Pioneer robot.
I was then comparing the robot's rotation values given in the scene tree (in axis + angle format) with the output of the sensor in ROS (orientation converted to a quaternion). You can see both in the screenshots below:
In my mind the quaternion output doesn't match the values given in the scene tree. When using MATLAB's function "quat = axang2quat(axang)" I would obtain the following for the example above:
quat = 0.7936 0.0131 -0.6082 0.0104 % w x y z
which, when compared with the ROS message, shows that y and z are switched. I'm not quite sure if this is on purpose (maybe a different convention?). I didn't want to start a pull request right away but wanted to discuss the issue here first.
I was testing the following implementation in a changed version of "RosInertialUnit.cpp", which gives me the expected results (same results as calculated in MATLAB).
double halfRoll = mInertialUnit->getRollPitchYaw()[0] * 0.5; // turning around x
double halfPitch = mInertialUnit->getRollPitchYaw()[2] * 0.5; // turning around y
double halfYaw = mInertialUnit->getRollPitchYaw()[1] * 0.5; // turning around z
double cosYaw = cos(halfYaw);
double sinYaw = sin(halfYaw);
double cosPitch = cos(halfPitch);
double sinPitch = sin(halfPitch);
double cosRoll = cos(halfRoll);
double sinRoll = sin(halfRoll);
value.orientation.x = cosYaw * cosPitch * sinRoll - sinYaw * sinPitch * cosRoll;
value.orientation.y = sinYaw * cosPitch * sinRoll + cosYaw * sinPitch * cosRoll;
value.orientation.z = sinYaw * cosPitch * cosRoll - cosYaw * sinPitch * sinRoll;
value.orientation.w = cosYaw * cosPitch * cosRoll + sinYaw * sinPitch * sinRoll;
This is the same implementation as used in this Wikipedia article.
This inversion is due to the fact that Webots and ROS coordinate systems are not equivalent.
In Webots:
X: left
Y: up
Z: forward
Which leads to: (https://cyberbotics.com/doc/reference/inertialunit#field-summary)
roll: left (Webots X)
pitch: forward (Webots Z)
yaw: up (Webots Y)
In ROS: (https://www.ros.org/reps/rep-0103.html#axis-orientation)
X: forward
Y: left
Z: up
Which leads to: (https://www.ros.org/reps/rep-0103.html#rotation-representation)
roll: forward (ROS X)
pitch: left (ROS Y)
yaw: up (ROS Z)
As you can see, the roll and pitch axes are swapped, which is why they are swapped in the code as well.
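To make the remapping concrete, here is a small illustrative sketch (my own helper, not part of the Webots or ROS code) showing how a plain vector would move between the two frames described above:
struct Vec3 { double x, y, z; };

// Webots axes: X = left, Y = up, Z = forward
// ROS axes:    X = forward, Y = left, Z = up
Vec3 webotsToRos(const Vec3& w)
{
    return { w.z,    // ROS X (forward) = Webots Z
             w.x,    // ROS Y (left)    = Webots X
             w.y };  // ROS Z (up)      = Webots Y
}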

C++ Angles between a vector and a point

I have two points, own = (x, y, z) and en = (x, y, z), which represent my own position in the world and another player's position. The other player also has pitch (from 90 degrees to -90) and yaw (0 to 360). I want to calculate the angle between the other player's look direction and my own position.
In 2D, alpha is what I'm trying to calculate:
int main()
{
    float own_x = 1, own_y = 1, own_z = 1;
    float en_x = 10, en_y = 1, en_z = 10;
    float pi = 3.14159265;
    float pitch = 0.f * (pi / 180), yaw = 45.f * (pi / 180);
    float x = sin(yaw) * cos(pitch);
    float y = sin(pitch);
    float z = cos(pitch) * cos(yaw);
    float vec_length = sqrt(pow(en_x - own_x, 2) + pow(en_y - own_y, 2) + pow(en_y - own_y, 2));
    x /= vec_length;
    y /= vec_length;
    z /= vec_length;
    float cos_t = ((en_x - own_x)*x + (en_y - own_y)*y + (en_z - own_z)*z) / sqrt(pow(en_x - own_x, 2) + pow(en_y - own_y, 2) + pow(en_y - own_y, 2));
    float arc = acos(cos_t) * (180 / pi);
    return 0;
}
You divide by the length of en - own twice: you should remove vec_length and the x/y/z /= vec_length lines.
The length used in the cos_t expression is also buggy: you use (en_y - own_y) twice instead of (en_y - own_y) and (en_z - own_z).
Note: instead of pow(x, 2), use x*x; it is usually faster (compilers may not optimize pow(x, 2) to x*x).
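For reference, a minimal corrected version of the original main() with the three fixes above applied (untested, and keeping the original en - own direction for the difference vector):
#include <cmath>
#include <iostream>

int main()
{
    float own_x = 1, own_y = 1, own_z = 1;
    float en_x = 10, en_y = 1, en_z = 10;
    const float pi = 3.14159265f;

    // The other player's look direction from pitch/yaw; it is already unit length.
    float pitch = 0.f * (pi / 180), yaw = 45.f * (pi / 180);
    float x = std::sin(yaw) * std::cos(pitch);
    float y = std::sin(pitch);
    float z = std::cos(pitch) * std::cos(yaw);

    // Difference vector en - own and its length (note the third term uses _z, not _y).
    float dx = en_x - own_x, dy = en_y - own_y, dz = en_z - own_z;
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);

    // Divide by the length exactly once; the look vector needs no extra normalization.
    float cos_t = (dx * x + dy * y + dz * z) / len;
    float arc = std::acos(cos_t) * (180 / pi);

    std::cout << "angle: " << arc << " degrees\n";
    return 0;
}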

How to get vertices for a sphere? [duplicate]

Are there any tutorials out there that explain how I can draw a sphere in OpenGL without having to use gluSphere()?
Many of the 3D tutorials for OpenGL are just about cubes. I have searched, but most of the solutions for drawing a sphere are to use gluSphere(). There is also a site that has code for drawing a sphere at this site, but it doesn't explain the math behind drawing the sphere. That link also has other versions that draw the sphere with polygons instead of quads. But again, I don't understand how the spheres are drawn by that code. I want to be able to visualize the process so that I can modify the sphere if I need to.
One way you can do it is to start with a platonic solid with triangular sides - an octahedron, for example. Then, take each triangle and recursively break it up into smaller triangles, like so:
Once you have a sufficient amount of points, you normalize their vectors so that they are all a constant distance from the center of the solid. This causes the sides to bulge out into a shape that resembles a sphere, with increasing smoothness as you increase the number of points.
Normalization here means moving a point so that its angle in relation to another point is the same, but the distance between them is different.
Here's a two dimensional example.
A and B are 6 units apart. But suppose we want to find a point on line AB that's 12 units away from A.
We can say that C is the normalized form of B with respect to A, with distance 12. We can obtain C with code like this:
# returns a point collinear to A and B, a given distance away from A.
function normalize(a, b, length):
    # get the distance between a and b along the x and y axes
    dx = b.x - a.x
    dy = b.y - a.y
    # right now, sqrt(dx^2 + dy^2) = distance(a,b).
    # we want to modify them so that sqrt(dx^2 + dy^2) = the given length.
    dx = dx * length / distance(a,b)
    dy = dy * length / distance(a,b)
    point c = new point
    c.x = a.x + dx
    c.y = a.y + dy
    return c
If we do this normalization process on a lot of points, all with respect to the same point A and with the same distance R, then the normalized points will all lie on the arc of a circle with center A and radius R.
Here, the black points begin on a line and "bulge out" into an arc.
This process can be extended into three dimensions, in which case you get a sphere rather than a circle. Just add a dz component to the normalize function.
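For concreteness, here is a C++ version of that normalize step with the dz component added (the Point struct is just an assumption for the sketch):
#include <cmath>

struct Point { float x, y, z; };

// Returns a point collinear with a and b, `length` units away from a.
Point normalize(const Point& a, const Point& b, float length)
{
    float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    float s = length / dist;  // rescale so the new distance from a equals `length`
    return { a.x + dx * s, a.y + dy * s, a.z + dz * s };
}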
If you look at the sphere at Epcot, you can sort of see this technique at work. It's a dodecahedron with bulged-out faces to make it look rounder.
I'll further explain a popular way of generating a sphere using latitude and longitude (another way, icospheres, was already explained in the most popular answer at the time of this writing).
A sphere can be expressed by the following parametric equation:
F(u, v) = [ cos(u)*sin(v)*r, cos(v)*r, sin(u)*sin(v)*r ]
Where:
r is the radius;
u is the longitude, ranging from 0 to 2π; and
v is the latitude, ranging from 0 to π.
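For concreteness, the same F(u, v) written out in C++ (Vec3 is just an assumed three-float struct):
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 F(float u, float v, float r)
{
    return { std::cos(u) * std::sin(v) * r,   // x
             std::cos(v) * r,                 // y
             std::sin(u) * std::sin(v) * r }; // z
}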
Generating the sphere then involves evaluating the parametric function at fixed intervals.
For example, to generate 16 lines of longitude, there will be 17 grid lines along the u axis, with a step of
π/8 (2π/16) (the 17th line wraps around).
The following pseudocode generates a triangle mesh by evaluating a parametric function
at regular intervals (this works for any parametric surface function, not just spheres).
In the pseudocode below, UResolution is the number of grid points along the U axis
(here, lines of longitude), and VResolution is the number of grid points along the V axis
(here, lines of latitude)
var startU = 0
var startV = 0
var endU = PI * 2
var endV = PI
var stepU = (endU - startU) / UResolution // step size between U-points on the grid
var stepV = (endV - startV) / VResolution // step size between V-points on the grid
for (var i = 0; i < UResolution; i++) { // U-points
    for (var j = 0; j < VResolution; j++) { // V-points
        var u = i * stepU + startU
        var v = j * stepV + startV
        var un = (i + 1 == UResolution) ? endU : (i + 1) * stepU + startU
        var vn = (j + 1 == VResolution) ? endV : (j + 1) * stepV + startV
        // Find the four points of the grid
        // square by evaluating the parametric
        // surface function
        var p0 = F(u, v)
        var p1 = F(u, vn)
        var p2 = F(un, v)
        var p3 = F(un, vn)
        // NOTE: For spheres, the normal is just the normalized
        // version of each vertex point; this generally won't be the case for
        // other parametric surfaces.
        // Output the first triangle of this grid square
        triangle(p0, p2, p1)
        // Output the other triangle of this grid square
        triangle(p3, p1, p2)
    }
}
The code in the sample is quickly explained. You should look into the function void drawSphere(double r, int lats, int longs):
void drawSphere(double r, int lats, int longs) {
    int i, j;
    for (i = 0; i <= lats; i++) {
        double lat0 = M_PI * (-0.5 + (double) (i - 1) / lats);
        double z0 = sin(lat0);
        double zr0 = cos(lat0);
        double lat1 = M_PI * (-0.5 + (double) i / lats);
        double z1 = sin(lat1);
        double zr1 = cos(lat1);
        glBegin(GL_QUAD_STRIP);
        for (j = 0; j <= longs; j++) {
            double lng = 2 * M_PI * (double) (j - 1) / longs;
            double x = cos(lng);
            double y = sin(lng);
            glNormal3f(x * zr0, y * zr0, z0);
            glVertex3f(r * x * zr0, r * y * zr0, r * z0);
            glNormal3f(x * zr1, y * zr1, z1);
            glVertex3f(r * x * zr1, r * y * zr1, r * z1);
        }
        glEnd();
    }
}
The parameter lats defines how many horizontal lines you want to have in your sphere and longs how many vertical lines. r is the radius of your sphere.
Now there is a double iteration over lats/longs and the vertex coordinates are calculated using simple trigonometry.
The calculated vertices are then sent to your GPU with glVertex...() as a GL_QUAD_STRIP, which means each pair of vertices you send forms a quad with the previous pair.
All you have to understand now is how the trigonometry functions work, but I guess you can figure it out easily.
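As a usage example (the surrounding GL setup is assumed to already exist), a call like drawSphere(1.0, 16, 32) would draw a unit sphere with 16 latitude bands and 32 longitude segments; higher counts give a smoother surface at the cost of more vertices. The specific numbers here are just an illustration.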
If you wanted to be sly like a fox you could half-inch the code from GLU. Check out the MesaGL source code (http://cgit.freedesktop.org/mesa/mesa/).
See the OpenGL red book: http://www.glprogramming.com/red/chapter02.html#name8
It solves the problem by polygon subdivision.
Here is my example of how to use a triangle strip to draw a "polar" sphere; it consists of drawing points in pairs:
const float PI = 3.141592f;
GLfloat x, y, z, alpha, beta; // Storage for coordinates and angles
GLfloat radius = 60.0f;
int gradation = 20;

for (alpha = 0.0; alpha < PI; alpha += PI/gradation)
{
    glBegin(GL_TRIANGLE_STRIP);
    for (beta = 0.0; beta < 2.01*PI; beta += PI/gradation)
    {
        x = radius*cos(beta)*sin(alpha);
        y = radius*sin(beta)*sin(alpha);
        z = radius*cos(alpha);
        glVertex3f(x, y, z);
        x = radius*cos(beta)*sin(alpha + PI/gradation);
        y = radius*sin(beta)*sin(alpha + PI/gradation);
        z = radius*cos(alpha + PI/gradation);
        glVertex3f(x, y, z);
    }
    glEnd();
}
The first point (glVertex3f) follows the parametric equation directly, and the second one is shifted by a single step of the alpha angle (it lies on the next parallel).
Although the accepted answer solves the question, there's a little misconception at the end. Dodecahedrons are (or can be) regular polyhedra where all faces have the same area. That seems to be the case for the sphere at Epcot (which, by the way, is not a dodecahedron at all). Since the solution proposed by #Kevin does not provide this characteristic, I thought I could add an approach that does.
A good way to generate an N-faced polyhedron where all vertices lie on the same sphere and all faces have similar area is to start with an icosahedron and then iteratively sub-divide and normalize its triangular faces (as suggested in the accepted answer). Dodecahedrons, for instance, are actually truncated icosahedrons.
Regular icosahedrons have 20 faces (12 vertices) and can easily be constructed from 3 golden rectangles; it's just a matter of having this as a starting point instead of an octahedron. You may find an example here.
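For reference, a small untested sketch (my own, not from the linked example) of those 12 vertices taken as the corners of three mutually orthogonal golden rectangles; normalizing each vertex afterwards puts them all on the unit sphere:
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> icosahedronVertices()
{
    const float phi = (1.0f + std::sqrt(5.0f)) / 2.0f;  // golden ratio
    return {
        {-1,  phi, 0}, {1,  phi, 0}, {-1, -phi, 0}, {1, -phi, 0},  // rectangle in the XY plane
        {0, -1,  phi}, {0, 1,  phi}, {0, -1, -phi}, {0, 1, -phi},  // rectangle in the YZ plane
        { phi, 0, -1}, { phi, 0, 1}, {-phi, 0, -1}, {-phi, 0, 1}   // rectangle in the XZ plane
    };
}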
I know this is a bit off-topic but I believe it may help if someone gets here looking for this specific case.
Python adaptation of #Constantinius answer:
from math import pi, sin, cos
from OpenGL.GL import *  # assuming PyOpenGL for the GL calls below

lats = 10
longs = 10
r = 10

for i in range(lats):
    lat0 = pi * (-0.5 + i / lats)
    z0 = sin(lat0)
    zr0 = cos(lat0)
    lat1 = pi * (-0.5 + (i + 1) / lats)
    z1 = sin(lat1)
    zr1 = cos(lat1)
    glBegin(GL_QUAD_STRIP)
    for j in range(longs + 1):
        lng = 2 * pi * (j + 1) / longs
        x = cos(lng)
        y = sin(lng)
        glNormal(x * zr0, y * zr0, z0)
        glVertex(r * x * zr0, r * y * zr0, r * z0)
        glNormal(x * zr1, y * zr1, z1)
        glVertex(r * x * zr1, r * y * zr1, r * z1)
    glEnd()
void draw_sphere(float r)
{
    float pi = 3.141592;
    float di = 0.02;
    float dj = 0.04;
    float db = di * 2 * pi;
    float da = dj * pi;
    for (float i = 0; i < 1.0; i += di)      // horizontal
        for (float j = 0; j < 1.0; j += dj)  // vertical
        {
            float b = i * 2 * pi;      // 0 to 2pi
            float a = (j - 0.5) * pi;  // -pi/2 to pi/2
            // normal
            glNormal3f(
                cos(a + da / 2) * cos(b + db / 2),
                cos(a + da / 2) * sin(b + db / 2),
                sin(a + da / 2));
            glBegin(GL_QUADS);
            // P1
            glTexCoord2f(i, j);
            glVertex3f(
                r * cos(a) * cos(b),
                r * cos(a) * sin(b),
                r * sin(a));
            // P2
            glTexCoord2f(i + di, j);
            glVertex3f(
                r * cos(a) * cos(b + db),
                r * cos(a) * sin(b + db),
                r * sin(a));
            // P3
            glTexCoord2f(i + di, j + dj);
            glVertex3f(
                r * cos(a + da) * cos(b + db),
                r * cos(a + da) * sin(b + db),
                r * sin(a + da));
            // P4
            glTexCoord2f(i, j + dj);
            glVertex3f(
                r * cos(a + da) * cos(b),
                r * cos(a + da) * sin(b),
                r * sin(a + da));
            glEnd();
        }
}
One way is to make a quad that faces the camera and write a vertex and fragment shader that renders something that looks like a sphere. You could use equations for a circle/sphere that you can find on the internet.
One nice thing is that the silhouette of a sphere looks the same from any angle. However, if the sphere is not in the center of a perspective view, it may appear more like an ellipse. You could work out the equations for this and put them in the fragment shader. The lighting then needs to be updated as the player moves, if you do indeed have a player moving in 3D space around the sphere.
Can anyone comment on if they have tried this or if it would be too expensive to be practical?

UV mapping for a dome?

I am trying to understand how I can change the UV mapping of a dome. I need a different texture map projection than the one coded below:
protected final void createDome(final float radius) {
    int lats = 16;
    int longs = 16;
    GL11.glEnable(GL11.GL_TEXTURE_2D);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, textures2x4[0].getTextureID());
    int i, j;
    int halfLats = lats / 2;
    for (i = 0; i <= halfLats; i++)
    {
        double lat0 = MathUtils.PI * (-0.5 + (double) (i - 1) / lats);
        double z0 = Math.sin(lat0) * radius;
        double zr0 = Math.cos(lat0) * radius;
        double lat1 = MathUtils.PI * (-0.5 + (double) i / lats);
        double z1 = Math.sin(lat1) * radius;
        double zr1 = Math.cos(lat1) * radius;
        GL11.glBegin(GL11.GL_QUAD_STRIP);
        for (j = 0; j <= longs; j++)
        {
            double lng = 2 * MathUtils.PI * (double) (j - 1) / longs;
            double x = Math.cos(lng);
            double y = Math.sin(lng);
            double s1, s2, t;
            s1 = ((double) i) / halfLats;
            s2 = ((double) i + 1) / halfLats;
            t = ((double) j) / longs;
            // HERE: I don't know how to calculate the UV mapping
            GL11.glTexCoord2d(s1, t);
            GL11.glNormal3d(x * zr0, y * zr0, z0);
            GL11.glVertex3d(x * zr0, y * zr0, z0);
            GL11.glTexCoord2d(s2, t);
            GL11.glNormal3d(x * zr1, y * zr1, z1);
            GL11.glVertex3d(x * zr1, y * zr1, z1);
        }
        GL11.glEnd();
    }
}
I linked the output image and the original map. Practically, I need a UV mapping which places the Arctic at the zenith/top of the dome, with the Antarctic stretched along the bottom edge of the dome... the Arctic/Antarctic map is only used to illustrate what I mean; my goal is not to fit a globe hemisphere.
Output image http://img831.imageshack.us/img831/3481/lwjgl.png
Source map http://img203.imageshack.us/img203/4930/earthc.png
Take a look at these function calls (disclaimer: untested - I haven't used LWJGL, but the concept should be identical):
GL11.glMatrixMode(GL11.GL_TEXTURE);
GL11.glRotate(90, 0, 0, 1); // (1) Here you transform texture space
GL11.glMatrixMode(GL11.GL_MODELVIEW);
// and so on
Basically, you need to rotate the texture on the object, and that's the way you do it: transform the texture matrix. Line (1) rotates the texture 90 degrees around the Z axis (perpendicular to the texture plane). It's the Z axis because the last argument is 1; the last three arguments denote X, Y and Z respectively (I'll leave the full explanation for later if you're interested).
The best way to grasp all the basic stuff (projection, texture space, normal vectors, triangulation, continuity, particle systems and a lot more) is to download a trial version of a 3D package and play with it. I learned a lot just from playing with 3D Studio Max (a trial version is available, and many other tools are free). If you have some free time and the will to learn something new, I strongly advise looking into it. In the end, if you're really interested in 3D graphics you'll end up using one anyway, be it a 3D package or a game engine's level editor.
EDIT: After more reading I recognized my own code... Basically, you only need to swap some of the coordinates to reflect the mapping symmetrically along the diagonal. You might end up upside down, but that can also be fixed with additional tweaking (or by transforming the view axis). Here is my untested guess:
// tweaked to get pole right
s1 = ((double) j) / longs;
s2 = ((double) j + 1) / longs;
t = ((double) i) / halfLats;
Try swapping s1 with s2 if it's not right.