Current coordinates of an object in OpenGL - c++

I need to perform a static collision detection between two objects using C++/OpenGL. I have written the code for the collision detection but this code uses the vertices of the two models as given in their .obj files and not the coords of their current positions (which is what I want).
I have applied a translate and a scale transformation to both models, and I need to know what the coordinates are right now. I guess it has something to do with transformation matrices etc., but how are they combined with the initial coordinates?
Can anyone help me?

OpenGL doesn't know what a model or a scene is, it just draws points, lines and triangles to a pixel framebuffer.
It's time for you to abandon some or all of the fixed function pipeline. You should no longer build your modelview matrix using glRotate, glTranslate and glScale. Instead you should maintain the matrix yourself, using some math library like GLM or Eigen. This gives you a single instance of the model transformation matrix which you can apply on your original model coordinates for collision testing, but also can be loaded into the modelview matrix using glLoadMatrix (if you also have the view transformation applied) or glMultMatrix in the GL_MODELVIEW matrix mode. Or you go for shaders and use uniforms.
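For illustration, a minimal sketch of that idea with GLM; the transform values are made up for the example, objVertex stands for one vertex from the .obj, and the view part is assumed to be identity:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Maintain the model transformation yourself instead of calling glTranslate/glScale.
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(2.0f, 0.0f, -5.0f)); // example translation
model = glm::scale(model, glm::vec3(0.5f));                  // example scale

// The same matrix serves both purposes:
glm::vec4 worldVertex = model * glm::vec4(objVertex, 1.0f);  // collision test on a vertex from the .obj
glMatrixMode(GL_MODELVIEW);
glMultMatrixf(&model[0][0]);                                 // and rendering via the fixed-function pipeline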

If you just have translation t and scaling s, you can simply calculate the actual positions with:
world_position = s * model_position + t
If you have an arbitrary transformation given as a matrix M, you calculate the position with:
world_position = M * model_position
where model_position should be a 4D vector with w-coordinate 1.
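A quick GLM sketch of both formulas (model_position stands for one of your .obj vertices; the scale and translation values are placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 t(1.0f, 0.0f, -3.0f);                      // example translation
glm::vec3 s(2.0f);                                   // example scale
glm::vec3 world_position = s * model_position + t;   // component-wise scale, then translate

// The same thing with a full matrix:
glm::mat4 M = glm::translate(glm::mat4(1.0f), t) * glm::scale(glm::mat4(1.0f), s);
glm::vec4 world_position4 = M * glm::vec4(model_position, 1.0f);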

I recommend you first draw your objects with their centroid at the origin (0, 0, 0).
So when you call the function
GLfloat matrixMV[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrixMV);
the coordinates x, y, z
will be matrixMV[12], matrixMV[13] and matrixMV[14], respectively.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(-25.0, 0.0, 1.0, 0.0);
glTranslatef(100.0, 50.0, 0.0);
/*Draw a triangle with centroid in the origin*/
glBegin(GL_POLYGON);
glVertex2f(0.0 , 1.0 );
glVertex2f(-1.0,-0.27 );
glVertex2f(1.0 ,-0.27 );
glEnd();
GLfloat matrixMV[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrixMV);
double xCenterTriangle = matrixMV[12];
double yCenterTriangle = matrixMV[13];
double zCenterTriangle = matrixMV[14];

You would actually need to apply the same transformations to your coordinates.
If you are using the fixed-function pipeline, you can get the current modelview matrix just before you start to draw an object via glGetFloatv(GL_MODELVIEW_MATRIX, m) into some GLfloat m[16], and store these matrices to later apply them to your objects.
To get the multiplication right, here's the way the matrix elements are ordered, when multiplied from the left.
| m[0] m[4] m[8]  m[12] |   | v[0] |
| m[1] m[5] m[9]  m[13] | x | v[1] |
| m[2] m[6] m[10] m[14] |   | v[2] |
| m[3] m[7] m[11] m[15] |   | v[3] |
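Written out as a small C++ helper (the function name is just for this sketch; m is assumed to be in OpenGL's column-major layout shown above):
// out = m * v, with m stored column-major as returned by glGetFloatv.
void transformPoint(const GLfloat m[16], const GLfloat v[4], GLfloat out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[0 + row] * v[0] + m[4 + row] * v[1] +
                   m[8 + row] * v[2] + m[12 + row] * v[3];
}
// A model vertex (x, y, z) becomes {x, y, z, 1.0f} before being transformed.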

Related

object keep his distance around camera c++ opengl

I want to keep the object permanently at a certain distance from the camera. How can I do this? I tried this:
vec3 obj_pos = -cam->Get_CameraPos() ;
obj_pos .z -= 10.0f ;
...
o_modelMatrix = glm::translate(o_modelMatrix, obj_pos);
but it's not working; the object simply stays at the determined position and does not move.
Full code of render:
void MasterRenderer::renderPlane() {
PlaneShader->useShaderProgram();
glm::mat4 o_modelMatrix;
glm::mat4 o_view = cam->Get_ViewMatrix();
glm::mat4 o_projection = glm::perspective(static_cast<GLfloat>(glm::radians(cam->Get_fov())),
static_cast<GLfloat>(WIDTH) / static_cast<GLfloat>(HEIGHT), 0.1f, 1000.0f);
glUniformMatrix4fv(glGetUniformLocation(PlaneShader->ShaderProgramID, "projection"), 1, GL_FALSE, glm::value_ptr(o_projection ));
glUniformMatrix4fv(glGetUniformLocation(PlaneShader->ShaderProgramID, "view"), 1, GL_FALSE, glm::value_ptr(o_view ));
vec3 eye_pos = vec3(o_view [3][0], o_view [3][1], o_view [3][2]); //or cam->getCameraPosition();
glm::vec3 losDirection = glm::normalize(vec3(0.0f, 0.0f, -1.0f) - eye_pos);
vec3 obj_pos = eye_pos + losDirection * 1.0f;
b_modelMatrix = scale(o_modelMatrix, vec3(20.0f));
b_modelMatrix = glm::translate(b_modelMatrix, obj_pos );
glUniformMatrix4fv(glGetUniformLocation(PlaneShader->ShaderProgramID,
"model"), 1, GL_FALSE, glm::value_ptr(o_modelMatrix));
...
/// draw
Maybe this is a shot from the hip, but I suppose that you set up a lookat matrix and that the position of your object is defined in world coordinates.
Commonly a camera is defined by an eye position, a target (center) position and an up vector.
The direction in which the camera looks is the line of sight, which is the unit vector from the eye position to the target position.
Calculate the line of sight:
glm::vec3 cameraPosition ...; // the eye position
glm::vec3 cameraTarget ...; // the target (center) position
glm::vec3 losDirection = glm::normalize( cameraTarget - cameraPosition );
Possibly the camera class knows the direction of view (line of sight), then you can skip this calculation.
If the object is always to be placed a certain distance in front of the camera, the position of the object is the position of the camera plus a distance in the direction of the line of sight:
float distance = ...;
glm::vec3 objectPosition = cameraPosition + losDirection * distance;
glm::mat4 modelPosMat = glm::translate( glm::mat4(1.0f) , objectPosition );
glm::mat4 objectModelMat = ...; // initial model matrix of the object
o_modelMatrix = modelPosMat * objectModelMat;
Note that objectModelMat is the identity matrix, glm::mat4(1.0f), if the object has no further transformations.
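Putting it into the question's renderPlane, a hedged sketch (PlaneShader, cam and the distance come from the question; Get_Front() is an assumed accessor for the line of sight - compute it as above if your camera class has no such method):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::vec3 cameraPosition = cam->Get_CameraPos();
glm::vec3 losDirection   = cam->Get_Front();                        // assumed accessor
glm::vec3 objectPosition = cameraPosition + losDirection * 10.0f;   // 10 units in front of the camera

glm::mat4 o_modelMatrix = glm::translate(glm::mat4(1.0f), objectPosition)  // position first...
                        * glm::scale(glm::mat4(1.0f), glm::vec3(20.0f));   // ...then scale the model
glUniformMatrix4fv(glGetUniformLocation(PlaneShader->ShaderProgramID, "model"),
                   1, GL_FALSE, glm::value_ptr(o_modelMatrix));            // upload the matrix you actually built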
So you want to move the object with the camera (instead of moving the camera with the object, like a camera follow). If this is just for some GUI stuff you can use different static view matrices for it. But if you want to do it the way you suggested, then this is the way:
definitions
First we need a few 3D 4x4 homogeneous transform matrices (read the link to see how to dissect/construct what you need). So let's define the matrices we need for this:
C - inverse camera matrix (no projection)
M - direct object matrix
R - direct object rotation
Each matrix has 4 vectors: X, Y, Z are the axes of the coordinate system it represents and O is the origin. A direct matrix means the matrix directly represents the coordinate system, and inverse means it is the inverse of such a matrix.
Math
So we want to construct M so that it is placed at some distance d directly in front of C and has rotation R. I assume you are using a perspective projection and that C's viewing direction is the -Z axis. So what you need to do is compute the position of M. That is easy; you just do this:
iC = inverse(C); // get the direct matrix of camera
M = R; // set rotation of object
M.O = iC.O - d*iC.Z; // set position of object
Here M.O = (M[12], M[13], M[14]) and iC.Z = (iC[8], iC[9], iC[10]), so if you have direct access to your matrices you can do this on your own in case GLM does not provide element access.
Beware that all this is for the standard OpenGL matrix convention and multiplication order. If you use the DirectX convention instead, then M and R are inverse and C is a direct matrix, so you would need to change the equations accordingly. Sorry, I do not use GLM so I am not confident to generate any code for you.
In case you want to apply the camera rotation to the object rotation too, then you need to change M = R to M = R*iC or M = iC*R, depending on what effect you want to achieve.
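For reference, a minimal GLM sketch of the equations above (an assumption about how the matrices are stored: C is the view matrix, R the object rotation, d the distance; GLM's mat4 is column-major, so column 2 is the Z axis and column 3 the origin):
#include <glm/glm.hpp>

glm::mat4 placeInFrontOfCamera(const glm::mat4 &C, const glm::mat4 &R, float d)
{
    glm::mat4 iC = glm::inverse(C);      // iC = inverse(C): the direct camera matrix
    glm::mat4 M  = R;                    // M = R: set the object rotation
    glm::vec3 O  = glm::vec3(iC[3]);     // iC.O, the camera position
    glm::vec3 Z  = glm::vec3(iC[2]);     // iC.Z, the camera's Z axis
    M[3] = glm::vec4(O - d * Z, 1.0f);   // M.O = iC.O - d*iC.Z
    return M;
}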
It works fine not with multiplication, but with addition:
obj_pos = glm::normalize(glm::cross(vec3(0.0f, 0.0f, -1.0f), vec3(0.0f, 1.0f, 0.0f)));
o_modelMatrix[3][0] = camera_pos.x;
o_modelMatrix[3][1] = camera_pos.y;
o_modelMatrix[3][2] = camera_pos.z + distance;
o_modelMatrix = glm::translate(o_modelMatrix, obj_pos);

How to texture a random convex quad in openGL

Alright, so I started looking up tutorials on openGL for the sake of making a minecraft mod. I still don't know too much about it because I figured that I really shouldn't have to when it comes to making the small modification that I want, but this is giving me such a headache. All I want to do is be able to properly map a texture to an irregular concave quad.
Like this:
I went into openGL to figure out how to do this before I tried running code in the game. I've read that I need to do a perspective-correct transformation, and I've gotten it to work for trapezoids, but for the life of me I can't figure out how to do it if both pairs of edges aren't parallel. I've looked at this: http://www.imagemagick.org/Usage/scripts/perspective_transform, but I really don't have a clue where the "8 constants" this guy is talking about came from or if it will even help me. I've also been told to do calculations with matrices, but I've got no idea how much of that openGL does or doesn't take care of.
I've looked at several posts regarding this, and the answer I need is somewhere in those, but I can't make heads or tails of 'em. I can never find a post that tells me what arguments I'm supposed to put in the glTexCoord4f() method in order to have the perspective-correct transform.
If you're thinking of the "Getting to know the Q coordinate" page as a solution to my problem, I'm afraid I've already looked at it, and it has failed me.
Is there something I'm missing? I feel like this should be a lot easier. I find it hard to believe that openGL, with all its bells and whistles, would have nothing for making something other than a rectangle.
So, I hope I haven't pissed you off too much with my cluelessness, and thanks in advance.
EDIT: I think I need to make clear that I know OpenGL does the perspective transform for you when your view of the quad is not orthogonal. I'd know to just change the z coordinates or my fov. I'm looking to smoothly texture a non-rectangular quadrilateral, not put a rectangular shape in a certain fov.
OpenGL will do a perspective correct transform for you. I believe you're simply facing the issue of quad vs triangle interpolation. The difference between affine and perspective-correct transforms is related to the geometry being in 3D, where the interpolation in image space is non-linear. Think of looking down a road: the evenly spaced lines appear more frequent in the distance. Anyway, back to triangles vs quads...
Here are some related posts:
How to do bilinear interpolation of normals over a quad?
Low polygon cone - smooth shading at the tip
https://gamedev.stackexchange.com/questions/66312/quads-vs-triangles
Applying color to single vertices in a quad in opengl
An answer to this one provides a possible solution, but it's not simple:
The usual approach to solve this, is by performing the interpolation "manually" in a fragment shader, that takes into account the target topology, in your case a quad. Or in short you have to perform barycentric interpolation not based on a triangle but on a quad. You might also want to apply perspective correction.
The first thing you should know is that nothing is easy with OpenGL. It's a very complex state machine with a lot of quirks and a poor interface for developers.
That said, I think you're confusing a lot of different things. To draw a textured rectangle with perspective correction, you simply draw a textured rectangle in 3D space after setting the projection matrix appropriately.
First, you need to set up the projection you want. From your description, you need to create a perspective projection. In OpenGL, you usually have 2 main matrices you're concerned with - projection and model-view. The projection matrix is sort of like your "camera".
How you do the above depends on whether you're using Legacy OpenGL (less than version 3.0) or Core Profile (modern, 3.0 or greater) OpenGL. This page describes 2 ways to do it, depending on which you're using.
void BuildPerspProjMat(float *m, float fov, float aspect, float znear, float zfar)
{
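// NOTE: PI_OVER_360 is assumed to be defined elsewhere (e.g. as M_PI / 360.0), since fov is given in degrees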
float xymax = znear * tan(fov * PI_OVER_360);
float ymin = -xymax;
float xmin = -xymax;
float width = xymax - xmin;
float height = xymax - ymin;
float depth = zfar - znear;
float q = -(zfar + znear) / depth;
float qn = -2 * (zfar * znear) / depth;
float w = 2 * znear / width;
w = w / aspect;
float h = 2 * znear / height;
m[0] = w;
m[1] = 0;
m[2] = 0;
m[3] = 0;
m[4] = 0;
m[5] = h;
m[6] = 0;
m[7] = 0;
m[8] = 0;
m[9] = 0;
m[10] = q;
m[11] = -1;
m[12] = 0;
m[13] = 0;
m[14] = qn;
m[15] = 0;
}
and here is how to use it in an OpenGL 1 / OpenGL 2 code:
float m[16] = {0};
float fov=60.0f; // in degrees
float aspect=1.3333f;
float znear=1.0f;
float zfar=1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(m);
// okay we can switch back to modelview mode
// for all other matrices
glMatrixMode(GL_MODELVIEW);
With a real OpenGL 3.0 code, we must use GLSL shaders and uniform variables to pass and exploit the transformation matrices:
float m[16] = {0};
float fov=60.0f; // in degrees
float aspect=1.3333f;
float znear=1.0f;
float zfar=1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);
glUseProgram(shaderId);
glUniformMatrix4fv(glGetUniformLocation(shaderId, "projMat"), 1, GL_FALSE, m);
RenderObject();
glUseProgram(0);
Since I've not used Minecraft, I don't know whether it gives you a projection matrix to use or if you have the other information to construct it.

How to recalculate axis-aligned bounding box after translate/rotate

When I first load my object I calculate the initial AABB with the maximum and minimum (x,y,z) points. But this is in object space and the object moves around the world and more importantly, rotates.
How do I recalculate the new AABB every time the object is translated/rotated? This happens basically in every frame. Is it going to be a very intensive operation to recalculate the new AABB every frame? If so, what would be the alternative?
I know AABBs will make my collision detection less accurate, but it's easier to implement the collision detection code than OBBs and I want to take this one step at a time.
Here's my current code after some insight from the answers below:
typedef struct sAxisAlignedBoundingBox {
Vector3D bounds[8];
Vector3D max, min;
} AxisAlignedBoundingBox;
void drawAxisAlignedBoundingBox(AxisAlignedBoundingBox box) {
glPushAttrib(GL_LIGHTING_BIT | GL_POLYGON_BIT);
glEnable(GL_COLOR_MATERIAL);
glDisable(GL_LIGHTING);
glColor3f(1.0f, 1.0f, 0.0f);
glBegin(GL_LINE_LOOP);
glVertex3f(box.bounds[0].x, box.bounds[0].y, box.bounds[0].z);
glVertex3f(box.bounds[1].x, box.bounds[1].y, box.bounds[1].z);
glVertex3f(box.bounds[2].x, box.bounds[2].y, box.bounds[2].z);
glVertex3f(box.bounds[3].x, box.bounds[3].y, box.bounds[3].z);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex3f(box.bounds[4].x, box.bounds[4].y, box.bounds[4].z);
glVertex3f(box.bounds[5].x, box.bounds[5].y, box.bounds[5].z);
glVertex3f(box.bounds[6].x, box.bounds[6].y, box.bounds[6].z);
glVertex3f(box.bounds[7].x, box.bounds[7].y, box.bounds[7].z);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex3f(box.bounds[0].x, box.bounds[0].y, box.bounds[0].z);
glVertex3f(box.bounds[5].x, box.bounds[5].y, box.bounds[5].z);
glVertex3f(box.bounds[6].x, box.bounds[6].y, box.bounds[6].z);
glVertex3f(box.bounds[1].x, box.bounds[1].y, box.bounds[1].z);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex3f(box.bounds[4].x, box.bounds[4].y, box.bounds[4].z);
glVertex3f(box.bounds[7].x, box.bounds[7].y, box.bounds[7].z);
glVertex3f(box.bounds[2].x, box.bounds[2].y, box.bounds[2].z);
glVertex3f(box.bounds[3].x, box.bounds[3].y, box.bounds[3].z);
glEnd();
glPopAttrib();
}
void calculateAxisAlignedBoundingBox(GLMmodel *model, float matrix[16]) {
AxisAlignedBoundingBox box;
float dimensions[3];
// This will give me the absolute dimensions of the object
glmDimensions(model, dimensions);
// This calculates the max and min points in object space
box.max.x = dimensions[0] / 2.0f, box.min.x = -1.0f * box.max.x;
box.max.y = dimensions[1] / 2.0f, box.min.y = -1.0f * box.max.y;
box.max.z = dimensions[2] / 2.0f, box.min.z = -1.0f * box.max.z;
// These calculations are probably the culprit but I don't know what I'm doing wrong
box.max.x = matrix[0] * box.max.x + matrix[4] * box.max.y + matrix[8] * box.max.z + matrix[12];
box.max.y = matrix[1] * box.max.x + matrix[5] * box.max.y + matrix[9] * box.max.z + matrix[13];
box.max.z = matrix[2] * box.max.x + matrix[6] * box.max.y + matrix[10] * box.max.z + matrix[14];
box.min.x = matrix[0] * box.min.x + matrix[4] * box.min.y + matrix[8] * box.min.z + matrix[12];
box.min.y = matrix[1] * box.min.x + matrix[5] * box.min.y + matrix[9] * box.min.z + matrix[13];
box.min.z = matrix[2] * box.min.x + matrix[6] * box.min.y + matrix[10] * box.min.z + matrix[14];
/* NOTE: If I remove the above calculations and do something like this:
box.max = box.max + objPlayer.position;
box.min = box.min + objPlayer.position;
The bounding box will move correctly when I move the player, the same does not
happen with the calculations above. It makes sense and it's very simple to move
the box like this. The only problem is when I rotate the player, the box should
be adapted and increased/decreased in size to properly fit the object as a AABB.
*/
box.bounds[0] = Vector3D(box.max.x, box.max.y, box.min.z);
box.bounds[1] = Vector3D(box.min.x, box.max.y, box.min.z);
box.bounds[2] = Vector3D(box.min.x, box.min.y, box.min.z);
box.bounds[3] = Vector3D(box.max.x, box.min.y, box.min.z);
box.bounds[4] = Vector3D(box.max.x, box.min.y, box.max.z);
box.bounds[5] = Vector3D(box.max.x, box.max.y, box.max.z);
box.bounds[6] = Vector3D(box.min.x, box.max.y, box.max.z);
box.bounds[7] = Vector3D(box.min.x, box.min.y, box.max.z);
// This draw call is for testing purposes only
drawAxisAlignedBoundingBox(box);
}
void drawObjectPlayer(void) {
static float mvMatrix[16];
if(SceneCamera.GetActiveCameraMode() == CAMERA_MODE_THIRD_PERSON) {
objPlayer.position = SceneCamera.GetPlayerPosition();
objPlayer.rotation = SceneCamera.GetRotationAngles();
objPlayer.position.y += -PLAYER_EYE_HEIGHT + 0.875f;
/* Only one of the two code blocks below should be active at the same time.
Neither of them is working as expected. The bounding box is all
messed up with either code. */
// Attempt #1
glPushMatrix();
glTranslatef(objPlayer.position.x, objPlayer.position.y, objPlayer.position.z);
glRotatef(objPlayer.rotation.y + 180.0f, 0.0f, 1.0f, 0.0f);
glCallList(gameDisplayLists.player);
glGetFloatv(GL_MODELVIEW_MATRIX, mvMatrix);
glPopMatrix();
// Attempt #2
glPushMatrix();
glLoadIdentity();
glTranslatef(objPlayer.position.x, objPlayer.position.y, objPlayer.position.z);
glRotatef(objPlayer.rotation.y + 180.0f, 0.0f, 1.0f, 0.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, mvMatrix);
glPopMatrix();
calculateAxisAlignedBoundingBox(objPlayer.model, mvMatrix);
}
}
But it doesn't work as it should... What am I doing wrong?
Simply recompute the AABB of the transformed AABB. This means transforming 8 vertices (8 vertex - matrix multiplications) and 8 vertex-vertex comparisons.
So at initialisation, you compute your AABB in model space: for each x,y,z of each vertex of the model, you check against xmin, xmax, ymin, ymax, etc.
For each frame, you generate a new transformation matrix. In OpenGL this is done with glLoadIdentity followed by glTranslate/Rotate/Scale (if using the old API). This is the Model Matrix, as lmmilewski said.
You compute this transformation matrix a second time (outside OpenGL, for instance using glm). You also can get OpenGL's resulting matrix using glGet.
You multiply each of your AABB's eight vertices by this matrix. Use glm for the matrix-vector multiplication. You'll get your transformed AABB (in world space). It is most probably rotated (not axis-aligned anymore).
Now your algorithm probably only works with axis-aligned stuff, hence your question. So you approximate the new bounding box of the transformed model by taking the bounding box of the transformed bounding box:
For each x, y, z of each vertex of the new AABB, you check against xmin, xmax, ymin, ymax, etc. This gives you a world-space AABB that you can use in your clipping algorithm.
This is not optimal (AABB-wise). You'll get lots of empty space, but performance-wise it's much, much better than recomputing the AABB of the whole mesh.
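A minimal GLM sketch of that recipe (the min/max representation and the modelMatrix parameter are assumptions for the example, not the question's exact types):
#include <glm/glm.hpp>
#include <limits>

// Recompute a world-space AABB from a model-space AABB and a model matrix.
void transformAABB(const glm::mat4 &modelMatrix,
                   const glm::vec3 &minIn, const glm::vec3 &maxIn,
                   glm::vec3 &minOut, glm::vec3 &maxOut)
{
    minOut = glm::vec3(std::numeric_limits<float>::max());
    maxOut = -minOut;
    for (int i = 0; i < 8; ++i) {                          // the 8 corners of the model-space box
        glm::vec3 corner((i & 1) ? maxIn.x : minIn.x,
                         (i & 2) ? maxIn.y : minIn.y,
                         (i & 4) ? maxIn.z : minIn.z);
        glm::vec3 world = glm::vec3(modelMatrix * glm::vec4(corner, 1.0f));
        minOut = glm::min(minOut, world);                  // redo the min/max pass in world space
        maxOut = glm::max(maxOut, world);
    }
}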
As for the transformation matrix, in drawObjectPlayer:
glLoadIdentity();
glTranslatef(objPlayer.position.x, objPlayer.position.y, objPlayer.position.z);
glRotatef(objPlayer.rotation.y + 180.0f, 0.0f, 1.0f, 0.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, mvMatrix);
// Now you've got your OWN Model Matrix (don't trust the GL_MODELVIEW_MATRIX flag : this is a workaround, and I know what I'm doing ^^ )
glLoadIdentity(); // Reset the matrix so that you won't make the transformations twice
gluLookAt( whatever you wrote here earlier )
glTranslatef(objPlayer.position.x, objPlayer.position.y, objPlayer.position.z);
glRotatef(objPlayer.rotation.y + 180.0f, 0.0f, 1.0f, 0.0f);
// Now OpenGL is happy, he's got his MODELVIEW matrix correct ( gluLookAt is the VIEW part; Translate/Rotate is the MODEL part )
glCallList(gameDisplayLists.player); // Transformed correctly
I can't explain it further than that... as said in the comments, you had to do it twice. You wouldn't have these problems and ugly workarounds in OpenGL 3, btw, because you'd be fully responsible for your own matrices. Equivalent in OpenGL 2:
glm::mat4 ViewMatrix = glm::lookAt(...);
glm::mat4 ModelMatrix = glm::rotate(...) * glm::translate(...);
// Use ModelMatrix for whatever you want
glm::mat4 ModelViewMatrix = ViewMatrix * ModelMatrix;
glLoadMatrixf( &ModelViewMatrix[0][0] ); // In OpenGL 3 you would use a uniform instead
Much cleaner, right?
Yep, you can transform the eight corner vertices and do min/max on the results, but there is a faster way, as described by Jim Arvo from his chapter in Graphics Gems (1990).
Performance-wise, Arvo's method is roughly equivalent to two transforms instead of eight and basically goes as follows (this transforms box A into box B)
// Split the transform into a translation vector (T) and a 3x3 rotation (M).
B = zero-volume AABB at T
for each element (i,j) of M:
    a = M[i][j] * A.min[j]
    b = M[i][j] * A.max[j]
    B.min[i] += a < b ? a : b
    B.max[i] += a < b ? b : a
return B
One variation of Arvo's method uses a center / extent representation rather than min / max, which is described by Christer Ericson in Real-Time Collision Detection (photo).
Complete C code for Graphics Gems article can be found here.
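For concreteness, a small C-style sketch of Arvo's method (the AABB struct is made up for this example; the matrix is assumed to be column-major, OpenGL style, with the translation in m[12..14]):
typedef struct { float min[3], max[3]; } AABB;

AABB transformAABB_Arvo(const float m[16], AABB a)
{
    AABB b;
    for (int i = 0; i < 3; ++i) {
        b.min[i] = b.max[i] = m[12 + i];        // start from the translation T
        for (int j = 0; j < 3; ++j) {
            float e = m[4 * j + i] * a.min[j];  // M[i][j] * A.min[j]
            float f = m[4 * j + i] * a.max[j];  // M[i][j] * A.max[j]
            b.min[i] += e < f ? e : f;
            b.max[i] += e < f ? f : e;
        }
    }
    return b;
}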
To do that you have to loop over every vertex, calculate its position in the world (multiply by modelview) and find the minimum / maximum vertex coordinates within every object (just like when you compute it for the first time).
You can scale your AABB a bit so that you don't have to recalculate it - enlarge it enough that the rotated object always fits (for a roughly cube-shaped object that is a factor of about sqrt(3), or sqrt(2) if you only ever rotate about one axis).
There is also the question of which direction you rotate in. If it is always the same one, you can enlarge the AABB only in that direction.
Optionally, you can use bounding spheres instead of AABBs. Then you don't care about rotation, and scaling is not a problem.
To quote a previous response on AABBs on Stack Overflow:
"Sadly yes, if your character rotates you need to recalculate your AABB . . .
Skurmedel
The respondent's suggestion, and mine, is to implement oriented bounding boxes once you have AABBs working, and also to note that you can make AABBs of portions of a mesh to fudge collision detection with greater accuracy than one enormous box per object.
Why not use your GPU? Today I implemented a solution to this problem by rendering a couple of frames.
Temporarily place your camera over the object, above it, pointing down at the object.
Render only your object, without lights or anything.
Use an orthographic projection too.
Then read the frame buffer. Rows and columns of black pixels mean the model isn't there. Hit a white pixel - you hit one of the model's AABB borders.
I know this isn't a solution for all the cases, but with some prior knowledge, this is very efficient.
For rendering off screen see here.

how to draw a spiral using opengl

I want to know how to draw a spiral.
I wrote this code:
void RenderScene(void)
{
glClear(GL_COLOR_BUFFER_BIT);
GLfloat x,y,z = -50,angle;
glBegin(GL_POINTS);
for(angle = 0; angle < 360; angle += 1)
{
x = 50 * cos(angle);
y = 50 * sin(angle);
glVertex3f(x,y,z);
z+=1;
}
glEnd();
glutSwapBuffers();
}
If I don't include the z term I get a perfect circle, but when I include z I get 3 dots and that's it. What might have happened?
I set the viewport using glViewport(0, 0, w, h).
To include z, should I do anything to set the viewport in the z direction?
You see points because you are drawing points with glBegin(GL_POINTS).
Try replacing it by glBegin(GL_LINE_STRIP).
NOTE: when you saw the circle you also drew only points, but drawn close enough to appear as a connected circle.
Also, you may not have set up the depth buffer to accept values in the range z = [-50, 310] that you use. These values should be provided as the zNear and zFar clipping planes in your gluPerspective, glOrtho() or glFrustum() call.
NOTE: this would explain why with z value you only see a few points: the other points are clipped because they are outside the z-buffer range.
UPDATE AFTER YOU HAVE SHOWN YOUR CODE:
glOrtho(-100*aspectratio,100*aspectratio,-100,100,1,-1); would only allow z-values in the [-1, 1] range, which is why only the three points with z = -1, z = 0 and z = 1 will be drawn (thus 3 points).
Finally, you're probably viewing the spiral from the top, looking directly down the rotation axis. If you are not using a perspective projection (but an orthographic one), the spiral will still show up as a circle. You might want to change your view with gluLookAt().
EXAMPLE OF SETTING UP PERSPECTIVE
The following code is taken from the excellent OpenGL tutorials by NeHe:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION); // Select The Projection Matrix
glLoadIdentity(); // Reset The Projection Matrix
// Calculate The Aspect Ratio Of The Window
gluPerspective(45.0f,(GLfloat)width/(GLfloat)height,0.1f,100.0f);
glMatrixMode(GL_MODELVIEW); // Select The Modelview Matrix
glLoadIdentity(); // Reset The Modelview Matrix
Then, in your draw loop would look something like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear The Screen And The Depth Buffer
glLoadIdentity();
glTranslatef(-1.5f,0.0f,-6.0f); // Move Left 1.5 Units And Into The Screen 6.0
glBegin(GL_TRIANGLES); // Drawing Using Triangles
glVertex3f( 0.0f, 1.0f, 0.0f); // Top
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom Left
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right
glEnd();
Of course, you should alter this example code to your needs.
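For instance, adapting that draw loop to the spiral from the question - a sketch only, which assumes the gluPerspective setup above with zFar raised to about 1000 so the spiral's depth fits, a GLUT double-buffered window, and converts the angle to radians before calling cosf/sinf:
void RenderScene(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -500.0f);      // move the spiral into the view frustum
    glRotatef(-60.0f, 1.0f, 0.0f, 0.0f);    // tilt it so it doesn't degenerate into a circle
    glTranslatef(0.0f, 0.0f, -130.0f);      // center its z range [-50, 310] around the origin
    GLfloat z = -50.0f;
    glBegin(GL_LINE_STRIP);                 // connected line instead of isolated points
    for (GLfloat angle = 0.0f; angle < 360.0f; angle += 1.0f)
    {
        GLfloat rad = angle * 3.1415926f / 180.0f;            // cosf/sinf expect radians
        glVertex3f(50.0f * cosf(rad), 50.0f * sinf(rad), z);
        z += 1.0f;
    }
    glEnd();
    glutSwapBuffers();
}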
catchmeifyoutry provides a perfectly capable method, but it will not draw a spatially accurate 3D spiral, as any render call using a GL_LINE primitive type will rasterize to a fixed pixel width. This means that as you change your perspective / view, the lines will not change width. In order to accomplish this, use a geometry shader in combination with GL_LINE_STRIP_ADJACENCY to create 3D geometry that can be rasterized like any other 3D geometry. (This does require that you use the programmable rather than the fixed-function pipeline, however.)
I recommend you try catchmeifyoutry's method first, as it is much simpler. If you are not satisfied, try the method I described. You can use the following post as guidance:
http://prideout.net/blog/?tag=opengl-tron
Here is my Spiral function in C. The points are saved into a list which can be easily drawn by OpenGL (e.g. connect adjacent points in list with GL_LINES).
cx,cy ... spiral centre x and y coordinates
r ... max spiral radius
num_segments ... number of segments the spiral will have
SOME_LIST* UniformSpiralPoints(float cx, float cy, float r, int num_segments)
{
SOME_LIST *sl = newSomeList();
int i;
for(i = 0; i < num_segments; i++)
{
float theta = 2.0f * 3.1415926f * i / num_segments; //the current angle
float x = (r/num_segments)*i * cosf(theta); //the x component
float y = (r/num_segments)*i * sinf(theta); //the y component
//add (x + cx, y + cy) to list sl
}
return sl;
}
An example image with r = 1, num_segments = 1024:
P.S. There is a difference between using cos(double) and cosf(float).
You are using a float variable with the double function cos.

How to convert transformed coordinates to world coordinates in OpenGL?

I have a transformation set up in OpenGL like this:
glPushMatrix();
glTranslated(pntPos.X(), pntPos.Y(), pntPos.Z());
glRotated(dx, 1, 0, 0);
glRotated(dy, 0, 1, 0);
glRotated(dz, 0, 0, 1);
//I use this to Render a freely placeable textbox in 3d
//space which is based on the FTGL-Toolkit [1] (for TTF support).
m_FTLayout.Render(m_wcCaption, m_iCaptionSize);
glPopMatrix();
This works as expected.
However, I would like to calculate the 3D bounding box which bounds my text in world space. I know the coordinates of this bounding box relative to my GL transformation.
But I have a hard time calculating their world coordinates from these.
Could you give me some insight on how to retrieve the respective world coordinates of the four vertices of the bounding box?
[1] http://sourceforge.net/projects/ftgl/
The simplest way is to ask the GL what its current modelview matrix is, with glGetFloatv(GL_MODELVIEW_MATRIX, m), and transform your bounding box vertices by the resulting matrix.
        | m[0] m[4] m[8]  m[12] |   | v[0] |
M(v) =  | m[1] m[5] m[9]  m[13] | x | v[1] |
        | m[2] m[6] m[10] m[14] |   | v[2] |
        | m[3] m[7] m[11] m[15] |   | v[3] |
That will give you the view-space 4-D homogeneous position of your vertex.
Or, you could compute the modelview yourself, based on the math that is available in the man pages of glTranslated and glRotated.
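A small sketch of the first approach with GLM (corners stands for the FTGL bounding-box values; note that GL_MODELVIEW also contains the view transform, so the result is in eye space unless the view part is identity):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Call this while the matrix stack still holds the text box's transformation.
void boxCornersToEyeSpace(const glm::vec3 corners[4], glm::vec4 out[4])
{
    GLfloat m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);        // the current modelview matrix
    glm::mat4 modelview = glm::make_mat4(m);    // column-major, same layout as shown above
    for (int i = 0; i < 4; ++i)
        out[i] = modelview * glm::vec4(corners[i], 1.0f);
}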