Screen Coordinates to World Coordinates - C++

I want to convert from screen coordinates to world coordinates in OpenGL. I am using glm for that purpose (I am also using GLFW).
This is my code:
static void mouse_callback(GLFWwindow* window, int button, int action, int mods)
{
    if (button == GLFW_MOUSE_BUTTON_LEFT) {
        if (GLFW_PRESS == action) {
            int height = 768, width = 1024;
            double xpos, ypos, zpos;
            glfwGetCursorPos(window, &xpos, &ypos);
            glReadPixels(xpos, ypos, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &zpos);
            glm::mat4 m_projection = glm::perspective(glm::radians(45.0f), (float)(1024/768), 0.1f, 1000.0f);
            glm::vec3 win(xpos, height - ypos, zpos);
            glm::vec4 viewport(0.0f, 0.0f, (float)width, (float)height);
            glm::vec3 world = glm::unProject(win, mesh.getView() * mesh.getTransform(), m_projection, viewport);
            std::cout << "screen " << xpos << " " << ypos << " " << zpos << std::endl;
            std::cout << "world " << world.x << " " << world.y << " " << world.z << std::endl;
        }
    }
}
Now I have two problems. The first is that the world vector I get from glm::unProject has very small x, y and z values. If I use these values to translate the mesh, the mesh only moves slightly and doesn't follow the mouse pointer.
The second problem is that, as said in the glm docs (https://glm.g-truc.net/0.9.8/api/a00169.html#ga82a558de3ce42cbeed0f6ec292a4e1b3), the result is returned in object coordinates. So in order to convert screen to world coordinates I should use the transform matrix of one mesh, but what happens if I have many meshes and I want to convert from screen to world coordinates? Which model matrix should I multiply by the camera view matrix to form the ModelView matrix?

There are a couple of issues with this sequence:
glfwGetCursorPos(window, &xpos, &ypos);
glReadPixels(xpos, ypos, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &zpos);
[...]
glm::vec3 win(xpos,height - ypos, zpos);
Window space origin. glReadPixels is a GL function, and as such adheres to GL's conventions, with the origin being the lower-left pixel. While you flip to that convention for your win variable, you still use the wrong origin for reading the depth buffer.
Furthermore, your flipping is wrong. Since ypos should be in [0, height-1], the correct formula is height-1 - ypos, so you are also off by one here. (We will see later that this isn't exactly true either.)
"Screen Coordinates" vs. Pixel Coordinates. Your code assumes that the coordinates you get back from GLFW are in pixels. This is not the case. GLFW uses the concept of "virtual screen coordinates" which don't necessarily map to pixels:
Pixels and screen coordinates may map 1:1 on your machine, but they
won't on every other machine, for example on a Mac with a Retina
display. The ratio between screen coordinates and pixels may also
change at run-time depending on which monitor the window is currently
considered to be on.
GLFW generally provides two sizes for a window: glfwGetWindowSize returns the result in said virtual screen coordinates, while glfwGetFramebufferSize returns the actual size in pixels, which is what is relevant for OpenGL. So basically, you must query both sizes, and then you can appropriately scale the mouse coords from screen coordinates to the actual pixels you need.
Sub-pixel position. While glReadPixels addresses a specific pixel with integer coordinates, the whole transformation math works in floating point and can represent arbitrary sub-pixel positions. GL's window space is defined so that integer coordinates represent the corners of the pixels; the pixel centers lie at half-integer coordinates. Your win variable will represent the lower-left corner of said pixel, but the more useful convention is the pixel center, so you'd better add an offset of (0.5f, 0.5f, 0.0f) to win, assuming you point at the pixel center. (We can do a bit better if the virtual screen coords have a higher resolution than our pixels, which means we already get a sub-pixel position for the mouse cursor, but the math won't change, because we still have to switch to GL's convention where integer means border instead of integer means center.)

Note that since we now consider a space going from [0,w) in x and [0,h) in y, this also affects point 1: if you click at pixel (0,0), it will have the center (0.5, 0.5), and the y flipping should be h-y, so h-0.5 (which should be rounded down towards h-1 when accessing the framebuffer pixel).
To put it all together, you could do (conceptually):
glfwGetWindowSize(win, &screen_w, &screen_h); // better use the callback and cache the values
glfwGetFramebufferSize(win, &pixel_w, &pixel_h); // better use the callback and cache the values
glfwGetCursorPos(window, &xpos, &ypos);
glm::vec2 screen_pos=glm::vec2(xpos, ypos);
glm::vec2 pixel_pos=screen_pos * glm::vec2(pixel_w, pixel_h) / glm::vec2(screen_w, screen_h); // note: not necessarily integer
pixel_pos = pixel_pos + glm::vec2(0.5f, 0.5f); // shift to GL's center convention
glm::vec3 win=glm::vec3(pixel_pos.x, pixel_h-pixel_pos.y, 0.0f);
glReadPixels( (GLint)win.x, (GLint)win.y, ..., &win.z)
// ... unproject win
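A more complete version of the callback might look like this (a sketch, not drop-in code: camera.getView() is a placeholder for however you obtain your view matrix, and error handling is omitted):

static void mouse_callback(GLFWwindow* window, int button, int action, int mods)
{
    if (button != GLFW_MOUSE_BUTTON_LEFT || action != GLFW_PRESS)
        return;

    // query both sizes; in real code better use the callbacks and cache them
    int screen_w, screen_h, pixel_w, pixel_h;
    glfwGetWindowSize(window, &screen_w, &screen_h);
    glfwGetFramebufferSize(window, &pixel_w, &pixel_h);

    double xpos, ypos;
    glfwGetCursorPos(window, &xpos, &ypos);

    // scale from virtual screen coordinates to pixels, shift to pixel centers
    glm::vec2 pixel_pos = glm::vec2(xpos, ypos) * glm::vec2(pixel_w, pixel_h)
                        / glm::vec2(screen_w, screen_h);
    pixel_pos += glm::vec2(0.5f, 0.5f);

    glm::vec3 win(pixel_pos.x, pixel_h - pixel_pos.y, 0.0f);

    GLfloat depth; // must be a float, not a double, to match GL_FLOAT
    glReadPixels((GLint)win.x, (GLint)win.y, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    win.z = depth;

    glm::mat4 view = camera.getView(); // placeholder: your view matrix
    glm::mat4 proj = glm::perspective(glm::radians(45.0f),
                                      (float)pixel_w / (float)pixel_h,
                                      0.1f, 1000.0f);
    glm::vec4 viewport(0.0f, 0.0f, (float)pixel_w, (float)pixel_h);

    glm::vec3 world = glm::unProject(win, view, proj, viewport);
}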
Which model matrix should I multiply by the camera view matrix to form the ModelView matrix?
None. The basic coordinate transformation pipeline is
object space -> {MODEL} -> World Space -> {VIEW} -> Eye Space -> {PROJ} -> Clip Space -> {perspective divide} -> NDC -> {Viewport/DepthRange} -> Window Space
There is no model matrix influencing the way from world to window space, hence inverting it will also not depend on any model matrix either.
that as said in the glm docs (https://glm.g-truc.net/0.9.8/api/a00169.html#ga82a558de3ce42cbeed0f6ec292a4e1b3) the result is returned in object coordinates.
The math doesn't care about which spaces you transform between. The documentation mentions object space, and the function uses an argument named modelView, but what matrix you put there is totally irrelevant. Putting just view there will be fine.
So in order to convert screen to world coordinates I should use the transform matrix of one mesh.
Well, you could even do that. You could use any model matrix of any object, as long as the matrix isn't singular, and as long as you use the same matrix for the unproject as you later use for going from object space to world space. You can even make up a random matrix, if you make sure it is regular. (Well, there might be numerical issues if the matrix is ill-conditioned.) The key thing here is that when you specify (V*M) and P as the matrices for glm::unProject, it will internally calculate (V*M)^-1 * P^-1 * ndc_pos, which is M^-1 * V^-1 * P^-1 * ndc_pos. If you transform the result back from object space to world space, you multiply that by M again, resulting in M * M^-1 * V^-1 * P^-1 * ndc_pos, which is of course just V^-1 * P^-1 * ndc_pos, which you would have gotten directly if you hadn't put M into the unproject in the first place. You just added more computational work and introduced more potential for numerical issues...
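If you want to convince yourself of this, here is a small self-contained check (a sketch; the specific matrices are arbitrary): unprojecting with V*M and then transforming the result by M yields the same world point as unprojecting with V alone.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <iostream>

int main()
{
    glm::mat4 P = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 1000.0f);
    glm::mat4 V = glm::lookAt(glm::vec3(0, 0, 5), glm::vec3(0), glm::vec3(0, 1, 0));
    glm::mat4 M = glm::translate(glm::mat4(1.0f), glm::vec3(1, 2, 3)); // any regular model matrix

    glm::vec4 viewport(0, 0, 1024, 768);
    glm::vec3 win(512.5f, 384.5f, 0.7f); // some window-space point

    glm::vec3 world  = glm::unProject(win, V, P, viewport);     // view only
    glm::vec3 object = glm::unProject(win, V * M, P, viewport); // view * model
    glm::vec3 world2 = glm::vec3(M * glm::vec4(object, 1.0f));  // object back to world

    std::cout << glm::length(world - world2) << std::endl; // ~0 up to float precision
}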

Related

OpenGL: mouse move objects, object can't follow mouse

After picking an object with the mouse, I want to be able to move the object using the mouse. First I translate the mouse position to a world position, using glReadPixels() to read the depth of the object as the z value:
double xpos, ypos;
float zPercent;
glfwGetCursorPos(window_ptr, &xpos, &ypos);
float xPercent = (xpos + 0.5f) / scr_width_ * 2.0f - 1; // range is -1 to +1
float yPercent = (ypos + 0.5f) / scr_height_ * 2.0f - 1; // range is -1 to +1
yPercent = -yPercent;
glReadPixels(xpos, scr_height_ - ypos - 1, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &zPercent);
Then we move the mouse until we want to release the button:
last_position = (xPercent, yPercent, zPercent);
Finally we use the same z value and calculate the new x and y world positions:
current_position = (xPercent, yPercent, zPercent);
then we translate the object model:
model = glm::translate(model, current_position - last_position);
The issue is: the object's speed is not the same as the mouse's.
Waiting for your answer.
The problem is caused by the difference in coordinate systems between the screen and the world. Your cursor position is in "percent" (fraction of the screen width), but your objects are likely placed in some other coordinate system that is determined by your projection matrix (for example, in meters). Unless you are using an orthographic projection and the world-space coordinates of the object are in the same space as the screen-space coordinates, you will get different motion.
For example, even if you had an orthographic projection, it may be configured such that the screen maps to a region of game world that is 20 meters wide, so moving your mouse anywhere within the range [-1, 1] (across the full width of the screen) will only translate to the object moving over 1/10th of the screen.
Furthermore, you may be working with a perspective projection. In that case, not only will your world coordinates differ from the screen coordinates, but the mapping between screen coordinates and world coordinates is nonlinear, which will cause the object's motion to be distorted near the edges of the screen. You can correct for this by un-projecting your mouse cursor's screen-space coordinates back into world coordinates using glm::unProject. Here is a good explanation of this process.
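For illustration, a minimal sketch of that correction with glm (view, proj, and viewport are assumed to be available from your renderer; the names are placeholders): unproject both the previous and the current cursor position at the object's depth, and translate by the world-space difference.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// window coordinates -> world coordinates at a given depth-buffer value
glm::vec3 cursorToWorld(double xpos, double ypos, float depth,
                        const glm::mat4& view, const glm::mat4& proj,
                        const glm::vec4& viewport)
{
    // flip y to GL's bottom-left origin; depth was read once at pick time
    glm::vec3 win((float)xpos, viewport.w - (float)ypos, depth);
    return glm::unProject(win, view, proj, viewport);
}

// while dragging, per frame:
// glm::vec3 delta = cursorToWorld(x_now, y_now, depth, view, proj, vp)
//                 - cursorToWorld(x_last, y_last, depth, view, proj, vp);
// model = glm::translate(model, delta);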

Rotating 2D camera to space ship's heading in OpenGL (OpenTK)

The game is a top-down 2D space ship game -- think of "Asteroids."
Box2Dx is the physics engine and I extended the included DebugDraw, based on OpenTK, to draw additional game objects. Moving the camera so it's always centered on the player's ship and zooming in and out work perfectly. However, I really need the camera to rotate along with the ship so it's always facing in the same direction. That is, the ship will appear to be frozen in the center of the screen and the rest of the game world rotates around it as it turns.
I've tried adapting code samples, but nothing works. The best I've been able to achieve is a skewed and cut-off rendering.
Render loop:
// Clear.
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
// other rendering omitted (planets, ships, etc.)
this.OpenGlControl.Draw();
Update view -- centers on ship and should rotate to match its angle. For now, I'm just trying to rotate it by an arbitrary angle for a proof of concept, but no dice:
public void RefreshView()
{
    int width = this.OpenGlControl.Width;
    int height = this.OpenGlControl.Height;
    Gl.glViewport(0, 0, width, height);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();

    float ratio = (float)width / (float)height;
    Vec2 extents = new Vec2(ratio * 25.0f, 25.0f);
    extents *= viewZoom;

    // rotate the view
    var shipAngle = 180.0f; // just a test angle for proof of concept
    Gl.glRotatef(shipAngle, 0, 0, 0);

    Vec2 lower = this.viewCenter - extents;
    Vec2 upper = this.viewCenter + extents;

    // L/R/B/T
    Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
}
Now, I'm obviously doing this wrong. Angles of 0 and 180 degrees will keep it right-side-up or flip it, but any other angle will actually zoom it in/out or result in only blackness, nothing rendered. Below are examples:
If ship angle is 0.0f, then game world is as expected:
Degree of 180.0f flips it vertically... seems promising:
Degree of 45 zooms out and doesn't rotate at all... that's odd:
Degree of 90 returns all black. In case you've never seen black:
Please help!
Firstly, arguments 2-4 of glRotatef are the rotation axis, so please state them correctly, as noted by @pingul.
More importantly, the rotation is applied to the projection matrix:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
In this line your orthographic 2D projection matrix is multiplied with the previous rotation and applied to the projection matrix stack, which I believe is not what you want.
The solution would be to move your rotation call to a place after the model-view matrix mode is selected, as below:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0.0f, 0.0f, 1.0f);
And now your rotations will be applied to the model-view matrix stack (I believe this is the effect you want). Keep in mind that glRotatef() creates a rotation matrix and multiplies it with the matrix at the top of the currently selected stack.
I would also strongly suggest you move away from the fixed-function pipeline if possible, as suggested by @BDL.
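Put together, a corrected RefreshView could look like the sketch below (written with plain C-style GL calls rather than the wrapper; width, height, viewZoom, shipAngle, and viewCenter are fields assumed from the question, and rotating the world around the view center is my assumption about the intended camera behavior):

void RefreshView(void)
{
    glViewport(0, 0, width, height);

    // projection: plain orthographic extents, no rotation in here
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float ratio = (float)width / (float)height;
    gluOrtho2D(-ratio * 25.0f * viewZoom, ratio * 25.0f * viewZoom,
               -25.0f * viewZoom, 25.0f * viewZoom);

    // model-view: rotate the world around the ship, then center on it
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(shipAngle, 0.0f, 0.0f, 1.0f);           // z axis: the screen normal
    glTranslatef(-viewCenter.x, -viewCenter.y, 0.0f); // ship ends up at the origin
}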

OpenGL - Why is my ray picking not working?

I recently setup a project that uses OpenGL (Via the C# Wrapper Library OpenTK) which should do the following:
Create a perspective projection camera - this camera will be used to make the user rotate,move etc. to look at my 3d models.
Draw some 3d objects.
Use 3d ray picking via unproject to let the user pick points/models in the 3d view.
The last step (ray picking) looks OK in my 3D preview (GLControl) but returns invalid results like Vector3d (1,86460186949617; -45,4086124979203; -45,0387025610247). I have no idea why this is the case!
I am using the following code to setup my viewport:
this.RenderingControl.MakeCurrent();
int w = RenderingControl.Width;
int h = RenderingControl.Height;
// Use all of the glControl painting area
GL.Viewport(0, 0, w, h);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
Matrix4 p = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, w / (float)h, 0.1f, 64.0f);
GL.LoadMatrix(ref p);
I use this method for unprojecting:
/// <summary>
/// This method maps screen coordinates to viewport coordinates.
/// </summary>
/// <param name="screen"></param>
/// <param name="view"></param>
/// <param name="projection"></param>
/// <param name="view_port"></param>
/// <returns></returns>
private Vector3d UnProject(Vector3d screen, Matrix4d view, Matrix4d projection, int[] view_port)
{
    Vector4d pos = new Vector4d();
    // Map x and y from window coordinates, map to range -1 to 1
    pos.X = (screen.X - (float)view_port[0]) / (float)view_port[2] * 2.0f - 1.0f;
    pos.Y = (screen.Y - (float)view_port[1]) / (float)view_port[3] * 2.0f - 1.0f;
    pos.Z = screen.Z * 2.0f - 1.0f;
    pos.W = 1.0f;

    Vector4d pos2 = Vector4d.Transform(pos, Matrix4d.Invert(Matrix4d.Mult(view, projection)));
    Vector3d pos_out = new Vector3d(pos2.X, pos2.Y, pos2.Z);

    return pos_out / pos2.W;
}
I use this code to position my camera (including rotation) and do the ray picking:
// Clear buffers
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

// Apply camera
GL.MatrixMode(MatrixMode.Modelview);
Matrix4d mv = Matrix4d.LookAt(EyePosition, Vector3d.Zero, Vector3d.UnitY);
GL.LoadMatrix(ref mv);
GL.Translate(0, 0, ZoomFactor);

// Rotation animation
if (RotationAnimationActive)
{
    CameraRotX += 0.05f;
}
if (CameraRotX >= 360)
{
    CameraRotX = 0;
}
GL.Rotate(CameraRotX, Vector3.UnitY);
GL.Rotate(CameraRotY, Vector3.UnitX);

// Apply useful rotation
GL.Rotate(50, 90, 30, 0f);

// Draw Axes
drawAxes();

// Draw vertices of my 3d objects ...

// Picking Test
int x = MouseX;
int y = MouseY;

int[] viewport = new int[4];
Matrix4d modelviewMatrix, projectionMatrix;
GL.GetDouble(GetPName.ModelviewMatrix, out modelviewMatrix);
GL.GetDouble(GetPName.ProjectionMatrix, out projectionMatrix);
GL.GetInteger(GetPName.Viewport, viewport);

// get depth of clicked pixel
float[] t = new float[1];
GL.ReadPixels(x, RenderingControl.Height - y, 1, 1, OpenTK.Graphics.OpenGL.PixelFormat.DepthComponent, PixelType.Float, t);

var res = UnProject(new Vector3d(x, viewport[3] - y, t[0]), modelviewMatrix, projectionMatrix, viewport);

GL.Begin(BeginMode.Lines);
GL.Color3(Color.Yellow);
GL.Vertex3(0, 0, 0);
GL.Vertex3(res);
Debug.WriteLine(res.ToString());
GL.End();
I get the following result from my ray picker:
Clicked Position = (1,86460186949617; -45,4086124979203; -45,0387025610247)
This vector is shown as the yellow line on the attached screenshot.
Why are the Y and Z positions not in the range -1/+1? Where do values like -45 come from, and why is the ray still rendered correctly on the screen?
If you have only a tip about what could be broken I would also appreciate your reply!
Screenshot: (image omitted)
If you break down the transform from screen to world into individual matrices, print out the inverses of the M, V, and P matrices, and print out the intermediate result of each (matrix inverse) * (point) calculation from screen to world/model, then I think you'll see the problem. Or at least you'll see that there is a problem with using the inverse of the M-V-P matrix and then intuitively grasp the solution. Or maybe just read the list of steps below and see if that helps.
Here's the approach I've used:
1. Convert the 2D vector for the mouse position in screen/control/widget coordinates to the 4D vector (mouse.x, mouse.y, 0, 1).
2. Transform the 4D vector from screen coordinates to Normalized Device Coordinates (NDC) space. That is, multiply the inverse of your NDC-to-screen matrix [or equivalent equations] by (mouse.x, mouse.y, 0, 1) to yield a 4D vector in NDC coordinate space: (nx, ny, 0, 1).
3. In NDC coordinates, define two 4D vectors: the source (near point) of the ray as (nx, ny, -1, 1) and a far point at (nx, ny, +1, 1).
4. Multiply each 4D vector by the inverse of the (perspective) projection matrix.
5. Convert each 4D vector to a 3D vector (i.e. divide through by the fourth component, often called "w"). *
6. Multiply the 3D vectors by the inverse of the view matrix.
7. Multiply the 3D vectors by the inverse of the model matrix (which may well be the identity matrix).
8. Subtract the 3D vectors to yield the ray.
9. Normalize the ray.
10. Yee-haw. Go back and justify each step with math, if desired, or save that review for later [if ever] and work frantically towards catching up on creating actual 3D graphics and interaction and whatnot.
11. Go back and refactor, if desired.
(* The framework I use allows multiplication of a 3D vector by a 4x4 matrix because it treats the 3D vector as a 4D vector. I can make this more clear later, if necessary, but I hope the point is reasonably clear.)
That worked for me. This set of steps also works for Ortho projections, though with Ortho you could cheat and write simpler code since the projection matrix isn't goofy.
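As a sketch of those steps with glm (an assumed library choice; it folds steps 4-6 into one inverse of P*V, which is equivalent for an affine view matrix, and assumes the model matrix is the identity):

#include <glm/glm.hpp>

// Build a normalized world-space picking ray from a mouse position.
glm::vec3 rayFromMouse(float mouse_x, float mouse_y, float width, float height,
                       const glm::mat4& view, const glm::mat4& proj)
{
    // window -> NDC (flip y: window origin is top-left, NDC is bottom-left)
    float nx = 2.0f * mouse_x / width - 1.0f;
    float ny = 1.0f - 2.0f * mouse_y / height;

    glm::mat4 invVP = glm::inverse(proj * view);

    // near and far points of the ray in NDC, transformed back to world space
    glm::vec4 nearPoint = invVP * glm::vec4(nx, ny, -1.0f, 1.0f);
    glm::vec4 farPoint  = invVP * glm::vec4(nx, ny,  1.0f, 1.0f);
    nearPoint /= nearPoint.w; // perspective divide per point,
    farPoint  /= farPoint.w;  // before subtracting

    return glm::normalize(glm::vec3(farPoint - nearPoint));
}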
It's late as I write this and I may have misinterpreted your problem. I may have also misinterpreted your code since I use a different UI framework. But I know how aggravating ray casting for OpenGL can be, so I'm posting in the hope that at least some of what I write is useful, and that I can thereby alleviate some human misery.
Postscript. Speaking of misery: I found numerous forum posts and blog pages that address ray casting for OpenGL, but most posts start with some variant of the following: "First, you have to know X" [where X is not necessary to know]; or "Go look at the unproject function [in library X in repository Y for which you'll need client app Z . ..]"; or a particular favorite of mine: "Review a textbook on linear algebra."
Having to slog through yet another description of the OpenGL rendering pipeline or the OpenGL transformation conga line when you just need to debug ray casting--a common problem--is like having to listen to a lecture on hydraulics when you discover your brake pedal isn't working.

How to correctly represent 3D rotation in games

In most 3D platform games, only rotation around the Y axis is needed since the player is always positioned upright.
However, for a 3D space game where the player needs to be rotated on all axes, what is the best way to represent the rotation?
I first tried using Euler angles:
glRotatef(anglex, 1.0f, 0.0f, 0.0f);
glRotatef(angley, 0.0f, 1.0f, 0.0f);
glRotatef(anglez, 0.0f, 0.0f, 1.0f);
The problem I had with this approach is that after each rotation, the axes change. For example, when anglex and angley are 0, anglez rotates the ship around its wings; however, if anglex or angley are non-zero, this is no longer true. I want anglez to always rotate around the wings, irrespective of anglex and angley.
I read that quaternions can be used to achieve this desired behavior, but I was unable to get it working in practice.
I assume my issue is due to the fact that I am basically still using Euler angles, but am converting the rotation to its quaternion representation before usage.
struct quaternion q = eulerToQuaternion(anglex, angley, anglez);
struct matrix m = quaternionToMatrix(q);
glMultMatrix(&m);
However, if storing each X, Y, and Z angle directly is incorrect, how do I say "Rotate the ship around the wings (or any consistent axis) by 1 degree" when my rotation is stored as a quaternion?
Additionally, I want to be able to translate the model at the angle that it is rotated by. Say I have just a quaternion with q.x, q.y, q.z, and q.w, how can I move it?
Quaternions are a very good way to represent rotations because they are efficient, but I prefer to represent the full state, position and orientation, with 4x4 matrices.
So, imagine you have a 4x4 matrix for every object in the scene. Initially, when the object is unrotated and untranslated, this matrix is the identity matrix; this is what I will call the "original state". Suppose, for instance, the nose of your ship points towards -z in its original state, so a rotation matrix that spins the ship around the z axis is:
Matrix4 around_z(radian angle) {
    c = cos(angle);
    s = sin(angle);
    return Matrix4(c, -s, 0, 0,
                   s,  c, 0, 0,
                   0,  0, 1, 0,
                   0,  0, 0, 1);
}
Now, if your ship is anywhere in space and rotated in any direction, let's call this state t. If you want to spin the ship around the z axis by some angle as if it were in its "original state", it would be:
t = t * around_z(angle);
And when drawing with OpenGL, t is what you multiply every vertex of that ship by. This assumes you are using column vectors (as OpenGL does), and be aware that matrices in OpenGL are stored columns first.
Basically, your problem seems to be with the order in which you are applying your rotations. See, quaternion and matrix multiplication is non-commutative. So if, instead, you write:
t = around_z(angle) * t;
You will have the around_z rotation applied not to the "original state" z, but to the global coordinate z, with the ship already affected by the initial transformation (rotated and translated). This is the same thing that happens when you call the glRotate and glTranslate functions: the order in which they are called matters.
Being a little more specific for your problem: you have the absolute translation trans, and the rotation around its center rot. You would update each object in your scene with something like:
void update(quaternion delta_rot, vector delta_trans) {
    rot = rot * delta_rot;
    trans = trans + rot.apply(delta_trans);
}
Where delta_rot and delta_trans are both expressed in coordinates relative to the original state, so, if you want to propel your ship forward 0.5 units, your delta_trans would be (0, 0, -0.5). To draw, it would be something like:
void draw() {
    // Apply the absolute translation first
    glLoadIdentity();
    glTranslatevf(&trans);

    // Apply the absolute rotation last
    struct matrix m = quaternionToMatrix(rot);
    glMultMatrix(&m);

    // This sequence is equivalent to:
    // final_vertex_position = translation_matrix * rotation_matrix * vertex;

    // ... draw stuff
}
I chose the order of these calls by reading the manuals for glTranslate and glMultMatrix, to guarantee the order in which the transformations are applied.
About rot.apply()
As explained in the Wikipedia article Quaternions and spatial rotation, to apply a rotation described by a quaternion q to a vector p, compute rp = q * p * q^(-1), where rp is the newly rotated vector. If you have a working quaternion library in your game, you should either already have this operation implemented, or should implement it now, because it is the core of using quaternions as rotations.
For instance, if you have a quaternion that describes a rotation of 90° around (0,0,1), and you apply it to (1,0,0), you get the vector (0,1,0), i.e. the original vector rotated by the quaternion. This is equivalent to converting your quaternion to a matrix and doing a matrix times column-vector multiplication (by matrix multiplication rules, it yields another column vector: the rotated vector).
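A compact sketch of this scheme with glm's quaternion type (an assumed library choice; glm implements the q * p * q^(-1) rotation as operator* between a quat and a vec3):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Absolute state of one object: orientation plus position.
struct State {
    glm::quat rot{1.0f, 0.0f, 0.0f, 0.0f}; // identity (w, x, y, z)
    glm::vec3 trans{0.0f};

    // delta_rot and delta_trans are relative to the "original state"
    void update(const glm::quat& delta_rot, const glm::vec3& delta_trans) {
        rot = rot * delta_rot;       // spin as if in the original state
        trans += rot * delta_trans;  // rot.apply(): q * p * q^(-1)
    }
};

// e.g. roll 1 degree around the wings (local z) and move forward 0.5 units:
// ship.update(glm::angleAxis(glm::radians(1.0f), glm::vec3(0, 0, 1)),
//             glm::vec3(0, 0, -0.5f));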

How to tell the size of font in pixels when rendered with openGL

I'm working on the editor for Bitfighter, where we use the default OpenGL stroked font. We generally render the text with a line width of 2, but this makes smaller fonts less readable. What I'd like to do is detect when the font size will fall below some threshold and drop the line width to 1. The problem is, after all the transforms and such are applied, I don't know how to tell how tall (in pixels) a font of size <fontsize> will be rendered.
This is the actual inner rendering function:
if(---something--- < thresholdSizeInPixels)
    glLineWidth(1);

float scaleFactor = fontsize / 120;
glPushMatrix();
glTranslatef(x, y + (fix ? 0 : size), 0);
glRotatef(angle * radiansToDegreesConversion, 0, 0, 1);
glScalef(scaleFactor, -scaleFactor, 1);

for(S32 i = 0; string[i]; i++)
    OpenglUtils::drawCharacter(string[i]);

glPopMatrix();
Just before calling this, I want to check the height of the font, then drop the linewidth if necessary. What goes in the ---something--- spot?
Bitfighter is a pure old-school 2D game, so there are no fancy 3D transforms going on. All code is in C++.
My solution was to combine the first part of Christian Rau's solution with a fragment of the second. Basically, I can get the current scaling factor with this:
static float modelview[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview); // Fills modelview[]
float scalefact = modelview[0];
Then, I multiply scalefact by the fontsize in pixels, and multiply that by the ratio of windowHeight / canvasHeight to get the height in pixels that my text will be rendered.
That is...
textheight = scalefact * fontsize * windowHeight / canvasHeight
I also liked the idea of scaling the line thickness rather than stepping from 2 to 1 when a threshold is crossed. It all works very nicely now.
where we use the default OpenGL stroked font
OpenGL doesn't do fonts. There is no default OpenGL stroked font.
Maybe you are referring to GLUT and its glutStrokeCharacter function. Then please take note that GLUT is not part of OpenGL. It's an independent library, focused on providing a simplistic framework for small OpenGL demos and tutorials.
To answer your question: GLUT stroke fonts are defined in terms of vertices, so the usual transformations apply. Since usually all transformations are linear, you can simply transform the vector (0, base_height, 0) through modelview and projection, finally doing the perspective divide (gluProject does all this for you; GLU is not part of OpenGL either); the resulting vector is what you're looking for. Take the vector length for scaling the width.
This should be determinable rather easily. The font's size in pixels just depends on the modelview transformation (actually only the scaling part), the projection transformation (which is a simple orthographic projection, I suppose) and the viewport settings, and of course on the size of an individual character of the font in untransformed form (what goes into the glVertex calls).
So you just take the font's basic size (lets consider the height only and call it height) and first do the modelview transformation (assuming the scaling shown in the code is the only one):
height *= scaleFactor;
Next we do the projection transformation:
height /= (top-bottom);
with top and bottom being the values you used when specifying the orthographic transformation (e.g. using glOrtho). And last but not least we do the viewport transformation:
height *= viewportHeight;
with viewportHeight being, you guessed it, the height of the viewport specified in the glViewport call. The resulting height should be the height of your font in pixels. You can use this to somehow scale the line width (without an if), as the line width parameter is a float anyway; let OpenGL do the discretization.
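Collected into one helper, the three steps could look like this (a sketch; top, bottom, and viewportHeight are whatever you passed to glOrtho and glViewport, and fontHeight is the untransformed character height):

// height of the rendered text in pixels, assuming a scale-only
// modelview and a simple orthographic projection as described above
float fontPixelHeight(float fontHeight, float scaleFactor,
                      float top, float bottom, float viewportHeight)
{
    float h = fontHeight * scaleFactor; // modelview (scaling part only)
    h /= (top - bottom);                // projection (orthographic)
    return h * viewportHeight;          // viewport transform
}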
If your transformation pipeline is more complicated, you could use a more general approach using the complete transformation matrices, perhaps with the help of gluProject to transform an object-space point to a screen-space point:
double x0, x1, y0, y1, z;
double modelview[16], projection[16];
int viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
gluProject(0.0, 0.0, 0.0, modelview, projection, viewport, &x0, &y0, &z);
gluProject(fontWidth, fontHeight, 0.0, modelview, projection, viewport, &x1, &y1, &z);
x1 -= x0;
y1 -= y0;
fontScreenSize = sqrt(x1*x1 + y1*y1);
Here I took the diagonal of the character and not only the height, to better account for rotations, and I used the origin as the reference value to ignore translations.
You might also find the answers to this question interesting, which give some more insight into OpenGL's transformation pipeline.