Converting quaternions to Euler angles: problems with the range of the Y angle - C++

I'm trying to write a 3D simulation in C++ using Irrlicht as the graphics engine and ODE for physics. To pass rotations from ODE to Irrlicht, I convert ODE quaternions to Irrlicht Euler angles with this function:
void QuaternionToEuler(const dQuaternion quaternion, vector3df &euler)
{
    dReal w, x, y, z;
    w = quaternion[0];
    x = quaternion[1];
    y = quaternion[2];
    z = quaternion[3];

    double sqw = w*w;
    double sqx = x*x;
    double sqy = y*y;
    double sqz = z*z;

    // Tait-Bryan style extraction; the asin() term limits Y to [-90, 90]
    euler.Z = (irr::f32) (atan2(2.0 * (x*y + z*w), (sqx - sqy - sqz + sqw)) * (180.0f/irr::core::PI));
    euler.X = (irr::f32) (atan2(2.0 * (y*z + x*w), (-sqx - sqy + sqz + sqw)) * (180.0f/irr::core::PI));
    euler.Y = (irr::f32) (asin(-2.0 * (x*z - y*w)) * (180.0f/irr::core::PI));
}
It works fine for drawing in the correct position and rotation, but the problem comes from the asin call: it only returns values in the range -90..90, and I need a range of 0..360 degrees. At the very least, I need node->getRotation().Y to give me a rotation in the range 0..360.

Euler angles (of any type) have a singularity. In the case of those particular Euler angles that you are using (which look like Tait-Bryan angles, or some variation thereof), the singularity is at plus-minus 90 degrees of pitch (Y). This is an inherent limitation with Euler angles and one of the prime reasons why they are rarely used in any serious context (except in aircraft dynamics because all aircraft have a very limited ability to pitch w.r.t. their velocity vector (which might not be horizontal), so they rarely come anywhere near that singularity).
This also means that your calculation is actually just one of two equivalent solutions. For a given quaternion, there are two solutions for Euler angles that represent that same rotation, one on one side of the singularity and another that mirrors the first. Since both solutions are equivalent, you just pick the one on the easiest side, i.e., where the pitch is between -90 and 90 degrees.
Also, your code needs to deal with approaching the singularity in order to avoid getting NaN. In other words, you must check if you are getting close (within a small tolerance) to the singular points (-90 and 90 degrees of pitch), and if so, use an alternate formula (which can only compute one angle that best approximates the rotation).
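For illustration, here is a minimal sketch of such a guarded conversion using plain C++ doubles rather than the ODE/Irrlicht types; the 0.4999 tolerance and the gimbal-lock fallback are conventional choices, not taken from the question's code, and the angles come out in radians (scale by 180/PI as the original does):

#include <cmath>

// Sketch only: same Z-Y-X extraction as in the question, but guarding
// the asin() singularity at pitch = +/-90 degrees.
void QuaternionToEulerSafe(double w, double x, double y, double z,
                           double &ex, double &ey, double &ez) // radians
{
    const double PI = 3.14159265358979323846;
    double test = y*w - x*z;                // = sin(pitch) / 2
    if (test > 0.4999 || test < -0.4999) {
        // Gimbal lock: pitch is near +/-90 degrees; yaw and roll collapse
        // into one rotation, so fold everything into Z and zero out X.
        ey = (test > 0 ? PI : -PI) / 2.0;
        ez = std::atan2(2.0*(z*w - x*y), w*w - x*x + y*y - z*z);
        ex = 0.0;
    } else {
        ez = std::atan2(2.0*(x*y + z*w), w*w + x*x - y*y - z*z);
        ex = std::atan2(2.0*(y*z + x*w), w*w - x*x - y*y + z*z);
        ey = std::asin(2.0 * test);
    }
}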
If there is any way for you to avoid using Euler angles altogether, I highly suggest that you do; pretty much any representation of rotations is preferable to Euler angles. Irrlicht uses matrices natively and also supports setting/getting rotations via an axis-angle representation, which is much nicer to work with (and much easier to obtain from a quaternion, and doesn't have singularities).

Think about the Earth's globe. Each point on it can be defined using only a latitude (in the range [-90, 90]) and a longitude (in the range [-180, 180]), so every point on a sphere can be specified with these two angles. A point on a sphere defines a vector, and all points on the sphere cover all possible vectors. So, just as pointed out in this article, the formula you use will generate all possible directions.
Hope this helps.

Related

Robust atan(y,x) on GLSL for converting XY coordinate to angle

In GLSL (specifically the 3.00 version that I'm using), there are two versions of atan(): atan(y_over_x) can only return angles between -PI/2 and PI/2, while atan(y,x) can take all four quadrants into account, so the angle range covers everything from -PI to PI, much like atan2() in C++.
I would like to use the second atan to convert XY coordinates to angle.
However, atan() in GLSL, besides not being able to handle x = 0, is not very stable. Especially when x is close to zero, the division can overflow, resulting in an opposite angle (you get something close to -PI/2 where you were supposed to get approximately PI/2).
What is a good, simple implementation that we can build on top of GLSL atan(y,x) to make it more robust?
I'm going to answer my own question to share my knowledge. We first notice that the instability happens when x is near zero. However, we can also express that as abs(x) << abs(y). So first we divide the plane (assuming we are on a unit circle) into two regions: one where |x| <= |y| and another where |x| > |y|.
We know that atan(x,y) is much more stable in the first region (|x| <= |y|): when x is close to zero we simply have something close to atan(0.0), which is very stable numerically, while the usual atan(y,x) is more stable in the second region (|x| > |y|). You can also convince yourself that this relationship:
atan(x,y) = PI/2 - atan(y,x)
holds for all (x,y) except the origin, where it is undefined, and we are talking about the atan(y,x) that can return angle values in the entire range -PI..PI, not atan(y_over_x), which only returns angles between -PI/2 and PI/2. Therefore, our robust atan2() routine for GLSL is quite simple:
float atan2(in float y, in float x)
{
    bool s = (abs(x) > abs(y));
    return mix(PI/2.0 - atan(x,y), atan(y,x), s);
}
As a side note, the identity for the mathematical function atan(x) is actually:
atan(x) + atan(1/x) = sgn(x) * PI/2
which is true because its range is (-PI/2, PI/2).
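If you want to convince yourself numerically, here is a quick C++ check of that identity (illustration only, not part of the GLSL answer):

#include <cmath>
#include <cstdio>

// Numeric check of atan(t) + atan(1/t) = sgn(t)*pi/2 for a few values of t.
int main()
{
    const double PI = 3.14159265358979323846;
    for (double t : {0.1, 0.9, 2.0, 17.0, -0.3, -5.0}) {
        double lhs = std::atan(t) + std::atan(1.0 / t);
        double rhs = (t > 0 ? 1.0 : -1.0) * PI / 2.0;
        std::printf("t=%6.2f  lhs=%+.6f  rhs=%+.6f\n", t, lhs, rhs);
    }
    return 0;
}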
Depending on your targeted platform, this might be a solved problem. The OpenGL spec for atan(y, x) specifies that it should work in all quadrants, leaving behavior undefined only when x and y are both 0.
So one would expect any decent implementation to be stable near all axes, as this is the whole purpose behind 2-argument atan (or atan2).
The questioner/answerer is correct in that some implementations do take shortcuts. However, the accepted solution makes the assumption that a bad implementation will always be unstable when x is near zero: on some hardware (my Galaxy S4 for example) the value is stable when x is near zero, but unstable when y is near zero.
To test your GLSL renderer's implementation of atan(y,x), here's a WebGL test pattern. Follow the link below; as long as your OpenGL implementation is decent, you should see the correct pattern.
Test pattern using native atan(y,x): http://glslsandbox.com/e#26563.2
If all is well, you should see 8 distinct colors (ignoring the center).
The linked demo samples atan(y,x) for several values of x and y, including 0, very large, and very small values. The central box is atan(0.,0.)--undefined mathematically, and implementations vary. I've seen 0 (red), PI/2 (green), and NaN (black) on hardware I've tested.
Here's a test page for the accepted solution. Note: the host's WebGL version lacks mix(float,float,bool), so I added an implementation that matches the spec.
Test pattern using atan2(y,x) from accepted answer: http://glslsandbox.com/e#26666.0
Your proposed solution still fails in the case x=y=0. Here both of the atan() functions return NaN.
Further, I would not rely on mix to switch between the two cases. I am not sure how it is implemented/compiled, but the IEEE float rules for x*NaN and x+NaN yield NaN again. So if your compiler really implemented mix as interpolation, the result would be NaN for x=0 or y=0.
Here is another fix which solved the problem for me:
float atan2(in float y, in float x)
{
    return x == 0.0 ? sign(y)*PI/2 : atan(y, x);
}
When x=0 the angle can be ±π/2; which of the two depends on y only. If y=0 too, the angle can be arbitrary (the vector has length 0), and sign(y) returns 0 in that case, which is just fine.
Sometimes the best way to improve the performance of a piece of code is to avoid calling it in the first place. For example, one of the reasons you might want to determine the angle of a vector is so that you can use this angle to construct a rotation matrix using combinations of the angle's sine and cosine. However, the sine and cosine of that angle are already hidden in plain sight inside the vector itself. All you need to do is create a normalized version of the vector by dividing each coordinate by the total length of the vector. Here's a two-dimensional example that calculates the sine and cosine of the angle of the vector [ x y ]:
double length = sqrt(x*x + y*y);
double cos = x / length;
double sin = y / length;
Once you have the sine and cosine values, you can directly populate a rotation matrix with them to perform a clockwise or counterclockwise rotation of arbitrary vectors by the same angle, or you can concatenate a second rotation matrix to rotate to an angle other than zero. In this case, you can think of the rotation matrix as "normalizing" the angle to zero for an arbitrary vector. This approach extends to the three-dimensional (or N-dimensional) case as well, although, for example, you will have three angles and six sin/cos values to calculate (one angle per plane) for a 3D rotation.
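As a hedged illustration of populating the matrix directly (the function name and layout here are my own, not from the answer):

#include <cmath>

// Sketch: rotate point (px, py) by the angle of vector (x, y) without ever
// computing that angle; cosA and sinA come straight from the normalized vector.
void rotateByVectorAngle(double x, double y, double px, double py,
                         double &outX, double &outY)
{
    double length = std::sqrt(x*x + y*y);   // assumed non-zero
    double cosA = x / length;
    double sinA = y / length;
    // standard counterclockwise rotation matrix applied to (px, py)
    outX = cosA * px - sinA * py;
    outY = sinA * px + cosA * py;
}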
In situations where you can use this approach, you get a big win by bypassing the atan calculation completely, which is possible since the only reason you wanted to determine the angle was to calculate the sine and cosine values. By skipping the conversion to angle space and back, you not only avoid worrying about division by zero, but you also improve precision for angles which are near the poles and would otherwise suffer from being multiplied/divided by large numbers. I've successfully used this approach in a GLSL program which rotates a scene to zero degrees to simplify a computation.
It can be easy to get so caught up in an immediate problem that you can lose sight of why you need this information in the first place. Not that this works in every case, but sometimes it helps to think out of the box...
A formula that gives the angle in all four quadrants for any values of the coordinates x and y (for x=y=0 the result is undefined), written in spreadsheet syntax:
f(x,y) = pi() - pi()/2*(1+sign(x))*(1-sign(y^2)) - pi()/4*(2+sign(x))*sign(y)
         - sign(x*y)*atan((abs(x)-abs(y))/(abs(x)+abs(y)))
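For readers who want to check it, here is a hedged C++ transcription (the sgn() helper is mine; as transcribed, the formula appears to return angles in [0, 2*pi) rather than (-pi, pi]):

#include <cmath>

// sgn() is a helper of mine; it is not in <cmath>.
double sgn(double v) { return (v > 0) - (v < 0); }

// Direct transcription of the spreadsheet formula above.
// Undefined for x = y = 0 (it yields NaN there).
double fourQuadrantAngle(double x, double y)
{
    const double PI = 3.14159265358979323846;
    return PI
         - PI/2.0 * (1.0 + sgn(x)) * (1.0 - sgn(y*y))
         - PI/4.0 * (2.0 + sgn(x)) * sgn(y)
         - sgn(x*y) * std::atan((std::fabs(x) - std::fabs(y))
                              / (std::fabs(x) + std::fabs(y)));
}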

Rotation matrix to euler angles

I use the following code to convert a 3x3 rotation matrix to angles (_r is a double[9]):
double angleZ=atan2(_r[3], _r[4])* (float) (180.0 / CV_PI);
double angleX=180-asin(-1*_r[5])* (float) (180.0 / CV_PI);
double angleY=180-atan2(_r[2],_r[8])* (float) (180.0 / CV_PI);
here is a little helper
_r[0] _r[1] _r[2]
_r[3] _r[4] _r[5]
_r[6] _r[7] _r[8]
Does this make any sense? Because the angles seem too... interdependent? X, Y and Z all react to a single pose change...
The rotation matrix is received from OpenCV's cvPOSIT function, so the points of interest might be wrong and giving this confusing effect...
but somehow I think I'm just doing the conversion wrong :)
I am applying the angles to a cube in OpenGL:
glRotatef(angleX,1.0f,0.0f,0.0f);
glRotatef(angleY,0.0f,1.0f,0.0f);
glRotatef(angleZ,0.0f,0.0f,1.0f);
What you are trying to accomplish is not as easy as you might think. There are multiple conventions as to what the Euler angles are called (x, y, z, alpha, beta, gamma, yaw, pitch, roll, heading, elevation, bank, ...) and in which order they need to be applied.
There are also some problems with ambiguities in certain positions; see the Wikipedia article on gimbal lock.
Please read the Euler Angle Formulas document by David Eberly. It's very useful and includes a lot of formulas for the various conventions, and you probably should base your code on them if you want formulas that are stable even in the corner cases.
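To make the convention issue concrete, here is a hedged sketch of one standard extraction (assuming R = Rz * Ry * Rx, angles in radians); note it differs from the formulas in the question, and Eberly's document lists the variants for every other ordering:

#include <algorithm>
#include <cmath>

// r is row-major, as in the question's helper:
//   r[0] r[1] r[2]
//   r[3] r[4] r[5]
//   r[6] r[7] r[8]
void matrixToEulerZYX(const double r[9],
                      double &angleX, double &angleY, double &angleZ)
{
    // Clamp so numerical noise can't push asin() into NaN.
    double sy = std::max(-1.0, std::min(1.0, -r[6]));
    angleY = std::asin(sy);

    if (std::fabs(sy) < 0.9999) {        // away from gimbal lock
        angleZ = std::atan2(r[3], r[0]);
        angleX = std::atan2(r[7], r[8]);
    } else {                             // gimbal lock: fold X into Z
        angleZ = std::atan2(-r[1], r[4]);
        angleX = 0.0;
    }
}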

Quaternion - Rotate To

I have some object in world space, let's say at (0,0,0) and want to rotate it to face (10,10,10).
How do I do this using quaternions?
This question doesn't quite make sense. You said that you want an object to "face" a specific point, but that doesn't give enough information.
First, what does it mean to face that direction? In OpenGL, it means that the -z axis in the local reference frame is aligned with the specified direction in some external reference frame. In order to make this alignment happen, we need to know what direction the relevant axis of the object is currently "facing".
However, that still doesn't define a unique transformation. Even if you know what direction to make the -z axis point, the object is still free to spin around that axis. This is why the function gluLookAt() requires that you provide an 'at' direction and an 'up' direction.
The next thing that we need to know is what format does the end-result need to be in? The orientation of an object is often stored in quaternion format. However, if you want to graphically rotate the object, then you might need a rotation matrix.
So let's make a few assumptions. I'll assume that your object is centered at the world's point c and has the default alignment. I.e., the object's x, y, and z axes are aligned with the world's x, y, and z axes. This means that the orientation of the object, relative to the world, can be represented as the identity matrix, or the identity quaternion: [1 0 0 0] (using the quaternion convention where w comes first).
If you want the shortest rotation that will align the object's -z axis with point p:=[p.x p.y p.z], then you will rotate by φ around axis a. Now we'll find those values. First we find axis a by normalizing the vector p-c and then taking the cross-product with the unit-length -z vector and then normalizing again:
a = normalize( crossProduct(-z, normalize(p-c) ) );
The shortest angle between those two unit vectors is found by taking the inverse cosine of their dot-product:
φ = acos( dotProduct(-z, normalize(p-c) ));
Unfortunately, this is a measure of the absolute value of the angle formed by the two vectors. We need to figure out if it's positive or negative when rotating around a. There must be a more elegant way, but the first way that comes to mind is to find a third axis, perpendicular to both a and -z, and then take the sign from its dot-product with our target axis. Viz:
b = crossProduct(a, -z );
if ( dotProduct(b, normalize(p-c) )<0 ) φ = -φ;
Once we have our axis and angle, turning it into a quaternion is easy:
q = [cos(φ/2) sin(φ/2)a];
This new quaternion represents the new orientation of the object. It can be converted into a matrix for rendering purposes, or you can use it to directly rotate the object's vertices, if desired, using the rules of quaternion multiplication.
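Putting the answer's steps together, here is a hedged, self-contained C++ sketch (the Vec3/Quat types and helpers are illustrative stand-ins for whatever math library you use):

#include <cmath>

struct Vec3 { double x, y, z; };
struct Quat { double w, x, y, z; };   // w-first convention, as above

static double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(const Vec3 &v) {
    double len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Shortest rotation taking the object's -z axis onto the direction from c to p.
// Assumes p-c is not parallel to the z axis (the cross product degenerates there).
Quat rotateTo(const Vec3 &c, const Vec3 &p)
{
    const Vec3 minusZ = { 0.0, 0.0, -1.0 };
    Vec3 dir = normalize({ p.x - c.x, p.y - c.y, p.z - c.z });
    Vec3 a   = normalize(cross(minusZ, dir));   // rotation axis
    double phi = std::acos(dot(minusZ, dir));   // unsigned angle
    // The answer's sign test; with an axis built from cross(-z, dir) it
    // never actually fires, but it is kept here to mirror the derivation.
    Vec3 b = cross(a, minusZ);
    if (dot(b, dir) < 0.0) phi = -phi;
    double s = std::sin(phi / 2.0);
    return { std::cos(phi / 2.0), a.x*s, a.y*s, a.z*s };
}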
An example of calculating the Quaternion that represents the rotation between two vectors can be found in the OGRE source code for the Ogre::Vector3 class.
In response to your clarification, and to just answer this, I've shamelessly copied a very interesting and neat algorithm for finding the quaternion between two vectors, one which I have never seen before, from here. Mathematically it seems valid, and since your question is about the mathematics behind it, I'm sure you'll be able to convert this pseudocode into C++.
quaternion q;
vector3 c = cross(v1, v2);
q.v = c;
if ( vectors are known to be unit length ) {
    q.w = 1 + dot(v1, v2);
} else {
    q.w = sqrt(v1.length_squared() * v2.length_squared()) + dot(v1, v2);
}
q.normalize();
return q;
Let me know if you need help clarifying any bits of that pseudocode. Should be straightforward though.
dot(a,b) = a1*b1 + a2*b2 + ... + an*bn
and
cross(a,b) = well, the cross product. It's annoying to type out and can be found anywhere.
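If it helps, here is a hedged C++ reading of that pseudocode (the struct types are minimal stand-ins so the snippet stands alone; the degenerate opposite-vectors case is left to the caller, as in the pseudocode):

#include <cmath>

struct V3 { double x, y, z; };
struct Q4 { double w, x, y, z; };

Q4 quatBetween(const V3 &v1, const V3 &v2)
{
    // cross(v1, v2)
    V3 c = { v1.y*v2.z - v1.z*v2.y,
             v1.z*v2.x - v1.x*v2.z,
             v1.x*v2.y - v1.y*v2.x };
    double d  = v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;     // dot(v1, v2)
    double l1 = v1.x*v1.x + v1.y*v1.y + v1.z*v1.z;     // v1.length_squared()
    double l2 = v2.x*v2.x + v2.y*v2.y + v2.z*v2.z;     // v2.length_squared()
    Q4 q = { std::sqrt(l1 * l2) + d, c.x, c.y, c.z };  // general (non-unit) case
    double len = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    // len ~ 0 when v1 and v2 point in exactly opposite directions;
    // that degenerate case is the caller's problem, as noted above.
    return { q.w/len, q.x/len, q.y/len, q.z/len };
}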
You may want to use SLERP (Spherical Linear Interpolation). See this article for a reference on how to do it in C++.

Confused about degrees and OpenGL/GLUT camera movement/rotation

NOTICE: I have edited the question below to be more relevant to my real issue than the text immediately following; you can skip this part if you want, but I'll leave it here for historic reasons.
To see if I get this right, a float in C is the same as a value in radians right? I mean, 360º = 6.28318531 radians and I just noticed on my OpenGL app that a full rotation goes from 0.0 to 6.28, which seems to add up correctly. I just want to make sure I got that right.
I'm using a float (let's call it anglePitch) from 0.0 to 360.0 (it's easier to read in degrees and avoids casting int to float all the time), and all the code I see on the web uses some kind of DEG2RAD() macro defined as 3.141593f / 180. In the end it would be something like this:
anglePitch += direction * 1; // direction will be 1 or -1
refY = tan(anglePitch * DEG2RAD);
This does produce a full rotation, but that full rotation happens when anglePitch = 180 and anglePitch * DEG2RAD = 3.14, whereas a full rotation should be 360° / 6.28. If I change the macro to either of the following:
#define DEG2RAD 3.141593f / 360
#define DEG2RAD 3.141593f / 2 / 180
It works as expected, a full rotation will happen when anglePitch = 360.
What am I missing here and what should I use to properly convert angles to radians/floats?
IMPORTANT EDIT (REAL QUESTION):
I understand now the code I see everywhere on the web about DEG2RAD, I'm just too dumb at math (yeah, I know, it's important when working with this kind of stuff). So I'm going to rephrase my question:
I have now added this to my code:
#define PI 3.141592654f
#define DEG2RAD(d) (d * PI / 180)
Now, when working with the pitch/yaw angles in degrees, which are floats (once again, to avoid casting all the time), I just use the DEG2RAD macro and the degree value is correctly converted to radians. These values are passed to the sin/cos/tan functions, which return the proper values to be used in the GLUT camera.
Now the real question, where I was really confused before but couldn't explain myself better:
angleYaw += direction * ROTATE_SPEED;
refX = sin(DEG2RAD(angleYaw));
refZ = -cos(DEG2RAD(angleYaw));
This code is executed when I press the LEFT/RIGHT keys, and the camera rotates around the Y axis accordingly. A full rotation goes from 0º to 360º.
anglePitch += direction * ROTATE_SPEED;
refY = tan(DEG2RAD(anglePitch));
This is similar code, executed when I press the UP/DOWN keys so the camera rotates around the X axis. But in this situation a full rotation goes from 0º to 180º, and that's what's really confusing me. I'm sure it has something to do with the tangent function, but I can't get my head around it.
Is there a way I could use sin/cos (as I do in the yaw code) to achieve the same rotation? What is the right way, the simplest code I can add/fix, to get a full pitch rotation from 0º to 360º?
360° = 2 * Pi, Pi = 3.141593…
Radians are defined by the arc length of an angle along a circle of radius 1. The circumference of a circle is 2*r*Pi, so one full turn on a unit circle has an arc length of 2*Pi = 6.28…
The measure of angles in degrees stems from the fact that by aligning 6 equilateral triangles you span a full turn. So we have 6 triangles, each making up a 6th of the turn, so the old Babylonians divided a circle into pieces of 1/(6*6) = 1/36, and to refine it further this was subdivided by 10. That's why we ended up with 360° in a full circle. This number was arbitrarily chosen, though.
So if there are 2*Pi radians per 360°, the conversion factor from degrees to radians is Pi/180° = 3.141593…/180, and the reciprocal, 180°/Pi = 180/3.141593…, converts radians to degrees.
Why on earth the old OpenGL function glRotate and GLU's gluPerspective used degrees instead of radians I cannot fathom. From a mathematical point of view only radians make sense. Which I think is most beautifully demonstrated by Euler's equation
e^(i*Pi) + 1 = 0
There you have it, all the important numbers of mathematics in one single equation. What's this got to do with angles? Well:
e^(i*alpha) = cos(alpha) + i * sin(alpha), alpha is in radians!
EDIT, with respect to modified question:
Your angles being floats is perfectly fine. Why you would even think degrees must be integers I cannot understand. Normally you don't have to define PI yourself; it comes predefined in math.h, usually as M_PI and M_PI_2 for Pi and Pi/2. You should also change your macro; the way it's written now can create strange effects.
#define DEG2RAD(d) ( (d) * M_PI/180. )
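To make those "strange effects" concrete, here is the classic failure mode of the unparenthesized macro (purely illustrative):

#define PI 3.141592654f
#define DEG2RAD_BAD(d)  (d * PI / 180)      // the original macro
#define DEG2RAD_GOOD(d) ((d) * PI / 180)    // the fixed macro

// DEG2RAD_BAD(90 + 90)  expands to (90 + 90 * PI / 180), about 91.57  -- wrong
// DEG2RAD_GOOD(90 + 90) expands to ((90 + 90) * PI / 180), about 3.14 -- right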
GLUT has no camera at all. GLUT is a rather dumb OpenGL framework I recommend not using. You probably refer to gluLookAt.
Those obstacles out of the way, let's see what you're doing there. Remember that trigonometric functions operate on the unit circle. Let the angle 0 point towards the right and angles increase counterclockwise. Then sin(a) is defined as the amount of rightwards, and cos(a) the amount of forwards, needed to reach the point at angle a on the unit circle. This is what refX and refZ are getting assigned.
refY however makes no sense written that way. tan = sin/cos, so as we approach pi/2 + n*pi (i.e. 90°) it diverges to +/- infinity. At least it explains your 180° cyclic range, because that's the period of tan.
I was first thinking that tan may have been used to normalize the direction vector, but that didn't make sense either; the factor would have been 1/sqrt(sin²(pitch) + 1).
I double checked: using tan there does the right thing.
EDIT2: I don't see where your problem is: the pitch angle goes from -90° to +90°, which makes perfect sense. Go get yourself a globe (of the Earth): the east-west coordinate (longitude) goes from -180° to +180°, and the south-north coordinate (latitude) goes from -90° to +90°. Think about it: any larger coordinate range would create ambiguities.
The only good suggestion I can offer is: grab some math textbook and bend your mind around spherical coordinates! Sorry to put it that way. Whatever you have works perfectly fine; you just need to understand spherical geometry.
You're using the terms yaw and pitch. Those are normally used with Euler angles. Unfortunately Euler angles, though compelling at first, cause serious trouble later on (like gimbal lock). You should not use them at all. It may also be a good idea to use a pencil/sticks/whatever to decompose with your hands the rotations you're intending, to understand their mechanics.
And by the way: there are also non-integer degrees. Just hop over to http://maps.google.com to see them in action (just select some place and let http://maps.google.com give you the link to it).
'float' is a type, like int or double; radians and degrees are units of measure, both of which can be represented with any precision you want. I.e., there's no reason you can't have 22.5 degrees and keep that value in a float.
a full rotation in radians is 2*pi, about 6.283, whereas a full rotation in degrees is 360. You can convert between them by dividing out the starting unit's full circle, then multiplying by the desired unit's full circle.
for example, to get from 90 degrees to radians, first divide out the degrees. 90 over 360 is 0.25 (note this value is in 'revolutions'). Now multiply that 0.25 by 6.283 to arrive at 1.571 radians.
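In code, that recipe might look like this (illustrative helpers, not from the answer):

// Helpers following the "divide out the full circle" recipe above.
const double TWO_PI = 6.283185307179586;

double deg_to_rad(double deg) { return deg / 360.0 * TWO_PI; }
double rad_to_deg(double rad) { return rad / TWO_PI * 360.0; }

// deg_to_rad(90.0): 90/360 = 0.25 revolutions, times 6.283 = 1.571 radians.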
follow up
the reason you're seeing your pitch cycle twice as fast as it should is precisely because you're using tan(pitch) to compute the Y component. What you want is a Y component that depends on sin(pitch). I.e., try changing
refY = tan(DEG2RAD(anglePitch));
to
refY = sin(DEG2RAD(anglePitch));
a technical detail: the numbers that go into the look matrix should all be in the range of -1 to +1, and if you were to inspect the values you're feeding to refY, and run your pitch outside of -45 to +45 degrees, you'd see the problem; tan() runs off to infinity at +/-90 degrees.
also, note that casting a value from int to float in no sense converts between degrees and radians. casting just gives you the nearest equivalent value in the new storage type. for example, if you cast the integer 22 to floating point, you get 22.0f, whereas if you cast 33.3333f to type int, you'd be left with 33. when working with angles, you really should just stick with floating point, unless you're constrained by working with an embedded processor or something. this is especially important with radians, where whole number increments represent leaps of (about) 57.3 degrees.
Assuming that your ref components are intended to be used as your look-at vector, I think what you need is
refY = sin(DEG2RAD(anglePitch));
XZfactor = cos(DEG2RAD(anglePitch));
refX = XZfactor*sin(DEG2RAD(angleYaw));
refZ = -XZfactor*cos(DEG2RAD(angleYaw));

Optimizing a pinhole camera rendering system

I'm making a software rasterizer for school, and I'm using an unusual rendering method instead of traditional matrix calculations. It's based on a pinhole camera. I have a few points in 3D space, and I convert them to 2D screen coordinates by taking the vector between each point and the camera and normalizing it:
Vec3 ray_to_camera = (a_Point - plane_pos).Normalize();
This gives me a directional vector towards the camera. I then turn that direction into a ray by placing the ray's origin on the camera and performing a ray-plane intersection with a plane slightly behind the camera.
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);

float dot = ray_to_camera.GetDotProduct(m_Direction);
if (dot < 0)
{
    float time = (-m_ScreenDistance - plane_pos.GetDotProduct(m_Direction)) / dot;

    // if time is smaller than 0 the ray is either parallel to the plane or misses it
    if (time >= 0)
    {
        // retrieving the actual intersection point
        a_Point -= (m_Direction * ((a_Point - plane_pos).GetDotProduct(m_Direction)));

        // subtracting the plane origin from the intersection point
        // puts the point at world origin (0, 0, 0)
        Vec3 sub = a_Point - plane_pos;

        // the axes are calculated by saying the directional vector of the camera
        // is the new z axis
        projected.x = sub.GetDotProduct(m_Axis[0]);
        projected.y = sub.GetDotProduct(m_Axis[1]);
    }
}
This works wonderfully, but I'm wondering: can the algorithm be made any faster? Right now, for every triangle in the scene, I have to calculate three normals:
float length = 1 / sqrtf(GetSquaredLength());
x *= length;
y *= length;
z *= length;
Even with a fast reciprocal square root approximation (1 / sqrt(x)) that's going to be very demanding.
My questions are thus:
Is there a good way to approximate the three normals?
What is this rendering technique called?
Can the three vertex points be approximated using the normal of the centroid? ((v0 + v1 + v2) / 3)
Thanks in advance.
P.S. "You will build a fully functional software rasterizer in the next seven weeks with the help of an expert in this field. Begin." I ADORE my education. :)
EDIT:
Vec2 projected;

// the plane is behind the camera
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);

float scale = m_ScreenDistance / (m_Position - plane_pos).GetSquaredLength();

// times -100 because of the squared length instead of the length
// (which would involve a square root)
projected.x = a_Point.GetDotProduct(m_Axis[0]) * scale * -100;
projected.y = a_Point.GetDotProduct(m_Axis[1]) * scale * -100;

return projected;
This returns the correct results, however the model is now independent of the camera position. :(
It's a lot shorter and faster though!
This is called a ray tracer - a rather typical assignment for a first computer graphics course* - and you can find a lot of interesting implementation details in the classic Foley/van Dam textbook (Computer Graphics: Principles and Practice). I strongly suggest you buy/borrow this textbook and read it carefully.
*Just wait until you get started on reflections and refraction... Now the fun begins!
It is difficult to understand exactly what your code is doing, because it seems to be performing a lot of redundant operations! However, if I understand what you say you're trying to do, you are:
finding the vector from the pinhole to the point
normalizing it
projecting backwards along the normalized vector to an "image plane" (behind the pinhole, natch!)
finding the vector to this point from a central point on the image plane
doing dot products on the result with "axis" vectors to find the x and y screen coordinates
If the above description represents your intentions, then the normalization should be redundant -- you shouldn't have to do it at all! If removing the normalization gives you bad results, you are probably doing something slightly different from your stated plan... in other words, it seems likely that you have confused yourself along with me, and that the normalization step is "fixing" it to the extent that it looks good enough in your test cases, even though it probably still isn't doing quite what you want it to.
The overall problem, I think, is that your code is massively overengineered: you are writing all your high-level vector algebra as code to be executed in the inner loop. The way to optimize this is to work out all your vector algebra on paper, find the simplest expression possible for your inner loop, and precompute all the necessary constants for this at camera setup time. The pinhole camera specs would only be the inputs to the camera setup routine.
Unfortunately, unless I miss my guess, this should reduce your pinhole camera to the traditional, boring old matrix calculations. (ray tracing does make it easy to do cool nonstandard camera stuff -- but what you describe should end up perfectly standard...)
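To make the "precompute at camera setup" advice concrete, here is a hedged sketch (the names and the forward-positive plane placement are my choices, not the questioner's exact setup): once the camera basis and plane distance are fixed, the per-point work collapses to two dot products and one divide.

#include <cmath>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

static double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Everything in this struct is fixed at camera setup; per point there is
// no normalize and no sqrt.
struct PinholeCamera {
    Vec3 pos, right, up, forward;   // orthonormal basis, forward = view direction
    double screenDist;              // distance from pinhole to image plane

    Vec2 project(const Vec3 &p) const {
        Vec3 d = { p.x - pos.x, p.y - pos.y, p.z - pos.z };
        double zc = dot(d, forward);        // depth along the view direction
        double s  = screenDist / zc;        // assumes zc > 0 (point in front)
        return { dot(d, right) * s, dot(d, up) * s };
    }
};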
Your code is a little unclear to me (plane_pos?), but it does seem that you could cut out some unnecessary calculation.
Instead of normalizing the ray (scaling it to length 1), why not scale it so that the z component is equal to the distance from the camera to the plane? In fact, scale x and y by this factor; you don't need z.
float scale = distance_to_plane/z;
x *= scale;
y *= scale;
This will give the x and y coordinates on the plane, no sqrt(), no dot products.
Well, off the bat, you can calculate normals for every triangle when your program starts up. Then when you're actually running, you just have to access the normals. This sort of startup calculation to save costs later tends to happen a lot in graphics. This is why we have large loading screens in a lot of our video games!