Including arithmetic operations when defining a constant - C++

So I often see something like this:
#define gf_PI f32(3.14159265358979323846264338327950288419716939937510)
#define gf_PIhalf f32(3.14159265358979323846264338327950288419716939937510 * 0.5)
This means that the value of half PI is calculated every time I use gf_PIhalf in my code, right?
Wouldn't it be better to literally write the value of half PI instead?
Wouldn't it be even better to do the following:
#define gf_PI f32(3.14159265358979323846264338327950288419716939937510)
const float gf_PIHalf = gf_PI * 0.5f; // PIHalf is calculated once
Finally, wouldn't it be best to do it like this (and why doesn't it seem to be common practice):
const float gf_PI = 3.14159265358979323846264338327950288419716939937510;
const float gf_PIHalf = gf_PI * 0.5f;

This means that the value of half PI is calculated every time I use gf_PIhalf in my code, right?
Nope, not likely.
You can reasonably count on your compiler to do that multiplication at compile time, not runtime.

Your conclusions are mostly right, with two exceptions: the #define version will almost certainly be resolved at compile time, and typed const globals are not uncommon practice. They are common practice in modern, well-written code; #defines are all but dead for this use. The best practice is to define your file-scope globals in an unnamed namespace:
namespace
{
const float g_SomeGlobal = 123.456f;
}
This prevents anyone outside of your translation unit from being able to 'see' g_SomeGlobal.
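If C++11 or later is available, constexpr makes the compile-time evaluation explicit instead of leaving it to the optimizer (a minimal sketch; the names are illustrative):
namespace
{
// constexpr guarantees these are compile-time constants, so the
// multiplication can never be deferred to runtime.
constexpr float g_Pi = 3.14159265358979323846f;
constexpr float g_PiHalf = g_Pi * 0.5f;
}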

Related

Using #define directives as "part" of code, not at top of file

When looking at C++ source code, I almost always see #define macros at the top of the file. This makes sense in most cases, and I can see why it is best practice, but I recently came upon a situation where it might be better to keep the preprocessor definitions in a function body.
I'm writing a quaternion class, and my code for the function in question looks like this:
Quaternion Quaternion::operator*(const Quaternion& q){
Quaternion resultQ;
// These are just macros to make the code easier to read.
// 1 denotes the quaternion on the LHS of *,
// 2 denotes the quaternion on the RHS of *, e.g. Q1 * Q2.
// The letter denotes the coefficient of each separate part
// of the quaternion, e.g. (a+bi+cj+dk).
#define a1 this->get_real()
#define a2 q.get_real()
#define b1 this->get_i()
#define b2 q.get_i()
#define c1 this->get_j()
#define c2 q.get_j()
#define d1 this->get_k()
#define d2 q.get_k()
// This arithmetic is based on the Hamilton product
// (http://en.wikipedia.org/wiki/Quaternion#Hamilton_product)
resultQ.set_real(a1*a2 - b1*b2 - c1*c2 - d1*d2);
resultQ.set_i(a1*b2 + b1*a2 + c1*d2 - d1*c2);
resultQ.set_j(a1*c2 - b1*d2 + c1*a2 + d1*b2);
resultQ.set_k(a1*d2 + b1*c2 - c1*b2 + d1*a2);
return resultQ;
}
I decided to add the #defines because if I substituted in all the macros manually, each line would be too long and would either be cut off or carried over to the next line. I could have done the same thing with variables, but I decided that would be unnecessary overhead, so I used #define because it has no runtime cost. Is this an acceptable practice? Is there a better way to make what I am doing here readable?
Instead of
#define a1 this->get_real()
write
auto const a1 = get_real();
And just use different names for each value of a quantity that changes.
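Applied to the whole function, the suggestion looks roughly like this (a sketch assuming the getters shown in the question):
Quaternion Quaternion::operator*(const Quaternion& q){
    Quaternion resultQ;
    // Plain const locals: as readable as the macros, but scoped,
    // typed, and visible in a debugger.
    auto const a1 = get_real(), a2 = q.get_real();
    auto const b1 = get_i(),    b2 = q.get_i();
    auto const c1 = get_j(),    c2 = q.get_j();
    auto const d1 = get_k(),    d2 = q.get_k();
    resultQ.set_real(a1*a2 - b1*b2 - c1*c2 - d1*d2);
    resultQ.set_i(a1*b2 + b1*a2 + c1*d2 - d1*c2);
    resultQ.set_j(a1*c2 - b1*d2 + c1*a2 + d1*b2);
    resultQ.set_k(a1*d2 + b1*c2 - c1*b2 + d1*a2);
    return resultQ;
}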
Yes there are cases where a local #define makes sense. No this is not such a case. In particular, since you've forgotten to #undef the macros they will almost certainly cause inadvertent text substitution in some other code, if this is in a header (as indicated).
By the way, instead of
Quaternion Quaternion::operator*(const Quaternion& q){
I would write
Quaternion Quaternion::operator*(const Quaternion& q) const {
so that also const quaternions can be multiplied.
Macros don't respect scope. Those macros remain defined for the rest of the file after the point where they appear. If you have a variable named a1 in the next function, it will silently be replaced by the macro expansion. You should #undef all the macros at the end of the function.
A better approach is to create small utility functions. In C++11:
auto a1 = [this](){ return this->get_real(); };
...
resultQ.set_real(a1()*a2() ...
Not quite the same, since you need the (), but maybe good enough for you.
If the values of a1 etc don't change during the calculations, you should use Alf's suggestion.
The other answers already give alternatives to macros. You should definitely follow one such alternative because this use of macros in your code is unnecessary and bad.
But I feel your code needs redesign anyway, after which you'll see that neither macros nor their alternatives are needed.
A quaternion is a generalization of a complex number, and as such, essentially it has a number of data members (four rather than two), a number of constructors, and a number of operators.
You might have a look at the design of std::complex to get ideas. The design of a quaternion need not be much different. In particular, why would you need setters/getters to access data from a member function? Those methods are exactly what makes the expressions long (along with the unnecessary use of this).
So, if the data members of a quaternion are a, b, c, d, and if there is a constructor taking these four values as arguments, then your operator* should really look like this:
Quaternion Quaternion::operator*(const Quaternion& q) const
{
return Quaternion{
a*q.a - b*q.b - c*q.c - d*q.d,
a*q.b + b*q.a + c*q.d - d*q.c,
a*q.c - b*q.d + c*q.a + d*q.b,
a*q.d + b*q.c - c*q.b + d*q.a
};
}
No need for macros, helper functions, or intermediate variables.
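For concreteness, here is a sketch of the class shape this answer assumes (the data members a, b, c, d come from the answer; everything else is illustrative):
class Quaternion {
    float a, b, c, d;   // components of (a + bi + cj + dk)
public:
    Quaternion(float a, float b, float c, float d)
        : a(a), b(b), c(c), d(d) {}
    Quaternion operator*(const Quaternion& q) const;  // defined as above
};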

Is const double cast at compile time?

It makes sense to define mathematical constants as double values, but what happens when one requires float values instead of doubles? Does the compiler automatically interpret the doubles as floats at compile time (so they are actually treated as if they were const floats), or is this conversion made at runtime?
If by "defining", you mean using #define, here's what happens:
Say you have:
#define CONST1 1.5
#define CONST2 1.12312455431461363145134614 // assume some number too precise for float
Now if you have:
float x = CONST1;
float y = CONST2;
you don't get any warning for x, because 1.5 is exactly representable as a float and the compiler simply makes CONST1 a float. For y, you get a warning because CONST2 doesn't fit in a float, but the compiler converts it to float anyway.
If by "defining", you mean using const variables, here's what happens:
Say you have
const double CONST1 = 1.5;
const double CONST2 = 1.12312455431461363145134614; // assume some number too precise for float
Now if you have:
float x = CONST1;
float y = CONST2;
the compiler does not necessarily know the values of CONST1 and CONST2 (*) and therefore cannot convert them to float at compile time. You will be given two warnings about possible loss of data, and the conversion will be done at runtime.
(*) Actually, there is a way. Since the values are const, the optimizer may decide not to allocate storage for them, but to substitute the values throughout the code. This can get complicated, though: you may pass the address of these variables around, so the optimizer may decide not to do it. In short, don't count on it.
Note that this whole thing is true for any basic type conversion. If you have
#define CONST3 1
then you might think CONST3 is an int, but if you assign it to a float, it becomes a float at compile time, and if you assign it to a char, it becomes a char at compile time.
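With C++11 you can also force the conversion to happen at compile time rather than relying on the optimizer (a minimal sketch; the names are illustrative):
constexpr double kPrecise = 1.12312455431461363145134614;
// Both variables are constexpr, so the narrowing conversion is
// performed by the compiler; static_cast also silences the warning.
constexpr float kApprox = static_cast<float>(kPrecise);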

Given r^2, is there an efficient way to compute r^3?

double r2 = dx * dx + dy * dy;
double r3 = r2 * sqrt(r2);
Can the second line be replaced by something faster? Something that does not involve sqrt?
How about
double r3 = pow(r2,1.5);
If sqrt is implemented as a special case of pow, that will save you a multiplication. Not much in the grand scheme of things, mind!
If you are really looking for greater efficiency, consider whether you really need r^3. If, for example, you are only testing it (or something derived from it) to see whether it exceeds a certain threshold, then test r2 instead, e.g.:
const double r3_threshold = 9;
//don't do this
if (r3 > r3_threshold)
....
//do do this
const double r2_threshold = pow(r3_threshold,2./3.);
if (r2 > r2_threshold)
....
That way pow will be called only once, maybe even at compile time.
EDIT: If you do need to recompute the threshold each time, I think the answer concerning Q_rsqrt is worth a look and probably deserves to outrank this one.
Use fast inverse sqrt (take the Q_rsqrt function).
You have:
float r2;
// ... r2 gets a value
float invsqrt = Q_rsqrt(r2);
float r3 = r2*r2*invsqrt; // x*x/sqrt(x) = x*sqrt(x)
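For reference, a sketch of the classic Q_rsqrt for float (the magic constant 0x5f3759df is the well-known one; std::memcpy replaces the original's pointer cast, which is undefined behavior in C++):
#include <cstdint>
#include <cstring>

float Q_rsqrt(float number)
{
    std::uint32_t i;
    float x2 = number * 0.5f;
    float y = number;
    std::memcpy(&i, &y, sizeof i);  // read the float's bit pattern
    i = 0x5f3759df - (i >> 1);      // initial approximation from the magic constant
    std::memcpy(&y, &i, sizeof y);
    y = y * (1.5f - x2 * y * y);    // one Newton-Raphson refinement step
    return y;
}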
NOTE: For the double type there is an analogous magic constant that lets you write a version handling doubles as well.
LATER EDIT: It seems the method has already been discussed here.
LATER EDIT2: The constant for double was in the wikipedia link:
Lomont pointed out that the "magic number" for 64 bit IEEE754 size
type double is 0x5fe6ec85e7de30da, but in fact it is close to
0x5fe6eb50c7aa19f9.
I think another way to look at your question would be "how to calculate (or approximate) sqrt(n)". From there your question would be trivial (n * sqrt(n)). Of course, you'd have to define how much error you could live with. Wikipedia gives you many options:
http://en.wikipedia.org/wiki/Methods_of_computing_square_roots
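For example, a minimal Newton-Raphson sketch along the lines of the methods on that page (the initial guess and iteration cap are arbitrary choices):
// Approximate sqrt(n) for n > 0 with Newton's method on f(x) = x*x - n.
double approx_sqrt(double n)
{
    double x = n < 1.0 ? 1.0 : n;        // crude initial guess
    for (int i = 0; i < 50; ++i) {
        double next = 0.5 * (x + n / x); // Newton step
        if (next == x) break;            // converged to machine precision
        x = next;
    }
    return x;
}
// r^3 is then simply r2 * approx_sqrt(r2).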

calculating constant library functions at compile time

I want to use the Boltzmann constant in my functions. I am using the following code to declare it:
const double boltzmann_constant = 1.3806503 * pow (10,-23);
Will this get calculated at compile time? If not, how should I ensure that it does? Is there another way to declare the constant?
The pow() function is very unlikely to be calculated at compile time. However, the operation requested is directly expressible in scientific notation, a standard aspect of floating point numbers:
const double boltzmann_constant = 1.3806503e-23;
For a more complex situation, like sin(M_PI / 3), it can be useful to write a program to calculate and display such values so they can be edited into a program. If you do this, do everyone a favor and include a comment explaining what the constant is:
const double magic_val = 0.8660254037844385965883; // sin(M_PI / 3);
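If C++11 is available, a constexpr helper is another option; it is guaranteed to be evaluated at compile time when used in a constant expression (a sketch; pow10 is a hypothetical helper, not the standard library's pow):
// Recursive constexpr power of ten (single-return form for C++11).
constexpr double pow10(int n)
{
    return n == 0 ? 1.0
         : n > 0  ? 10.0 * pow10(n - 1)
                  : pow10(n + 1) / 10.0;
}
constexpr double boltzmann_constant = 1.3806503 * pow10(-23);
static_assert(boltzmann_constant > 0.0, "evaluated at compile time");
// Note: repeated division accumulates rounding error, so the literal
// 1.3806503e-23 above remains the most accurate choice.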

Compiler optimization of references

I often use references to simplify the appearance of code:
vec3f& vertex = _vertices[index];
// Calculate the vertex position
vertex[0] = startx + col * colWidth;
vertex[1] = starty + row * rowWidth;
vertex[2] = 0.0f;
Will compilers recognize and optimize this so it is essentially the following?
_vertices[index][0] = startx + col * colWidth;
_vertices[index][1] = starty + row * rowWidth;
_vertices[index][2] = 0.0f;
Yes. This is a basic optimization that any modern (and even ancient) compiler will make.
In fact, I don't think it's really accurate to call what you've written an optimisation, since the most straightforward way to translate it to assembly involves a store to the _vertices address, plus index, plus {0,1,2} (multiplied by the appropriate sizes for things, of course).
In general though, modern compilers are amazing. Almost any optimization you can think of will be implemented. You should always write your code in a way that emphasizes readability unless you know that one way has significant performance benefits for your code.
As a simple example, code like this:
int func() {
int x;
int y;
int z;
int a;
x = 5*5;
y = x;
z = y;
a = 100 * 100 * 100* 100;
return z;
}
Will be optimized to this:
int func() {
return 25;
}
Additionally, the compiler will also inline the function so that no call is actually made. Instead, everywhere 'func()' appears will just be replaced with '25'.
This is just a simple example. There are many more complex optimizations a modern compiler implements.
Compilers will even do more clever stuff than this. Maybe they'll do
float* vertex = _vertices[index]; // assuming vec3f is float[3], which decays to float*
*vertex++ = startx + col * colWidth;
*vertex++ = starty + row * rowWidth;
*vertex++ = 0.0f;
Or even other variations …
Depending on the types of your variables, what you've described is a pessimization.
If _vertices is a class type, then your original form makes a single call to operator[] and reuses the returned reference, while your second form makes three separate calls. It can't necessarily be inferred that the returned reference will refer to the same object each time.
The cost of a reference is probably not material in comparison to repeated lookups in the original _vertices object.
Except in limited cases, the compiler cannot optimize out (or pessimize in) extra function calls, unless the change introduced is not detectable by a conforming program. Often this requires visibility of an inline definition.
This is known as the "as if" rule. So long as the code behaves as if the language rules have been followed exactly, the implementation may make any optimizations it sees fit.
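To see why the calls can't always be merged, consider a hypothetical container whose operator[] has an observable side effect (illustrative code, not from the question):
#include <iostream>

struct Vertices {
    float data[16][3];
    float* operator[](int i) {
        std::cout << "lookup\n";  // observable side effect
        return data[i];
    }
};
// Indexing three times must print "lookup" three times; under the
// "as if" rule a conforming compiler cannot merge the calls, because
// doing so would change the program's observable behavior.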