I'm writing a C++ program that uses an integer value to differentiate between a large number of derived class objects. The top-level class stores the integer in the constructor, and has a public method to query it; the idea is to make it so I can quickly tell objects apart when I store them in an array using pointers typed to their top-level class.
I'm using #defines to provide plain-English names for some of the IDs; I want to be able to recall them from memory without the added runtime cost of string-based identifiers. I'm reluctant to hard-code every value in case I want to add or change identifiers later (for example, some are allocated by block, and I may someday need to increase the block size), so I've started chaining #defines where each new identifier is the previous one plus one.
I think an example of what I'm doing will make this clearer:
#define OBJ_FRED 0
#define OBJ_CATHY OBJ_FRED + 1
#define OBJ_JOE OBJ_CATHY + 1
#define OBJ_TRIPLETS_START OBJ_JOE + 1
#define OBJ_NANCY OBJ_TRIPLETS_START + 3
...
// Note that there will potentially be a lot of these.
My question is this: will the preprocessor substitute all occurrences of OBJ_NANCY with "0 + 1 + 1 + 1 + 3" or is it actually evaluating the #defines as it goes along? If it does just substitute the text, should I be concerned that there may be a point at which the preprocessor or compiler will encounter a problem because the #define is too long? I looked around and couldn't find a straight answer to either question. I'm using MS Visual C++ 2015 if it matters.
I'm asking partly out of curiosity, and partly because I'll likely end up with a lot of these and I don't want to have to go back later and change all of them to static values.
To answer your question as asked, yes the preprocessor will substitute OBJ_NANCY with 0 + 1 + 1 + 1 + 3. That's what the preprocessor does - it expands macros and does text substitution.
However, the preprocessor is only one phase of compilation. Although (strictly speaking) not required to, later phases of compilation will typically evaluate constant expressions e.g. turn 0 + 1 + 1 + 1 + 3 into 6, rather than leaving the evaluation to be done at run time. It is a pretty basic optimisation, and most decent-quality compilers implement it (albeit, possibly as an optional optimisation).
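For example, repeating the #defines from the question, a quick static_assert (available since C++11) confirms that the expanded text is still evaluated as a compile-time constant:

#define OBJ_FRED 0
#define OBJ_CATHY OBJ_FRED + 1
#define OBJ_JOE OBJ_CATHY + 1
#define OBJ_TRIPLETS_START OBJ_JOE + 1
#define OBJ_NANCY OBJ_TRIPLETS_START + 3

static_assert(OBJ_NANCY == 6, "0 + 1 + 1 + 1 + 3 is folded to 6 at compile time");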
There are implementation-specific limits on the size of code statements (the standards describe lower bounds on those limits). A consequence is that there are limits on the size of a macro expansion. In practice, it is rarely a problem - the limits most implementations support are related to available memory. And, if you are running into those limits, macro expansion is the least of your problems.
As others have said in comments (and another answer), you would be better off using an enumerated type.
You might want to look at your class design. A need for a base class to contain a field that identifies its derived classes is known as a "code smell" - something that is probably unwanted, and for which there are often better alternatives. Even if, sometimes, the smell is tolerable.
This is much better accomplished using an enumerated type:
enum {
OBJ_FRED, OBJ_CATHY, ...
};
It's cleaner, less-error-prone, and easier to read.
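If you still want the block-style allocation from the question (e.g. reserving three slots for the triplets), an enum can do that too by giving the occasional enumerator an explicit value. A rough sketch, using the names from the question:

enum {
    OBJ_FRED,                            // 0
    OBJ_CATHY,                           // 1
    OBJ_JOE,                             // 2
    OBJ_TRIPLETS_START,                  // 3
    OBJ_NANCY = OBJ_TRIPLETS_START + 3   // 6; leaves a block of 3 for the triplets
    // ...
};

Unlike the macro chain, the values are evaluated by the compiler as integral constants, so there is no expansion-length concern at all.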
When creating custom typedefs for integers, is it possible for the compiler to warn you when you use a default numeric type?
For example,
typedef int_fast32_t kint;
int_fast32_t test=0;//Would be ok
kint test=0; //Would be ok
int test=0; //Would throw a warning or error
We're converting a large project, and on the target platform the default int is 16 bits (INT_MAX is 32767), which is causing some issues. This warning would steer users away from plain int in the code.
If possible, it would be great if this would work on GCC and VC++2012.
I'm reasonably sure gcc has no such option, and I'd be surprised if VC did.
I suggest writing a program that detects references to predefined types in source code, and invoking that tool automatically as part of your build process. It would probably suffice to search for certain keywords.
Be sure you limit this to your own source files; predefined and third-party headers are likely to make extensive use of predefined types.
But I wouldn't make the prohibition absolute. There are a number of standard library functions that use predefined types. For example, in c = getchar() it makes no sense to declare c as anything other than int. And there's no problem for something like for (int i = 0; i <= 100; i ++) ...
Ideally, the goal should be to use predefined types properly. The language has never guaranteed that an int can exceed 32767. (But "proper" use is difficult or impossible to verify automatically.)
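As a concrete illustration of the getchar() case (the wrapper function is just for illustration), this is the sort of use your checking tool should whitelist rather than flag:

#include <cstdio>

void copy_stdin_to_stdout() {
    int c;                          // int, not char: getchar() returns int so it can signal EOF
    while ((c = getchar()) != EOF)
        putchar(c);
}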
I'd approach this by doing a replace-all first and then documenting this thoroughly.
You can use a preprocessor directive:
#define int use kint instead
Note that technically this is undefined behavior and you'll run into trouble if you do this definition before including third-party headers.
I would recommend making a bulk replacement of int -> old_int_t at the very beginning of your porting. That way you can continue modifying your code without major restrictions, while still being able to find all the places that have not yet been updated.
Eventually, at the end of the work, all occurrences of old_int_t should be gone.
Even if one could somehow undefine the keyword int, that would do nothing to prevent usage of that type, since there are many cases where the compiler will end up using that type. Beyond the obvious cases of integer literals, there are some more subtle cases involving integer promotion. For example, if int happens to be 64 bits, operations between two variables of type uint32_t will be performed using type int rather than uint32_t. As nice as it would be to be able to specify that some variables represent numbers (which should be eagerly promoted when practical) while others represent members of a wrapping algebraic ring (which should not be promoted), I know of no facility to do such a thing. Consequently, int is unavoidable.
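A minimal sketch of that promotion effect, assuming a hypothetical platform where int is 64 bits wide:

#include <cstdint>

void promotion_example() {
    uint32_t a = 0xFFFFFFFFu;
    uint32_t b = 2u;
    // Under the usual arithmetic conversions, a and b are promoted to int first.
    // With a 64-bit int the multiplication happens in int, so a * b gives
    // 0x1FFFFFFFE here, rather than wrapping to 0xFFFFFFFE as it would on a
    // platform where int is 32 bits and uint32_t is not promoted.
    uint64_t wide = a * b;
    (void)wide;
}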
I'm porting one of my C++ libraries to a somewhat wonky compiler -- it doesn't support stringstreams, or C99 features like snprintf(). I need to format int, float, etc values as char*, and the only options available seem to be 1) use sprintf() 2) hand-roll formatting procedures.
Given this, how do I determine (at either compile- or run-time) how many bytes are required for a formatted floating-point value? My library might be used for fuzz-testing, so it needs to handle even unusual or extreme values.
Alternatively, is there a small (100-200 lines preferred), portable implementation of snprintf() available that I could simply bundle with my library?
Ideally, I would end up with either normal snprintf()-based code, or something like this:
static const size_t FLOAT_BUFFER_SIZE = /* calculate max buffer somehow */;
char *fmt_double(double x)
{
char *buf = new char[FLOAT_BUFFER_SIZE + 1];
sprintf(buf, "%f", x);
return buf;
}
Related questions:
Maximum sprintf() buffer size for integers
Maximum sprintf() buffer size for %g-formatted floats
Does the compiler support any of ecvt, fcvt or gcvt? They are a bit freakish, and hard to use, but they have their own buffer (ecvt, fcvt) and/or you may get lucky and find the system headers have, as in VC++, a definition of the maximum number of chars gcvt will produce. And you can take it from there.
Failing that, I'd consider the following quite acceptable, along the lines of the code provided. 500 chars is pretty conservative for a double; valid values are roughly 10^-308 to 10^308, so even if the implementation is determined to be annoying by printing out all the digits there should be no overflow.
#include <cassert>   // assert
#include <cstdio>    // sprintf
#include <cstring>   // strdup (POSIX; the caller must free() the result)

char *fmt_double(double d) {
    static char buf[500];              // zero-initialised, so the final byte acts as a crude overflow sentinel
    sprintf(buf, "%f", d);
    assert(buf[sizeof buf - 1] == 0);  // if this fails, increase the buffer size!
    return strdup(buf);
}
This doesn't exactly provide any amazing guarantees, but it should be pretty safe(tm). I think that's as good as it gets with this sort of approach, unfortunately. But if you're in the habit of regularly running debug builds, you should at least get early warning of any problems...
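If you'd rather compute a tighter bound than a guessed 500, one hedged way to fill in the question's FLOAT_BUFFER_SIZE from <cfloat> constants (this assumes plain "%f" with its default precision of 6):

#include <cfloat>
#include <cstddef>

// sign + up to (DBL_MAX_10_EXP + 1) integer digits + '.' + 6 fraction digits + '\0'
static const size_t FLOAT_BUFFER_SIZE = 1 + (DBL_MAX_10_EXP + 1) + 1 + 6 + 1;   // 318 for IEEE double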
I think GNU Libiberty is what you want. You can just include the implementation of snprintf.
vasprintf.c - 152 LOC.
I feel like I must just be unable to find it. Is there any reason that the C++ pow function does not implement the "power" function for anything except floats and doubles?
I know the implementation is trivial, I just feel like I'm doing work that should be in a standard library. A robust power function (i.e. handles overflow in some consistent, explicit way) is not fun to write.
As of C++11, special cases were added to the suite of power functions (and others). C++11 [c.math] /11 states, after listing all the float/double/long double overloads (my emphasis, and paraphrased):
Moreover, there shall be additional overloads sufficient to ensure that, if any argument corresponding to a double parameter has type double or an integer type, then all arguments corresponding to double parameters are effectively cast to double.
So, basically, integer parameters will be upgraded to doubles to perform the operation.
Prior to C++11 (which was when your question was asked), no integer overloads existed.
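A quick illustration of that wording (C++11 or later):

#include <cmath>

double r1 = std::pow(2, 10);    // both int arguments are effectively cast to double first
double r2 = std::pow(2.5, 3);   // mixing double and int also selects the double flavour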
Since I was neither closely associated with the creators of C nor C++ in the days of their creation (though I am rather old), nor part of the ANSI/ISO committees that created the standards, this is necessarily opinion on my part. I'd like to think it's informed opinion but, as my wife will tell you (frequently and without much encouragement needed), I've been wrong before :-)
Supposition, for what it's worth, follows.
I suspect that the reason the original pre-ANSI C didn't have this feature is because it was totally unnecessary. First, there was already a perfectly good way of doing integer powers (with doubles and then simply converting back to an integer, checking for integer overflow and underflow before converting).
Second, another thing you have to remember is that the original intent of C was as a systems programming language, and it's questionable whether floating point is desirable in that arena at all.
Since one of its initial use cases was to code up UNIX, the floating point would have been next to useless. BCPL, on which C was based, also had no use for powers (it didn't have floating point at all, from memory).
As an aside, an integral power operator would probably have been a binary operator rather than a library call. You don't add two integers with x = add (y, z) but with x = y + z - part of the language proper rather than the library.
Third, since the implementation of integral power is relatively trivial, it's almost certain that the developers of the language would better use their time providing more useful stuff (see below comments on opportunity cost).
That's also relevant for the original C++. Since the original implementation was effectively just a translator which produced C code, it carried over many of the attributes of C. Its original intent was C-with-classes, not C-with-classes-plus-a-little-bit-of-extra-math-stuff.
As to why it was never added to the standards before C++11, you have to remember that the standards-setting bodies have specific guidelines to follow. For example, ANSI C was specifically tasked to codify existing practice, not to create a new language. Otherwise, they could have gone crazy and given us Ada :-)
Later iterations of that standard also have specific guidelines and can be found in the rationale documents (rationale as to why the committee made certain decisions, not rationale for the language itself).
For example the C99 rationale document specifically carries forward two of the C89 guiding principles which limit what can be added:
Keep the language small and simple.
Provide only one way to do an operation.
Guidelines (not necessarily those specific ones) are laid down for the individual working groups and hence limit the C++ committees (and all other ISO groups) as well.
In addition, the standards-setting bodies realise that there is an opportunity cost (an economic term meaning what you have to forego for a decision made) to every decision they make. For example, the opportunity cost of buying that $10,000 uber-gaming machine is cordial relations (or probably all relations) with your other half for about six months.
Eric Gunnerson explains this well with his -100 points explanation as to why things aren't always added to Microsoft products - basically, a feature starts 100 points in the hole, so it has to add quite a bit of value to even be considered.
In other words, would you rather have an integral power operator (which, honestly, any half-decent coder could whip up in ten minutes) or multi-threading added to the standard? For myself, I'd prefer to have the latter and not have to muck about with the differing implementations under UNIX and Windows.
I would also like to see thousands and thousands of collections in the standard library (hashes, btrees, red-black trees, dictionaries, arbitrary maps and so forth) but, as the rationale states:
A standard is a treaty between implementer and programmer.
And the number of implementers on the standards bodies far outweighs the number of programmers (or at least those programmers that don't understand opportunity cost). If all that stuff was added, the next C++ standard would be C++215x and would probably be fully implemented by compiler developers three hundred years after that.
Anyway, that's my (rather voluminous) thoughts on the matter. If only votes were handed out based on quantity rather than quality, I'd soon blow everyone else out of the water. Thanks for listening :-)
For any fixed-width integral type, nearly all of the possible input pairs overflow the type, anyway. What's the use of standardizing a function that doesn't give a useful result for the vast majority of its possible inputs?
You pretty much need to have a big-integer type in order to make the function useful, and most big-integer libraries provide the function.
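For example, assuming GMP and its C++ wrapper are available (link with -lgmpxx -lgmp), an exact arbitrary-precision power is essentially a one-liner:

#include <gmpxx.h>

mpz_class exact_power(unsigned long base, unsigned long exp) {
    mpz_class result;
    mpz_ui_pow_ui(result.get_mpz_t(), base, exp);   // exact base^exp, no overflow
    return result;
}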
Edit: In a comment on the question, static_rtti writes "Most inputs cause it to overflow? The same is true for exp and double pow, I don't see anyone complaining." This is incorrect.
Let's leave aside exp, because that's beside the point (though it would actually make my case stronger), and focus on double pow(double x, double y). For what portion of (x,y) pairs does this function do something useful (i.e., not simply overflow or underflow)?
I'm actually going to focus only on a small portion of the input pairs for which pow makes sense, because that will be sufficient to prove my point: if x is positive and |y| <= 1, then pow does not overflow or underflow. This comprises nearly one-quarter of all floating-point pairs (exactly half of non-NaN floating-point numbers are positive, and just less than half of non-NaN floating-point numbers have magnitude less than 1). Obviously, there are a lot of other input pairs for which pow produces useful results, but we've ascertained that it's at least one-quarter of all inputs.
Now let's look at a fixed-width (i.e. non-bignum) integer power function. For what portion of inputs does it not simply overflow? To maximize the number of meaningful input pairs, the base should be signed and the exponent unsigned. Suppose that the base and exponent are both n bits wide. We can easily get a bound on the portion of inputs that are meaningful:
If the exponent is 0 or 1, then any base is meaningful.
If the exponent is 2 or greater, then no base larger than 2^(n/2) produces a meaningful result.
Thus, of the 2^(2n) input pairs, less than 2^(n+1) + 2^(3n/2) produce meaningful results. If we look at what is likely the most common usage, 32-bit integers, this means that something on the order of 1/1000th of one percent of input pairs do not simply overflow.
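To put numbers on that for n = 32: the bound is 2^33 + 2^48 ≈ 2.8 x 10^14 meaningful pairs out of 2^64 ≈ 1.8 x 10^19 total, a fraction of roughly 2^-16 ≈ 1.5 x 10^-5, i.e. about 0.0015% - which is where the "on the order of 1/1000th of one percent" figure comes from.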
Because there's no way to represent all integer powers in an int anyways:
>>> print 2**-4
0.0625
That's actually an interesting question. One argument I haven't found in the discussion is the simple lack of obvious return values for the arguments. Let's count the ways the hypothetical int pow_int(int, int) function could fail.
Overflow
Result undefined: pow_int(0,0)
Result can't be represented: pow_int(2,-1)
The function has at least three failure modes. Integers can't represent these values; the behaviour of the function in these cases would need to be defined by the standard, and programmers would need to be aware of exactly how the function handles each of them.
Overall leaving the function out seems like the only sensible option. The programmer can use the floating point version with all the error reporting available instead.
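To make the amount of policy involved concrete, here is a hypothetical sketch (the name pow_int and the error convention are invented for illustration, not proposed for the standard) of what a fully specified integer power function would have to spell out; it assumes int is no wider than 32 bits so the intermediate fits in long long:

#include <climits>

// Returns true and writes base^exp to *out; returns false on 0^0, a negative
// exponent (result generally not representable), or overflow of int.
bool pow_int(int base, int exp, int *out) {
    if (exp < 0) return false;                         // can't be represented
    if (base == 0 && exp == 0) return false;           // mathematically undefined
    long long r = 1;
    while (exp-- > 0) {
        r *= base;                                     // |r| <= INT_MAX before this step,
        if (r > INT_MAX || r < INT_MIN) return false;  // so the product fits in long long
    }
    *out = (int)r;
    return true;
}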
Short answer:
A specialisation of pow(x, n) for the case where n is a natural number is often useful for time performance. But the standard library's generic pow() still works pretty (surprisingly!) well for this purpose, and it is absolutely critical to include as little as possible in the standard C library so it can be made as portable and as easy to implement as possible. On the other hand, that doesn't stop it at all from being in the C++ standard library or the STL, which I'm pretty sure nobody is planning on using in some kind of embedded platform.
Now, for the long answer.
pow(x, n) can be made much faster in many cases by specialising n to a natural number. I have had to use my own implementation of this function for almost every program I write (but I write a lot of mathematical programs in C). The specialised operation can be done in O(log(n)) time, but when n is small, a simpler linear version can be faster. Here are implementations of both:
// Computes x^n, where n is a natural number, by repeated squaring.
double pown(double x, unsigned n)
{
    // n = 2*d + r, so x^n = (x^2)^d * x^r.
    unsigned d = n >> 1;
    unsigned r = n & 1;
    double x_2_d = d == 0 ? 1 : pown(x*x, d);
    double x_r = r == 0 ? 1 : x;
    return x_2_d * x_r;
}
// The linear implementation.
double pown_l(double x, unsigned n)
{
double y = 1;
for (unsigned i = 0; i < n; i++)
y *= x;
return y;
}
(I left x and the return value as doubles because the result of pown(double, unsigned) will fit in a double about as often as pow(double, double) will.)
(Yes, pown is recursive, but breaking the stack is absolutely impossible since the maximum recursion depth will roughly equal log_2(n) and n is an integer. If n is a 64-bit integer, that gives you a maximum depth of about 64. No hardware has such extreme memory limitations, except for some dodgy PICs with hardware stacks that only go 3 to 8 function calls deep.)
As for performance, you'll be surprised by what a garden variety pow(double, double) is capable of. I tested a hundred million iterations on my 5-year-old IBM Thinkpad with x equal to the iteration number and n equal to 10. In this scenario, pown_l won. glibc pow() took 12.0 user seconds, pown took 7.4 user seconds, and pown_l took only 6.5 user seconds. So that's not too surprising. We were more or less expecting this.
Then, I let x be constant (I set it to 2.5), and I looped n from 0 to 19 a hundred million times. This time, quite unexpectedly, glibc pow won, and by a landslide! It took only 2.0 user seconds. My pown took 9.6 seconds, and pown_l took 12.2 seconds. What happened here? I did another test to find out.
I did the same thing as above only with x equal to a million. This time, pown won at 9.6s. pown_l took 12.2s and glibc pow took 16.3s. Now, it's clear! glibc pow performs best of the three when x is low, but worst when x is high. When x is high, pown_l performs best when n is low, and pown performs best when n is high.
So here are three different algorithms, each capable of performing better than the others under the right circumstances. So, ultimately, which to use most likely depends on how you're planning on using pow, but using the right version is worth it, and having all of the versions is nice. In fact, you could even automate the choice of algorithm with a function like this:
// The thresholds are tuning constants; pick them by benchmarking on your
// target platform (the values below are only placeholders).
static const double x_threshold = 1000;
static const unsigned n_threshold = 4;

double pown_auto(double x, unsigned n, double x_expected, unsigned n_expected) {
    if (x_expected < x_threshold)
        return pow(x, n);
    if (n_expected < n_threshold)
        return pown_l(x, n);
    return pown(x, n);
}
As long as x_expected and n_expected are constants decided at compile time, along with possibly some other caveats, an optimising compiler worth its salt will automatically remove the entire pown_auto function call and replace it with the appropriate choice of the three algorithms. (Now, if you are actually going to attempt to use this, you'll probably have to toy with it a little, because I didn't exactly try compiling what I'd written above. ;))
On the other hand, glibc pow does work and glibc is big enough already. The C standard library is supposed to be portable, including to various embedded devices (in fact embedded developers everywhere generally agree that glibc is already too big for them), and it can't be portable if, for every simple math function, it needs to include every alternative algorithm that might be of use. So, that's why it isn't in the C standard.
footnote: In the time performance testing, I gave my functions relatively generous optimisation flags (-s -O2) that are likely to be comparable to, if not worse than, what was likely used to compile glibc on my system (archlinux), so the results are probably fair. For a more rigorous test, I'd have to compile glibc myself and I reeeally don't feel like doing that. I used to use Gentoo, so I remember how long it takes, even when the task is automated. The results are conclusive (or rather inconclusive) enough for me. You're of course welcome to do this yourself.
Bonus round: A specialisation of pow(x, n) to all integers is instrumental if an exact integer output is required, which does happen. Consider allocating memory for an N-dimensional array with p^N elements. Getting p^N off even by one will result in a possibly randomly occurring segfault.
One reason for C++ to not have additional overloads is to be compatible with C.
C++98 has functions like double pow(double, int), but these have been removed in C++11 with the argument that C99 didn't include them.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3286.html#550
Getting a slightly more accurate result also means getting a slightly different result.
The World is constantly evolving and so are the programming languages. The fourth part of the C decimal TR¹ adds some more functions to <math.h>. Two families of these functions may be of interest for this question:
The pown functions, which take a floating-point number and an intmax_t exponent.
The powr functions, which take two floating-point numbers (x and y) and compute x to the power y with the formula exp(y*log(x)).
It seems that the standard guys eventually deemed these features useful enough to be integrated in the standard library. However, the rationale is that these functions are recommended by the ISO/IEC/IEEE 60559:2011 standard for binary and decimal floating point numbers. I can't say for sure what "standard" was followed at the time of C89, but the future evolutions of <math.h> will probably be heavily influenced by the future evolutions of the ISO/IEC/IEEE 60559 standard.
Note that the fourth part of the decimal TR won't be included in C2x (the next major C revision), and will probably be included later as an optional feature. There hasn't been any intent I know of to include this part of the TR in a future C++ revision.
¹ You can find some work-in-progress documentation here.
Here's a really simple O(log(n)) implementation of pow() that works for any numeric types, including integers:
// Iterative exponentiation by squaring: for each set bit of p, multiply in x^(2^k).
template<typename T>
static constexpr inline T pown(T x, unsigned p) {
    T result = 1;
    while (p) {
        if (p & 0x1) {
            result *= x;
        }
        x *= x;
        p >>= 1;
    }
    return result;
}
It's better than enigmaticPhysicist's O(log(n)) implementation because it doesn't use recursion.
It's also almost always faster than his linear implementation (as long as p > ~3) because:
it doesn't require any extra memory
it only does ~1.5x more operations per loop
it only does ~1.25x more memory updates per loop
Perhaps because the processor's ALU didn't implement such a function for integers, but there is such an FPU instruction (as Stephen points out, it's actually a pair). So it was actually faster to cast to double, call pow with doubles, then test for overflow and cast back, than to implement it using integer arithmetic.
(for one thing, logarithms reduce powers to multiplication, but logarithms of integers lose a lot of accuracy for most inputs)
Stephen is right that on modern processors this is no longer true, but the C standard when the math functions were selected (C++ just used the C functions) is now what, 20 years old?
As a matter of fact, it does.
Since C++11 there is a templated implementation of pow(int, int) --- and even more general cases, see (7) in
http://en.cppreference.com/w/cpp/numeric/math/pow
EDIT: purists may argue this is not correct, as there is actually "promoted" typing used. One way or another, one gets a correct result (returned as a double), or an error, for int parameters.
A very simple reason:
5^-2 = 1/25
Everything in the standard library is based on the most accurate, robust implementations imaginable. Sure, an integer pow could just return zero for 1/25, but that would be an inaccurate answer.
I agree, it's weird in some cases.
This is probably a C++ 101 question: I'm curious what the guidelines are for using size_t and offset_t, e.g. what situations they are intended for, what situations they are not intended for, etc. I haven't done a lot of portable programming, so I have typically just used something like int or unsigned int for array sizes, indexes, and the like. However, I gather it's preferable to use some of these more standard typedefs when possible, so I'd like to know how to do that properly.
As a follow-up question, for development on Windows using Visual Studio 2008, where should I look to find the actual typedefs? I've found size_t defined in a number of headers within the VS installation directory, so I'm not sure which of those I should use, and I can't find offset_t anywhere.
You are probably referring to off_t, not offset_t. off_t is a POSIX type, not a C type, and it is used to denote file offsets (allowing 64-bit file offsets even on 32-bit systems). C99 has superseded that with fpos_t.
size_t is meant to count bytes or array elements. It matches the address space.
Instead of offset_t do you mean ptrdiff_t? This is the type returned by such routines as std::distance. My understanding is that size_t is unsigned (to match the address space as previously mentioned) whereas ptrdiff_t is signed (to theoretically denote "backwards" distances between pointers, though this is very rarely used).
offset_t isn't mentioned at all in my copy of the C++ standard.
size_t on the other hand, is meant simply to denote object or array sizes. A size_t is guaranteed to be big enough to store the size of any in-memory object, so it should be used for that.
You use size_t whenever the C++ language specification indicates that it is used in a particular expression. For example, you'd use size_t to store the return value of sizeof, or to represent the number of elements in an array (new[] takes size_t).
I've no idea what offset_t is - it's not mentioned once in the ISO C++ spec.
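A small illustration of where the language hands you these types:

#include <cstddef>

void sizes_example() {
    int arr[100];
    std::size_t n = sizeof arr / sizeof arr[0];   // sizeof yields size_t
    std::ptrdiff_t d = &arr[99] - &arr[0];        // pointer subtraction yields ptrdiff_t
    (void)n; (void)d;
}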
Dragging an old one back up... offset_t (which is a long long) is used by Solaris' llseek() system call, and is to be used anywhere you're likely to run into real 64-bit file offsets... meaning you're working with really huge files.