I'm thinking of using pure/const functions more heavily in my C++ code. (pure/const attribute in GCC)
However, I am curious how strict I should be about it and what could possibly break.
The most obvious case is debug output (in whatever form: it could go to cout, to some file, or through some custom debug class). I will probably have a lot of functions which don't have any side effects apart from this sort of debug output. Whether or not the debug output is made has absolutely no effect on the rest of my application.
Another case I'm thinking of is the use of some SmartPointer class which may do some extra bookkeeping in global memory when in debug mode. If I use such an object in a pure/const function, it does have some slight side effects (in the sense that some memory will probably be different), but no real side effects (in the sense that the behaviour differs in any way).
The same goes for mutexes and similar things. I can think of many complex cases where a function has some side effects (some memory will be different, maybe some threads are even created, some filesystem manipulation is made, etc.) but makes no computational difference (all those side effects could very well be left out, and I would even prefer that).
So, to summarize, I want to mark functions as pure/const which are not pure/const in a strict sense. An easy example:
#include <iostream>
using namespace std;

int foo(int) __attribute__((const));

int bar(int x) {
    int sum = 0;
    for(int i = 0; i < 100; ++i)
        sum += foo(x);
    return sum;
}

int foo_callcounter = 0;

int main() {
    cout << "bar 42 = " << bar(42) << endl;
    cout << "foo callcounter = " << foo_callcounter << endl;
}

int foo(int x) {
    cout << "DEBUG: foo(" << x << ")" << endl;
    foo_callcounter++;
    return x; // or whatever
}
Note that the function foo is not const in a strict sense. However, it doesn't matter what foo_callcounter is in the end, and it also doesn't matter whether the debug statement is printed (in case the function is not called).
With optimisation, I would expect the output:
DEBUG: foo(42)
bar 42 = 4200
foo callcounter = 1
And without optimisation:
DEBUG: foo(42) (100 times)
bar 42 = 4200
foo callcounter = 100
Both cases are totally fine, because all that matters for my use case is the return value of bar(42).
How does it work out in practice? If I mark such functions as pure/const, could it break anything (considering that the code is all correct)?
Note that I know some compilers might not support this attribute at all. (BTW, I am collecting them here.) I also know how to use these attributes in a way that keeps the code portable (via #defines). Also, all the compilers which interest me support it in some way, so I don't care if my code runs slower on compilers which do not.
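For example, such a #define wrapper might look like this (ATTR_CONST and ATTR_PURE are names of my own choosing):

#if defined(__GNUC__)
#define ATTR_CONST __attribute__((const))
#define ATTR_PURE  __attribute__((pure))
#else
#define ATTR_CONST
#define ATTR_PURE
#endif

int foo(int) ATTR_CONST; // same declaration as above, now portable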
I also know that the optimised code probably will look different depending on the compiler and even the compiler version.
Very relevant is also this LWN article "Implications of pure and constant functions", especially the "Cheats" chapter. (Thanks ArtemGr for the hint.)
I'm thinking of using pure/const functions more heavily in my C++ code.
That’s a slippery slope. These attributes are non-standard and their benefit is restricted mostly to micro-optimizations.
That’s not a good trade-off. Write clean code instead, don’t apply such micro-optimizations unless you’ve profiled carefully and there’s no way around it. Or not at all.
Notice that in principle these attributes are quite nice because they state implied assumptions of the functions explicitly for both the compiler and the programmer. That’s good. However, there are other methods of making similar assumptions explicit (including documentation). But since these attributes are non-standard, they have no place in normal code. They should be restricted to very judicious use in performance-critical libraries where the author tries to emit best code for every compiler. That is, the writer is aware of the fact that only GCC can use these attributes, and has made different choices for other compilers.
You could definitely break the portability of your code. And why would you want to implement your own smart pointer, aside from the learning experience? Aren't there enough of them available in (near-)standard libraries?
I would expect the output:
I would expect the input:
int bar(int x) {
    return foo(x) * 100;
}
Your code actually looks strange to me. As a maintainer I would think that either foo actually has side effects, or, more likely, I would immediately rewrite it to the function above.
How does it work out in practice? If I mark such functions as pure/const, could it break anything (considering that the code is all correct)?
If the code is all correct then no. But the chances that your code is correct are small. If your code is incorrect then this feature can mask out bugs:
int foo(int x) {
    globalmutex.lock();
    // complicated calculation code
    return -1;            // bug: early return, so the mutex is never unlocked
    // more complicated calculation
    globalmutex.unlock(); // unreachable
    return x;
}
Now given the bar from above:
int main() {
    cout << bar(-1);
}
This terminates with __attribute__((const)) but deadlocks otherwise.
It also highly depends on the implementation. For example:
void f() {
    for(;;)
    {
        globalmutex.unlock();
        cout << foo(42) << '\n';
        globalmutex.lock();
    }
}
Where should the compiler move the call to foo(42)? Is it allowed to optimize this code at all? Not in general! So unless the loop is really trivial, you get no benefit from this feature. But if your loop is trivial, you can easily optimize it yourself.
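For instance, here is the hand-hoisted version of the loop above, valid only if you yourself know it is safe to call foo at that point (which is exactly the judgement the compiler cannot make):

void f() {
    const int r = foo(42); // hoisted by hand, on your own responsibility
    for(;;)
    {
        globalmutex.unlock();
        cout << r << '\n';
        globalmutex.lock();
    }
}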
EDIT: as Albert requested a less obvious situation, here it comes:
For example, if you implement operator<< for an ostream, you use ostream::sentry, which locks the stream buffer. Suppose you call a pure/const f after the lock is released or before it is acquired. Someone uses this operator via cout << YourType(), and f also does cout << "debug info". According to your reasoning, the compiler is free to move the invocation of f into the critical section. Deadlock occurs.
I would examine the generated asm to see what difference they make. (My guess would be that switching from C++ streams to something else would yield more of a real benefit, see: http://typethinker.blogspot.com/2010/05/are-c-iostreams-really-slow.html )
I think nobody knows this (with the exception of gcc programmers), simply because you rely on undefined and undocumented behaviour, which can change from version to version. But how about something like this:
#ifdef NDEBUG
#define safe_pure __attribute__((pure))
#else
#define safe_pure
#endif
I know it's not exactly what you want, but now you can use the pure attribute without breaking the rules.
If you do want to know the answer, you may ask in the gcc forums (mailing list, whatever), they should be able to give you the exact answer.
Meaning of the code: when NDEBUG (the symbol used in the assert macros) is defined, we don't debug, have no side effects, and can use the pure attribute. When it is not defined, we have side effects, so the pure attribute is not used.
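You would then declare your functions just like in the question, e.g.:

int foo(int) safe_pure;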
Related
Consider the following code snippet:
#include <vector>
#include <cstdlib>

void __attribute__ ((noinline)) calculate1(double& a, int x) { a += x; }
void __attribute__ ((noinline)) calculate2(double& a, int x) { a *= x; }

void wrapper1(double& a, int x) { calculate1(a, x); }
void wrapper2(double& a, int x) { calculate2(a, x); }

typedef void (*Func)(double&, int);

int main()
{
    std::vector<std::pair<double, Func>> pairs = {
        std::make_pair(0, (rand() % 2 ? &wrapper1 : &wrapper2)),
        std::make_pair(0, (rand() % 2 ? &wrapper1 : &wrapper2)),
    };
    for (auto& [a, wrapper] : pairs)
        (*wrapper)(a, 5);
    return pairs[0].first + pairs[1].first;
}
With -O3 optimization the latest gcc and clang versions do not optimize the pointers to wrappers into pointers to the underlying functions. See the assembly here at line 22:
mov ebp, OFFSET FLAT:wrapper2(double&, int) # tmp118,
which later results in call + jmp, instead of just call, had the compiler put a pointer to calculate1 there instead.
Note that I specifically asked for non-inlined calculate functions to illustrate the point; doing it without noinline results in another flavour of non-optimization, where the compiler generates two identical functions to be called by pointer (so it still won't optimize, just in a different fashion).
What am I missing here? Is there any way to guide the compiler, short of manually plugging in the correct functions (without wrappers)?
Edit 1. Following suggestions in the comments, here is a disassembly with all functions declared static, with exactly the same result (call + jmp instead of call).
Edit 2. Much simpler example of the same pattern:
#include <vector>
#include <cstdlib>

typedef void (*Func)(double&, int);

static void __attribute__ ((noinline)) calculate(double& a, int x) { a += x; }
static void wrapper(double& a, int x) { calculate(a, x); }

int main() {
    double a = 5.0;
    Func f;
    if (rand() % 2)
        f = &wrapper; // f = &calculate;
    else
        f = &wrapper;
    f(a, 0);
    return 0;
}
gcc 8.2 successfully optimizes this code by throwing the pointer to wrapper away and storing &calculate directly in its place (https://gcc.godbolt.org/z/nMIBeo). However, changing the line as per the comment (that is, performing part of the same optimization manually) breaks the magic and results in a pointless jmp.
You seem to be suggesting that &calculate1 should be stored in the vector instead of &wrapper1. In general this is not possible: later code might try to compare the stored pointer against &calculate1 and that must compare false.
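For illustration, hypothetical later code (not present in your program) that would observe the substitution:

// Given the pairs vector from the question:
bool same_wrapper = (pairs[0].second == &wrapper1);   // true whenever wrapper1 was stored
bool same_callee  = (pairs[0].second == &calculate1); // required to be false in every case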
I further assume that your suggestion is that the compiler might try to do some static analysis, determine that the function pointer values in the vector are never compared for equality with other function pointers, and establish that none of the other operations done on the vector elements would produce a change in observable behaviour; and that therefore, in this exact program, it could store &calculate1 instead.
Usually the answer to "why does the compiler not perform some particular optimization" is that nobody has conceived of and implemented that idea. Another common reason is that the static analysis involved is, in the general case, quite difficult and might lead to a slowdown in compilation with no benefit in real programs where the analysis could not be guaranteed to succeed.
You are making a lot of assumptions here. The first is your syntax. The second is that compilers are perfect in the eye of the beholder and catch everything. The reality is that it is easy to find and hand-optimize compiler output: it is not difficult to write small functions that trip up a compiler you are well in tune with, or to write a decent-sized application and find places where you can hand-tune. This is all known and expected. Then opinion comes in: on my machine my blah is faster than blah, so the compiler should have emitted these instructions instead.
gcc is not a great compiler for performance; on some targets it has been getting worse for a number of major revisions. It is pretty good at what it does, better than pretty good: it deals with a number of preprocessors/languages, has a common middle end, and has a number of backends. Some backends get better optimization applied front to back; others are just hanging on for the ride. There were a number of other compilers that could produce code that easily outperformed gcc.
These were mostly pay-for compilers. More than an individual would pay out of pocket: used car prices, sometimes recurring annually.
There are things gcc can optimize that are simply amazing, and times when it goes totally in the wrong direction. The same goes for clang: often they do similar jobs with similar output, sometimes doing impressive things, sometimes just going off into the weeds. I now find it more fun to manipulate the optimizer into doing good or bad things rather than worry about why it didn't do what I "think" it should have done on a particular occasion. If I need that code faster, I take the compiled output, hand-fix it, and use it as an assembly function.
You get what you pay for with gcc; if you were to look deep into its bowels you would find it is barely held together with duct tape and baling wire (llvm is catching up). But for a free tool it does a simply amazing job, and it is so widely used that you can get free support just about anywhere. We are sadly well into a time where folks think that because gcc interprets the language in a certain way, that is how the language is defined; sadly that is not remotely true. But so many folks never try other compilers to find out what "implementation defined" really means.
Last and most important: it's open source. If you want to "fix" an optimization, just do it. Keep that fix for yourself, post it, or try to push it upstream.
I found this example at cppreference.com, and it seems to be the de facto example used throughout StackOverflow:
template<int N>
struct S {
    int a[N];
};
Surely, non-type templatization has more value than this example. What other optimizations does this syntax enable? Why was it created?
I am curious, because I have code that is dependent on the version of a separate library that is installed. I am working in an embedded environment, so optimization is important, but I would like to have readable code as well. That being said, I would like to use this style of templating to handle version differences (examples below). First, am I thinking of this correctly, and MOST IMPORTANTLY does it provide a benefit or drawback over using a #ifdef statement?
Attempt 1:
template<int VERSION = 500>
void print (char *s);

template<int VERSION>
void print (char *s) {
    std::cout << "ERROR! Unsupported version: " << VERSION << "!" << std::endl;
}

template<>
void print<500> (char *s) {
    // print using 500 syntax
}

template<>
void print<600> (char *s) {
    // print using 600 syntax
}
OR - Since the template is constant at compile time, could a compiler consider the other branches of the if statement dead code using syntax similar to:
Attempt 2:
template<int VERSION = 500>
void print (char *s) {
    if (VERSION == 500) {
        // print using 500 syntax
    } else if (VERSION == 600) {
        // print using 600 syntax
    } else {
        std::cout << "ERROR! Unsupported version: " << VERSION << "!" << std::endl;
    }
}
Would either attempt produce output comparable in size to this?
void print (char *s) {
#if VERSION == 500
    // print using 500 syntax
#elif VERSION == 600
    // print using 600 syntax
#else
    std::cout << "ERROR! Unsupported version: " << VERSION << "!" << std::endl;
#endif
}
If you can't tell I'm somewhat mystified by all this, and the deeper the explanation the better as far as I'm concerned.
Compilers find dead code elimination easy. That is the case where you have a chain of ifs depending (only) on a template parameter's value or type. All branches must contain valid code, but when compiled and optimized the dead branches evaporate.
A classic example is a per-pixel operation written with template parameters that control details of code flow. The body can be full of branches, yet the compiled output is branchless.
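For instance, a minimal sketch of such a per-pixel operation (the Blend flag is my own invented example):

template<bool Blend>
unsigned char shade(unsigned char src, unsigned char dst) {
    if (Blend)                  // dead branch, eliminated when Blend is false
        return (src + dst) / 2;
    return src;
}
// shade<true> and shade<false> each compile to straight-line, branchless code.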
Similar techniques can be used to unroll loops (say, scanline loops). Care must be taken to understand the code-size multiplication that can result, especially if your compiler lacks ICF (aka comdat folding), which the gold gcc linker and msvc (among others) have.
Fancier things can also be done, like manual jump tables.
You can do pure compile-time type checks with no runtime behaviour at all, such as dimensional analysis. Or distinguish between points and vectors in n-space.
Enums can be used to name types or switches. Pointers to functions to enable efficient inlining. Pointers to data to allow 'global' state that is mockable, or siloable, or decoupled from implementation. Pointers to strings to allow efficient readable names in code. Lists of integral values for myriads of purposes, like the indexes trick to unpack tuples. Complex operations on static data, like compile time sorting of data in multiple indexes, or checking integrity of static data with complex invariants.
I am sure I missed some.
An obvious optimization: when using an integer template parameter, the compiler has a constant rather than a variable:
int foo(size_t); // definition not visible
// vs
template<size_t N>
size_t foo() {return N*N;}
With the template, there's nothing to compute at runtime, and the result may be used as a constant, which can aid other optimizations. You can take this example further by declaring it constexpr, as 5gon12eder mentioned below.
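For example (a sketch; foo_cx is a hypothetical name chosen to avoid clashing with the overloads above):

#include <cstddef>

constexpr std::size_t foo_cx(std::size_t n) { return n * n; }
static_assert(foo_cx(4) == 16, "evaluated entirely at compile time");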
Next example:
int foo(double, size_t); // definition not visible
// vs
template<size_t N>
double foo(double p) {
    double r(p);
    for (size_t i(0); i < N; ++i) {
        r *= p;
    }
    return r;
}
Ok. Now the number of iterations of the loop is known. The loop may be unrolled/optimized accordingly, which can be good for size, speed, and eliminating branches.
Also, basing off your example, std::array<> exists. std::array<> can be much better than std::vector<> in some contexts, because std::vector<> uses heap allocations and non-local memory.
There's also the possibility that some specializations will have different implementations. You can separate those and (potentially) reduce other referenced definitions.
Of course, templates<> can also work against you by unnecessarily duplicating code in your program.
templates<> also require longer symbol names.
Getting back to your version example: yes, it's certainly possible that if VERSION is known at compilation time, the code which is never executed can be deleted, and you may also be able to reduce the number of referenced definitions. The primary difference will be that void print(char *s) will have a shorter symbol name than the template (whose symbol name includes all template parameters). For one function, that's counting bytes. For complex programs with many functions and templates, that cost can add up quickly.
There is an enormous range of potential applications of non-typename template parameters. In his book The C++ Programming Language, Stroustrup gives an interesting example that sketches out a type-safe zero-overhead framework for dealing with physical quantities. Basically, the idea is that he writes a template that accepts integers denoting the powers of fundamental physical quantities such as length or mass and then defines arithmetic on them. In the resulting framework, you can add speed with speed or divide distance by time but you cannot add mass to time. Have a look at Boost.Units for an industry-strength implementation of this idea.
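A minimal sketch of the idea (not Boost.Units' actual interface, just the shape of it):

// The exponents of mass, length, and time are carried in the type.
template<int M, int L, int T>
struct Quantity {
    double value;
};

// Addition only compiles for identical dimensions.
template<int M, int L, int T>
Quantity<M, L, T> operator+(Quantity<M, L, T> a, Quantity<M, L, T> b) {
    return {a.value + b.value};
}

// Division subtracts exponents: distance / time = speed.
template<int M1, int L1, int T1, int M2, int L2, int T2>
Quantity<M1-M2, L1-L2, T1-T2> operator/(Quantity<M1, L1, T1> a,
                                        Quantity<M2, L2, T2> b) {
    return {a.value / b.value};
}

using Mass     = Quantity<1, 0, 0>;
using Distance = Quantity<0, 1, 0>;
using Time     = Quantity<0, 0, 1>;
using Speed    = Quantity<0, 1, -1>;

// Speed v = Distance{100.0} / Time{9.58};  // OK
// auto bad = Mass{1.0} + Time{1.0};        // does not compile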
For your second question: any reasonable compiler should be able to produce exactly the same machine code for
#define FOO
#ifdef FOO
do_foo();
#else
do_bar();
#endif
and
#define FOO_P 1
if (FOO_P)
    do_foo();
else
    do_bar();
except that the second version is much more readable and the compiler can catch errors in both branches simultaneously. Using a template is a third way to generate the same code but I doubt that it will improve readability.
Please refer to section 41.2.2 Instruction Reordering of "TCPL" 4th edition by B. Stroustrup, which I transcribe below:
To gain performance, compilers, optimizers, and hardware reorder instructions. Consider:
// thread 1:
int x;
bool x_init;

void init()
{
    x = initialize(); // no use of x_init in initialize()
    x_init = true;
    // ...
}
For this piece of code there is no stated reason to assign to x before assigning to x_init. The optimizer (or the hardware instruction scheduler) may decide to speed up the program by executing x_init = true first. We probably meant for x_init to indicate whether x had been initialized by initializer() or not. However, we did not say that, so the hardware, the compiler, and the optimizer do not know that.
Add another thread to the program:
// thread 2:
extern int x;
extern bool x_init;

void f2()
{
    int y;
    while (!x_init) // if necessary, wait for initialization to complete
        this_thread::sleep_for(milliseconds{10});
    y = x;
    // ...
}
Now we have a problem: thread 2 may never wait and thus will assign an uninitialized x to y. Even if thread 1 did not set x_init and x in ''the wrong order,'' we still may have a problem. In thread 2, there are no assignments to x_init, so an optimizer may decide to lift the evaluation of !x_init out of the loop, so that thread 2 either never sleeps or sleeps forever.
Does the Standard allow the reordering in thread 1? (A supporting quote from the Standard would be welcome.) Why would that speed up the program?
Both answers in this discussion on SO seem to indicate that no such optimization occurs when there are global variables in the code, such as x_init above.
What does the author mean by "to lift the evaluation of !x_init out of the loop"? Is it something like this?
if( !x_init ) while(true) this_thread::sleep_for(milliseconds{10});
y = x;
This is not so much an issue of the C++ compiler/standard as of modern CPUs. Have a look here. The compiler isn't going to emit memory-barrier instructions between the assignments of x and x_init unless you tell it to.
For what it is worth, prior to C++11 the standard had no notion of multithreading in its abstract machine model. Things are a bit nicer these days.
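For illustration, the usual C++11 way to express the intent of the book's example (a sketch, with a stand-in for initialize()):

#include <atomic>
#include <chrono>
#include <thread>

int x;
std::atomic<bool> x_init{false};

void init() {
    x = 42;                                        // stand-in for initialize()
    x_init.store(true, std::memory_order_release); // cannot be reordered before the store to x
}

void f2() {
    while (!x_init.load(std::memory_order_acquire)) // cannot be hoisted out of the loop
        std::this_thread::sleep_for(std::chrono::milliseconds{10});
    int y = x; // guaranteed to see the initialized value
    (void)y;
}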
The C++11 standard does not "allow" or "prevent" reordering as such. It specifies ways to establish a "barrier" which, in turn, prevents the compiler from moving instructions across it. A compiler might reorder the assignments in this example because it could be more efficient on a CPU with multiple computing units (ALUs/hyperthreading/etc.), even with a single core. Typically, if your CPU has two ALUs that can work in parallel, there is no reason the compiler would not try to feed them with as much work as it can.
I'm not speaking of the out-of-order execution of CPU instructions that is done internally in Intel CPUs (for example), but of compile-time ordering done to ensure all the computing resources stay busy.
I think it depends on the compilation flags. Typically, unless you tell it otherwise, the compiler must assume that another compilation unit (say B.cpp, which is not visible at compile time) can declare "extern bool x_init;" and change it at any time. Then the reordering optimization would break the expected behavior (B could define the initialize() function). This example is trivial and unlikely to break. The linked SO answers are not related to this optimization; they simply state that, in their case, the compiler cannot assume the global array is not modified externally, and as such cannot make the optimization. That is not like your example.
Yes. It's a very common optimization trick. Instead of:
// test is a bool
for (int i = 0; i < 345; i++) {
    if (test) do_something();
}
The compiler might do:
if (test) for(int i = 0; i < 345; i++) { do_something(); }
And save 344 useless tests.
... approximately, compared to a typical std::string::operator==()? I give some more details below; I'm not sure if they are of any relevance. An answer giving the complexity or an approximation is good enough. Thanks!
Details: I will use it inside a for loop over a list to find some specific instances. I estimate my average depth of inheritance at 3.5 classes. The class I'm looking for has a parent, a grandparent, and above that two "interfaces", i.e. two abstract classes with a couple of virtual void abc() = 0; members.
There is no sub-class to the one I'll be looking for.
It depends hugely on your compiler, your particular class hierarchy, the hardware, all sorts of factors. You really need to measure it directly inside your particular application. You can use rdtsc or (on Windows) QueryPerformanceCounter to get a relatively high-precision timer for that purpose. Be sure to time loops or sleds of several thousand dynamic_cast<>s, because even QPC only has a ¼μs resolution.
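For instance, a minimal timing sketch using std::chrono instead of rdtsc/QPC (the type names are hypothetical):

#include <chrono>
#include <iostream>

struct Top { virtual ~Top() {} };
struct Leaf : Top {};

int main() {
    Leaf leaf;
    Top* p = &leaf;
    int hits = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 1000000; ++i)
        if (dynamic_cast<Leaf*>(p)) ++hits; // the cast under test
    auto t1 = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration<double, std::nano>(t1 - t0).count() / 1000000
              << " ns per dynamic_cast (" << hits << " hits)\n";
}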
In our app, a dynamic_cast<> costs about 1 microsecond, and a string comparison about 3ns/character.
Both dynamic_cast<> and stricmp() are at the top of our profiles, which means the performance cost of using them is significant. (Frankly in our line of work it's unacceptable to have those functions so high on the profile and I've had to go and rewrite a bunch of someone else's code that uses them.)
The best answer is to measure; my guess would be that dynamic_cast is faster than comparing any but the shortest strings (or perhaps even faster than comparing short strings, too).
That being said, trying to determine the type of an object is usually a sign of bad design, according to Liskov's substitution principle you should just treat the object the same and have the virtual functions behave the correct way without examining the type from the outside.
Edit: After re-reading your question, I'll stick to "There is no sub-class to the one I'll be looking for."
In that case you can use typeid directly. I believe it should be faster than either of your options (although looking for a specific type is still a code smell in my opinion):
#include <iostream>
#include <typeinfo>

struct top {
    virtual ~top() {}
};

struct left : top { };
struct right : top { };

int main()
{
    left lft;
    top &tp = lft;

    std::cout << std::boolalpha << (typeid(lft) == typeid(left)) << std::endl;
    std::cout << std::boolalpha << (typeid(tp) == typeid(left)) << std::endl;
    std::cout << std::boolalpha << (typeid(tp) == typeid(right)) << std::endl;
}
Output:
true
true
false
I'm in a position where this design would greatly improve the clarity and maintainability of my code.
What I'm looking for is something like this:
#define MY_MACRO(arg) #if (arg)>0 cout<<((arg)*5.0)<<endl; #else cout<<((arg)/5.0)<<endl; #endif
The idea here:
The pre-processor substitutes different lines of code depending on the compile-time (constant) value of the macro argument. Of course, I know this syntax doesn't work, as the # is seen as the stringize operator instead of starting a #if directive, but I think it demonstrates the pre-processor functionality I am trying to achieve.
I know that I could just put a standard if statement in there and let the compiler/runtime check the value. But this is unnecessary work for the application when arg will always be passed a constant value, like 10.8 or -12.5, that only needs to be evaluated at compile time.
Performance needs for this number-crunching application require that all unnecessary runtime conditions be eliminated whenever possible, and many constant values and macros have been used (in place of variables and functions) to make this happen. The ability to continue this trend without having to mix pre-processor code with real if conditions would make this much cleaner - and, of course, code-cleanliness is one of the biggest concerns when using macros, especially at this level.
As far as I know, you cannot have #if (or anything similar) inside your macro.
However, if the condition is known at compile time, you can safely use a normal if statement; the compiler will optimise it away (assuming you have optimisations turned on).
This is called "dead code elimination".
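A minimal sketch of what happens (assuming, as in the question, that arg is a compile-time constant):

#include <iostream>

int main() {
    const double arg = 10.8;            // known constant, as in the question
    if (arg > 0)
        std::cout << arg * 5.0 << '\n'; // the optimizer keeps only this branch
    else
        std::cout << arg / 5.0 << '\n'; // removed as dead code with optimisations on
}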
Simple, use real C++:
#include <iostream>
using std::cout; using std::endl;

template <bool B> void foo_impl (int arg)       { cout << arg*5.0 << endl; }
template < >      void foo_impl<false>(int arg) { cout << arg/5.0 << endl; }
template <int I>  void foo ( )                  { foo_impl< (I>0) >(I); }
[edit]
Or in modern C++, if constexpr.
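A sketch of that variant (C++17), equivalent in effect:

template <int I> void foo() {
    if constexpr (I > 0)
        cout << I * 5.0 << endl;
    else
        cout << I / 5.0 << endl;
}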