C++11 atomic: is std::memory_order code portable? - c++

Functions like std::atomic::compare_exchange take runtime parameters such as std::memory_order_release and std::memory_order_relaxed.
(http://en.cppreference.com/w/cpp/atomic/atomic/compare_exchange)
I'm not sure whether these memory order flags are guaranteed to exist on all kinds of CPUs/architectures. If some CPU doesn't support a flag, does using that flag lead to a crash, or something else? Some of these flags seem to be designed for Intel Itanium and the like, so I'm not sure whether std::memory_order-related code is portable or not.
Would you give some suggestions?

The C++ standard does have the concept of so-called "freestanding" implementations, which can support a subset of the standard library. However, it also defines bare-minimum functionality that even a freestanding implementation must support. Within that list is the entirety of the <atomic> header.
So yes, implementations must support these.
However, that doesn't mean that a particular flag will do exactly and only what that flag describes. The flags represent the minimum memory barrier, the specific things that are guaranteed to be visible. The implementation could issue a full memory barrier even for flags that don't require it, if the hardware implementation doesn't have lower-tier memory barriers.
So you should write code against what the standard says, and let the compiler sort out the details. If it proves to be inefficient on a platform, you can inspect the assembly to see if you might be able to improve matters.
But to answer your main question, yes, atomic-based code is portable (modulo compiler bugs).
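For illustration, here is a minimal sketch (the spinlock class is my own example, not from the question) that passes explicit memory orders to compare_exchange_weak; the same source builds on x86, ARM and the rest, and the compiler emits whatever barriers the target actually needs:

#include <atomic>

class spinlock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        bool expected = false;
        // acquire on success so later accesses cannot move above the lock;
        // relaxed on failure because we just retry
        while (!locked.compare_exchange_weak(expected, true,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed)) {
            expected = false;  // compare_exchange_weak updates 'expected' on failure
        }
    }
    void unlock() {
        // release so earlier writes are visible to whoever locks next
        locked.store(false, std::memory_order_release);
    }
};

On a platform without fine-grained barriers the compiler is free to strengthen these to full fences; the code stays correct either way.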

In general, compilers are free to only provide the strongest memory guarantees regardless of what you ask for.
On some platforms, there are relaxed guarantees that are sufficient. Not all platforms support these relaxed guarantees. On those platforms, compilers must provide strictly stronger guarantees.
So they are portable, in that conforming compilers must provide that guarantee or better when you ask for a particular guarantee.
Note that it isn't just the hardware that is of concern. Certain optimizations and reorderings may be legal or not depending on what memory order guarantee you ask for. I am unaware of any compiler that relies on that, but I am not a compiler expert.

In practice, consume semantics are always strengthened to acquire by current compilers, because they turned out to be very hard to implement safely without doing that. One of the following holds:
- The architecture provides acquire semantics on all loads, so a consume load does the same thing as an acquire load, which does the same as a relaxed load: nothing special (like x86).
- The architecture requires an acquire barrier even on dependent accesses (very rare, maybe only DEC Alpha): then the compiler will use an acquire barrier for consume.
- The ISA guarantees dependency ordering for loads in asm, but a full acquire needs a barrier. (This is what consume is trying to expose to the programmer.) The compiler should give you the benefit of avoiding the barrier for logical (not crazy) uses of consume. In practice:
  - either the compiler tries to do that, but it's tricky and fails in some corner cases where back-end optimizations break the dependency (the front end often does not communicate with its back end enough to preserve it just for consume);
  - or you don't trust the compiler and set optimization to zero, which doesn't even guarantee that no trivial optimization done implicitly by the back end will break consume (and makes performance very bad);
  - or the compiler writers did not care about efficiency, or knew they couldn't do a reliable job providing consume, so they provide acquire instead, which is semantically correct but really less efficient, and not the intent of the standard.
(And C++'s consume semantics are crazy. They are one of the most inconsistent parts of C++, which tells you a lot.)
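As a hedged sketch of the pattern consume was designed for (pointer publication followed by data-dependent loads), and of why strengthening it to acquire is always correct:

#include <atomic>

struct Payload { int value; };

std::atomic<Payload*> published{nullptr};

void producer() {
    Payload* p = new Payload{42};
    // release: initialization of *p happens-before the pointer becomes visible
    published.store(p, std::memory_order_release);
}

void consumer() {
    Payload* p;
    // consume was meant to order only data-dependent reads such as p->value;
    // current compilers treat it as acquire, which is stronger but still correct
    while (!(p = published.load(std::memory_order_consume))) { /* spin */ }
    int v = p->value;  // guaranteed to observe 42
    (void)v;
}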

Related

Can I control what gets copied into CPU cache in C++?

I read about cache optimization in C++ and the mechanisms modern CPUs use to predict what data is needed next and copy it into the cache. But is there a direct way in C++ for programmers, who know what is actually needed next, to determine what data gets copied into the CPU cache?
This varies with the processor and compiler you're using.
Assuming you're using an Intel x86/x64 or compatible (e.g., AMD) processor, the processor provides a number of prefetch instructions, and most compilers include intrinsics to invoke them. With VC++ you use _m_prefetch or _m_prefetchw. With gcc you use __builtin_prefetch.
Likewise, VC++ on an ARM provides a __prefetch intrinsic for the same purpose (no, I really don't know why they couldn't have used the same name as on x86; the signature and effect appear identical).
Most other reasonably modern, higher-end processors probably provide similar instructions, and I'd guess most compilers provide intrinsics to make them available, but just as with these, the names of the intrinsics will vary. For that matter, even though the functions are intrinsic to the compiler, most require that you include some header to use them -- and the name of the header will also vary.
The prefetch intrinsics Jerry provided would do the trick. Keep in mind that there are several flavors, controlled by an argument to that function, which determines which levels of the cache (if any) will be used to keep the line. A prefetchNTA, for example, would not pollute the caches, but rather provides the line only for immediate use (it is used in cases where you're going to use the data soon and only once).
Also keep in mind that these instructions are basically hints to the CPU (which already does quite well on its own at guessing which lines to prefetch). As such, they are not guaranteed to work; they may fail in many cases (for example, if the memory subsystem is loaded, or the address got swapped out of memory).
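As a hedged sketch of what such an intrinsic looks like in source, here is a loop using GCC/Clang's __builtin_prefetch (the 16-element distance is a made-up tuning parameter, not a recommendation):

#include <cstddef>

double sum(const double* data, std::size_t n) {
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        if (i + 16 < n) {
            // arguments: address, rw hint (0 = read), temporal locality (0..3)
            __builtin_prefetch(&data[i + 16], 0, 3);
        }
        total += data[i];
    }
    return total;
}

Whether this helps at all has to be measured; the hardware prefetcher often handles a simple sequential scan like this one on its own.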

Are the Optimization Keywords in C and C++ Reasonable?

So we've all heard the don't-use-register line, the reasoning being that trying to out-optimize a compiler is a fool's errand.
register, from what I know, doesn't actually state anything about CPU registers, just that a given variable can't be referenced indirectly. I'll hazard a guess that it's often referred to as obsolete because compilers can detect a lack of addressing automatically thus making such optimizations transparent.
But if we're firm on that argument, can't it be levelled at every optimization-driven keyword in C? Why do we use inline and C99's restrict for example?
I suppose that some things like aliasing make deducing some optimizations hard or even impossible, so where is the line drawn before we start venturing into Sufficiently Smart Compiler territory?
Where should the line be drawn in C and C++ between spoon-feeding the compiler optimization information and assuming it knows what it's doing?
EDIT: Jens Gustedt pointed out that my conflating of C and C++ isn't right since two of the keywords have semantic differences and one doesn't exist in standard C++. I had a good link about register in C++ which I'll add if I find it...
I would agree that register and inline are somewhat similar in this respect. If the compiler can see the body of the callee while compiling a call site, it should be able to make a good decision on inlining. The use of the inline keyword in both C and C++ has more to do with the mechanics of making the body of the function visible than with anything else.
restrict, however, is different. When compiling a function, the compiler has no idea of what the call sites are going to be. Being able to assume no aliasing can enable optimizations that would otherwise be impossible.
inline is used in the scenario where you implement a non-templated function in a header and then include it from multiple compilation units.
This ensures that the compiler creates just one instance of the function, as though it were inlined, so you do not get a link error for a multiply defined symbol. It does not, however, require the compiler to actually inline it.
There are GNU attributes for that (always_inline or something similar, if I remember correctly), but that is a language extension.
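A minimal sketch of that scenario (the header name is hypothetical): the function is defined in a header included by several .cpp files, and inline keeps the linker happy rather than forcing any inlining:

// helpers.h, included from several translation units
#ifndef HELPERS_H
#define HELPERS_H

// 'inline' here is about the one-definition rule: every translation unit gets
// the same definition without a multiply-defined-symbol link error.
// Whether the call is actually inlined is still up to the compiler.
inline int clamp_to_byte(int v) {
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return v;
}

#endif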
register doesn't even say that you can't reference the variable indirectly (at least in C++). It said that in the original C, but that has been dropped.
Whether trying to out-optimize the compiler is a fool's errand depends on the optimization. Not many compilers, for example, will convert sin(x) * sin(x) + cos(x) * cos(x) into 1.
Today, most compilers ignore register, and no one uses it, because compilers have become good enough at register allocation to do a better job than you can with register. In fact, respecting register would typically make the generated code slower. This is not the case for inline or restrict: in both cases, there exist techniques, at least theoretically, which could result in the compiler doing a better job than you can. Such techniques are not widespread, however, and (as far as I know, at least) have a very high compile time overhead, with in some cases compile times which grow exponentially with the size of the program (which makes them more or less unusable on most real programs; compile times measured in years really aren't acceptable).
As to where to draw the line... it changes in time. When I first started programming in C, register made a significant difference, and was widely used. Today, no. I imagine that in time, the same may happen with inline or restrict; some experimental compilers are very close with inline already.
This is a flame-bait question but I will dive in anyway.
Compilers are a lot better at optimising than your average programmer. There was a time when I programmed on a 25MHz 68030 and got some advantage from the use of register because the compiler's optimizer was so poor. But that was back in 1990.
I see inline as just as bad as register.
In general, measure first before you modify. If you find that your code performs so poorly that you want to use register or inline, take a deep breath, stand back and look for a better algorithm first.
In recent times (i.e. the last 5 years) I have gone through code bases and removed inline functions galore with no perceptible change in performance. Code size, however, always benefits from the removal of inline methods. That isn't a big issue for your standard x86-style monster multicore marvel of the modern age, but it does matter if you work in the embedded space.
It is a moving target, because compiler technology is improving. (Well, sometimes it is more changing than improving, but that has some of the same effect of rendering your optimization attempts moot, or worse.)
Generally, you should not guess at whether an optimization keyword or other optimization technique is good or not. One has to learn quite a bit about how computers work, including the particular platform you are targeting, and how compilers work.
So a rule about using various optimization techniques is to ask: do I know the compiler will not do the best job here? Am I willing to commit to that for a while? Will the compiler remain stable while this code is in use, and am I willing to rewrite the code when the compiler changes the situation? Typically, you have to be an experienced and knowledgeable software engineer to know when you can do better than the compiler. It also helps if you can talk to the compiler developers.
This means people cannot give you an answer here that has a definite guideline. It depends on what compiler you are using, what your project is, what your resources are, and what your goals are, and so on.
Although some people say not to try to out-optimize the compiler, there are various areas of software engineering where people do better than a compiler and in which it is worth the expense of paying people for this.
The difference is as follows:
register is a very local optimization (i.e. inside one function). Register allocation is a relatively solved problem, both thanks to smarter compilers and to the larger number of registers (mostly the former, but x86-64 has more registers than x86, and both have more than, say, an 8-bit processor).
inline is harder, as it is an inter-procedural optimization. However, as it involves a relatively small depth of recursion and a small number of procedures (if the inlined procedure is too big, there is no point in inlining it), it may safely be left to the compiler.
restrict is much harder. To know for sure that two pointers don't alias you would need to analyse the whole program (including libraries, the system, plug-ins etc.), and even then you would run into problems. However, the information is clearer for the programmer AND it is part of the specification.
Consider very simple code:
void my_memcpy(void *dst, const void *src, size_t size) {
    for (size_t i = 0; i < size; i++) {
        ((char *)dst)[i] = ((const char *)src)[i];
    }
}
Is there a benefit to making this code efficient? Yes - memcpy tends to be very useful (say, for copying in a GC). Can this code be vectorized (here - moved by words - say 128 bits at a time instead of 8)? The compiler would have to deduce that dst and src do not alias in any way and that the regions they point to are independent. size may depend on user input, runtime behaviour or other factors, which makes that analysis practically impossible - similar problems to the Halting Problem - in general we cannot analyse everything without running it. Or the function might be part of the C library (I assume shared libraries) and be called by other programs, so not all call sites are even known at compile time. Without such analysis, vectorizing would make the program exhibit different behaviour with optimization on. On the other hand, the programmer can ensure that they are different objects simply by knowing the (even higher-level) design, with no need for bottom-up analysis.
restrict can also be part of the documentation, as the programmer might have written the procedure in a way that cannot handle two aliasing pointers. For example, if we want to copy memory between overlapping locations, the above code is incorrect.
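As a hedged sketch of how that promise is spelled out: in C99 the keyword is restrict, and in C++ most compilers (GCC, Clang, MSVC) accept __restrict as an extension; with it the compiler may vectorize the loop without proving anything about the call sites:

#include <cstddef>

void my_memcpy_restrict(void *__restrict dst, const void *__restrict src,
                        std::size_t size) {
    char *d = (char *)dst;
    const char *s = (const char *)src;
    for (std::size_t i = 0; i < size; i++) {
        d[i] = s[i];  // caller promises no overlap, so wide/vector copies are legal
    }
}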
So to sum up: a Sufficiently Smart Compiler would not be able to deduce restrict without knowing the whole program (unless we move to compilers that understand the meaning of the code). Even then it would be close to undecidable. For local optimizations, however, compilers are already sufficiently smart. My guess is that a Sufficiently Smart Compiler with whole-program analysis would still manage to deduce it in many interesting cases, though.
PS. By local I mean a single function. So a local optimization cannot assume anything about arguments, global variables, etc.
One thing that hasn't been mentioned is that many non-x86 compilers aren't nearly as good at optimizing as gcc and other "modern" C-compilers are.
For instance, the compilers for PIC are absolutely terrible at optimizing. Also, the optimizer for cicc (the CUDA compiler), though much better, still seems to miss a lot of fairly simple optimizations.
For these cases, I've found optimization hints like register, inline, and #pragma unroll to be extremely useful.
From what I have seen back in the days when I was more involved with C/C++, these are merely directives given straight to the compiler. The compiler may try to inline a function even if it is not explicitly told to do so. That really depends on the compiler and may even raise some cross-compiler issues. As an example, Visual Studio provides different levels of optimization, which correspond to different levels of intelligence of the compiler. I have read that member functions defined inside the class body are implicitly inline, giving the compiler a hint to minimize function call overhead. In any case, these directives are extremely helpful when you are using a less intelligent compiler, while with an intelligent compiler the optimization may already be obvious to it.
Also, rest assured that these keywords are safe to use. Some compiler optimizations may not work well with some libraries such as OpenGL (as I have seen myself). So in cases where you feel that a compiler optimization may be harmful, you can use these keywords to make sure things are done the way you want.
Compilers such as g++ these days optimize code very well. You might do better to look for optimization elsewhere, for example in the methods and algorithms you use, or by using TBB or CUDA to make your code parallel.

How much harm can be done using register variables in C++

I just came to know that we can use registers explicitly in C++ programs. I wonder what would happen if I declared and used all available registers in a single C++ program and ran it for a considerable amount of time. How badly will my system behave, and what measures (if any) will the OS take to get out of the situation?
The compiler will simply ignore the register keyword, so you are not going to run out of registers. It may well ignore it anyway - compilers are typically much better at register allocation than humans.
The register keyword indicates to the compiler that the variable does not need to be addressable in main memory. Thus the compiler can be sure that there are no pointers to the value and optimize accordingly.
A common misconception: the register keyword
register Keyword (see the non-Microsoft specific part)
"register" keyword in C?
Excessive use of the register keyword is unlikely to have serious negative impact on modern systems. Every thread maintains its own register values during execution, and its register usage will not have any direct impact on other threads. The compiler will either reject or ignore register usage that cannot result in a viable program. Poor register use will at most merely reduce performance and the OS will take no special action.
Only a specific number of registers are available to your C++ program.
Also, it is just a suggestion to the compiler; compilers can mostly do this optimization themselves, so there is not much use for the register keyword any more, because compilers may or may not follow the suggestion.
So the only thing the register keyword does with modern compilers is, in C, prevent you from using & to take the address of the variable (C++ does not even have that restriction).
To Quote Herb Sutter on this:
Never write register. It's exactly as meaningful as whitespace
The register keyword is only a suggestion to the compiler and can be ignored. Let the compiler do the optimization for you.
The register keyword is only a polite suggestion to the compiler that you think this variable will be heavily used and could it pretty-please just keep it in a register. The compiler is free to ignore this suggestion and, in fact, will usually do so in a modern environment.
register is basically a vestigial remnant of the old, grossly-inefficient C compilers that were available way back when. (The same compilers that led to things like the execrable Duff's Device and other monstrosities, in fact.) Modern compilers are far more capable than you are of keeping track of which variables should be placed into which registers at which points in execution. They will, thus, politely ignore you without saying a word.
Als posted a link to Herb Sutter's article on keywords. I agree with Sutter that one should never use register. I disagree with him on whether register is meaningless.
It is worse than meaningless.
I have seen code where a variable qualified with register is later used with "&". Code with dozens and dozens of variables qualified with register. And the ultimate doozy, "register volatile foo;"
Never use "register".
All the CPU registers are at the disposal of your program anyway, so there is nothing exceptional in using all of them. OS won't even notice it.

When can optimizations done by the compiler destroy my C++ code?

When can optimizations done by the compiler cause my C++ code to exhibit wrong behaviour which would not be present had those optimizations not been performed? For example, not using volatile in certain circumstances can cause the program to behave incorrectly (e.g. the generated code not re-reading the value of a variable from memory, instead reading it only once and keeping it in a register). But are there other pitfalls which one should know about before turning on the most aggressive optimization flag and afterwards wondering why the program doesn't work anymore?
Compiler optimizations should not affect the observable behavior of your program, so in theory, you don't need to worry. In practice, if your program strays into undefined behavior, anything could happen, so if your program breaks when you enable optimizations, you've merely exposed existing bugs - it wasn't the optimization that broke it.
One common optimization point is the return value optimisation (RVO) and named return value optimization (NRVO) which basically means objects returned by value from functions get constructed directly in the object which is receiving them, rather than making a copy. This adjusts the order and number of constructor, copy constructor and destructor calls - but usually with those functions correctly written, there's still no observable difference in the behavior.
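A minimal sketch of what that looks like in practice (class and function names are mine): with (N)RVO the copy constructor and one destructor call simply never run, which is observable only because these special members print something:

#include <iostream>

struct Tracer {
    Tracer()              { std::cout << "ctor\n"; }
    Tracer(const Tracer&) { std::cout << "copy\n"; }
    ~Tracer()             { std::cout << "dtor\n"; }
};

Tracer make() {
    Tracer t;   // candidate for named return value optimization
    return t;
}

int main() {
    Tracer t = make();
    // with (N)RVO: "ctor" then "dtor"; without it, extra copy/dtor pairs appear.
    // Both outputs are allowed by the standard.
}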
Besides the case you mentioned, timing can change in multi-threaded code such that what appears to be working no longer does. Placement of local variables can vary such that harmful behaviour like a memory buffer overrun occurs in debug but not release, optimized or non-optimized, or vice versa. But all of these are bugs that were there already, just exposed by compiler option changes.
This is assuming the compiler has no bugs in its optimizer.
I've only run into it with floating point math. Sometimes the optimizations for speed can change the answer a little. Of course with floating point math, the definition of "right" is not always easy to come up with so you have to run some tests and see if the optimizations are doing what you're expecting. The optimizations don't necessarily make the result wrong, just different.
Other than that, I've never seen any optimizations break correct code. Compiler writers are pretty smart and know what they're doing.
Bugs caused by compiler optimizations that are not rooted in bugs in your code are unpredictable and hard to pin down (I managed to find one once when examining the assembly code a compiler had created while optimizing a certain area of my code). The common case is that if an optimization makes your program unstable, it just reveals a flaw in your program.
Just don't work from the assumption that the optimizer ever destroys your code. That's just not what it was made to do. If you do observe problems, consider unintentional UB first.
Yes, threading can play havoc with the kind of assumptions you are used to. You get no help from either the language or the compiler, although that's changing. What you do about that is not piss around with volatile, you use a good threading library. And you use one of its synchronization primitives wherever two or more threads can both touch variables. Trying to take short-cuts or optimizing this yourself is a one-way ticket into threading hell.
Failing to include the volatile keyword when declaring access to a volatile memory location or IO device is a bug in your code; even if the bug is only evident when your code gets optimized.
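A hedged sketch of that classic bug: a flag written from a signal handler (standing in for hardware or another context the compiler cannot see). Without volatile, the optimizer may hoist the load and spin forever:

#include <csignal>

volatile std::sig_atomic_t stop_requested = 0;

extern "C" void on_sigint(int) { stop_requested = 1; }

int main() {
    std::signal(SIGINT, on_sigint);
    // volatile forces a fresh load of stop_requested on every iteration;
    // without it an optimizing compiler may read it once and loop forever
    while (!stop_requested) {
        // do work
    }
}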
Your compiler will document any "unsafe" optimizations where it documents the command-line switches and pragmas that turn them on and off. Unsafe optimizations usually relate to assumptions about floating point math (rounding, edge cases like NaN) or aliasing, as others have already mentioned.
Constant folding can create aliasing making bugs in your code appear. So, for example, if you have code like:
static char *caBuffer = " ";
...
strcpy(caBuffer,...)
Your code is basically an error where you scribble over a constant (literal). Without constant folding, the error won't really affect anything. But much like the volatile bug you mentioned, when your compiler folds constants to save space, you might scribble over another literal like the spaces in:
printf("%s%s%s",cpName," ",cpDescription);
because the compiler might point the literal argument to the printf call at the last 4 characters of the literal used to initialize caBuffer.
As long as your code does not rely on specific manifestations of undefined/unspecified behavior, and as long as the functionality of your code is defined in terms of the observable behavior of a C++ program, C++ compiler optimizations cannot possibly destroy the functionality of your code, with only one exception:
When a temporary object is created with the only purpose of being immediately copied and destroyed, the compiler is allowed to eliminate the creation of such temporary object even if the constructor/destructor of the object has side-effects affecting the observable behavior of the program.
In the newer versions of C++ standard that permission is extended to cover named object in so called Named Return Value Optimization (NRVO).
That's the only way the optimizations can destroy the functionality of conforming C++ code. If your code suffers from optimizations in any other way, it is either a bug in your code or a bug in the compiler.
One can argue though, that relying on this behavior is actually nothing else than relying on a specific manifestation of unspecified behavior. This is a valid argument, which can be used to support the assertion that under the above conditions optimizations can never break the functionality of the program.
Your original example with volatile is not a valid example. You are basically accusing the compiler of breaking guarantees that never existed in the first place. If your question should be interpreted in that specific way (i.e. what random fake non-existent imaginary guarantees can the optimizer possibly break), then the number of possible answers is virtually infinite. The question simply wouldn't make much sense.
Strict aliasing is an issue you might run into with gcc. From what I understand, with certain versions of gcc (gcc 4.4) it gets automatically enabled with optimizations. This site http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html does a very good job at explaining strict aliasing rules.
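A hedged sketch of the kind of code that breaks once -fstrict-aliasing kicks in (reading an object through a pointer to an unrelated type), together with the well-defined alternative:

#include <cstdint>
#include <cstring>

// Undefined behaviour: a float accessed through a uint32_t* violates the
// strict aliasing rule, so the optimizer may reorder or drop accesses.
uint32_t bits_bad(float f) {
    return *reinterpret_cast<uint32_t*>(&f);
}

// Well defined: memcpy copies the object representation, and compilers
// typically reduce it to a single register move anyway.
uint32_t bits_ok(float f) {
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}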
I just recently saw that (in C++0x) the compiler is allowed to assume that certain classes of loops will always terminate (to allow optimizations). I can't find the reference right now but I'll try to link it if I can track it down. This can cause observable program changes.
At a meta level, if your code relies on behavior that is based on undefined aspects of the C++ standard, a standards-conforming compiler is free to destroy your C++ code (as you put it). If you don't have a standards-conforming compiler, then it can also do non-standard things, like destroy your code anyway.
Most compilers publish what subset of the C++ standard they conform to, so you can always write your code to that particular standard and mostly assume you are safe. However, you can't really guard against bugs in the compiler without having encountered them in the first place, so you still aren't really guaranteed anything.
I do not have the exact details (maybe someone else can chime in), but I have heard tell of a bug caused by loop unrolling/optimization when the loop counter variable is of char/uint8_t type (with gcc, that is).

C++0x atomic template implementation

I know that a similar template exists in Intel's TBB; besides that, I can't find any implementation on Google or in the Boost library.
You can find discussions about this feature implementation in boost there : http://lists.boost.org/Archives/boost/2008/11/144803.php
> Can the N2427 - C++ Atomic Types and Operations be implemented
> without the help of the compiler?
No.
They don't need to be intrinsics: if you can write inline assembler (or separately-compiled assembler, for that matter), then you can write the operations themselves directly. You might even be able to use simple C++ (e.g. just plain assignment for load or store). The reason you need compiler support is preventing inappropriate optimizations: atomic operations can't be optimized out, and generally must not be reordered before or after any other operations. This means that even non-atomic stores performed before an atomic store have to be complete, and can't be cached in a register (for example). Also, loads that occur after an atomic operation cannot be hoisted before the atomic op. On some compilers, just using inline assembler is enough. On others, calling an external function is enough. MSVC provides _ReadWriteBarrier() to provide the compiler ordering. Other compilers need other flags.
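As a hedged illustration of that "compiler support" point (GCC/Clang syntax): an empty asm statement with a "memory" clobber acts purely as a compiler-level barrier, much like MSVC's _ReadWriteBarrier(); it stops values being cached in registers or moved across it, but emits no hardware fence, which is why real atomics still need barrier instructions where the CPU requires them:

static int g_data = 0;
static int g_ready = 0;

void publish(int value) {
    g_data = value;
    // compiler barrier only: the store to g_data may not be cached in a
    // register or reordered past this point by the compiler
    asm volatile("" ::: "memory");
    g_ready = 1;
}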