pragma optimize vs pragma target: what is the difference? - c++

What is the difference between #pragma GCC optimize() and #pragma GCC target(), when should you choose one over the other, and what other options are available?

At a high level, optimize() is used to control which optimisation techniques are applied when compiling code: the big-picture idea is that the compiler can spend more time to produce either faster-executing code or, occasionally, more compact code that needs less memory to run. Optimised code can sometimes be harder to profile or debug, so you may want most of your program unoptimised or lightly optimised, but specific performance-critical functions to be highly optimised. The pragma gives you that freedom to vary optimisation on a function-by-function basis.
Individual optimisations are listed and explained at https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#Optimize-Options
A general overview of the optimize attribute's notation and purpose, from the GCC manual:
optimize (level, …)
optimize (string, …)
The optimize attribute is used to specify that a function is to be compiled with different optimization options than specified on the command line. Valid arguments are constant non-negative integers and strings. Each numeric argument specifies an optimization level. Each string argument consists of one or more comma-separated substrings. Each substring that begins with the letter O refers to an optimization option such as -O0 or -Os. Other substrings are taken as suffixes to the -f prefix jointly forming the name of an optimization option. See Optimize Options.
‘#pragma GCC optimize’ can be used to set optimization options for more than one function. See Function Specific Option Pragmas, for details about the pragma.
Providing multiple strings as arguments separated by commas to specify multiple options is equivalent to separating the option suffixes with a comma (‘,’) within a single string. Spaces are not permitted within the strings.
Not every optimization option that starts with the -f prefix specified by the attribute necessarily has an effect on the function. The optimize attribute should be used for debugging purposes only. It is not suitable in production code.
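In practice it looks like this. A minimal sketch (the function and the particular option choices are mine, not from the GCC docs):
#pragma GCC push_options
#pragma GCC optimize ("O3,unroll-loops")   // applies until the matching pop_options
long sum_fast(const int *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; ++i) s += a[i]; // hot loop worth optimising aggressively
    return s;
}
#pragma GCC pop_options

// The same effect on a single function, using the attribute form
// ("unroll-loops" is a suffix to the -f prefix, i.e. -funroll-loops):
__attribute__((optimize("O3", "unroll-loops")))
long sum_fast2(const int *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; ++i) s += a[i];
    return s;
}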
target() indicates that the compiler may use machine code instructions specific to certain CPUs. Normally, if you know the people running your code might use different generations of a specific CPU family, and each is backwards compatible, you'd compile for the earliest of those generations so everyone can run your program. With target() you can compile different functions to make use of the more advanced features of later CPU generations (machine code instructions, or tuning for specific cache sizes, pipelining features, etc.), and your code can then selectively call the faster version on CPUs that support it.
Further details, again from the GCC manual:
target (string, …)
Multiple target back ends implement the target attribute to specify that a function is to be compiled with different target options than specified on the command line. One or more strings can be provided as arguments. Each string consists of one or more comma-separated suffixes to the -m prefix jointly forming the name of a machine-dependent option. See Machine-Dependent Options.
The target attribute can be used for instance to have a function compiled with a different ISA (instruction set architecture) than the default. ‘#pragma GCC target’ can be used to specify target-specific options for more than one function. See Function Specific Option Pragmas, for details about the pragma.
For instance, on an x86, you could declare one function with the target("sse4.1,arch=core2") attribute and another with target("sse4a,arch=amdfam10"). This is equivalent to compiling the first function with -msse4.1 and -march=core2 options, and the second function with -msse4a and -march=amdfam10 options. It is up to you to make sure that a function is only invoked on a machine that supports the particular ISA it is compiled for (for example by using cpuid on x86 to determine what feature bits and architecture family are used).
int core2_func (void) __attribute__ ((__target__ ("arch=core2")));
int sse3_func (void) __attribute__ ((__target__ ("sse3")));
Providing multiple strings as arguments separated by commas to specify multiple options is equivalent to separating the option suffixes with a comma (‘,’) within a single string. Spaces are not permitted within the strings.
The options supported are specific to each target; refer to x86 Function Attributes, PowerPC Function Attributes, ARM Function Attributes, AArch64 Function Attributes, Nios II Function Attributes, and S/390 Function Attributes for details.
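To tie this back to "selectively call the faster version": here is a hedged sketch of manual run-time dispatch (function names are mine; __builtin_cpu_supports is GCC's wrapper around cpuid on x86):
#include <cstddef>

__attribute__((target("avx2")))
double sum_avx2(const double *a, std::size_t n)    // compiled as if with -mavx2
{
    double s = 0;
    for (std::size_t i = 0; i < n; ++i) s += a[i]; // loop can vectorize with AVX2
    return s;
}

double sum_generic(const double *a, std::size_t n) // baseline-ISA fallback
{
    double s = 0;
    for (std::size_t i = 0; i < n; ++i) s += a[i];
    return s;
}

double sum(const double *a, std::size_t n)
{
    return __builtin_cpu_supports("avx2") ? sum_avx2(a, n)  // runtime CPUID check
                                          : sum_generic(a, n);
}
GCC can also automate this dispatch with function multiversioning, by declaring one overload with target("default") and another with target("avx2").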

Related

How does reordering numerical code in order to avoid temporary variables make the code faster?

I have observed (this is not the question, but a statement) that avoiding non-constant local variables in favor of const variables, or avoiding local variables altogether, enables the C++ compiler to generate faster code.
I assume that this gives the compiler more freedom to interleave the calculation of expressions, whereas assignments force the compiler to insert a sync point.
Is this assumption in fact the case?
Any other explanation? For example, does the compiler give up on certain optimizations as soon as the code gets too complex, in order to avoid astronomical compile times?
No, assignments don't force the compiler to insert a sync point. If the variables are local and don't affect anything visible outside your function, the compiler will remove all unneeded variables as part of the usual "register allocation" optimization it does.
If your code is so complex it approaches the limit of what the compiler can keep in memory, additional local variables can make the compiler give up and produce unoptimized code. However, this is a very rare edge-case; and it can be triggered on any change in code, not only regarding local variables.
Generally, compiler optimization is hard to reason about, outside of well-known problems (aliasing, loop-carried dependencies, etc). You might feel like you found some related consideration, but it could disappear when you upgrade your compiler or switch to a different one.
Assignments to local variables that you don't subsequently modify allow the compiler to assume that the value in that variable won't change. It might therefore decide (for example) to store it in a register for the 'usage-span' of the variable. This is a simple optimisation, and no self-respecting compiler is going to miss it (unless register pressure forces it to spill).
An example of where this might speed up the code (and maybe reduce code size a little) is to assign a member variable to a local and then use the local instead of the member variable. If you are confident that the value is not going to change, this might help the compiler generate better code. But then again, it might be a good way of introducing bugs; you do have to be careful playing games like this.
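A hypothetical sketch of that member-to-local trick (class and member names are mine):
class Mixer {
    double gain_;   // member the loop reads repeatedly
public:
    double apply(const double *in, int n) const {
        const double g = gain_;   // cached in a local the compiler can keep in a register
        double acc = 0.0;
        for (int i = 0; i < n; ++i)
            acc += in[i] * g;     // no reload of gain_ on every iteration
        return acc;
    }
};
Without the local, the compiler may have to reload gain_ each iteration if it can't prove the loop body doesn't modify it through some alias.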
As Thomas Matthews said in the comments, another advantage of doing what you might consider to be a redundant assignment is to help with debugging. It allows the variable to be inspected (and perhaps adjusted) during a debugging run and that can be really handy. I'm not proud, I make mistakes, so I do it a lot.
Just my $0.02
It's unusual that temp vars hurt optimization; usually they're optimized away, or they help the compiler do a load or calculation once instead of repeating it (common subexpression elimination).
Repeated access to arr[i] might actually load multiple times if the compiler can't prove that stores through other pointers to the same type haven't modified that array element. float *__restrict arr can help the compiler figure it out, or float ai = arr[i]; can tell the compiler to read it once and keep using the same value, regardless of other stores.
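A minimal sketch of both remedies (function and parameter names are mine):
void scale(float *__restrict dst, const float *__restrict src, float k, int n)
{
    // __restrict promises dst and src don't alias, so the store through dst
    // can't invalidate src[i].
    for (int i = 0; i < n; ++i) {
        float si = src[i];    // read once into a local...
        dst[i] = k * si + si; // ...and reuse it; no reload after the store
    }
}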
Of course, if optimization is disabled, more statements are typically slower than fewer large expressions, and store/reload latency is usually the main bottleneck. See How to optimize these loops (with compiler optimization disabled)?. But -O0 (no optimization) is supposed to be slow. If you're compiling without at least -O2, preferably -O3 -march=native -ffast-math -flto, that's your problem.
I assume, that this gives the compiler more freedom to interleave calculation of expressions, whereas assignments force the compiler to insert a sync point.
Is this assumption in fact the case?
"Sync point" isn't the right technical term for it, but ISO C++ rules for FP math do distinguish between optimization within one expression vs. across statements / expressions.
Contraction of a * b + c into fma(a,b,c) is only allowed within one expression, if at all.
GCC defaults to -ffp-contract=fast, allowing contraction across expressions. Clang defaults to contracting only within a single expression, but supports -ffp-contract=fast. See How to use Fused Multiply-Add (FMA) instructions with SSE/AVX. If fast makes the code with temp vars run as fast as the code without, strict FP-contraction rules were the reason it was slower with temp vars.
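A small sketch of the distinction (under strict contraction rules; with -ffp-contract=fast both versions may compile to the same FMA):
double fused(double a, double b, double c)
{
    return a * b + c;   // single expression: may contract to one fma(a, b, c)
}

double split(double a, double b, double c)
{
    double t = a * b;   // temp var: strict rules require rounding here,
    return t + c;       // blocking contraction into an FMA
}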
(On legacy x87 80-bit FP math, or other unusual machines with FLT_EVAL_METHOD != 0, FP math happens at higher precision, and rounding to float or double costs extra.) Strict ISO C++ semantics require rounding at expression boundaries, e.g. on assignments. GCC defaults to ignoring that (-fno-float-store). But -std=c++11 or whatever (instead of -std=gnu++11) will enforce that extra rounding work (a store/reload, which costs throughput and latency).
This isn't a problem for x86 with SSE2 for scalar math; computation happens at either float or double precision according to the type of the data, with instructions like mulsd (scalar double) or mulss (scalar single). So it implements FLT_EVAL_METHOD == 0 instead of x87's 2. Hopefully nobody in 2023 is building number-crunching code for 32-bit x87 and caring about performance, especially without mentioning that obscure build choice; I mention this mostly for completeness.

How can I utilize the 'red' and 'atom' PTX instructions in CUDA C++ code?

The CUDA PTX Guide describes the instructions atom and red, which perform atomic and non-atomic reductions. This is news to me (at least with respect to non-atomic reductions)... I remember learning how to do reductions with SHFL a while back. Are these instructions reflected or wrapped somehow in the CUDA runtime APIs? Or are they accessible some other way from C++ code without actually writing PTX?
Are these instructions reflected or wrapped somehow in CUDA runtime APIs? Or some other way accessible with C++ code without actually writing PTX code?
Most of these instructions are reflected in atomic operations (built-in intrinsics) described in the programming guide. If you compile any of those atomic intrinsics, you will find atom or red instructions emitted by the compiler at the PTX or SASS level in your generated code.
The red instruction type will generally be used when you don't explicitly use the return value from one of the atomic intrinsics. If you use the return value explicitly, then the compiler usually emits the atom instruction.
Thus, it should be clear that this instruction by itself does not perform a complete classical parallel reduction, but certainly could be used to implement one if you wanted to depend on atomic hardware (and associated limitations) for your reduction operations. This is generally not the fastest possible implementation for parallel reductions.
If you want direct access to these instructions, the usual advice would be to use inline PTX where desired.
As requested, to elaborate using atomicAdd() as an example:
If I perform the following:
atomicAdd(&x, data);
perhaps because I am using it for a typical atomic-based reduction into the device variable x, then the compiler would emit a red (PTX) or RED (SASS) instruction taking the necessary arguments (the pointer to x and the variable data, i.e. 2 logical registers).
If I perform the following:
int offset = atomicAdd(&buffer_ptr, buffer_size);
perhaps because I am using it not for a typical reduction but instead to reserve a space (buffer_size) in a buffer shared amongst various threads in the grid, which has an offset index (buffer_ptr) to the next available space in the shared buffer, then the compiler would emit an atom (PTX) or ATOM (SASS) instruction, including 3 arguments (offset, &buffer_ptr, and buffer_size, in registers).
The red form can be issued by the thread/warp, which may then continue without stalling, because this instruction normally creates no dependencies for subsequent instructions. The atom form, on the other hand, implies modification of one of its 3 arguments (one of 3 logical registers). Therefore subsequent use of the data in that register (i.e. the return value of the intrinsic, offset in this case) can cause a thread/warp stall until the return value is actually delivered by the atomic hardware.
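Putting both cases into one sketch, a CUDA kernel (variable names are mine):
__global__ void demo(int *sum, int *buffer_ptr, int *buffer, int data)
{
    // Return value ignored: the compiler typically emits red (PTX) / RED (SASS).
    atomicAdd(sum, data);

    // Return value used: the compiler typically emits atom (PTX) / ATOM (SASS),
    // and reading offset may stall the warp until the old value comes back.
    int offset = atomicAdd(buffer_ptr, 1);
    buffer[offset] = data;
}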

What does a dangerous relocation error mean?

I am getting a linking error:
dangerous relocation: l32r: Literal placed after use:
I am still trying to debug; however, I want to better understand this error. I understand what relocation is; however, I am not sure how it can be dangerous and was looking for some clarification. Also, a small code snippet that could generate this type of error would be helpful.
In short, what is "a dangerous relocation"?
This is a two-part answer, as there are really two questions here, one general ("what's a dangerous relocation?") and one specific to the Xtensa ("why can't you have a literal placed after where it's used in the code?").
What's all this dangerous relocation stuff about, anyway?
To understand what a 'dangerous relocation' is, we must first understand what a relocation is. As a compiler is generating an object file from some piece of code, it will need to reference symbols that are defined somewhere else: perhaps in another object file in the link, or perhaps in a shared library. However, the compiler does not know the addresses of external symbols when compiling a given object file. It must emit a relocation to serve as a named placeholder, telling the linker "OK, shove the address of foobar into this spot, and oh, you have to do X, Y, and Z to it to make it fit into the instructions there."
Most of the time, this works without a hitch, you get a binary out of your linker, and Bob's your uncle. When this process breaks down, and the linker cannot make the address of the symbol the compiler gave it fit into the instructions at the site of the relocation, it gives up and tosses out a 'dangerous relocation' message (among others -- the all-too-common 'relocation truncated to fit' pops out of this process as well) to inform the programmer that something has gone terribly wrong.
What's wrong with a literal placed after where it's used?
Now that we know what a generic 'dangerous relocation' is, we can move on to the second half of the error message, namely "l32r: Literal placed after use". The Xtensa uses an instruction known as L32R to load constant values from memory that don't fit into the Xtensa's MOVI immediate load instruction, which has a 12-bit signed immediate field. The L32R instruction is described in the Xtensa ISA reference as follows:
L32R is a PC-relative 32-bit load from memory. It is typically used to load constant
values into a register when the constant cannot be encoded in a MOVI instruction.
L32R forms a virtual address by adding the 16-bit one-extended constant value encoded
in the instruction word shifted left by two to the address of the L32R
plus three with the two least significant bits cleared. Therefore, the offset can always
specify 32-bit aligned addresses from -262141 to -4 bytes from the address of the L32R
instruction. 32 bits (four bytes) are read from the physical address. This data is then
written to address register at.
Given the restrictions on L32R quoted above, the error message breaks down quite nicely: the compiler generated a L32R to load a constant (which could be a value or an address) somewhere in your code, but either the constant's value was not available to the compiler (think extern const), or the address needed to be filled in by the linker (this is the likely case). So, it emitted this L32R relocation to tell the linker to 'fill in the blank' in the L32R instruction with the address of a constant value or constant address somewhere in your program. However, the linker couldn't find anywhere in the previous 256KB of code -- or literal pool, depending on how your compiler and Xtensa core are configured -- to shove a constant, so it gave up and spat out the error message you asked about.
How does one fix this?
Unfortunately, a 'dangerous relocation' of this sort depends on code size, so unless you have a bona fide compiler or linker bug on your hands, reproducing it with a small snippet of code will be impossible. There are two possible causes you can try to address, though.
There's no room for my literal pool!
If you are compiling with -mno-text-section-literals (which is the default), the linker gets fed the literal pools as separate sections which it then has to interleave with the code sections. If you have a particularly large object file in your link, it may have over 256KB of code in its .text section, leaving nowhere in the range of a L32R instruction for the linker to place the associated literal pool section. Compiling with -mtext-section-literals should eliminate the error. If it does not work, if you already have that flag on, or if you are using -ffunction-sections (which places each function into its own section; it is sometimes used in embedded work to allow the linker to throw out unused code), read on.
The linker (or assembler) still can't find a place to put my literals!
When the compiler and assembler are told to emit literals into the text section, they restrict placement of the literal pools to before the functions that use them (i.e. before the ENTRY instruction of the function) in order to minimize the risk that the literal pools will be executed as code, with obviously bad results. If you have an extremely long function in your code -- I shudder to think what sort of function could generate more than 256KB of code -- the 'default' literal pool placed before the ENTRY instruction can wind up out of range of L32R instructions near the end of the function. Normally, the compiler will emit an assembler directive known as .literal_position, as well as a jump around the mid-function literal pool, to provide the assembler and linker with an extra place to shove literals into. You can tell the compiler to output an assembler listing using -save-temps and then search it for .literal_position directives; if one isn't present in a function that has L32R instructions past the 256KB mark, congratulations! You just found a compiler bug!
What else could happen to produce this?
The only other circumstance I see that can provoke such a problem is if there is nowhere before the ENTRY instruction that the compiler or linker can put a literal pool, and the compiler can't figure this out on its own -- this can occur with interrupt handlers, or functions that are explicitly placed at the beginning of a physical memory boundary by the linker script. In this case, you will need to insert the .literal_position directive and its associated jump & label by hand in an asm statement at the top of the culprit function in order to provide the assembler with a place to put the culprit function's literals (a sketch follows the quoted text below). As the GAS manual puts it:
The assembler will automatically place text section literal pools before ENTRY
instructions, so the .literal_position directive is only needed to specify some other
location for a literal pool. You may need to add an explicit jump instruction to skip
over an inline literal pool.
For example, an interrupt vector does not begin with an ENTRY instruction so the
assembler will be unable to automatically find a good place to put a literal pool.
Moreover, the code for the interrupt vector must be at a specific starting address, so
the literal pool cannot come before the start of the code. The literal pool for the
vector must be explicitly positioned in the middle of the vector (before any uses of the
literals, due to the negative offsets used by PC-relative L32R instructions).
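For illustration only, the hand-placed version might look roughly like this -- a sketch under the assumptions above (the label, jump, and function are mine, not from the GAS manual):
void vector_entry(void)
{
    asm volatile(
        "j    1f\n"             // jump over the pool so it is never executed as code
        ".literal_position\n"   // let the assembler place literals here
        "1:\n"
    );
    /* ... handler body using L32R-loaded constants ... */
}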
Wait, I'm using the absolute literal option!
If you have the LITBASE option enabled in your Xtensa core and are getting this error, this is a sign that your literal pool has overflowed. The compiler should generate the 'glue' needed to switch literal pools in this case, though: if it doesn't, congratulations! You have just found a compiler bug!
Here's a related thread that might be helpful: http://www.mail-archive.com/mspgcc-users@lists.sourceforge.net/msg11488.html
Good luck :)

g++ compiler flag to minimize binary size

I have an Arduino Uno R3. I'm making logical objects for each of my sensors using C++. The Arduino has very limited on-board memory (32KB*), and, on average, my compiled objects are coming out at around 6KB* each.
I am already using the smallest possible data types required, in an attempt to minimize my memory footprint. Is there a compiler flag to minimize the size of the binary, or do I need to use shorter variable and function names, less functions, etc. to minimize my code base?
Also, any other tips or words of advice for minimizing binary size would be appreciated.
*It may not be measured in KB (as I don't have it sitting in front of me), but 1 object is approximately 1/5 of my total memory size, which is prompting my concern.
There are lots of techniques to reduce binary size in addition to what us2012 and others mentioned in the comments. Summing them up, with some points of my own:
Use -Os to make gcc/g++ optimize for size.
Use -ffunction-sections -fdata-sections to separate each function or data into distinct sections within the translation unit. Combine it with the linker option -Wl,--gc-sections to get rid of any unreferenced sections.
Run strip with at least the following options: -s -R .comment -R .gnu.version. It can be combined with --strip-unneeded to remove all symbols that are not necessary for relocation processing.
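Putting the flags above together, a build might look something like this (a sketch; the file names are mine, and for the Arduino you would use the avr-g++/avr-strip cross tools with the same flags):
g++ -Os -ffunction-sections -fdata-sections -Wl,--gc-sections -o sensors sensors.cpp
strip -s -R .comment -R .gnu.version --strip-unneeded sensors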
If your code does not use C++ exception handling, you can save a lot of space (up to 30k after all the optimization steps mentioned by Tuxdude).
To do that, provide the following flag:
-fno-exceptions
But even if you don't use exceptions, exception-handling support can still be pulled in!
Check the following steps:
No usage of new/delete. If you really need them, replace them with malloc/free wrappers; for an example, search for "tinynew.cpp".
Provide a function for pure virtual calls, e.g. extern "C" void __cxa_pure_virtual() { while(1); }
Override __gnu_cxx::__verbose_terminate_handler(). It handles unhandled exceptions and does name demangling, which is quite large! (e.g. d_print_comp.part.10 at 9.5k or d_type at 1.8k) A sketch of these stubs follows this list.
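A hedged sketch of those stubs (the terminate-handler trick relies on libstdc++ being linked as a static archive, so your own definition keeps the library's demangling-heavy object out of the link):
extern "C" void __cxa_pure_virtual() { while (1) {} }  // trap pure virtual calls

namespace __gnu_cxx {
    // Replaces the default handler for unhandled exceptions, avoiding the
    // name-demangling code it would otherwise pull in.
    void __verbose_terminate_handler() { while (1) {} }
}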

Can I guarantee the C++ compiler will not reorder my calculations?

I'm currently reading through the excellent Library for Double-Double and Quad-Double Arithmetic paper, and in the first few lines I notice they perform a sum in the following way:
#include <utility>

std::pair<double, double> TwoSum(double a, double b)
{
    double s = a + b;
    double v = s - a;
    double e = (a - (s - v)) + (b - v);
    return std::make_pair(s, e);
}
The calculation of the error, e, relies on the fact that the calculation follows that order of operations exactly because of the non-associative properties of IEEE-754 floating point math.
If I compile this within a modern optimizing C++ compiler (e.g. MSVC or gcc), can I be ensured that the compiler won't optimize out the way this calculation is done?
Secondly, is this guaranteed anywhere within the C++ standard?
You might like to look at the g++ manual page: http://gcc.gnu.org/onlinedocs/gcc-4.6.1/gcc/Optimize-Options.html#Optimize-Options
Particularly -fassociative-math, -ffast-math and -ffloat-store
According to the g++ manual it will not reorder your expression unless you specifically request it.
Yes, that is safe (at least in this case). You only use two "operators" there: the primary expression (something) and the additive binary something +/- something.
Section 1.9 Program execution (of C++0x N3092) states:
Operators can be regrouped according to the usual mathematical rules only where the operators really are associative or commutative.
In terms of the grouping, 5.1 Primary expressions states:
A parenthesized expression is a primary expression whose type and value are identical to those of the enclosed expression. ... The parenthesized expression can be used in exactly the same contexts as those where the enclosed expression can be used, and with the same meaning, except as otherwise indicated.
I believe the use of the word "identical" in that quote requires a conforming implementation to guarantee that it will be executed in the specified order unless another order can give the exact same results.
And for adding and subtracting, section 5.7 Additive operators has:
The additive operators + and - group left-to-right.
So the standard dictates the results. If the compiler can ascertain that the same results can be obtained with different ordering of the operations then it may re-arrange them. But whether this happens or not, you will not be able to discern a difference.
This is a very valid concern, because Intel's C++ compiler, which is very widely used, defaults to performing optimizations that can change the result.
See http://software.intel.com/sites/products/documentation/hpc/compilerpro/en-us/cpp/lin/compiler_c/copts/common_options/option_fp_model.htm#option_fp_model
I would be quite surprised if any compiler wrongly assumed associativity of arithmetic operators with default optimising options.
But be wary of extended precision of FP registers.
Consult compiler documentation on how to ensure that FP values do not have extended precision.
If you really need to, I think you can make a noinline function no_reorder(float x) { return x; }, and then use it instead of parentheses. Obviously, it's not a particularly efficient solution though.
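A sketch of that barrier trick (GCC/clang attribute spelling; no_reorder is the answer's hypothetical helper, here with double to match TwoSum):
#include <utility>

__attribute__((noinline)) double no_reorder(double x) { return x; }

std::pair<double, double> TwoSumGuarded(double a, double b)
{
    double s = a + b;
    double v = no_reorder(s - a);                 // opaque call: forces this ordering
    double e = (a - no_reorder(s - v)) + (b - v);
    return std::make_pair(s, e);
}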
In general, you should be able to -- the optimizer should be aware of the properties of the real operations.
That said, I'd test the hell out of the compiler I was using.
Yes. The compiler will not change the order of your calculations within a block like that.
Between compiler optimizations and out-of-order execution on the processor, it is almost a guarantee that things will not happen exactly as you ordered them.
HOWEVER, it is also guaranteed that this will NEVER change the result. C++ follows standard order of operations and all optimizations preserve this behavior.
Bottom line: Don't worry about it. Write your C++ code to be mathematically correct and trust the compiler. If anything goes wrong, the problem was almost certainly not the compiler.
As per the other answers you should be able to rely on the compiler doing the right thing -- most compilers allow you to compile and inspect the assembler (use -S for gcc) -- you may want to do that to make sure you get the order of operation you expect.
Different optimization levels (in gcc: -O, -O2, etc.) allow code to be rearranged (however, sequential code like this is unlikely to be affected) -- but I would suggest you isolate that particular part of the code into a separate file, so that you can control the optimization level for just that calculation.
The short answer is: the compiler will probably change the order of your calculations, but it will never change the behavior of your program (unless your code makes use of expression with undefined behavior: http://blog.regehr.org/archives/213)
However, you can still influence this behavior by deactivating all compiler optimizations (option "-O0" with gcc). If you still need the compiler to optimize the rest of your code, you may put this function in a separate source file which you compile with "-O0".
Additionally, you can use some hacks. For instance, if you interleave your code with calls to extern functions, the compiler may consider it unsafe to reorder your code, as the functions may have unknown side effects. Calling printf to print the values of your intermediate results will lead to similar behavior.
Anyway, unless you have a very good reason (e.g. debugging), you typically don't want to care about this, and you should trust the compiler.