Prevent Intel compiler from over-optimizing unused variables? - c++

Is there a way to tell the Intel compiler not to optimize away unused variables? I am trying to time some code, and I currently prevent the optimization by using a cout statement on the variables.
Ideally the solution would tell the compiler not to remove the variable via a pragma or hint; otherwise I would have to use a program-wide compiler argument.

Use the volatile keyword when declaring your variable to let the compiler know not to optimize accesses to it. As far as I know, this is standard C/C++, so it should work on any compiler. See the MSDN link for more info.
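As a minimal sketch of that idea (the timed function and its name are placeholders, not from the question), writing the result to a volatile object keeps the computation alive under optimization:

#include <chrono>
#include <iostream>

// Hypothetical stand-in for the code being timed.
static double expensive_work(int n) {
    double acc = 0.0;
    for (int i = 0; i < n; ++i)
        acc += i * 0.5;
    return acc;
}

int main() {
    auto start = std::chrono::steady_clock::now();
    // The store to a volatile object is an observable side effect,
    // so the compiler cannot discard the call as unused.
    volatile double sink = expensive_work(10000000);
    (void)sink;
    auto stop = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration<double>(stop - start).count() << " s\n";
}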

Related

In which cases will the restrict qualifier applied to a return value have an effect?

If I have a member function declared like so:
double* restrict data() {
    return m_data; // array member variable
}
can the restrict keyword do anything?
Apparently, with g++ (x86 architecture) it cannot, but are there other compilers/architectures where this type of construction makes sense, and would allow for optimized machine code generation?
I'm asking because the Blitz library (Blitz++) has a whole slew of functions declared in this manner, and it doesn't make sense that someone would go in and add the restrict keyword unless it actually does something. So before I go in and remove the restricts (to get rid of compiler warnings), I'd like to know how I'm abusing the code.
WHAT restrict ARE WE TALKING ABOUT?
restrict is, as it currently stands, non-standard in C++, which means that it's a compiler extension; it's non-portable in the sense that the C++ Standard doesn't mandate its existence, nor is there any formal text that tells us what it is supposed to do.
restrict is currently compiler specific in C++, and one has to resort to the compiler documentation of their choice to see exactly what it is doing.
SOME THOUGHTS
There are many papers about the usage of restrict, among them:
Restricted Pointers - Using the GNU Compiler Collection
restrict - wikipedia.org
Demystifying The Restrict Keyword - CellPerformance
It's hinted at several places that the purpose of restrict is to qualify pointers so that the compiler knows that two pointers in the same scope don't refer to the same memory location.
With this in mind we can easily see that the return type has no potential collision with other pointers, so using it in such a context will generally not gain any optimization opportunities. However, one must refer to the documented behaviour of the implementation in use to know for sure; as stated, restrict is not standard yet.
I also found the following thread where the developers of Blitz++ discuss the removal of restrict applied to the return type of a function, since it doesn't do anything:
Re: [Blitz-devel] type qualifiers ignored on function return type
A LITTLE NOTE
As a further note, here's what the LLVM Documentation says about noalias vs restrict:
For function return values, C99’s restrict is not meaningful, while LLVM’s noalias is.
Generally, the restrict qualifier can only help the compiler optimize code better. By removing restrict you don't break anything, but when you add it without care you can get some errors. A great example is the difference between memcpy and memmove. You can always use the slower memmove, but you can use the faster memcpy only if you know that src and dst don't overlap.
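For contrast, here is a small sketch of the parameter case, where the qualifier can actually pay off (the function is made up for illustration; with g++/clang the spelling is the __restrict__ extension, since plain restrict is C99 only):

void scale_into(double* __restrict__ dst,
                const double* __restrict__ src,
                double factor, int n) {
    // Promising that dst and src never overlap lets the compiler
    // vectorize and reorder the loads and stores more freely.
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * factor;
}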

How to prevent g++ from optimizing out a loop controlled by a variable that can be changed by an IRQ?

Consider the following piece of code:
unsigned global;
while(global);
global is modified in a function which is invoked by an IRQ. However, g++ removes the "is-not-zero" test and translates the while loop into an endless loop.
Disabling compiler optimizations solves the problem, but does C++ offer a language construct for this?
Declare the variable as volatile:
volatile unsigned global;
This is the keyword that tells the compiler that global can be modified outside the code the compiler can see (here, by the IRQ handler), so reads and writes of it must actually be performed and cannot be optimized away.
Since you're using GCC and you say that making the variable volatile does not work, you can trick the optimizer into thinking that the loop changes the variable by lying to the compiler:
while (global)
    asm volatile("" : "+g"(global));
This is an inline assembly statement that claims to modify the variable (it's passed as an input-output operand). But its body is empty, so it does nothing at runtime. Still, the optimizer thinks it modifies the variable: the programmer said so, and the compiler, beyond operand substitution (which simply replaces one piece of text with another), doesn't look inside the body of inline assembly and won't do anything funny to it.
And because the body is empty and the constraint used is the most generic one available, it should work reliably on all platforms where GCC supports inline assembly.
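Put together, a sketch of the busy-wait might look like this (the handler name is made up; how it gets attached to the IRQ is platform specific):

unsigned global;

// Hypothetical routine invoked by the IRQ; it clears the flag.
void on_irq() {
    global = 0;
}

void wait_for_irq() {
    while (global)
        asm volatile("" : "+g"(global)); // empty, but claims to modify
                                         // global, so the test is redone
}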
You could use GCC attributes on the function declaration to disable optimisation on a per function basis:
void myfunc() __attribute__((optimize(0)));
See the GCC Function Attributes page for more information.

Can the compiler reorganize instructions across a sleep call?

Is there a difference depending on whether it is the first use of the variable or not? For example, are a and b treated differently?
void f(bool& a, bool& b)
{
    ...
    a = false;
    boost::this_thread::sleep... // 1 sec sleep
    a = true;
    b = true;
    ...
}
EDIT: people asked why I want to know this.
1. I would like to have some way to tell the compiler not to optimize (that is, not to swap the order of execution of the instructions) in some function, and using atomics and/or mutexes is much more complicated than using sleep (and in my case sleeping is not a performance problem).
2. Like I said, this is generally important to know.
We can't really tell. One scenario could be that the compiler has full visibility into your function at the call site (and possibly inlines it), in which case it can mix your function with the caller and then optimize accordingly.
It could then, e.g., completely optimize away a and b because there is no code that depends on them. Or it might see that you violate aliasing rules so that a and b refer to the same entity, and then merge them according to your program flow.
But it could also be that you tell the compiler to not optimize at all, e.g. with g++'s -O0 flag, in which case not much will happen.
The only proof for your particular platform* is to look at the generated assembly, or to tell the compiler to output a log of what it optimizes (g++ has many flags for that).
* compiler + flags used to compile the compiler + version + add-ons, hardware, operating system; even the weather might be relevant if your compiler omits some optimizations when they take too long [which would actually be a cool feature for debug builds, imho]
They are not local (because they are references), so it can't: the compiler can't tell whether the called function sees them or not, and has to assume that it does. If they were local variables, it could, because local variables are not visible to the called function unless a pointer or reference to them was created.
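A minimal sketch of that distinction (callee stands in for the sleep call; nothing here is from the original code):

void callee(); // opaque to the optimizer in this translation unit

void with_reference(bool& a) {
    a = false;
    callee();  // might observe a through another reference or pointer,
    a = true;  // so the stores around the call must stay where they are
}

void with_local() {
    bool a = false;
    callee();  // cannot observe a: its address never escapes,
    a = true;  // so these stores may be moved or removed entirely
    (void)a;
}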

Way to find out the allocation type of variables in a function

I want to find out the storage type of variables in a function block. How can I check whether the compiler has promoted an auto variable to register storage, or whether variables declared with register storage are honored by the compiler? I assume looking at the assembly code generated after optimization would give us an idea. Which switch do I need to use with gcc or cl.exe to get this information?
The -S switch in gcc is the one you are looking for.
See §3.2 Options Controlling the Kind of Output (GCC manual)
You can look at the generated assembly, but there's no way to programmatically determine this from within your program. Generally, be aware that GCC ignores the register keyword except to issue errors if you try to take the address of a register-storage variable, and when it is used in combination with GCC-specific extensions to force a variable into a particular register for use with inline asm. No idea what MSVC does.
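A small sketch of the workflow (file and variable names are arbitrary): compile with optimization plus -S and check whether the variable ever touches the stack in the emitted .s file.

// probe.cpp -- compile with:  g++ -std=c++14 -O2 -S probe.cpp -o probe.s
// (register was removed in C++17; with cl.exe, /O2 /FAs writes an
//  annotated assembly listing next to the object file)
int sum_to(int n) {
    register int acc = 0;        // only a hint; look at probe.s to see
    for (int i = 0; i < n; ++i)  // whether acc ever spills to memory
        acc += i;
    return acc;
}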

What's the use of C4711 "function selected for inline expansion" Visual C++ warning?

According to MSDN Visual C++ can emit C4711 warning: function X selected for inline expansion if the compiler decides to inline a function that was not marked inline.
I don't see how this warning can be useful. Suppose I compile my code and see this warning. Now what? Why would I care?
It isn't on by default. You can turn it on if for some reason you'd like to know when functions are inlined. This can be relevant if, say, code size is at a severe premium, or you were expecting to jump into the function from outside the module, or you need the assembly to look a certain way. It can help track down code generation bugs as well.
It's purely informational.
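If you do want to see it, one way (a sketch; exact flags depend on your project settings) is to enable the warning explicitly and build with inlining enabled, e.g. /O2:

// C4711 is off by default; /Wall enables it, or switch it on directly:
#pragma warning(default: 4711)

// Not marked inline, but small enough that the optimizer will typically
// inline it, at which point C4711 reports that it was selected for
// inline expansion.
static int square(int x) { return x * x; }

int use(int v) { return square(v) + 1; }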