glsl -= mad optimization - opengl

Question about GLSL MAD ("multiply and add") optimization.
According to this http://www.opengl.org/wiki/GLSL_Optimizations we should help the GLSL compiler to optimize MAD expressions. It's all clear to me with
result += x*y
It should look like:
result = x*y + result
But what to do with -= ?
result -= x*y
If I write:
result = result - x*y
This will not be "multiply and add".
And if:
result = -x*y + result
Will it be optimized? I'm worried because of the -x.
Just want to clarify this thing to myself.

Just to add another resource:
http://www.humus.name/Articles/Persson_LowLevelThinking.pdf
is a good run-down of ways you can steer the compiler towards more optimal code.
The advice is not GLSL-specific, but I thought of it when I saw your question because he does stress that you should write code which has a good chance of boiling down to MAD instructions.

It is really hard to guess what a particular compiler/optimizer will do in any specific situation. With GLSL, you have the situation that there are lots of different implementations (and versions of them) out in the wild.
In general, I would expect that result += x*y would never lead to another optimization result than result = result + x*y - it is just syntactic sugar after all and not some different operation.
If you want to see what some compiler does for your code, I recommend having a look at AMD's shader analyzer, which will show you the compiler results. Also, you can use NVIDIA's command-line compiler from their Cg Toolkit, which also compiles GLSL. It will only output ARB assembly-level vertex/fragment programs and not show you real instruction-level code, but it will still allow you to see where the optimizer made a MAD out of your GLSL construct.
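To make the experiment concrete, here is a minimal C analogue of the three shapes you could feed to a compiler (the function names are made up, and whether the contraction actually happens depends on the target and on flags such as -ffp-contract); the same shapes apply one-to-one to the GLSL version:

#include <math.h>

/* All three compute result - x*y. On hardware with a fused or negated
   multiply-add, each can map to a single instruction; on many GPU ISAs
   negating a source operand is a free modifier, so the -x costs nothing. */
float sub_mad_a(float result, float x, float y) { return result - x * y; }
float sub_mad_b(float result, float x, float y) { return -x * y + result; }
float sub_mad_c(float result, float x, float y) { return fmaf(-x, y, result); } /* explicit FMA */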

How much do C/C++ compilers optimize conditional statements?

I recently ran into a situation where I wrote the following code:
for(int i = 0; i < (size - 1); i++)
{
// do whatever
}
// Assume 'size' will be constant during the duration of the for loop
When looking at this code, it made me wonder how exactly the for loop condition is evaluated on each iteration. Specifically, I'm curious whether the compiler would 'optimize away' any additional arithmetic that has to be done on each pass. In my case, would this code get compiled such that (size - 1) is evaluated on every iteration? Or is the compiler smart enough to realize that the 'size' variable won't change, and precalculate the value once before the loop?
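In other words, I'm asking whether the compiler would effectively hoist the subtraction out of the loop, as if I had written this by hand:

// hand-hoisted version - a sketch of what loop-invariant code motion would do
const int limit = size - 1; // evaluated once, before the loop
for(int i = 0; i < limit; i++)
{
// do whatever
}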
This then got me thinking about the general case where you have a conditional statement that may specify more operations than necessary.
As an example, how would the following two pieces of code compile:
if(6)
if(1+1+1+1+1+1)
int foo = 1;
if(foo + foo + foo + foo + foo + foo)
How smart is the compiler? Will the 3 cases listed above be converted into the same machine code?
And while I'm at it, why not list another example. What does the compiler do if you are doing an operation within a conditional that won't have any effect on the end result? Example:
if(2*(val))
// Assume val is an int that can take on any value
In this example, the multiplication is completely unnecessary. While this case seems a lot stupider than my original case, the question still stands: will the compiler be able to remove this unnecessary multiplication?
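In other words, since signed overflow is undefined, 2*(val) is zero exactly when val is zero, so could the compiler legally reduce the check to this:

if(val) // same observable behaviour as if(2*(val))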
Question:
How much optimization is involved with conditional statements?
Does it vary based on compiler?
Short answer: the compiler is exceptionally clever, and will generally optimise those cases that you have presented (including utterly ignoring irrelevant conditions).
One of the biggest hurdles language newcomers face in terms of truly understanding C++, is that there is not a one-to-one relationship between their code and what the computer executes. The entire purpose of the language is to create an abstraction. You are defining the program's semantics, but the computer has no responsibility to actually follow your C++ code line by line; indeed, if it did so, it would be abhorrently slow as compared to the speed we can expect from modern computers.
Generally speaking, unless you have a reason to micro-optimise (game developers come to mind), it is best to almost completely ignore this facet of programming, and trust your compiler. Write a program that takes the inputs you want, and gives the outputs you want, after performing the calculations you want… and let your compiler do the hard work of figuring out how the physical machine is going to make all that happen.
Are there exceptions? Certainly. Sometimes your requirements are so specific that you do know better than the compiler, and you end up optimising. You generally do this after profiling and determining what your bottlenecks are. And there's also no excuse to write deliberately silly code. After all, if you go out of your way to ask your program to copy a 50MB vector, then it's going to copy a 50MB vector.
But, assuming sensible code that means what it looks like, you really shouldn't spend too much time worrying about this. Modern compilers are so good at optimising that you'd be a fool to try to keep up.
The C++ language specification permits the compiler to make any optimization that results in no observable changes to the expected results.
If the compiler can determine that size is constant and will not change during execution, it can certainly make that particular optimization.
Alternatively, if the compiler can also determine that i is not used inside the loop (and its value is not used afterwards), i.e. that it is used only as a counter, it might very well rewrite the loop to:
for(int i = 1; i < size; i++)
because that might produce smaller code. Even if this i is used in some fashion, the compiler can still make this change and then adjust all other usage of i so that the observable results are still the same.
To summarize: anything goes. The compiler may or may not make any optimization change as long as the observable results are the same.
Yes, there is a lot of optimization, and it is very complex.
It varies based on the compiler, and it also varies based on the compiler options.
Check https://meta.stackexchange.com/questions/25840/can-we-stop-recommending-the-dragon-book-please for some book recommendations if you really want to understand what a compiler may do. It is a very complex subject.
You can also compile to assembly with the -S option (gcc / g++) to see what the compiler is really doing. Use -O3 / ... / -O0 / -O to experiment with different optimization levels.
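For example (toy.c is a made-up file name; any small function will do):

/* toy.c */
int sum_upto(int size)
{
    int sum = 0;
    for (int i = 0; i < (size - 1); i++)
        sum += i; /* at -O2, expect (size - 1) to be computed once, before the loop */
    return sum;
}

$ gcc -S -O0 toy.c -o toy_O0.s
$ gcc -S -O2 toy.c -o toy_O2.s

Comparing the two .s files shows directly how much the optimization level changes what the compiler really does.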

divide-zero error

A divide expression in my code is "a / b". When it is compiled to assembly, there is a trap-testing instruction (teq in MIPS assembly) following the normal divide instruction.
Will all compilers add this kind of trap instruction after the normal divide instruction? I'm not familiar with this situation. Thanks very much.
Most compilers don't specify the result when you divide by zero. Since you didn't indicate which language or compiler you're using, it's impossible to be more specific than that.
P.S. Being able to read the assembly output from the compiler is a huge advantage in cases like this.
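For example, with a MIPS-targeted gcc the function below is a minimal way to reproduce what you are seeing (a sketch; the exact instruction sequence depends on compiler version and flags):

/* div.c */
int divide(int a, int b)
{
    return a / b; /* MIPS gcc typically emits: div, then teq (trap if b == 0), then mflo */
}

The teq against a zero divisor is how the compiler makes integer division by zero trap at run time; gcc's MIPS-specific -mno-check-zero-division option disables that check.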

long lines of integer arithmetic

Two parts to my question. Which is more efficient/faster:
int a,b,c,d,e,f;
int a1,b1,c1,d1,e1,f1;
int SumValue=0; // oops forgot zero
// ... define all values
SumValue=a*a1+b*b1+c*c1+d*d1+e*e1+f*f1;
or
SumValue+=a*a1+b*b1+c*c1;
SumValue+=d*d1+e*e1+f*f1;
I'm guessing the first one is. My second question is why.
I guess a third question is, at any point would it be necessary to break up an addition operation (besides compiler limitations on number of line continuations etc...).
Edit
Is the only time I would see a slowdown when the entire arithmetic operation could not fit in the cache? I think this is impossible - the compiler probably gets mad about too many line continuations before this could happen. Maybe I'll have to play tomorrow and see.
Did you measure that? The optimized machine code for both approaches will probably be very similar, if not the same.
EDIT: I just tested this, the results are what I expected:
$ gcc -O2 -S math1.c # your first approach
$ gcc -O2 -S math2.c # your second approach
$ diff -u math1.s math2.s
--- math1.s 2010-10-26 19:35:06.487021094 +0200
+++ math2.s 2010-10-26 19:35:08.918020954 +0200
@@ -1,4 +1,4 @@
- .file "math1.c"
+ .file "math2.c"
.section .rodata.str1.1,"aMS",@progbits,1
.LC0:
.string "%d\n"
That's it. Identical machine code.
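For reference, the two test files were essentially the following (a reconstruction - the originals weren't posted, and the actual values don't matter as long as both files compute the same sum):

/* math1.c */
#include <stdio.h>

int main(void)
{
    int a = 1, b = 2, c = 3, d = 4, e = 5, f = 6;
    int a1 = 7, b1 = 8, c1 = 9, d1 = 10, e1 = 11, f1 = 12;
    int SumValue = 0;

    SumValue = a*a1 + b*b1 + c*c1 + d*d1 + e*e1 + f*f1;

    printf("%d\n", SumValue);
    return 0;
}

/* math2.c is identical except the sum is split in two:
   SumValue += a*a1 + b*b1 + c*c1;
   SumValue += d*d1 + e*e1 + f*f1;  */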
There is no arbitrary limit to the number of operations you can combine on one line... practically, the compiler will accept any number you care to throw at it. The compiler's consideration of the operations happens long after the newlines are stripped - it is dealing with lexical symbols and grammar rules, and then an abstract syntax tree, by that point. Unless your compiler is very badly written, both statements will perform equally well for int data.
Note that in result = a*b + c*d + e*f etc., the compiler has no sequence points and knows precedence, so has complete freedom to evaluate and combine the subexpressions in parallel (given capable hardware). With a result += a*b; result += c*d; approach, you are inserting sequence points so the compiler is asked to complete one expression before the other, but is free to - and should - realise the result is not used elsewhere in between increments, so it is free to optimise as in the first case.
More generally: the best advice I can give for such performance queries is 1) don't worry about it being a practical problem unless your program is running too slow, then profile to find out where; 2) if curious, or if profiling indicates a problem, then try both/all approaches you can think of and measure real performance.
Aside: += can be more efficient sometimes, e.g. for concatenating to an existing string, as + on such objects can involve creating temporaries and more memory allocation - expression templates work around this problem but are rarely used, as they're very complex to implement and slower to compile.
This is why it helps to be familiar with assembly language. In both cases, assembly instructions will be generated that load operand pairs into registers and perform addition/multiplication, and store the result in a register. Instructions to store the final result in the memory address represented by SumValue may also be generated, depending on how you use SumValue.
In short, both constructs are likely to perform the same, especially with optimization flags. And even if they don't perform the same on some platform, there's nothing intrinsic to either approach that would really help to explain why at the C++ level. At best, you'd be able to understand the reason why one performs better than the other by looking at how your compiler translates C++ constructs into assembly instructions.
I guess a third question is, at any point would it be necessary to break up an addition operation (besides compiler limitations on number of line continuations etc...).
It's not really necessary to break up an addition operation. But it might help for readability.
They're most likely going to be converted into the same amount of machine instructions, so they'd take the same length of time.

rate ++a, a++, a=a+1 and a+=1 in terms of execution efficiency in C. Assume gcc to be the compiler [duplicate]

Possible Duplicate:
Is there a performance difference between i++ and ++i in C++?
Please rate the following in terms of execution time in C.
In some interviews I was asked which of these variations I should use, and why.
a++
++a
a=a+1
a+=1
Here is what g++ -S produces:
void irrelevant_low_level_worries()
{
int a = 0;
// movl $0, -4(%ebp)
a++;
// incl -4(%ebp)
++a;
// incl -4(%ebp)
a = a + 1;
// incl -4(%ebp)
a += 1;
// incl -4(%ebp)
}
So even without any optimizer switches, all four statements compile to the exact same machine code.
You can't rate the execution time in C, because it's not the C code that is executed. You have to profile the executable code compiled with a specific compiler running on a specific computer to get a rating.
Also, rating a single operation doesn't give you something that you can really use. Today's processors execute several instructions in parallel, so the efficiency of an operation relies very much on how well it can be paired with the instructions in the surrounding code.
So, if you really need to use the one that has the best performance, you have to profile the code. Otherwise (which is about 98% of the time) you should use the one that is most readable and best conveys what the code is doing.
The circumstances where these kinds of things actually matter are few and far between. Most of the time, it doesn't matter at all. In fact I'm willing to bet that this is the case for you.
What is true for one language/compiler/architecture may not be true for others. And really, the fact is irrelevant in the bigger picture anyway. Knowing these things does not make you a better programmer.
You should study algorithms, data structures, asymptotic analysis, clean and readable coding style, programming paradigms, etc. Those skills are a lot more important in producing performant and manageable code than knowing these kinds of low-level details.
Do not optimize prematurely, but also, do not micro-optimize. Look for the big picture optimizations.
This depends on the type of a as well as on the context of execution. If a is of a primitive type and if all four statements have the same identical effect then these should all be equivalent and identical in terms of efficiency. That is, the compiler should be smart enough to translate them into the same optimized machine code. Granted, that is not a requirement, but if it's not the case with your compiler then that is a good sign to start looking for a better compiler.
For most compilers it should compile to the same ASM code.
Same.
For more details see http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
I can't see why there should be any difference in execution time, but I'm happy to be proven wrong.
a++
and
++a
are not the same however, but this is not related to efficiency.
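They differ in the value of the expression itself, which matters as soon as you use that value; a minimal example:

#include <stdio.h>

int main(void)
{
    int a = 5, b, c;
    b = a++; /* post-increment: b gets the old value 5, then a becomes 6 */
    c = ++a; /* pre-increment: a becomes 7 first, then c gets 7 */
    printf("a=%d b=%d c=%d\n", a, b, c); /* prints: a=7 b=5 c=7 */
    return 0;
}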
When it comes to the performance of individual lines, context is always important, and guessing is not a good idea. Testing and measuring is better.
In an interview, I would go with two answers:
At first glance, the generated code should be very similar, especially if a is an integer.
If execution time is definitely a known problem, you have to measure it using some kind of profiler.
Well, you could argue that a++ is short and to the point. It can only increment a by one, but the notation is very well understood. a=a+1 is a little more verbose (not a big deal, unless you have variablesWithGratuitouslyLongNames), but some might argue it's more "flexible" because you can replace the 1 or either of the a's to change the expression. a+=1 is maybe not as flexible as the other two but is a little more clear, in the sense that you can change the increment amount. ++a is different from a++ and some would argue against it because it's not always clear to people who don't use it often.
In terms of efficiency, I think most modern compilers will produce the same code for all of these but I could be mistaken. Really, you'd have to run your code with all variations and measure which performs best.
(assuming that a is an integer)
It depends on the context, and on whether we are in C or C++. In C, the code you posted (except for a-- :-) will cause a modern C compiler to produce exactly the same code. But there is a very high chance the expected answer is that a++ is the fastest and a=a+1 the slowest, since ancient compilers relied on the user to perform such optimizations.
In C++ it depends on the type of a. When a is a numeric type, it acts the same way as in C, which means a++, a+=1 and a=a+1 generate the same code. When a is an object, it depends on whether any of the operators (++, + and =) is overloaded, since then the overloaded operator of the object is called.
Also, when you work in a field with very special compilers (like those for microcontrollers or embedded systems), these compilers can behave very differently on each of these input variations.

Which compiles to faster code: "n * 3" or "n+(n*2)"?

Which compiles to faster code: "ans = n * 3" or "ans = n+(n*2)"?
Assuming that n is either an int or a long, and it is running on a modern Win32 Intel box.
Would this be different if there was some dereferencing involved, that is, which of these would be faster?
long a;
long *pn;
long ans;
...
*pn = some_number;
ans = *pn * 3;
Or
ans = *pn+(*pn*2);
Or, is it something one need not worry about as optimizing compilers are likely to account for this in any case?
IMO such micro-optimization is not necessary unless you work with some exotic compiler. I would put readability in the first place.
It doesn't matter. Modern processors can execute an integer MUL instruction in one clock cycle or less, unlike older processors which needed to perform a series of shifts and adds internally in order to perform the MUL, thereby using multiple cycles. I would bet that
MUL EAX,3
executes faster than
MOV EBX,EAX
SHL EAX,1
ADD EAX,EBX
The last processor where this sort of optimization might have been useful was probably the 486. (Yes, this is biased toward Intel processors, but it is probably representative of other architectures as well.)
In any event, any reasonable compiler should be able to generate the smallest/fastest code. So always go with readability first.
As it's easy to measure it yourself, why not do just that? (Using gcc and time from Cygwin.)
/* test1.c */
int main()
{
int result = 0;
int times = 1000000000;
while (--times)
result = result * 3;
return result;
}
machine:~$ gcc -O2 test1.c -o test1
machine:~$ time ./test1.exe
real 0m0.673s
user 0m0.608s
sys 0m0.000s
Run the test a couple of times and repeat for the other case.
If you want to peek at the assembly code, gcc -S -O2 test1.c
This would depend on the compiler, its configuration and the surrounding code.
You should not try and guess whether things are 'faster' without taking measurements.
In general you should not worry about this kind of nanoscale optimisation stuff nowadays - it's almost always a complete irrelevance, and if you were genuinely working in a domain where it mattered, you would already be using a profiler and looking at the assembly language output of the compiler.
It's not difficult to find out what the compiler is doing with your code (I'm using DevStudio 2005 here). Write a simple program with the following code:
int i = 45, j, k;
j = i * 3;
k = i + (i * 2);
Place a breakpoint on the middle line and run the code under the debugger. When the breakpoint is triggered, right-click on the source file and select "Go To Disassembly". You will now have a window with the code the CPU is executing. You will notice in this case that the last two lines produce exactly the same instruction, namely "lea eax,[ebx+ebx*2]" (no bit shifting and adding in this particular case). On a modern IA32 CPU, it's probably more efficient to do a straight MUL rather than bit shifting, due to the pipelined nature of the CPU, which incurs a penalty when a just-modified value is used too soon.
This demonstrates what aku is talking about, namely, compilers are clever enough to pick the best instructions for your code.
It does depend on the compiler you are actually using, but very probably they translate to the same code.
You can check it by yourself by creating a small test program and checking its disassembly.
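For instance (a sketch; the function names are invented):

/* mul3.c - try: gcc -O2 -S mul3.c */
long times3_mul(long n) { return n * 3; }
long times3_sum(long n) { return n + (n * 2); }

On x86-64, gcc typically emits the same single instruction for both, something like lea rax, [rdi+rdi*2].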
Most compilers are smart enough to decompose an integer multiplication into a series of bit shifts and adds. I don't know about Windows compilers, but at least with gcc you can get it to spit out the assembler, and if you look at that you can probably see identical assembler for both ways of writing it.
It doesn't matter. I think there are more important things to optimize. How much time have you invested thinking about and writing this question, instead of coding and testing it yourself?
:-)
As long as you're using a decent optimising compiler, just write code that's easy for the compiler to understand. This makes it easier for the compiler to perform clever optimisations.
You asking this question indicates that an optimising compiler knows more about optimisation than you do. So trust the compiler. Use n * 3.
Have a look at this answer as well.
Compilers are good at optimising code such as yours. Any modern compiler would produce the same code for both cases and additionally replace * 2 by a left shift.
Trust your compiler to optimize little pieces of code like that. Readability is much more important at the code level. True optimization should come at a higher level.