Halide 'select' without evaluating both arguments - c++

As you may know if you have tried it, Halide's select(x, y, z) is similar to the ternary operator in C++: x is the condition, y is the value if true, and z is the value if false.
Imagine that y just returns 0 and z is a really costly function; it would make sense to only evaluate z where x is false. Unfortunately, Halide evaluates both terms even if I write select(x, likely(y), z), or at least it does when I use compile_to_file (.h + .lib).
Any idea about this?
Thank you!

The effect of the likely intrinsic is limited to loop peeling; it does not apply everywhere a select might be used. That is, it only has an effect if the condition is closely related to the coordinates of the function definition in which the select appears (as in boundary conditions on an image, where the select is predicated on the x and y coordinates of the function). It does not turn arbitrary select expressions into full branching if/else statements.
You can see some examples in the tests for the intrinsic.
If you share an actual running piece of code it will be easier to discuss why loop peeling and the likely intrinsic do or don't apply in your particular case.
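For reference, here is a minimal, untested sketch of the kind of case where likely does kick in (the Func names and the cutoff of 8 are made up). The condition depends only on the pure variable x of the definition containing the select, so Halide can peel the loop and evaluate only one side of the select in each region:

#include "Halide.h"
using namespace Halide;

int main() {
    Func cheap("cheap"), costly("costly"), out("out");
    Var x("x");

    cheap(x)  = 0;               // the trivial branch
    costly(x) = x * x * x + 42;  // stand-in for the expensive branch

    // The condition only involves the pure variable x, so Halide can peel
    // the loop: x < 8 becomes a small prologue that only computes `cheap`,
    // and the steady-state loop only computes `costly` (marked likely).
    out(x) = select(x < 8, cheap(x), likely(costly(x)));

    // Dump the lowered loop nest to inspect whether peeling happened.
    out.compile_to_lowered_stmt("out.stmt", {});
    return 0;
}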


How to determine if a BasicBlock is controlled by an `if`

I want to use LLVM to analyze whether a basic block is affected by the control flow of an if (i.e., a br instruction). "A basic block BB is NOT affected by the br" means that no matter which of the two successor blocks the br goes to, BB will be executed for sure. I used an example to briefly show what I want.
My current rule to determine whether a basic block BB is affected is (if true, it is affected):
¬(postDominate(BB, BranchInst->leftBB) && postDominate(BB, BranchInst->rightBB))
Since I cannot exhaustively test the rule above against all possible CFGs, I want to know whether this rule is sound and complete.
Thanks!
Further question
I am also unsure whether I should use dominance rather than post-dominance, as in the rule below (I know the difference between post-dominance and dominance, but which should I use? Both rules seem to work in this example, but I am not sure which one will or won't work in other cases):
Dominate(BranchInst->leftBB, BB) || Dominate(BranchInst->rightBB, BB)
A block Y is control dependent on block X if and only if Y postdominates at least one but not all successors of X.
llvm::ReverseIDFCalculator in llvm/Analysis/IteratedDominanceFrontier.h will calculate the post-dominance frontier for you, which is exactly what you need. The "iterated" part isn't relevant to your use case; just ignore the setLiveInBlocks() method.
I have not tested this at all, but I expect that something like this should do the trick:
// PDT is the llvm::PostDominatorTree for the function
SmallPtrSet<BasicBlock *, 1> BBSet{block_with_branch};  // the block(s) of interest
SmallVector<BasicBlock *, 32> IDFBlocks;                 // output: the post-dominance frontier
ReverseIDFCalculator IDFs(PDT);
IDFs.setDefiningBlocks(BBSet);
IDFs.calculate(IDFBlocks);
Control dependence needs to be chased transitively: apply the definition iteratively, adding any block that is control dependent on a block already in the set, until nothing new is added.

MARIE Assembly if else

I'm struggling with MARIE assembly.
I need to write code that has x=3 and y=5; if x>y it needs to output 1, and if x<y it needs to output one.
I have a start, but I don't know how to do if-else statements in MARIE:
LOAD X          / AC = X
SUBT Y          / AC = X - Y
SKIPCOND 800    / skip the next instruction if AC > 0, i.e. if X > Y
JUMP ELSE       / otherwise jump to the else-part (label not yet defined)
OUTPUT          / output the AC
HALT
Structured statements have a pattern, and each one has an equivalent pattern in assembly language.
The if-then-else statement, for example, has the following pattern:
if ( <condition> )
<then-part>
else
<else-part>
// some statement after if-then-else
Assembly language uses an if-goto-label style.  if-goto is a conditional test & branch, and goto alone is an unconditional branch.  These forms alter the flow of control and can be composed to do the same job as structured statements.
The equivalent pattern for the if-then-else in assembly (but written in pseudo code) is as follows:
if ( <condition> is false ) goto if1Else;
<then-part>
goto if1Done;
if1Else:
<else-part>
if1Done:
// some statement after if-then-else
You will note that the first conditional branch (if-goto) needs to branch on condition false.  For example, let's say that the condition is x < 10, then the if-goto should read if ( x >= 10 ) goto if1Else;, which branches on x < 10 being false.  The point of the conditional branch is to skip the then-part (to skip ahead to the else-part) when the condition is false — and when the condition is true, to simply allow the processor to run the then-part, by not branching ahead.
We cannot allow both the then-part and the else-part to execute for the same if-statement's execution.  The then-part, once completed, should make the processor move on to the next statement after the if-then-else, and in particular, to avoid the else-part, since the then-part just fired.  This is done using an unconditional branch (goto without if), to skip ahead around the else-part — if the then-part just fired, then we want the processor to unconditionally skip the else-part.
The assembly pattern for the if-then-else statement ends with a label, here if1Done:, which is the logical end of the if-then-else pattern in the if-goto-label style.  Many prefer to name labels after what comes next, but these labels are logically part of the if-then-else, so I choose to name them after the structured statement pattern rather than after the subsequent code.  Hopefully you can follow the assembly pattern and see that whether the if-then-else runs the then-part or the else-part, the flow of control comes back together to run the next line of code after the if-then-else, whatever that is (there must be a statement after the if-then-else, because a single statement alone is just a snippet: an incomplete fragment of code that would need to be completed to actually run).
When there are multiple structured statements, like if-statements, each pattern translation must use its own set of labels, hence the numbering of the labels.
(There are optimizations where labels can be shared between two structured statements, but doing that does not really make the code any faster, and it makes the code harder to change.  Sometimes nested statements can result in branches to unconditional branches; since these are actual machine code and have runtime costs, they can be optimized, but such optimizations make the code harder to rework, so they should probably be held off until the code is working.)
When two or more if-statements are nested, the pattern is simply applied multiple times.  We can transform the outer if-statement first or the inner one first; as long as the pattern is properly applied, the flow of control will work the same in assembly as in the structured statements.
In summary, first compose a larger if-then-else statement:
if ( x < y )
Output(1)
else
Output(one)
(I'm not sure this is what you need, but it is what you said.)
Then apply the pattern transformation into if-goto-label: since, in the abstract, this is the first if-then-else, let's call it if #1, so we'll have two labels if1Done and if1Else.  Place the code found in the structured pattern into the equivalent locations of the if-goto-label pattern, and it will work the same.
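To make the shape concrete, here is the same transformation written as a small C-style program rather than in MARIE (the output values mirror the if-then-else above; this only illustrates the if-goto-label pattern, not MARIE itself):

#include <cstdio>

int main() {
    int x = 3, y = 5;

    // if-goto-label translation of: if (x < y) Output(1) else Output(one)
    if (x >= y) goto if1Else;    // branch when the condition is FALSE
    std::printf("1\n");          // <then-part>
    goto if1Done;                // skip over the else-part
if1Else:
    std::printf("one\n");        // <else-part>
if1Done:
    return 0;                    // some statement after the if-then-else
}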
MARIE uses SkipCond to form the if-goto statement.  It is typical of machine code to have separate compare and branch instructions, since for many instruction set architectures there are too many operands to encode an if-goto in a single instruction (if x >= y goto Label; has x, y, >=, and Label as operands/parameters).  MARIE uses a subtract followed by a branch relative to 0 (the SkipCond).  There are other write-ups on the specific ways to use it, so I won't go into that here, though you have a good start on that already.

Generating branch instructions from an AST for a conditional expression

I am trying to write a compiler for a domain-specific language, targeting a stack-machine based VM that is NOT a JVM.
I have already generated a parser for my language and can readily produce an AST which I can easily walk. I have also had no problem converting many of the statements of my language into the appropriate instructions for this VM, but I am facing an obstacle when it comes to generating appropriate branching instructions for complex conditionals, especially when they are combined with (possibly nested) 'and'-like or 'or'-like operations which should use short-circuiting branching as applicable.
I am not asking anyone to write this for me. I know that I have not begun to describe my problem in sufficient detail for that. What I am asking for is pointers to useful material that can get me past this hurdle. As I said, I am already past the point of converting about 90% of the statements in my language into applicable instructions, but it is the handling of conditionals and the generation of appropriate flow-control instructions that has me stumped. Much of the information I have found so far on generating code from an AST only deals with simple imperative-like statements, while material on handling conditionals and flow control appears to be much harder to find.
Other than the short-circuiting/lazy-evaluation mechanism for 'and' and 'or' like constructs that I have described, I am not concerned with handling any other optimizations.
Every conditional control flow can be modelled as a flow chart (or flow graph) in which the two branches of the conditional have different targets. Given that boolean operators short-circuit, they are control flow elements rather than simple expressions, and they need to be modelled as such.
One way to think about this is to rephrase boolean operators as instances of the ternary conditional operator. So, for example, A and B becomes A ? B : false and A or B becomes A ? true : B [Note 1]. Note that every control flow diagram has precisely two output points.
To combine boolean expressions, just substitute into the diagram. For example, here's A AND (B OR C)
You implement NOT by simply swapping the meaning of the two out-flows.
If the eventual use of the boolean expression is some kind of conditional, such as an if statement or a conditional loop, you can use the control flow as is. If the boolean expression is to be saved into a variable or otherwise used as a value, you need to fill in the two outflows with code to create the relevant constant, usually a true or false boolean constant, or (in C-like languages) a 1 or 0.
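As a concrete sketch of that idea (the AST shape and the emit helpers below are invented for illustration, not part of any particular VM): give every boolean node a pair of branch targets, and short-circuiting falls out of the recursion.

#include <cstdio>
#include <string>

// Toy AST: And/Or/Not combine children, Cmp is a leaf comparison.
struct Node {
    enum Kind { And, Or, Not, Cmp } kind;
    const Node *lhs = nullptr, *rhs = nullptr;
};

// Stand-in emitters that just print pseudo-instructions for the target VM.
void emitLabel(const std::string &l)                     { std::printf("%s:\n", l.c_str()); }
void emitJump(const std::string &l)                      { std::printf("  jump %s\n", l.c_str()); }
void emitTestAndJump(const Node *, const std::string &l) { std::printf("  jump_if_true %s\n", l.c_str()); }
std::string freshLabel()                                 { static int n = 0; return "L" + std::to_string(n++); }

// Generate code that transfers control to trueL if n evaluates to true,
// and to falseL otherwise. No boolean value is ever materialized.
void genCond(const Node *n, const std::string &trueL, const std::string &falseL) {
    switch (n->kind) {
    case Node::And: {                    // A && B: evaluate B only if A was true
        std::string testB = freshLabel();
        genCond(n->lhs, testB, falseL);  // A false -> whole expression false
        emitLabel(testB);
        genCond(n->rhs, trueL, falseL);
        break;
    }
    case Node::Or: {                     // A || B: evaluate B only if A was false
        std::string testB = freshLabel();
        genCond(n->lhs, trueL, testB);   // A true -> whole expression true
        emitLabel(testB);
        genCond(n->rhs, trueL, falseL);
        break;
    }
    case Node::Not:                      // !A: swap the two out-flows
        genCond(n->lhs, falseL, trueL);
        break;
    case Node::Cmp:                      // leaf: branch on the comparison result
        emitTestAndJump(n, trueL);
        emitJump(falseL);
        break;
    }
}

int main() {
    // A AND (B OR C), as in the diagram above.
    Node A{Node::Cmp}, B{Node::Cmp}, C{Node::Cmp};
    Node BorC{Node::Or, &B, &C};
    Node expr{Node::And, &A, &BorC};
    genCond(&expr, "thenPart", "elsePart");
    return 0;
}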
Notes:
Another way to write this equivalence is A and B ⇒ A ? B : A; A or B ⇒ A ? A : B, but that is less useful for a control flow view, and also clouds the fact that the intent is to only evaluate each expression once. This form (modified to reuse the initial computation of A) is commonly used in languages with multiple "falsey" values (like Python).

An explanation about Sequence points

Lately, I have seen a lot of questions being asked about the output of some crazy yet syntactically allowed code statements, like i = ++i + 1 and i = (i, i++, i) + 1;.
Realistically speaking, hardly anyone writes such code in actual programming; to be frank, I have never encountered any such code in my professional experience, so I usually end up skipping such questions here on SO. But lately the sheer volume of these questions makes me wonder whether I am missing some important theory by skipping them. I gather that they revolve around sequence points. I hardly know anything about sequence points, and I am wondering whether not knowing about them is a handicap in some way. So can someone please explain the theory/concept of sequence points, or if possible point to a resource which explains the concept? Also, is it worth investing time in learning about it?
The simplest answer I can think of is:
C++ is defined in terms of an abstract machine. The output of a program executed on the abstract machine is defined ONLY in terms of the order in which "side effects" are performed, and side effects are defined as calls into I/O library functions and changes to variables marked volatile.
C++ compilers are allowed to do whatever they want internally to optimize code, but they cannot change the order of writes to volatile variables and I/O calls.
Sequence points define the C/C++ program's heartbeat: side effects before the sequence point are "complete", and side effects after the sequence point have not yet taken place. But side effects (or code that can indirectly cause a side effect) between two sequence points can be reordered.
Which is why understanding them is important. Without that understanding, your fundamental grasp of what a C++ program is (and how it might be optimized by an aggressive compiler) is flawed.
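A tiny sketch of what that means in practice (the names are made up; only the volatile writes and the I/O call are observable side effects here):

#include <cstdio>

volatile int device_reg = 0;   // volatile: every access is an observable side effect

int compute(int a, int b) {
    int t1 = a * b;            // internal computations: the compiler may reorder,
    int t2 = a + b;            // fold, or eliminate these however it likes
    device_reg = 1;            // but these two volatile writes must both happen,
    device_reg = 2;            // in this order, on the abstract machine
    std::printf("%d\n", t1 + t2);  // and this I/O call keeps its place relative to them
    return t1 + t2;
}

int main() { return compute(2, 3) == 11 ? 0 : 1; }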
See http://en.wikipedia.org/wiki/Sequence_point.
It's a quite simple concept, so you don't need to invest much time :)
The exact technical details of sequence points can get hairy, yes. But following these guidelines solves almost all the practical issues:
If an expression modifies a value, there must be a sequence point between the modification and any other use of that value.
If you're not sure whether two uses of a value are separated by a sequence point or not, break up your code into more statements.
Here "modification" includes assignment operations on the left-hand value in =, +=, etc., and also the ++x, x++, --x, and x-- syntaxes. (It's usually these increment/decrement expressions where some people try to be clever and end up getting into trouble.)
Luckily, there are sequence points in most of the "expected" places:
At the end of every statement or declaration.
At the beginning and end of every function call.
At the built-in && and || operators.
At the ? in a ternary expression.
At the built-in , comma operator. (Most commonly seen in for conditions, e.g. for (a=0, b=0; a<m && b<n; ++a, ++b).) A comma which separates function arguments is not the comma operator and is not a sequence point.
Overloaded operator&&, operator||, and operator, do not cause sequence points. The potential for surprises from that fact is one reason overloading them is usually discouraged.
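For instance, here is the kind of rewrite the guidelines above are suggesting, using a made-up toy expression:

#include <cstdio>

int main() {
    int x = 1;

    // Undefined behaviour: x is modified by x++ and also read elsewhere in the
    // same full expression, with no sequence point separating the two accesses.
    // int y = x + x++;

    // Well defined: the end of each full statement is a sequence point, so
    // splitting the expression makes the order of the accesses explicit.
    int y = x;
    x++;
    y += x;

    std::printf("x=%d y=%d\n", x, y);
    return 0;
}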
It is worth knowing that sequence points exist, because if you don't know about them you can easily write code which seems to run fine in testing but is actually undefined and might fail when you run it on another computer or with different compile options. In particular, if you write, for example, x++ as part of a larger expression that also includes x, you can easily run into problems.
I don't think it is necessary to learn all the rules fully, but you need to know when to check the specification, or, perhaps better, when to rewrite your code so that you aren't relying on sequence-point rules when a simpler design would work too.
int n, n_squared;
for (n = n_squared = 0; n < 100; n_squared += n + ++n)
    printf("%i squared might or might not be %i\n", n, n_squared);
... doesn't always do what you think it will do. This can make debugging painful.
The reason is that ++n retrieves, modifies, and stores the value of n, and this could happen before or after the other n in the expression is retrieved; there is no sequence point between the two accesses to force an order. Therefore, the value of n_squared isn't well defined after the first iteration.
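One possible rewrite of that loop which no longer depends on sequence-point subtleties is to move the increment into its own statement (a sketch, using the identity n^2 = (n-1)^2 + 2n - 1):

#include <cstdio>

int main() {
    int n_squared = 0;
    for (int n = 0; n < 100; ) {
        std::printf("%i squared is %i\n", n, n_squared);
        ++n;                       // separate statement: its own sequence point
        n_squared += 2 * n - 1;    // n^2 == (n-1)^2 + 2n - 1
    }
    return 0;
}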

Can replacing statements with expressions using the C++ comma operator allow more compiler optimizations?

The C++ comma operator is used to chain individual expressions, yielding the value of the last executed expression as the result.
For example, the skeleton code (6 statements, 6 expressions):
step1;
step2;
if (condition) {
    step3;
    return step4;
} else {
    return step5;
}
may be rewritten as (1 statement, 6 expressions):
return step1,
       step2,
       condition ? step3, step4
                 : step5;
I noticed that it is not possible to perform step-by-step debugging of such code, as the expression chain seems to be executed as a whole. Does this mean that the compiler is able to perform special optimizations which are not possible with the traditional statement approach (especially if the steps are const or inline)?
Note: I'm not talking about the coding-style merits of this way of expressing a sequence of expressions, just about the possible optimisations allowed by replacing statements with expressions.
Most compilers will break your code down into "basic blocks", which are stretches of code with no jumps/branches in or out. Optimisations will be performed on a graph of these blocks: that graph captures all the control flow in the function. The basic blocks are equivalent in your two versions of the code, so I doubt that you'd get different optimisations. That the basic blocks are the same isn't entirely obvious: it relies on the fact that the control flow between the steps is the same in both cases, and so are the sequence points. The most plausible difference is that you might find in the second case there is only one block including a "return", and in the first case there are two. The blocks are still equivalent, since the optimiser can replace two blocks that "do the same thing" with one block that is jumped to from two different places. That's a very common optimisation.
It's possible, of course, that a particular compiler doesn't ignore or eliminate the differences between your two functions when optimising. But there's really no way of saying whether any differences would make the result faster or slower, without examining what that compiler is doing. In short there's no difference between the possible optimisations, but it doesn't necessarily follow that there's no difference between the actual optimisations.
The reason you can't single-step your second version of the code is just down to how the debugger works, not the compiler. Single-step usually means, "run to the next statement", so if you break your code into multiple statements, you can more easily debug each one. Otherwise, if your debugger has an assembly view, then in the second case you could switch to that and single-step the assembly, allowing you to see how it progresses. Or if any of your steps involve function calls, then you may be able to "do the hokey-cokey", by repeatedly doing "step in, step out" of the functions, and separate them that way.
Using the comma operator neither promotes nor hinders optimization in any circumstances I'm aware of, because the C++ standard guarantee is only that evaluation will be in left-to-right order, not that statement execution necessarily will be. (This is the same guarantee you get with statement line order.)
What it is likely to do, though, is turn your code into a confusing mess, since many programmers are unaware that the comma-as-operator even exists, and are apt to confuse it with commas used as parameter separators. (Want to really make your code unreadable? Call a function like my_func((++i, y), x).)
The "best" use of the comma operator I've seen is to work with multiple variables in the iteration statement of a for loop:
for (int i = 0, j = 0;
     i < 10 && j < 12;
     i += j, ++j)  // each time through the loop we're tinkering with BOTH i and j
{
}
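A small sketch of the left-to-right guarantee mentioned above (the step function is made up):

#include <cstdio>

int step(int v) { std::printf("step %d\n", v); return v; }

int main() {
    // The built-in comma operator evaluates left to right and yields the
    // value of its right operand, just like two consecutive statements.
    int a = (step(1), step(2));   // prints "step 1" then "step 2"
    std::printf("a = %d\n", a);   // a == 2
    return 0;
}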
Very unlikely, IMHO. The thing gets compiled down to assembler/machine code, and further low-level optimizations are done afterwards, so it probably turns out to be the same thing.
OTOH, if the comma operator is overloaded, the game changes completely. But I'm sure you know that. ;)
The obligatory list:
Don't worry about rewriting almost equivalent code to gain performance
If you have a perf-problem, profile to see what the problem is
If you can't make it faster through algorithmic improvements, look at the disassembly and check that the compiler does what you intended
If not, ask here and post source and disassembly for both versions. :)