Find loop termination condition variable - llvm

I want to find the variable that is used to check for termination of a loop. For example, in the loop below I should get "%n":
for.body8:                                        ; preds = %for.body8.preheader, %for.body8
  %i.116 = phi i32 [ %inc12, %for.body8 ], [ 0, %for.body8.preheader ]
  %inc12 = add nsw i32 %i.116, 1
  ...
  %6 = load i32* %n, align 4, !tbaa !0
  %cmp7 = icmp slt i32 %inc12, %6
  br i1 %cmp7, label %for.body8, label %for.end13.loopexit
Is there any direct method to get this value?
One way I could do it is by iterating over the instructions and checking for the icmp instruction, but I don't think that's a proper method.
Please suggest a method.
Thanks in advance.

While there is no way to do this for general loops, it is possible to find it in some cases. LLVM has a pass called '-indvars: Canonicalize Induction Variables', which is described as:
This transformation analyzes and transforms the induction variables
(and computations derived from them) into simpler forms suitable for
subsequent analysis and transformation.
This transformation makes the following changes to each loop with an
identifiable induction variable:
All loops are transformed to have a single canonical induction variable which starts at zero and steps by one.
The canonical induction variable is guaranteed to be the first PHI node in the loop header block.
Any pointer arithmetic recurrences are raised to use array subscripts.
If the trip count of a loop is computable, this pass also makes the
following changes:
The exit condition for the loop is canonicalized to compare the induction value against the exit value. This turns loops like:
for (i = 7; i*i < 1000; ++i)
into
for (i = 0; i != 25; ++i)
Any use outside of the loop of an expression derived from the indvar is changed to compute the derived value outside of the loop,
eliminating the dependence on the exit value of the induction
variable. If the only purpose of the loop is to compute the exit value
of some derived expression, this transformation will make the loop
dead.
This transformation should be followed by strength reduction after all
of the desired loop transformations have been performed. Additionally,
on targets where it is profitable, the loop could be transformed to
count down to zero (the "do loop" optimization).
and sounds like it does just what you need.

Unfortunately there is no general solution to this. Your question is an instance of the Halting Problem, which is proven to have no general solution: http://en.wikipedia.org/wiki/Halting_problem
If you're able to cut the problem space down to something extremely simple, using a subset of operations that is not Turing-complete (http://en.wikipedia.org/wiki/Turing-complete), you may be able to come up with a solution. However, there is no general solution.

Related

Checking the top bits of an i64 Value in LLVM IR

I am going to keep this short and to the point, but if further clarifications are needed please let me know.
I have an i64 Value whose top bits I want to check for being zero. If they are zero, I would do one thing; if they are not, something else. How do I instrument the IR so this check happens at runtime?
One thing I found is that LLVM has an intrinsic, llvm.ctlz, that counts the leading zeros and returns the count as an i64 Value, but how do I use its return value to do the checking? Or how do I instrument the code so the check happens at runtime?
Any help or suggestions would be appreciated. Thanks!
You didn't say how many top bits, so I'll do an example with the top 32 bits. Given i64 %x, I'd check it with %result = icmp uge i64 %x, 4294967296, because 4294967296 is 2^32, and that is the first value which has a 1 bit in the top 32 bits. If you want to check that the top two bits are zero, use 2^62 (4611686018427387904) instead.
In order to do two different things based on the value of %result, in general you'll want to branch on it. BasicBlock has a method splitBasicBlock that takes an instruction to split at. Use it to split your block into a before and an after. Create new blocks for the true side and the false side, add a branch on your result to the new blocks, br i1 %result, label %cond_true, label %cond_false, and make sure both new blocks branch back to the after block.
Depending on what you want to do, you may not need an entire block, for instance if you're only calculating a value and not doing any side-effecting operations you might be able to use a select instruction instead of a branch and separate blocks.

Pre evaluate LLVM IR

Let's suppose we have expressions like:
%rem = srem i32 %i.0, 10
%mul = mul nsw i32 %rem, 2
%i.0 is an llvm::PHINode whose bounds I can get.
The question is: is there a way to get the value of %mul at compile time? I'm writing an LLVM pass and need to evaluate some expressions that use %i.0. I'm looking for a function, class, or anything else to which I can give a value for %i.0 and have it evaluate the expression and return the result.
You could clone the code (the containing function or the entire module, depending on how much context you need), then replace %i.0 with a constant value, run the constant propagation pass on the code, and finally check whether %mul is assigned to a constant value and if so, extract it.
It's not elegant, but I think it would work. Just pay attention to:
Make sure %mul is not elided out - for example, return it from the function, or store its value to memory, or something.
Be aware constant propagation assumes some things about the code, in particular that it already passed through mem2reg.

Llvm how to access global array elements

Can someone please explain me what is wrong with this code?
I think this should fetch the second element of the global array, but in fact it silently crashes somewhere inside the JIT compilation routine.
My suppositions:
GEP instruction calculates memory address of the element by applying offset and returns pointer.
load instruction loads value referenced by given pointer (it dereferences a pointer, in other words).
ret instruction exits function and passes given value to caller.
It seems like I've missed something basic, but I've spent long enough looking for the answer myself that it's time to ask for help.
@arr = common global [256 x i64] zeroinitializer, align 8
define i64 @iterArray() {
entry:
  %0 = load i64* getelementptr inbounds ([256 x i64]* @arr, i32 1, i32 0)
  ret i64 %0
}
You requested the 257th item in a 256-item array, and that's a problem.
The first index given to a gep instruction means how many steps are made through the value operand - and here the value operand is not an array but a pointer to an array. That means every step there skips the entire size of the array forward - and that's why the gep actually asks for the 257th item. Using 0 as the first gep index will probably fix the problem. Then using 1 as the 2nd index will get you the 2nd item in the array, which is what you wanted. Read more about it here: http://llvm.org/docs/GetElementPtr.html#what-is-the-first-index-of-the-gep-instruction
Alternatively, it's more appropriate here to use the extractvalue instruction, which is similar to gep but implicitly uses a 0 for the first index (there are a couple of other differences as well).
Regarding why the compiler crashes, I'm not sure - I'm guessing that while normally such a memory access would compile fine (and at runtime either generate a segfault or just return a bad value), here you specifically requested the gep to be inbounds, which means that a bounds check is done - and it will fail here - so a poison value is returned, which means your function is now effectively load undef. I'm not sure what LLVM does with load undef - it should probably be optimized out and the function be made to just return undef - but maybe it did something different which led to a rejection of your code.

Using the address from LLVM store instruction to create another

I'm working with LLVM to take a store instruction and replace it with another so that I can take something like
store i64 %0, i64* %a
and replace it with
store i64 <value>, i64* %a
I've used
llvm::Value *str = i->getOperand(1);
to get the address that my old instruction is using, and then I create a new store via (i is the current instruction location, so this store will be created before the store I'm replacing)
StoreInst *store = new StoreInst(value, str, i);
I then delete the store I've replaced with
i->eraseFromParent();
But I'm getting the error:
While deleting: i64 %
Use still stuck around after Def is destroyed: store i64 , i64* %a
and a failure with the assertion use_empty() && "Uses remain when a value is destroyed".
How could I get around this? I'd love to create a store instruction and then use LLVM's ReplaceInstWithInst, but I can't find a way to create a store instruction without giving it a location to insert itself. I'm also not 100% sure that will solve my issue.
I'll add that prior to my store replacement, I'm matching an instruction i, then getting the value I need before performing i->eraseFromParent, so I'm not sure if that is part of my problem; I'm assuming that eraseFromParent moves i along to the following store instruction.
eraseFromParent removes an instruction from the enclosing basic block (and consequently, from the enclosing function). It doesn't move it anywhere. Erasing an instruction this way without taking care of its uses first will leave your IR malformed, which is why you're getting the error - it's as if you deleted line 1 from the following C snippet:
1 int x = 3;
2 int y = x + 1;
Obviously you'll get an error on the remaining line, the definition of x is now missing!
ReplaceInstWithInst is probably the best way to replace one instruction with another. You don't need to supply the new instruction with an insertion location: just pass NULL for that argument (or better yet, omit it) and you'll get a dangling instruction which you can then place wherever you want.
Because of the above, by the way, the key method that ReplaceInstWithInst invokes is Value::replaceAllUsesWith, this ensures that you won't be left with missing values in your IR.

effect of goto on C++ compiler optimization

What are the performance benefits or penalties of using goto with a modern C++ compiler?
I am writing a C++ code generator and use of goto will make it easier to write. No one will touch the resulting C++ files so don't get all "goto is bad" on me. As a benefit, they save the use of temporary variables.
I was wondering, from a purely compiler-optimization perspective, what effect goto has on the compiler's optimizer. Does it make code faster, slower, or generally no different in performance compared to using temporaries / flags?
The part of a compiler that would be affected works with a flow graph. The syntax you use to create a particular flow graph will normally be irrelevant as long as you're writing strictly portable code: if you create something like a while loop using a goto instead of an actual while statement, it's going to produce the same flow graph as if you used the syntax for a while loop. With non-portable code, however, modern compilers let you add annotations to loops to predict whether their branches will be taken. Depending on the compiler, you may or may not be able to duplicate that extra information using a goto (most compilers that have annotations for loops also have annotations for if statements, so a likely-taken or likely-not-taken hint on the if controlling a goto would generally have the same effect as a similar annotation on the corresponding loop).
It is possible, however, to produce a flow graph with gotos that couldn't be produced by any normal flow-control statements (loops, switch, etc.), such as conditionally jumping directly into the middle of a loop depending on the value of a global. In such a case you may produce an irreducible flow graph, and when/if you do, that will often limit the compiler's ability to optimize the code.
In other words, if (for example) you took code that was written with normal for, while, switch, etc., and converted it to use goto in every case, but retained the same structure, almost any reasonably modern compiler could probably produce essentially identical code either way. If, however, you use gotos to produce the mess of spaghetti like some of the FORTRAN I had to look at decades ago, then the compiler probably won't be able to do much with it.
How do you think loops are represented at the assembly level?
Using jump instructions to labels...
Many compilers will actually use jumps even in their Intermediate Representation:
int loop(int* i) {
    int result = 0;
    while (*i) {
        result += *i;
    }
    return result;
}

int jump(int* i) {
    int result = 0;
    while (true) {
        if (not *i) { goto end; }
        result += *i;
    }
end:
    return result;
}
Yields in LLVM:
define i32 @_Z4loopPi(i32* nocapture %i) nounwind uwtable readonly {
  %1 = load i32* %i, align 4, !tbaa !0
  %2 = icmp eq i32 %1, 0
  br i1 %2, label %3, label %.lr.ph..lr.ph.split_crit_edge

.lr.ph..lr.ph.split_crit_edge:            ; preds = %.lr.ph..lr.ph.split_crit_edge, %0
  br label %.lr.ph..lr.ph.split_crit_edge

; <label>:3                               ; preds = %0
  ret i32 0
}

define i32 @_Z4jumpPi(i32* nocapture %i) nounwind uwtable readonly {
  %1 = load i32* %i, align 4, !tbaa !0
  %2 = icmp eq i32 %1, 0
  br i1 %2, label %3, label %.lr.ph..lr.ph.split_crit_edge

.lr.ph..lr.ph.split_crit_edge:            ; preds = %.lr.ph..lr.ph.split_crit_edge, %0
  br label %.lr.ph..lr.ph.split_crit_edge

; <label>:3                               ; preds = %0
  ret i32 0
}
Where br is the branch instruction (a conditional jump).
All optimizations are performed on this structure. So, goto is the bread and butter of optimizers.
I was wondering, from a purely compiler-optimization perspective, what effect gotos have on the compiler's optimizer. Does it make code faster, slower, or generally no change in performance compared to using temporaries / flags?
Why do you care? Your primary concern should be getting your code generator to create the correct code. Efficiency is of much less importance than correctness. Your question should be "Will my use of gotos make my generated code more likely or less likely to be correct?"
Look at the code generated by lex/yacc or flex/bison. That code is chock full of gotos. There's a good reason for that. lex and yacc implement finite state machines. Since the machine goes to another state at state transitions, the goto is arguably the most natural tool for such transitions.
There is a simple way to eliminate those gotos in many cases by using a while loop around a switch statement. This is structured code. Per Douglas Jones (Jones D. W., How (not) to code a finite state machine, SIGPLAN Not. 23, 8 (Aug. 1988), 19-22.), this is the worst way to encode a FSM. He argues that a goto-based scheme is better.
He also argues that there is an even better approach, which is to convert your FSM to a control-flow diagram using graph-theory techniques. That's not always easy; it is an NP-hard problem. That's why you still see a lot of FSMs, particularly auto-generated FSMs, implemented as either a loop around a switch or with state transitions implemented via gotos.
I agree heartily with David Hammen's answer, but I only have one point to add.
When people are taught about compilers, they are taught about all the wonderful optimizations that compilers can do.
They are not taught that the actual value of this depends on who the user is.
If the code you are writing (or generating) and compiling contains very few function calls and could itself consume a large fraction of some other program's time, then yes, compiler optimization matters.
If the code being generated contains function calls, or if for some other reason the program counter spends a small fraction of its time in the generated code, it's not worth worrying about.
Why? Because even if that code could be so aggressively optimized that it took zero time, it would save no more than that small fraction, and there are probably much bigger performance issues, ones the compiler can't fix, that are happily evading your attention.