I am trying to "parse" a part of LLVM IR. More precisely, from
@.str = private unnamed_addr constant [3 x i8] c"DS\00", section "llvm.metadata"
I want to get "DS". It is the only place in the whole bitcode from which I can get it. I have:
...
Value *VV = cast<Value>(LD100->getOperand(1)->getOperand(0));
errs() << "\n VV " << *VV << "\n";
RESULT: VV @.str = private unnamed_addr constant [3 x i8] c"DS\00", section "llvm.metadata"
if (VV->getValueID() == Value::GlobalVariableVal) {
  GlobalVariable *FD = cast<GlobalVariable>(VV);
  Value *VVV = cast<Value>(FD->getOperand(0));
  errs() << "\n VVV " << *VVV << "\n";
  RESULT: VVV [3 x i8] c"DS\00"
  if (VVV->getValueID() == Value::ConstantDataArrayVal) {
    ConstantArray *caa = (ConstantArray *)VVV;
    errs() << "\n " << caa->getNumOperands() << "\n";
    errs() << "\n " << *caa->getType() << "\n";
    RESULT: 0
            [3 x i8]
  }
}
From this point, I tried casting to every enum llvm::Value::ValueTy in order to iterate through [3 x i8] and get "DS" (as a StringRef or std::string would be nice), but I cannot. How can I parse this structure?
Thank you for any help !
Solved it by casting caa to ConstantDataArray and then using getAsString(), which gives me exactly "DS".
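A minimal sketch of that fix (reusing VVV from the snippet above; getAsString() returns the raw bytes of the array, and its sibling getAsCString() drops the trailing \00):

#include "llvm/ADT/StringRef.h"
#include "llvm/IR/Constants.h"

if (ConstantDataArray *CDA = dyn_cast<ConstantDataArray>(VVV)) {
  // ConstantDataArray stores its elements inline, which is why
  // getNumOperands() returned 0 above; the data is read directly.
  StringRef Str = CDA->getAsCString(); // "DS" without the NUL terminator
  errs() << "\n " << Str << "\n";
}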
Related
(1) @str = private constant [13 x i8] c"Hello World\0A\00"
(2) define i32 @main(){
(3) %r2 = getelementptr [13 x i8]* @str, i32 0, i32 0
(4) ret i32 0
(5) }
I get an error on line 3: "expected comma after getelementptr's type". How do I deal with it?
getelementptr expects the type that you are indexing (without the pointer) as its first argument. In your case that would be [13 x i8], so you probably want to do something like this:
%r2 = getelementptr [13 x i8], [13 x i8]* @str, i32 0, i32 0
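The same explicit element type shows up when emitting the instruction through the C++ API. A sketch under assumed names (Ctx is your LLVMContext, B an IRBuilder<>, and Str the GlobalVariable for @str):

#include "llvm/IR/IRBuilder.h"

// The element type [13 x i8] is passed explicitly, mirroring the
// first operand of the textual getelementptr.
Type *ArrTy = ArrayType::get(Type::getInt8Ty(Ctx), 13);
Value *Zero = B.getInt32(0);
Value *R2 = B.CreateGEP(ArrTy, Str, {Zero, Zero}, "r2");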
Why does the pure LLVM interpreter perform differently from the default JIT with respect to array constants? (LLVM 3.8.1)
I have the following code:
target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64"
target triple = "x86_64-apple-macosx10.9.0"
%MyType = type { i8* }
define i32 @main(i32 %argc, i8** %argv) nounwind uwtable {
%const.arr = alloca [8 x i8], align 8
store [8 x i8] c"\D0\CFT\15\01\00\00\00", [8 x i8]* %const.arr
%1 = bitcast [8 x i8]* %const.arr to %MyType*
ret i32 0
}
If I assemble this with llvm-as test.llvm and run lli test.llvm.bc, I get no error.
On the other hand, if I run lli -force-interpreter test.llvm.bc, I get the following error:
LLVM ERROR: ERROR: Constant unimplemented for type: [8 x i8]
Inspecting the code, I don't see anything wrong with it. Why does it perform differently when I force the interpreter?
I'm playing with LLVM and have started with a simple Hello World. Here's the code that I'm trying to run:
test.s:
; Declare the string constant as a global constant.
@.str = private unnamed_addr constant [13 x i8] c"Hello world!\00"
; External declaration of the puts function
declare i32 @puts(i8* nocapture) nounwind
; Definition of main function
define i32 @main() { ; i32()*
; Convert [13 x i8]* to i8 *...
%cast210 = getelementptr [13 x i8], [13 x i8]* @.str, i64 0, i64 0
; Call puts function to write out the string to stdout.
call i32 @puts(i8* %cast210)
ret i32 0
}
I took it from here: http://llvm.org/docs/LangRef.html#id610. When I run it I get the following error:
$ lli test.s
lli: test.s:10:37: error: expected value token
%cast210 = getelementptr [13 x i8], [13 x i8]* @.str, i64 0, i64 0
                                    ^
It's a bit confusing when code from official LLVM website fails. However, it can be fixed by modifying the problematic line as follows:
test_fixed.s:
; Declare the string constant as a global constant.
@.str = private unnamed_addr constant [13 x i8] c"Hello world!\00"
; External declaration of the puts function
declare i32 @puts(i8* nocapture) nounwind
; Definition of main function
define i32 @main() { ; i32()*
; Convert [13 x i8]* to i8 *...
%cast210 = getelementptr [13 x i8]* @.str, i64 0, i64 0
; Call puts function to write out the string to stdout.
call i32 @puts(i8* %cast210)
ret i32 0
}
My question is: what is going on here? When I check the documentation for getelementptr: http://llvm.org/docs/LangRef.html#id937, I get the impression that test.s is indeed correct. Yet it doesn't work. Please help.
Some context info:
$ lli -version
LLVM (http://llvm.org/):
LLVM version 3.3
Optimized build.
Built Jun 18 2013 (05:58:10).
Default target: x86_64-pld-linux-gnu
Host CPU: bdver1
This is a version mismatch between your lli and the official LLVM docs. The official docs describe the latest development version of LLVM, 3.7.
The LLVM IR code in your question was updated on Mar 4, 2015, according to this link, after the getelementptr instruction format changed.
However, your version of lli is 3.3, which was released on Jun 18, 2013.
Please update your LLVM toolchain to the latest version and try again.
I'm trying to generate code to box and unbox values in my untyped language. For evaluating a simple integer literal 3, I generate:
define i64 @0() {
entry:
%value = alloca { i64, [10 x i8], <10 x i64> }
%boxptr = getelementptr inbounds { i64, [10 x i8], <10 x i64> }* %value, i32 0, i32 0
store i64 3, i64* %boxptr
%boxptr1 = getelementptr inbounds { i64, [10 x i8], <10 x i64> }* %value, i32 0, i32 0
%load = load i64* %boxptr1
ret i64 %load
}
Seems right, and lli evaluates it to 3, but Llvm_executionengine evaluates the function to 216172782113783808 (a junk value; note that this is exactly 3 << 56). The code from my toplevel.ml looks like:
open Llvm_executionengine

let the_execution_engine = ExecutionEngine.create_interpreter the_module

let print_and_jit se =
  let f = sexpr_matcher se in
  let result = ExecutionEngine.run_function f [||] the_execution_engine in
  print_string "Evaluated to ";
  print_int (GenericValue.as_int result)
What's wrong with my interpreter?
Suppose I have an expression like the following (actually mine is much more complex, thousands of characters long):
expr:a+b*c+b*c*d;
and I want to replace an internal sub-expression with a symbol (useful to avoid recomputation of common subexpressions), say k in place of b*c:
subst(b*c=k,expr);
returns
k+b*c*d+a
How can I make Maxima perform the "right" substitution so that it returns (apart from the obvious simplification here)
k+k*d+a
?
Take a look at let and letsimp. E.g.:
(%i2) expr : a + b*c + b*c*d;
(%o2) b*c*d+b*c+a
(%i3) let (b*c, k);
(%o3) b*c --> k
(%i4) letsimp (expr);
(%o4) d*k+k+a
letsimp differs from subst, tellsimp, and defrule in that those other functions make only formal substitutions, i.e., they replace subexpressions which are exactly the same as some pattern.
You can try optimize:
http://maxima.sourceforge.net/docs/manual/en/maxima_6.html#IDX219
(%i14) example(optimize);
(%i15) diff(exp(y+x^2)/(y+x),x,2);
(%o15) 4*x^2*%e^(y+x^2)/(y+x) + 2*%e^(y+x^2)/(y+x) - 4*x*%e^(y+x^2)/(y+x)^2
          + 2*%e^(y+x^2)/(y+x)^3
(%i16) optimize(%);
(%o16) block([%1, %2, %3, %4], %1 : y+x, %2 : x^2, %3 : %e^(y+%2), %4 : 1/%1,
          4*%2*%4*%3 + 2*%4*%3 - 4*x*%3/%1^2 + 2*%3/%1^3)