When looking at the LLVM IR that the Julia compiler generates (using code_llvm), I noticed something strange in the function signature when using arrays as arguments. Let me give an example:
function test(a,b,c)
return nothing
end
(This is a useless example, but the results are the same with other functions; the resulting IR of this example is just less cluttered.)
Using code_llvm(test, (Int,Int,Int)), I get the following output:
; Function Attrs: sspreq
define void @julia_test14855(i64, i64, i64) #2 {
top:
ret void, !dbg !366
}
Using code_llvm(test, (Array{Int},Array{Int},Array{Int})), I get an (at least for me) unexpected result:
; Function Attrs: sspreq
define %jl_value_t* @julia_test14856(%jl_value_t*, %jl_value_t**, i32) #2 {
top:
%3 = icmp eq i32 %2, 3, !dbg !369
br i1 %3, label %ifcont, label %else, !dbg !369
else: ; preds = %top
call void @jl_error(i8* getelementptr inbounds ([26 x i8]* @_j_str0, i64 0, i64 0)), !dbg !369
unreachable, !dbg !369
ifcont: ; preds = %top
%4 = load %jl_value_t** inttoptr (i64 36005472 to %jl_value_t**), align 32, !dbg !370
ret %jl_value_t* %4, !dbg !370
}
Why is the signature of the LLVM function not just listing the 3 variables as i64* or something like that? And why doesn't the function return void anymore?
Why is the signature of the LLVM function not just listing the 3 variables as i64*
This signature is the generic Julia calling convention (because, as @ivarne mentioned, the types are incomplete).
The arguments of @julia_test14856(%jl_value_t*, %jl_value_t**, i32) are:
pointer to the function closure
pointer to the boxed arguments (jl_value_t is the basic box type)
number of arguments
The signature @ivarne shows is the specialized calling convention. Arguments are still passed boxed, but the argument types and count are already known (and the function closure is unnecessary because the function is already specialized).
About the output for your example function: this section checks the number of arguments (if it is not 3, control jumps to the else: label):
top:
%3 = icmp eq i32 %2, 3, !dbg !369
br i1 %3, label %ifcont, label %else, !dbg !369
This section raises the error:
else: ; preds = %top
call void @jl_error(i8* getelementptr inbounds ([26 x i8]* @_j_str0, i64 0, i64 0)), !dbg !369
unreachable, !dbg !369
Finally, the success case falls through to this line, which loads the value for nothing stored at address 36005472 (in @ivarne's version this is known in advance, so the function can return void directly).
%4 = load %jl_value_t** inttoptr (i64 36005472 to %jl_value_t**), align 32, !dbg !370
I would assume that it is because Array{Int} (with the dimension parameter N left unspecified) is an incomplete parametric type, and that it does not match the patterns the code generation looks for.
Try also
julia> code_llvm(test, (Array{Int,1},Array{Int,1},Array{Int,1}))
define void @julia_test15626(%jl_value_t*, %jl_value_t*, %jl_value_t*) {
top:
ret void, !dbg !974
}
This might be considered a bug in the code generation, but I do not know.
Related
I'm learning LLVM these days by observing how clang deals with various situations. I wrote (at the top level, not inside a function):
int qaq = 666;
int tat = 233;
auto hh = qaq + tat;
And I use the command:
clang-4.0 003.cpp -emit-llvm -S -std=c++11
And clang generates code like this:
@qaq = global i32 666, align 4
@tat = global i32 233, align 4
@hh = global i32 0, align 4
@llvm.global_ctors = appending global [1 x { i32, void ()*, i8* }] [{ i32, void ()*, i8* } { i32 65535, void ()* @_GLOBAL__sub_I_003.cpp, i8* null }]
; Function Attrs: noinline uwtable
define internal void @__cxx_global_var_init() #0 section ".text.startup" {
%1 = load i32, i32* @qaq, align 4
%2 = load i32, i32* @tat, align 4
%3 = add nsw i32 %1, %2
store i32 %3, i32* @hh, align 4
ret void
}
; Function Attrs: noinline uwtable
define internal void @_GLOBAL__sub_I_003.cpp() #0 section ".text.startup" {
call void @__cxx_global_var_init()
ret void
}
I'm confused by _GLOBAL__sub_I_003.cpp: why does clang generate a function that only invokes another function (and does nothing else), even though both of them take no parameters?
Disclaimer: This is my interpretation of the logic, I'm not part of the LLVM team.
In order to understand the reasoning behind this, you have to understand a fundamental concept in software engineering: Complexity creates bugs, and makes testing harder.
But first, let's make your example a little more interesting:
int qaq = 666;
int tat = 233;
auto hh = qaq + tat;
auto ii = qaq - tat;
Which leads to:
; Function Attrs: noinline uwtable
define internal void @__cxx_global_var_init() #0 section ".text.startup" !dbg !16 {
%1 = load i32, i32* @qaq, align 4, !dbg !19
%2 = load i32, i32* @tat, align 4, !dbg !20
%3 = add nsw i32 %1, %2, !dbg !21
store i32 %3, i32* @hh, align 4, !dbg !21
ret void, !dbg !20
}
; Function Attrs: noinline uwtable
define internal void @__cxx_global_var_init.1() #0 section ".text.startup" !dbg !22 {
%1 = load i32, i32* @qaq, align 4, !dbg !23
%2 = load i32, i32* @tat, align 4, !dbg !24
%3 = sub nsw i32 %1, %2, !dbg !25
store i32 %3, i32* @ii, align 4, !dbg !25
ret void, !dbg !24
}
; Function Attrs: noinline uwtable
define internal void @_GLOBAL__sub_I_example.cpp() #0 section ".text.startup" !dbg !26 {
call void @__cxx_global_var_init(), !dbg !28
call void @__cxx_global_var_init.1(), !dbg !29
ret void
}
So we see that clang emits a separate function for each non-trivial initialization and calls each of them, one after the other, in _GLOBAL__sub_I_example.cpp(). That is sensible: things stay neatly organized this way, and would otherwise become a garbled mess in larger or more complicated files.
Notice how that's the exact same logic that is being applied in your example.
Doing otherwise would imply an algorithm of the type: "if there is a single non-trivial global initialization, then put the code directly in the translation unit's global constructor".
Note the following:
The current logic handles that case correctly already.
In optimized code, the end result would be the exact same.
So what would that logic get us, really?
More branches to test.
More opportunities to accidentally insert a bug.
More code to maintain in the long run.
Removal of a single function call in the global initialization of some translation units in non-optimized builds.
Keeping things the way they are is just the right decision.
For my source code, I have the following IR:
; ModuleID = '<stdin>'
@.str = private unnamed_addr constant [9 x i8] c"SOME_ENV_VAR\00", align 1
@.str1 = private unnamed_addr constant [26 x i8] c"Need to set $ENV_Variable.\0A\00", align 1
; Function Attrs: nounwind
define void @foo(i8* %bar) #0 {
entry:
%bar.addr = alloca i8*, align 4
%baz = alloca i8*, align 4
store i8* %bar, i8** %bar.addr, align 4
%call = call i8* @getenv(i8* getelementptr inbounds ([9 x i8]* @.str, i32 0, i32 0)) #2
store i8* %call, i8** %baz, align 4
%0 = load i8** %baz, align 4
%cmp = icmp eq i8* %0, null
br i1 %cmp, label %if.then, label %if.else
if.then: ; preds = %entry
%call1 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([26 x i8]* @.str1, i32 0, i32 0))
br label %if.end
if.else: ; preds = %entry
%1 = load i8** %bar.addr, align 4
%2 = load i8** %baz, align 4
%call2 = call i8* @strcpy(i8* %1, i8* %2) #2
br label %if.end
if.end: ; preds = %if.else, %if.then
ret void
}
; Function Attrs: nounwind
declare i8* @getenv(i8*) #0
declare i32 @printf(i8*, ...) #1
; Function Attrs: nounwind
declare i8* @strcpy(i8*, i8*) #0
I intend to write a pass, which when compiled (using LLVM), produces bitcode where the call to strcpy(dest,src) is replaced with strncpy(dest,src,n).
I've written the following code so far:
#include <stdlib.h>
#include <stdio.h>
#include "llvm/Pass.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/IR/Module.h"
#include "llvm/PassManager.h"
#include "llvm/Analysis/Verifier.h"
#include "llvm/Assembly/PrintModulePass.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;
namespace
{
Module* makeLLVMModule() {
Module* mod = new Module(llvm::StringRef("CustomPass"),getGlobalContext());
Constant* c = mod->getOrInsertFunction(llvm::StringRef("foo"),Type::getInt32Ty(getGlobalContext()),NULL);
Function* foo = cast<Function>(c);
Function::arg_iterator args =foo->arg_begin();
Value* bar = args++;
BasicBlock* Entry = BasicBlock::Create(getGlobalContext(),llvm::Twine("Entry"), foo);
BasicBlock* False = BasicBlock::Create(getGlobalContext(),llvm::Twine("False"), foo);
BasicBlock* True = BasicBlock::Create(getGlobalContext(),llvm::Twine("True"), foo);
char* pPath;
pPath = getenv("SOME_ENV_VAR");
IRBuilder<> builder(Entry);
Value* envVarDoesntExist = builder.CreateICmpEQ(llvm::StringRef(pPath),Constant::getNullValue(Value),llvm::Twine("temp"));
//---1
builder.CreateCondBr(envVarDoesntExist, False, True);
builder.SetInsertPoint(True);
builder.CreateCall3(strncpy,bar,llvm::StringRef(pPath),45,llvm::Twine("temp"));
//---2
builder.SetInsertPoint(False);
builder.CreateCall(printf,llvm::StringRef("Need to set $ENV_Variable.\n"),llvm::Twine("temp"));
//---1
return mod;
}
}
char funcP::ID = 0;
static RegisterPass<funcP> X("funcp", "funcP", false, false);
From ---1: How do I convert an llvm::StringRef to a Value*?
From ---2: How do I convert a char* to a Value*?
Could Constant::getNullValue(Value) be used for getting a NULL value?
I intend to write a pass, which when compiled (using LLVM), produces bitcode where the call to strcpy(dest,src) is replaced with strncpy(dest,src,n).
Then what you need to do is to locate the call instruction and change it. There's no need to recreate the entire flow, it's already in your source code.
All you need to do is create a function pass, iterate over all the instructions in the function, and whenever an instruction is a call instruction whose callee's name is strcpy, create a new call instruction to your new function and replace the old instruction with the new one (a sketch follows below).
Also there seems to be some fundamental misunderstanding in your code between values in the compiler (such as 45 and all the StringRefs) and values in the code you are processing (instances of one of the subtypes of llvm::Value). Specifically, you can't just use 45 as a parameter to a function in the code you are processing - you have to create a constant int from that number, and then you can use that constant.
One final note - you can implicitly construct a StringRef from a const char*, you don't need to explicitly call the StringRef's constructor all over the place. Same with Twine.
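Putting those pieces together, a minimal sketch of such a pass might look like the following. This is an illustration, not the definitive implementation: the class name StrcpyToStrncpy and the command-line name strcpy2strncpy are made up, the old-style getOrInsertFunction/CreateCall3 calls assume an LLVM version contemporary with the headers in your question (roughly 3.4/3.5), the i64 length type assumes a 64-bit size_t (use i32 on a 32-bit target), and 45 is just the placeholder length taken from your own snippet.
#include "llvm/Pass.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/ADT/SmallVector.h"

using namespace llvm;

namespace {
struct StrcpyToStrncpy : public FunctionPass {
  static char ID;
  StrcpyToStrncpy() : FunctionPass(ID) {}

  virtual bool runOnFunction(Function &F) {
    Module *M = F.getParent();
    SmallVector<CallInst*, 8> Worklist;

    // Collect the calls to strcpy first, so we can rewrite them without
    // invalidating the iterators we are walking.
    for (Function::iterator BB = F.begin(); BB != F.end(); ++BB)
      for (BasicBlock::iterator I = BB->begin(); I != BB->end(); ++I)
        if (CallInst *CI = dyn_cast<CallInst>(&*I)) {
          Function *Callee = CI->getCalledFunction();
          if (Callee && Callee->getName() == "strcpy")
            Worklist.push_back(CI);
        }

    for (unsigned i = 0; i < Worklist.size(); ++i) {
      CallInst *CI = Worklist[i];
      IRBuilder<> Builder(CI);              // insert right before the old call

      Type *I8Ptr  = Builder.getInt8PtrTy();
      Type *SizeTy = Builder.getInt64Ty();  // assumes a 64-bit size_t

      // Declare (or reuse) i8* strncpy(i8*, i8*, i64) in the module.
      Constant *Strncpy =
          M->getOrInsertFunction("strncpy", I8Ptr, I8Ptr, I8Ptr, SizeTy, NULL);

      // Values in the processed code must be llvm::Value*, so the length
      // becomes a ConstantInt (45 is the placeholder from the question).
      Value *N = ConstantInt::get(SizeTy, 45);

      Value *NewCall = Builder.CreateCall3(Strncpy, CI->getArgOperand(0),
                                           CI->getArgOperand(1), N);
      CI->replaceAllUsesWith(NewCall);
      CI->eraseFromParent();
    }
    return !Worklist.empty();
  }
};
}

char StrcpyToStrncpy::ID = 0;
static RegisterPass<StrcpyToStrncpy> X("strcpy2strncpy",
                                       "replace strcpy calls with strncpy",
                                       false, false);
You would then build this into a shared library and run it through opt the same way as any other out-of-tree pass, e.g. opt -load ./YourPass.so -strcpy2strncpy input.bc -o out.bc (the file names here are placeholders).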
I'm splitting all basic blocks with a minimum number of instructions (usually 3-5):
llvm::SplitBlock(BasicBlock, &*BasicBlockiter, Pass);
and am trying to get an object file from the IR:
llc -filetype=obj 2.ll
I got the following errors:
Instruction does not dominate all uses!
%1 = alloca i32
%mul = load i32* %1
Instruction does not dominate all uses!
%1 = alloca i32
%99 = load i32* %1
and
While deleting: i32 %
Use still stuck around after Def is destroyed: %var = alloca i32
Assertion failed: use_empty() && "Uses remain when a value is destroyed!"
and
error: expected instruction opcode
invoke.cont2: ; preds = %main_block, %invoke.cont
IR:
invoke.cont2: ; preds = %main_block, %invoke.cont
%call4 = invoke i32 @_ZStorSt13_Ios_OpenmodeS_(i32 8, i32 16)
to label %invoke.cont3 unwind label %lpad1
store i32 %call4, i32* %var4
I think that after the splitting, related instructions end up in different basic blocks.
If I split the block into 10-15 instructions, all is OK.
How can I predict/check and avoid these errors?
In your first version, you had an instruction after a terminator instruction, which is incorrect since that instruction would never be executed.
In your second version (not shown here; please use Stack Overflow instead of private emails...) you are using %call (in the store instruction) before defining it (%call = ...), so your definition does not precede every use...
But as I said, the store should not come after the invoke, because invoke is a terminator instruction.
The solution is to put your store in the next basic block (you can create a new one):
invoke.cont:
%call = invoke i8* @_ZNKSs5c_strEv(%"class.std::basic_string"* @loadedFile)
to label %invoke.cont2_before unwind label %lpad1
invoke.cont2_before: ; preds = %invoke.cont
store i8* %call, i8** %reduced_var
br label %invoke.cont2
invoke.cont2: ; preds = %main_block, %invoke.cont2_before
%call4 = invoke i32 @_ZStorSt13_Ios_OpenmodeS_(i32 8, i32 16)
to label %invoke.cont3_before unwind label %lpad1
etc...
I have an external (C) function that I am calling in my LLVM IR. The IR gets JITed and everything works fine, but the generated code is performance sensitive, and I want to remove duplicate calls to my external function if possible. The function has no side effects. Is there a FunctionPass that eliminates redundant calls to the function? Is there something I have to do to mark the function as having no side effects?
Thanks!
According to http://llvm.org/docs/LangRef.html#function-attributes you can specify the attributes readonly or readnone for a function:
declare i32 @fn(i32 %i);
declare i32 @readonly_fn(i32 %i) readonly;
declare i32 @readnone_fn(i32 %i) readnone;
readonly means that the function doesn't write memory,
readnone means that it doesn't even read memory (for example sin() could be readnone)
If a function doesn't write memory, its result depends only on its parameters and on the memory it reads, so it behaves as a pure function as long as the global state doesn't change between calls. In the case of a readnone function the result depends only on the parameters, so it stays the same even if the global state changes.
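If the function's declaration or definition is compiled from C or C++ source, clang can emit these attributes for you from the GCC-style pure and const attributes; a small sketch follows (the function names are made up for illustration, and very recent clang versions may emit the newer memory(...) attributes instead of readonly/readnone):
// Hedged sketch: GCC-style attributes that clang typically lowers to the
// readonly / readnone IR attributes; the function names are hypothetical.
extern "C" int table_lookup(int i) __attribute__((pure));  // reads memory but never writes it -> readonly
extern "C" int square(int i) __attribute__((const));       // depends only on its arguments -> readnone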
The LLVM optimizer can optimize calls to readonly and readnone functions with the EarlyCSE pass (common subexpression elimination), as shown in the following example:
using the following test functions
define i32 @test_no_readonly()
{
%1 = call i32 @fn(i32 0)
%2 = call i32 @fn(i32 0)
%add = add i32 %1, %2
ret i32 %add
}
define i32 @test_readonly()
{
%1 = call i32 @readonly_fn(i32 0)
%2 = call i32 @readonly_fn(i32 0)
%add = add i32 %1, %2
ret i32 %add
}
define i32 @test_readnone()
{
%1 = call i32 @readnone_fn(i32 0)
%2 = call i32 @readnone_fn(i32 0)
%add = add i32 %1, %2
ret i32 %add
}
and running opt -early-cse -S readonly_fn.ll > readonly_fn_opt.ll optimizes away the second call for the readonly and readnone functions, resulting in
define i32 @test_no_readonly() {
%1 = call i32 @fn(i32 0)
%2 = call i32 @fn(i32 0)
%add = add i32 %1, %2
ret i32 %add
}
define i32 @test_readonly() {
%1 = call i32 @readonly_fn(i32 0)
%add = add i32 %1, %1
ret i32 %add
}
define i32 @test_readnone() {
%1 = call i32 @readnone_fn(i32 0)
%add = add i32 %1, %1
ret i32 %add
}
The readonly_fn and readnone_fn functions are only called once, thus eliminating the redundant calls.
The -functionattrs pass can also add these attributes to defined functions automatically.
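If you are constructing or loading the module yourself before JITing, you can also attach the attribute through the C++ API instead of editing the IR text. A minimal sketch under these assumptions: my_external_fn is a hypothetical name standing in for your C function, and the exact helper names can vary slightly between LLVM versions.
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Hedged sketch: mark an external function as free of visible side effects
// so that passes such as EarlyCSE may merge duplicate calls to it.
// "my_external_fn" is a hypothetical function name.
void markAsSideEffectFree(Module &M) {
  if (Function *F = M.getFunction("my_external_fn")) {
    F->setOnlyReadsMemory();         // corresponds to the readonly attribute
    // or, if the function does not read memory either:
    // F->setDoesNotAccessMemory();  // corresponds to readnone
  }
}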
My pass in LLVM generates an IR like this:
%5 = icmp eq i32 %4, 0
%7 = or i1 %5, %5
...
Since the or instruction is actually not needed (dead code), I replaced all occurrences of %7 with %5. Now the or instruction should get deleted. Can I call LLVM's Dead Code Elimination pass from my pass, or is there another method to remove that or instruction?
A solution that is more aligned with LLVM's design philosophy is, instead of doing the substitution in your pass, to let InstCombine do the job. Then you will not need to worry about running DCE.
For example:
> cat foo.ll
define i32 @foo(i32 %a, i32 %b) #0 {
entry:
%or = or i32 %a, %a
ret i32 %or
}
> opt -S -instcombine < foo.ll
define i32 @foo(i32 %a, i32 %b) #0 {
entry:
ret i32 %a
}
Why don't you just schedule DCE to run after your pass in the pass manager? Let it do its analysis and decide what it wants to throw away.
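If you do want to do the substitution yourself and clean up on the spot, a minimal sketch is to erase the instruction once it has no remaining uses (assuming the utility header location of that era; the helper name replaceAndErase is just for illustration):
#include "llvm/IR/Instructions.h"
#include "llvm/Transforms/Utils/Local.h"

using namespace llvm;

// Hedged sketch: after rerouting every use of the redundant 'or' to its
// operand, the 'or' has no users left and no side effects, so it can be
// erased immediately instead of waiting for a later DCE run.
static void replaceAndErase(Instruction *OrInst, Value *Replacement) {
  OrInst->replaceAllUsesWith(Replacement);   // e.g. %7 -> %5 in the question
  if (isInstructionTriviallyDead(OrInst))    // no remaining uses, no side effects
    OrInst->eraseFromParent();
}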