Function optimization pass - C++

I am trying to use llvm::PassBuilder and FunctionPassManager to optimize a function in a module. What I have done is:
mod = ...load module from LLVM IR bitcode file...
auto lift_func = mod->getFunction("go_back");
if (not lift_func) {
    llvm::errs() << "Error: cannot get function\n";
    return 0;
}
auto pass_builder = llvm::PassBuilder{};
auto fa_manager = llvm::FunctionAnalysisManager{};
pass_builder.registerFunctionAnalyses(fa_manager);
auto fp_manager = pass_builder.buildFunctionSimplificationPipeline(llvm::PassBuilder::OptimizationLevel::O2);
fp_manager.run(*lift_func, fa_manager);
but the program always crashes at fp_manager.run. I tried several variations of pass_builder, fa_manager, and fp_manager, but nothing works.
Strangely enough, LLVM's opt tool (which uses the legacy optimization interface) works without any problem, i.e. if I run
opt -O2 go_back.bc -o go_back_o2.bc
then I get a new module where the (single) function go_back is optimized.
Many thanks for any response.
NB. The (disassembled) LLVM bitcode file is given here if anyone wants to take a look.
Update: I've somehow managed to get past fp_manager.run with:
auto loop_manager = llvm::LoopAnalysisManager{};
auto cgscc_manager = llvm::CGSCCAnalysisManager{};
auto mod_manager = llvm::ModuleAnalysisManager{};
pass_builder.registerModuleAnalyses(mod_manager);
pass_builder.registerCGSCCAnalyses(cgscc_manager);
pass_builder.registerFunctionAnalyses(fa_manager);
pass_builder.registerLoopAnalyses(loop_manager);
pass_builder.crossRegisterProxies(loop_manager, fa_manager, cgscc_manager, mod_manager);
auto fp_manager = pass_builder.buildFunctionSimplificationPipeline(llvm::PassBuilder::OptimizationLevel::O2, llvm::PassBuilder::ThinLTOPhase::None, true);
fp_manager.run(*lift_func, fa_manager);
...print mod...
But the program crashes when the fa_manager object is destroyed; I still do not understand why!

Well, after debugging and reading the LLVM source code, I've managed to make it work, as follows:
mod = ...load module from LLVM IR bitcode file...
auto lift_func = mod->getFunction("go_back");
if (not lift_func) {
    llvm::errs() << "Error: cannot get function\n";
    return 0;
}
auto pass_builder = llvm::PassBuilder{};
auto loop_manager = llvm::LoopAnalysisManager{};
auto cgscc_manager = llvm::CGSCCAnalysisManager{};
auto mod_manager = llvm::ModuleAnalysisManager{};
auto fa_manager = llvm::FunctionAnalysisManager{}; // note: it must be declared here, after the other managers
pass_builder.registerModuleAnalyses(mod_manager);
pass_builder.registerCGSCCAnalyses(cgscc_manager);
pass_builder.registerFunctionAnalyses(fa_manager);
pass_builder.registerLoopAnalyses(loop_manager);
pass_builder.crossRegisterProxies(loop_manager, fa_manager, cgscc_manager, mod_manager);
auto fp_manager = pass_builder.buildFunctionSimplificationPipeline(llvm::PassBuilder::OptimizationLevel::O2, llvm::PassBuilder::ThinLTOPhase::None, true);
fp_manager.run(*lift_func, fa_manager);
...anything...
The fa_manager must be declared as late as possible. I still don't know exactly why, but since C++ destroys locals in reverse declaration order, declaring fa_manager last means it is destroyed first, presumably before the other managers that crossRegisterProxies cross-linked to it are torn down.

Related

Cannot resolve symbols by ReexportsGenerator in LLVM ORC JIT

I'm not good with LLVM, so please forgive any wrong terminology. I get a "JIT session error: Symbols not found" when trying to look up functions from one module, which depends on functions from another module. There are two LLJIT instances:
llvm::orc::JITDylib &JD = JIT2->getMainJITDylib();
llvm::orc::JITDylib &SourceJD = JIT1->getMainJITDylib();
I wanted to utilize ReexportsGenerator, as it sounds applicable to my problem (see addGenerator); however, the following generator approach doesn't work:
auto gen = std::make_unique<ReexportsGenerator>(JIT1->getMainJITDylib(),
llvm::orc::JITDylibLookupFlags::MatchAllSymbols);
JIT2->getMainJITDylib().addGenerator(std::move(gen));
When I dug deeper into the generator's implementation, I found that lookupFlags cannot match the desired symbols, while lookup returns an address without errors:
llvm::StringRef symName("my_symbol_name");
llvm::orc::SymbolStringPool pool;
llvm::orc::SymbolStringPtr symNamePtr = pool.intern(symName);
// Try just lookup the symbol
auto addr = JIT1->lookup(symName);
if (auto E = addr.takeError()) {
    throw E;
}
uint64_t fun_addr = addr->getValue(); // contains correct value so I'm sure that JIT1 has my symbol
But the number of matches from lookupFlags is 0:
// Try symbols resolving
llvm::orc::SymbolLookupSet LookupSet;
LookupSet.add(symNamePtr, llvm::orc::SymbolLookupFlags::WeaklyReferencedSymbol);
auto Flags = JD.getExecutionSession().lookupFlags(
llvm::orc::LookupKind::DLSym,
{{&SourceJD, llvm::orc::JITDylibLookupFlags::MatchAllSymbols}},
LookupSet);
if (auto E = Flags.takeError()) {
throw E;
}
std::cout << "Flags.size() " << (*Flags).size() << std::endl;
My question is: what have I not considered in this symbol-resolution approach? I'm confused why lookup is able to find the symbol while lookupFlags is not.

Dump Block Liveness of source code using Clang

I need to dump the block liveness of source code using Clang's API. I have tried printing the block liveness, but with no success. Below is the code I have tried:
bool MyASTVisitor::VisitFunctionDecl(FunctionDecl *f) {
    std::cout << "Dump Liveness\n";
    clang::AnalysisDeclContextManager adcm;
    clang::AnalysisDeclContext *adc = adcm.getContext(llvm::cast<clang::Decl>(f));
    //clang::LiveVariables *lv = clang::LiveVariables::create(*adc);
    //clang::LiveVariables *lv = clang::LiveVariables::computeLiveness(*adc, false);
    clang::LiveVariables *lv = adc->getAnalysis<LiveVariables>();
    clang::LiveVariables::Observer *obs = new clang::LiveVariables::Observer();
    lv->runOnAllBlocks(*obs);
    lv->dumpBlockLiveness((f->getASTContext()).getSourceManager());
    return true;
}
I have overridden the visitor functions and tried printing the liveness of a function. I tried the create, computeLiveness, and getAnalysis methods to get the LiveVariables object, but all approaches failed: no liveness information is displayed except the block numbers.
When I use Clang's command-line arguments to print the liveness, it displays the output correctly.
I am using the following source code as a test case, taken from the Live Variable Analysis article on Wikipedia:
int main(int argc, char *argv[])
{
    int a, b, c, d, x;
    a = 3;
    b = 5;
    d = 4;
    x = 100;
    if (a > b) {
        c = a + b;
        d = 2;
    }
    c = 4;
    return b * d + c;
}
Could someone please point out where could I be wrong?
Thanks in advance.
I had the same issue; after some debugging of clang -cc1 -analyze -analyzer-checker=debug.DumpLiveVars I finally found the answer!
The issue is that the LiveVariables analysis does not explore sub-expressions (such as DeclRefExpr) by itself; it only relies on the CFG enumeration, and by default the CFG only enumerates top-level statements.
You must call adc->getCFGBuildOptions().setAllAlwaysAdd() before getting any analysis from your AnalysisDeclContext. This creates elements for all sub-expressions in the CFGBlocks of the control-flow graph.
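Applied to the question's visitor, the fix is one extra line before the analysis is requested. A sketch against the question's Clang version, reusing the names from the question:

```cpp
bool MyASTVisitor::VisitFunctionDecl(clang::FunctionDecl *f) {
    clang::AnalysisDeclContextManager adcm;
    clang::AnalysisDeclContext *adc = adcm.getContext(f);
    // The crucial call: it must happen before the first analysis builds the
    // CFG, so that sub-expressions (DeclRefExpr etc.) get their own elements.
    adc->getCFGBuildOptions().setAllAlwaysAdd();
    clang::LiveVariables *lv = adc->getAnalysis<clang::LiveVariables>();
    lv->dumpBlockLiveness(f->getASTContext().getSourceManager());
    return true;
}
```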

How can I make an object immutable in the Google V8 Javascript engine?

Is it possible to make an object immutable in the V8 Javascript Engine? V8 is embedded in a C++ application.
In my case I've created and populated an Array (code is simplified)
auto arr = v8::Array::New(isolate, 10);
for (auto i = 0; i < 10; ++i)
{
    arr->Set(context, i, v8::Integer::New(isolate, i));
}
I'd like to make the resulting object "read-only" (as you might get by calling Object.freeze) before passing it to a script. One of my script authors got themselves into a confusing situation by trying to re-use this object in a convoluted way, and I'd like to make that harder by making the object immutable.
I understand that I can do this in Javascript (Object.freeze), but I would like to be able to do it in C++ if possible.
This approach works, although it's a little inelegant. Essentially, I'm calling "Object.freeze" directly in Javascript, as I couldn't find a way to invoke this functionality from C++. I'm less than fluent in V8, so my code may be unnecessarily verbose.
/**
* Make an object immutable by calling "Object.freeze".
* https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze
**/
void ezv8::utility::MakeImmutable(v8::Isolate * isolate, v8::Local<v8::Object> object)
{
ezv8::Ezv8 ezv8(isolate);
auto globalTmpl = v8::ObjectTemplate::New(isolate);
auto context = v8::Context::New(isolate, nullptr, globalTmpl);
v8::Locker locker(isolate);
v8::Isolate::Scope isolate_scope(isolate);
v8::HandleScope handle_scope(ezv8.getIsolate());
v8::Context::Scope context_scope(context);
// Define function "deepFreeze" as listed on the "Object.freeze" documentation page cited above.
std::string code(
"function deepFreeze(obj) {\n"
" var propNames = Object.getOwnPropertyNames(obj);\n"
" propNames.forEach(function(name) {\n"
" var prop = obj[name];\n"
" if (typeof prop == 'object' && prop !== null)\n"
" deepFreeze(prop);\n"
" });\n"
" return Object.freeze(obj);\n"
"};");
v8::Local<v8::String> source = v8::String::NewFromUtf8(isolate, code.c_str());
v8::Local<v8::Script> compiled_script(v8::Script::Compile(source));
// Run the script!
v8::Local<v8::Value> result = compiled_script->Run();
v8::Handle<v8::Value> argv[]{ object };
v8::Handle<v8::String> process_name = v8::String::NewFromUtf8(isolate, "deepFreeze");
v8::Handle<v8::Value> process_val = context->Global()->Get(process_name);
v8::Handle<v8::Function> process_fun = v8::Handle<v8::Function>::Cast(process_val);
v8::Local<v8::Function> process = v8::Local<v8::Function>::New(isolate, process_fun);
// Call the script.
v8::Local<v8::Value> rv = process->Call(context->Global(), 1, argv);
}
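As an aside: if your V8 is recent enough, there is a direct C++ entry point that avoids the script round-trip entirely. v8::Object::SetIntegrityLevel with v8::IntegrityLevel::kFrozen performs the equivalent of a shallow Object.freeze (availability depends on your V8 version, and for deepFreeze semantics you would still recurse over own properties yourself):

```cpp
// Sketch: freeze an object directly from C++, no script round-trip.
// Assumes a V8 version that provides v8::Object::SetIntegrityLevel and that
// isolate/context are already set up and entered.
void MakeImmutableShallow(v8::Isolate *isolate,
                          v8::Local<v8::Context> context,
                          v8::Local<v8::Object> object) {
    v8::HandleScope handle_scope(isolate);
    // kFrozen corresponds to Object.freeze; kSealed to Object.seal.
    object->SetIntegrityLevel(context, v8::IntegrityLevel::kFrozen).Check();
}
```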

LLVM API: correct way to create/dispose

I'm attempting to implement a simple JIT compiler using the LLVM C API. So far, I have no problems generating IR code and executing it; that is, until I start disposing objects and recreating them.
What I would like to do is clean up the JITed resources the moment they're no longer used by the engine. I'm attempting something like this:
while (true)
{
    // Initialize module & builder
    InitializeCore(GetGlobalPassRegistry());
    module = ModuleCreateWithName(some_unique_name);
    builder = CreateBuilder();
    // Initialize target & execution engine
    InitializeNativeTarget();
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);
    // [... my fancy JIT code ...] --** Will give a serious error the second iteration
    // Destroy
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
    DisposeBuilder(builder);
    // DisposeModule(module); //--> Commented out: Deleted by execution engine
    Shutdown();
}
However, this doesn't seem to be working correctly: on the second iteration of the loop I get a pretty bad error...
So to summarize: what is the correct way to destroy and re-create these LLVM API objects?
Posting this as an answer because the code is too long for a comment. If possible and there are no other constraints, try to use LLVM like this. I am pretty sure the Shutdown() inside the loop is the culprit here: as far as I know, it tears down LLVM's global state, and re-initializing afterwards is not supported. I don't think it would hurt to keep the builder outside the loop, too. This reflects the way I use LLVM in my JIT.
InitializeCore(GetGlobalPassRegistry());
InitializeNativeTarget();
builder = CreateBuilder();
while (true)
{
    // Initialize module
    module = ModuleCreateWithName(some_unique_name);
    // Initialize execution engine
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);
    // [... my fancy JIT code ...]
    // Destroy
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
}
DisposeBuilder(builder);
Shutdown();
/* program init */
LLVMInitializeNativeTarget();
LLVMInitializeNativeAsmPrinter();
LLVMInitializeNativeAsmParser();
LLVMLinkInMCJIT();
ctx->context = LLVMContextCreate();
ctx->builder = LLVMCreateBuilderInContext(ctx->context);
LLVMParseBitcodeInContext2(ctx->context, module_template_buf, &module) // create module
do IR code creation
{
function = LLVMAddFunction(ctx->module, "my_func")
LLVMAppendBasicBlockInContext(ctx->context, ...
LLVMBuild...
...
}
optional optimization
{
LLVMPassManagerBuilderRef pass_builder = LLVMPassManagerBuilderCreate();
LLVMPassManagerBuilderSetOptLevel(pass_builder, 3);
LLVMPassManagerBuilderSetSizeLevel(pass_builder, 0);
LLVMPassManagerBuilderUseInlinerWithThreshold(pass_builder, 1000);
LLVMPassManagerRef function_passes = LLVMCreateFunctionPassManagerForModule(ctx->module);
LLVMPassManagerRef module_passes = LLVMCreatePassManager();
LLVMPassManagerBuilderPopulateFunctionPassManager(pass_builder, function_passes);
LLVMPassManagerBuilderPopulateModulePassManager(pass_builder, module_passes);
LLVMPassManagerBuilderDispose(pass_builder);
LLVMInitializeFunctionPassManager(function_passes);
for (LLVMValueRef value = LLVMGetFirstFunction(ctx->module); value;
value = LLVMGetNextFunction(value))
{
LLVMRunFunctionPassManager(function_passes, value);
}
LLVMFinalizeFunctionPassManager(function_passes);
LLVMRunPassManager(module_passes, ctx->module);
LLVMDisposePassManager(function_passes);
LLVMDisposePassManager(module_passes);
}
optional for debug
{
LLVMVerifyModule(ctx->module, LLVMAbortProcessAction, &error);
LLVMPrintModule
}
if (LLVMCreateJITCompilerForModule(&ctx->engine, ctx->module, 0, &error) != 0)
my_func = (exec_func_t)(uintptr_t)LLVMGetFunctionAddress(ctx->engine, "my_func");
LLVMRemoveModule(ctx->engine, ctx->module, &ctx->module, &error);
LLVMDisposeModule(ctx->module);
LLVMDisposeBuilder(ctx->builder);
do
{
my_func(...);
}
LLVMDisposeExecutionEngine(ctx->engine);
LLVMContextDispose(ctx->context);
/* program finit */
LLVMShutdown();

How to access file given to cilly in my CIL module

I have added a new feature to CIL(C Intermediate Language). I am able to execute my new module using
$cilly --dotestmodule --save-temps -D HAPPY_MOOD -o test test.c
Now, in my testmodule, I want to call Cfg.computeFileCFG for the test.c file, but I don't know how to access the test.c file in my module.
I tried using Cil.file, but it says "Unbound value Cil.file".
my code:
open Pretty
open Cfg
open Cil
module RD = Reachingdefs
let () = Cfg.computeFileCFG Cil.file
let rec fact n = if n < 2 then 1 else n * fact(n-1)
let doIt n = fact n
let feature : featureDescr =
  { fd_name = "testmodule";
    fd_enabled = ref false;
    fd_description = "simple test 1240";
    fd_extraopt = [];
    fd_doit = (function (f: file) -> ignore (doIt 10));
    fd_post_check = true;
  }
Please tell me how to compute the CFG for the test.c file.
I am not a CIL expert, but here are a few remarks:
the CIL online documentation states that Cil.file is an OCaml type. Passing a type as an argument to a function is probably not what you want to do here;
it seems like the fd_doit function in your feature descriptor takes the file you are looking to process as its argument f;
according to the Cilly manual, the type of f is Cil.file. Conveniently, this is the type of the argument required by computeFileCFG, so inside fd_doit you can simply call Cfg.computeFileCFG f.
Hopefully you can take it from here. Good luck!