LLVM API: correct way to create/dispose

I'm attempting to implement a simple JIT compiler using the LLVM C API. So far, I have no problems generating IR code and executing it; that is, until I start disposing objects and recreating them.
What I would like to do is clean up the JIT'ted resources the moment they're no longer used by the engine. Essentially, I'm attempting something like this:
while (true)
{
    // Initialize module & builder
    InitializeCore(GetGlobalPassRegistry());
    module = ModuleCreateWithName(some_unique_name);
    builder = CreateBuilder();

    // Initialize target & execution engine
    InitializeNativeTarget();
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);

    // [... my fancy JIT code ...] --** Will give a serious error the second iteration

    // Destroy
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
    DisposeBuilder(builder);
    // DisposeModule(module); //--> Commented out: deleted by execution engine
    Shutdown();
}
However, this doesn't seem to work correctly: on the second iteration of the loop I get a pretty bad error...
So to summarize: what's the correct way to destroy and re-create these LLVM objects via the C API?

Posting this as an answer because the code is too long for a comment. If there are no other constraints, try to use LLVM like this. I'm pretty sure the Shutdown() inside the loop is the culprit here, and I don't think it would hurt to keep the builder outside the loop, too. This reflects the way I use LLVM in my own JIT.
// One-time initialization, outside the loop
InitializeCore(GetGlobalPassRegistry());
InitializeNativeTarget();
builder = CreateBuilder();
while (true)
{
    // Initialize module, execution engine & pass manager
    module = ModuleCreateWithName(some_unique_name);
    engine = CreateExecutionEngineForModule(...);
    passmgr = CreateFunctionPassManagerForModule(module);
    AddTargetData(GetExecutionEngineTargetData(engine), passmgr);
    InitializeFunctionPassManager(passmgr);

    // [... my fancy JIT code ...]

    // Destroy the per-iteration objects; the engine owns and disposes the module
    DisposePassManager(passmgr);
    DisposeExecutionEngine(engine);
}
// One-time teardown, after the loop
DisposeBuilder(builder);
Shutdown();

/* program init (once per process) */
LLVMInitializeNativeTarget();
LLVMInitializeNativeAsmPrinter();
LLVMInitializeNativeAsmParser();
LLVMLinkInMCJIT();

ctx->context = LLVMContextCreate();
ctx->builder = LLVMCreateBuilderInContext(ctx->context);
LLVMParseBitcodeInContext2(ctx->context, module_template_buf, &ctx->module); // create module

/* IR code creation */
{
    function = LLVMAddFunction(ctx->module, "my_func", /* function type */ ...);
    LLVMAppendBasicBlockInContext(ctx->context, function, ...);
    LLVMBuild...
    ...
}

/* optional optimization */
{
    LLVMPassManagerBuilderRef pass_builder = LLVMPassManagerBuilderCreate();
    LLVMPassManagerBuilderSetOptLevel(pass_builder, 3);
    LLVMPassManagerBuilderSetSizeLevel(pass_builder, 0);
    LLVMPassManagerBuilderUseInlinerWithThreshold(pass_builder, 1000);
    LLVMPassManagerRef function_passes = LLVMCreateFunctionPassManagerForModule(ctx->module);
    LLVMPassManagerRef module_passes = LLVMCreatePassManager();
    LLVMPassManagerBuilderPopulateFunctionPassManager(pass_builder, function_passes);
    LLVMPassManagerBuilderPopulateModulePassManager(pass_builder, module_passes);
    LLVMPassManagerBuilderDispose(pass_builder);

    LLVMInitializeFunctionPassManager(function_passes);
    for (LLVMValueRef value = LLVMGetFirstFunction(ctx->module); value;
         value = LLVMGetNextFunction(value))
    {
        LLVMRunFunctionPassManager(function_passes, value);
    }
    LLVMFinalizeFunctionPassManager(function_passes);

    LLVMRunPassManager(module_passes, ctx->module);
    LLVMDisposePassManager(function_passes);
    LLVMDisposePassManager(module_passes);
}

/* optional, for debugging */
{
    LLVMVerifyModule(ctx->module, LLVMAbortProcessAction, &error);
    LLVMDumpModule(ctx->module);
}

if (LLVMCreateJITCompilerForModule(&ctx->engine, ctx->module, 0, &error) != 0)
{
    /* handle the error */
}
my_func = (exec_func_t)(uintptr_t)LLVMGetFunctionAddress(ctx->engine, "my_func");

/* the engine owns the module now; take it back and dispose it once the
   function address has been obtained */
LLVMRemoveModule(ctx->engine, ctx->module, &ctx->module, &error);
LLVMDisposeModule(ctx->module);
LLVMDisposeBuilder(ctx->builder);

/* call the JIT'ted function as often as needed */
{
    my_func(...);
}

LLVMDisposeExecutionEngine(ctx->engine);
LLVMContextDispose(ctx->context);

/* program finit (once per process) */
LLVMShutdown();

Related

How to create a ReplaySubject with RxCpp?

In my C++ project, I need to create Subjects that have an initial value which may be updated. On each subscription/update, subscribers may then trigger data processing. In a previous Angular (RxJS) project, this kind of behavior was handled with ReplaySubject(1).
I'm not able to reproduce this using the C++ rxcpp library.
I've looked for documentation, snippets, and tutorials, but without success.
Expected pseudocode (TypeScript):
private dataSub: ReplaySubject<Data> = new ReplaySubject<Data>(1);
private init = false;
// public Observable, immediately shares the last published value
get currentData$(): Observable<Data> {
if (!this.init) {
return this.initData().pipe(switchMap(
() => this.dataSub.asObservable()
));
}
return this.dataSub.asObservable();
}
I think you are looking for rxcpp::subjects::replay.
Please try this:
auto coordination = rxcpp::observe_on_new_thread();
rxcpp::subjects::replay<int, decltype(coordination)> test(1, coordination);
// to emit data
test.get_observer().on_next(1);
test.get_observer().on_next(2);
test.get_observer().on_next(3);
// to subscribe
test.get_observable().subscribe([](auto && v) {
    printf("%d\n", v); // this will print '3'
});

Function optimization pass

I am trying to use llvm::PassBuilder and FunctionPassManager to optimize a function in a module. Here is what I have done:
mod = ...load module from LLVM IR bitcode file...
auto lift_func = mod->getFunction("go_back");
if (not lift_func) {
llvm::errs() << "Error: cannot get function\n";
return 0;
}
auto pass_builder = llvm::PassBuilder{};
auto fa_manager = llvm::FunctionAnalysisManager{};
pass_builder.registerFunctionAnalyses(fa_manager);
auto fp_manager = pass_builder.buildFunctionSimplificationPipeline(llvm::PassBuilder::OptimizationLevel::O2);
fp_manager.run(*lift_func, fa_manager);
but the program always crashes at fp_manager.run. I have tried several things with pass_builder, fa_manager, and fp_manager, but nothing works.
Strangely enough, LLVM's opt tool (which uses the legacy optimization interface) works without any problem, i.e. if I run
opt -O2 go_back.bc -o go_back_o2.bc
then I get a new module where the (single) function go_back is optimized.
Many thanks for any response.
NB. The (disassembled) LLVM bitcode file is given here if anyone wants to take a look.
Update: I've somehow managed to get past fp_manager.run with:
auto loop_manager = llvm::LoopAnalysisManager{};
auto cgscc_manager = llvm::CGSCCAnalysisManager{};
auto mod_manager = llvm::ModuleAnalysisManager{};
pass_builder.registerModuleAnalyses(mod_manager);
pass_builder.registerCGSCCAnalyses(cgscc_manager);
pass_builder.registerFunctionAnalyses(fa_manager);
pass_builder.registerLoopAnalyses(loop_manager);
pass_builder.crossRegisterProxies(loop_manager, fa_manager, cgscc_manager, mod_manager);
auto fp_manager = pass_builder.buildFunctionSimplificationPipeline(llvm::PassBuilder::OptimizationLevel::O2, llvm::PassBuilder::ThinLTOPhase::None, true);
fp_manager.run(*lift_func, fa_manager);
...print mod...
But the program crashes when the fa_manager object is destroyed, and I still don't understand why!
Well, after debugging and reading the LLVM source code, I've managed to make it work, as follows:
mod = ...load module from LLVM IR bitcode file...
auto lift_func = mod->getFunction("go_back");
if (not lift_func) {
llvm::errs() << "Error: cannot get function\n";
return 0;
}
auto pass_builder = llvm::PassBuilder{};
auto loop_manager = llvm::LoopAnalysisManager{};
auto cgscc_manager = llvm::CGSCCAnalysisManager{};
auto mod_manager = llvm::ModuleAnalysisManager{};
auto fa_manager = llvm::FunctionAnalysisManager{}; // magic: it must be declared here (last)
pass_builder.registerModuleAnalyses(mod_manager);
pass_builder.registerCGSCCAnalyses(cgscc_manager);
pass_builder.registerFunctionAnalyses(fa_manager);
pass_builder.registerLoopAnalyses(loop_manager);
pass_builder.crossRegisterProxies(loop_manager, fa_manager, cgscc_manager, mod_manager);
auto fp_manager = pass_builder.buildFunctionSimplificationPipeline(llvm::PassBuilder::OptimizationLevel::O2, llvm::PassBuilder::ThinLTOPhase::None, true);
fp_manager.run(*lift_func, fa_manager);
...anything...
The fa_manager should be initialized as late as possible. I still don't know exactly why, but it looks like a destruction-order issue: the managers cross-reference each other through the proxies registered by crossRegisterProxies, and since C++ destroys locals in reverse declaration order, declaring fa_manager last means it is destroyed first.

How can I initialize an embedded Python interpreter with bytecode optimization from C++?

I am running an embedded Python 2.7 interpreter within my C++ project, and I would like to optimize the interpreter as much as possible. One way I would like to do this is by disabling my debug statements using the __debug__ variable. I would also like to realize any possible gains in performance due to running Python with bytecode optimizations (i.e. the -O flag).
This Stack Overflow question addresses the use of the __debug__ variable, and notes that it can be turned off by running Python with -O. However, this flag can obviously not be passed for an embedded interpreter that is created with C++ hooks.
Below is my embedded interpreter initialization code. The code is not meant to be run in isolation, but should provide a glimpse into the environment I'm using. Is there some way I can alter or add to these function calls to specify that the embedded interpreter should apply bytecode optimizations, set the __debug__ variable to False, and in general run in "release" mode rather than "debug" mode?
void PythonInterface::GlobalInit() {
if(GLOBAL_INITIALIZED) return;
PY_MUTEX.lock();
const char *chome = getenv("NAO_HOME");
const char *cuser = getenv("USER");
string home = chome ? chome : "", user = cuser ? cuser : "";
if(user == "nao") {
std::string scriptPath = "/home/nao/python:";
std::string swigPath = SWIG_MODULE_DIR ":";
std::string corePath = "/usr/lib/python2.7:";
std::string modulePath = "/lib/python2.7";
setenv("PYTHONPATH", (scriptPath + swigPath + corePath + modulePath).c_str(), 1);
setenv("PYTHONHOME", "/usr", 1);
} else {
std::string scriptPath = home + "/core/python:";
std::string swigPath = SWIG_MODULE_DIR;
setenv("PYTHONPATH", (scriptPath + swigPath).c_str(), 1);
}
printf("Starting initialization of Python version %s\n", Py_GetVersion());
Py_InitializeEx(0); // InitializeEx(0) turns off signal hooks so ctrl c still works
GLOBAL_INITIALIZED = true;
PY_MUTEX.unlock();
}
void PythonInterface::Init(VisionCore* core) {
GlobalInit();
PY_MUTEX.lock();
CORE_MUTEX.lock();
CORE_INSTANCE = core;
thread_ = Py_NewInterpreter();
PyRun_SimpleString(
"import pythonswig_module\n"
"pythonC = pythonswig_module.PythonInterface().CORE_INSTANCE.interpreter_\n"
"pythonC.is_ok_ = False\n"
"from init import *\n"
"init()\n"
"pythonC.is_ok_ = True\n"
);
CORE_MUTEX.unlock();
PY_MUTEX.unlock();
}
I solved this by setting the PYTHONOPTIMIZE environment variable prior to calling Py_InitializeEx(0):
/* ... snip ... */
setenv("PYTHONOPTIMIZE", "yes", 0);
printf("Starting initialization of Python version %s\n", Py_GetVersion());
Py_InitializeEx(0); // InitializeEx(0) turns off signal hooks so ctrl c still works
/* ... snip ... */
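If you would rather not rely on environment variables, another option is to set CPython's global optimize flag directly from C before initialization. A minimal sketch, assuming a plain CPython 2.7 embedding like the one above:
/* ... snip ... */
Py_OptimizeFlag = 1; /* CPython's global corresponding to -O (use 2 for -OO); must be set before Py_InitializeEx() */
Py_InitializeEx(0); // InitializeEx(0) turns off signal hooks so ctrl c still works
/* ... snip ... */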

Call a Lua function from C++

I have googled high and low and found examples, but none of them seems to work (Lua 5.2).
I have a simple function in Lua
function onData ( data )
print ( data )
end
I want to call onData from C++ and tried this:
// Create new Lua state
L = luaL_newstate();
// Load all Lua libraries
luaL_openlibs(L);
// Create co-routine
CO = lua_newthread(L);
// Load and compile script
AnsiString script(Frame->Script_Edit->Text);
if (luaL_loadbuffer(CO,script.c_str(),script.Length(),AnsiString(Name).c_str()) == LUA_OK) {
Compiled = true;
} else {
cs_error(CO, "Compiler error: "); // Print compiler error
Compiled = false;
}
// Script compiled and ready?
if (Compiled == true) {
lua_getglobal(CO, "onData"); // <-------- Doesn't find the function
if( !lua_isfunction(CO,-1)) {
lua_pop(CO,1);
return;
}
lua_pushlstring(CO,data,len);
lua_resume(CO,NULL,0);
}
As you can see I'm starting my script as a co-routine so I can use the lua_yield() function on it. I have tried to look for the function in both the L and CO states.
luaL_loadbuffer loads the script but does not run it. onData will only be defined when the script is run.
Try calling luaL_dostring instead of luaL_loadbuffer.
Or add lua_pcall(CO,0,0,0) before lua_getglobal.
Moreover, you need lua_resume(CO,NULL,1) to pass data to onData.
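Putting those pieces together, the corrected sequence would look roughly like this (a sketch based on the question's code; CO, script, data, len, Name and cs_error come from there):
if (luaL_loadbuffer(CO, script.c_str(), script.Length(), AnsiString(Name).c_str()) == LUA_OK) {
    // Run the chunk once so that onData actually gets defined as a global
    if (lua_pcall(CO, 0, 0, 0) != LUA_OK) {
        cs_error(CO, "Runtime error: ");
        return;
    }
    lua_getglobal(CO, "onData"); // now the function is found
    if (!lua_isfunction(CO, -1)) {
        lua_pop(CO, 1);
        return;
    }
    lua_pushlstring(CO, data, len);
    lua_resume(CO, NULL, 1); // 1 = number of arguments passed to onData
}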

MediaEngine audio playback on WinRT

I'm trying to add music to my game that runs on WinRT. The music should be in an encoded format (mp3, ogg, etc.), should be streamable, and should be decoded by the hardware (for performance reasons).
I've looked through the samples and found that MediaEngine can do something like this (I hope).
However, I'm having problems making it work. I keep getting ComExceptions every time I try to create an IMFByteStream from an IRandomAccessStream via MFCreateMFByteStreamOnStreamEx().
It might be that I'm not handling tasks correctly, since they are a new paradigm for me.
Here's some code (pretty similar to the sample I mentioned before):
void MyMedia::PlayMusic ()
{
try
{
StorageFolder^ installedLocation = Windows::ApplicationModel::Package::Current->InstalledLocation;
Concurrency::task<StorageFile^> m_pickFileTask = Concurrency::task<StorageFile^>(installedLocation->GetFileAsync("music.mp3"), m_tcs.get_token());
SetURL(StringHelper::toString("music.mp3"));
auto player = this;
m_pickFileTask.then([&player](StorageFile^ fileHandle)
{
Concurrency::task<IRandomAccessStream^> fOpenStreamTask = Concurrency::task<IRandomAccessStream^> (fileHandle->OpenAsync(Windows::Storage::FileAccessMode::Read));
fOpenStreamTask.then([&player](IRandomAccessStream^ streamHandle)
{
try
{
player->SetBytestream(streamHandle);
if (player->m_spMediaEngine)
{
MEDIA::ThrowIfFailed(
player->m_spMediaEngine->Play()
);
}
} catch(Platform::Exception^)
{
MEDIA::ThrowIfFailed(E_UNEXPECTED);
}
}
);
}
);
} catch(Platform::Exception^ ex)
{
Printf("error: %s", ex->Message);
}
}
void MyMedia::SetBytestream(IRandomAccessStream^ streamHandle)
{
HRESULT hr = S_OK;
ComPtr<IMFByteStream> spMFByteStream = nullptr;
//The following line always throws a ComException
MEDIA::ThrowIfFailed(
MFCreateMFByteStreamOnStreamEx((IUnknown*)streamHandle, &spMFByteStream)
);
MEDIA::ThrowIfFailed(
m_spEngineEx->SetSourceFromByteStream(spMFByteStream.Get(), m_bstrURL)
);
return;
}
Bonus: If you know a better solution to my audio needs, please leave a comment.
I managed to fix this. There were two problems I found.
Media Foundation was not initialized
MFStartup(MF_VERSION); needs to be called before Media Foundation can be used. I added this code just before creating the media engine.
Referencing a pointer.
The line m_pickFileTask.then([&player](StorageFile^ fileHandle) should be m_pickFileTask.then([player](StorageFile^ fileHandle). player is already a pointer to the current object, and capturing it with & captures a reference to that local pointer, so I was effectively passing the pointer's pointer, which no longer exists by the time the continuation runs.
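In code, the two fixes look roughly like this (a sketch; the media-engine creation and the rest of the task chain stay as in the question):
// 1. Initialize Media Foundation before creating the media engine
MEDIA::ThrowIfFailed(MFStartup(MF_VERSION));
// ... create m_spMediaEngine / m_spEngineEx here ...

// 2. Capture the pointer by value, not by reference
auto player = this;
m_pickFileTask.then([player](StorageFile^ fileHandle)
{
    // ... same body as before ...
});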