How to invoke an Objective-C Block via the LLVM C++ API?

Say, for example, I have an Objective-C compiled Module that contains something like the following:
typedef bool (^BoolBlock)(void);

BoolBlock returnABlock(void)
{
    return Block_copy(^bool(void){
        printf("Block executing.\n");
        return YES;
    });
}
...then, using the LLVM C++ API, I load that Module and create a CallInst to call the returnABlock() function:
Function *returnABlockFunction = returnABlockModule->getFunction(std::string("returnABlock"));
CallInst *returnABlockCall = CallInst::Create(returnABlockFunction, "returnABlockCall", entryBlock);
How can I then invoke the Block returned via the returnABlockCall object?

Not an easy answer here, I'm afraid. Blocks are lowered by the front-end into calls into the blocks runtime. In the case of clang, the relevant code is at clang/lib/CodeGen/CGBlocks.[h|cpp].
It would be worth asking on the cfe-dev list if there's a way to factor this code out for reuse in other front-ends.

In C, I just treat the variable I assigned the block to as if it were a function pointer. Using your code as an example, after you assign the result of the function to "returnABlockCall", you could just write:
returnABlockCall();
and it should work.
Warning, this is untested in C++, but I see no reason why it wouldn't work.
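At the IR level, treating the result like a function pointer means digging the invoke member out of the block literal, because a block value is really a pointer to a runtime struct. Here is a minimal sketch of that, assuming the Apple Blocks ABI that clang documents (isa, flags, reserved, invoke, descriptor, then captures) and an older typed-pointer LLVM API; it reuses returnABlockModule, returnABlockCall and entryBlock from the question, and the exact IRBuilder signatures vary between LLVM releases:
LLVMContext &ctx = returnABlockModule->getContext();
IRBuilder<> builder(entryBlock);   // appends after the returnABlock call

// bool (*invoke)(void *block) -- the block literal is the implicit first argument.
FunctionType *invokeTy = FunctionType::get(Type::getInt1Ty(ctx),
                                           {Type::getInt8PtrTy(ctx)}, false);
PointerType *invokePtrTy = PointerType::getUnqual(invokeTy);

// struct Block_layout { void *isa; int flags; int reserved;
//                       bool (*invoke)(void *); /* descriptor, captures... */ };
StructType *blockTy = StructType::create(
    {Type::getInt8PtrTy(ctx), Type::getInt32Ty(ctx), Type::getInt32Ty(ctx),
     invokePtrTy},
    "Block_layout");

// Treat the returned value as a Block_layout*, load its invoke member, and
// call it, passing the block itself as the first argument.
Value *blockPtr   = builder.CreateBitCast(returnABlockCall, Type::getInt8PtrTy(ctx));
Value *typedBlock = builder.CreateBitCast(returnABlockCall, blockTy->getPointerTo());
Value *invokeSlot = builder.CreateStructGEP(blockTy, typedBlock, 3);
Value *invokeFn   = builder.CreateLoad(invokePtrTy, invokeSlot);
builder.CreateCall(invokeTy, invokeFn, {blockPtr});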

Related

How to convert function insertion module pass to intrinsic to inline

PROBLEM:
I currently have a traditional module instrumentation pass that inserts new function calls into a given IR according to some logic (the inserted functions are external, from a small library that is later linked into the given program). Running experiments, my overhead comes from the cost of executing the calls to the library function.
What I am trying to do:
I would like to inline these function bodies into the IR of the given program to get rid of this bottleneck. I assume an intrinsic would be a clean way of doing this, since an intrinsic function would be expanded to its function body when being lowered to ASM (please correct me if my understanding is incorrect here, this is my first time working with intrinsics/LTO).
Current Status:
My original library call definition:
void register_my_mem(void *user_vaddr) {
    // ... C code ...
}
So far:
I have created a def in: llvm-project/llvm/include/llvm/IR/IntrinsicsX86.td
let TargetPrefix = "x86" in {
  def int_x86_register_mem : GCCBuiltin<"__builtin_register_my_mem">,
      Intrinsic<[], [llvm_anyint_ty], []>;
}
Added another def in:
otwm/llvm-project/clang/include/clang/Basic/BuiltinsX86.def
TARGET_BUILTIN(__builtin_register_my_mem, "vv*", "", "")
Added my library source (*.c, *.h) to compiler-rt/lib/test_lib and added it to the corresponding CMakeLists.txt.
Replaced the function insertion with an attempt to insert the intrinsic instead, in llvm/lib/Transforms/Instrumentation/myModulePass.cpp:
WAS:
FunctionCallee sm_func =
    curr_inst->getModule()->getOrInsertFunction("register_my_mem", func_type);
ArrayRef<Value*> args = {
    builder.CreatePointerCast(sm_arg_val, currType->getPointerTo())
};
builder.CreateCall(sm_func, args);
NEW:
Intrinsic::ID aREGISTER(Intrinsic::x86_register_my_mem);
Function *sm_func = Intrinsic::getDeclaration(currFunc->getParent(), aREGISTER, func_type);
ArrayRef<Value*> args = {
    builder.CreatePointerCast(sm_arg_val, currType->getPointerTo())
};
builder.CreateCall(sm_func, args);
Questions:
If my logic for inserting the intrinsic functions shouldn't be a module pass, where do I put it?
Am I confusing LTO with intrinsics?
Do I put my library function definitions into the following files, as mentioned in http://lists.llvm.org/pipermail/llvm-dev/2017-June/114322.html (for example EmitRegisterMyMem())?
clang/lib/CodeGen/CodeGenFunction.cpp - define llvm::Intrinsic::ID
clang/lib/CodeGen/CodeGenFunction.h - declare llvm::Intrinsic::ID
My LLVM build compiles, so the changes are at least syntactically correct, but when I try to insert this intrinsic call, LLVM segfaults saying "Not a valid type for function argument!"
I'm seeing multiple issues here.
Indeed, you're confusing LTO with intrinsics. Intrinsics are special "functions" that are either expanded into special instructions by a backend or lowered to library function calls, so they won't give you the inlining you're after. You don't need an intrinsic at all; you just need to inline the function call in question, either by hand (from your module pass) or via LTO, indeed.
The particular error appears because you declared your intrinsic as taking an integer argument (that is what the declaration expects), but:
* you're asking for the declaration of this overloaded intrinsic with an invalid type (I'd assume your func_type is not an integer type), and
* you're passing a pointer argument.
Hope this makes the issue clear.
See also: https://llvm.org/docs/LinkTimeOptimization.html
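A minimal sketch of the "inline it by hand from your module pass" route, assuming the library's bitcode has already been linked into the module (for example with llvm::Linker) so that register_my_mem actually has a body, and assuming an LLVM recent enough that InlineFunction takes a CallBase; insertAndInlineCall and userVaddr are made-up names for illustration, not part of the original pass:
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/Utils/Cloning.h"   // InlineFunction, InlineFunctionInfo

using namespace llvm;

static void insertAndInlineCall(Module &M, Instruction *insertPt, Value *userVaddr)
{
    // A mere declaration has no body to inline, which is exactly the original
    // bottleneck; the definition must already live in M.
    Function *registerFn = M.getFunction("register_my_mem");
    if (!registerFn || registerFn->isDeclaration())
        return;

    IRBuilder<> builder(insertPt);
    CallInst *call = builder.CreateCall(
        registerFn,
        {builder.CreatePointerCast(userVaddr, Type::getInt8PtrTy(M.getContext()))});

    // Expand the callee's body at the call site; no intrinsic and no LTO needed.
    InlineFunctionInfo IFI;
    if (!InlineFunction(*call, IFI).isSuccess()) {
        // Inlining can legitimately fail (e.g. varargs); the plain call stays.
    }
}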
Thank you for clearing up the issue, Anton Korobeynikov.
After reading your explanation, I also believe that I have to use LTO to accomplish what I am trying to do. I found this link especially useful: https://llvm.org/docs/LinkTimeOptimization.html. It seems that I am now on the right path.

exposing internal c++ function to llvm jit'd c++

I'm experimenting with LLVM right now. I'd like to use languages that can be compiled to LLVM bitcode for scripting. So far I've managed to load an LLVM bitcode module and call a function defined in it from my 'internal' C++ code. Next I tried to expose a C++ function from my internal code to the JIT'd code - so far all I've managed to get is a SEGFAULT.
My code is as follows. I've tried to create a Function and a global mapping in my execution engine that points to a function I'd like to call.
extern "C" void externGuy()
{
cout << "I'm the extern guy" << endl;
}
void ExposeFunction()
{
std::vector<Type*> NoArgs(0);
FunctionType* FT = FunctionType::get(Type::getVoidTy(getGlobalContext()), NoArgs, false);
Function* fnc = Function::Create(FT, Function::ExternalLinkage, "externGuy", StartModule);
JIT->addGlobalMapping(fnc, (void*)externGuy);
}
// ... Create module, create execution engine
ExposeFunction();
Is the problem that I can't add a function to the module after it's been loaded from the bitcode file?
Update:
I've refactored my code so that it reads like so instead:
// ... Create module, create execution engine
std::vector<Type*> NoArgs(0);
FunctionType* FT = FunctionType::get(Type::getVoidTy(getGlobalContext()), NoArgs, false);
Function* fnc = Function::Create(FT, Function::ExternalLinkage, "externGuy", m);
fnc->dump();
JIT->addGlobalMapping(fnc, (void*)externGuy);
So instead of segfault I get:
Program used external function 'externGuy' which could not be resolved
Also, the result of dump() prints:
declare void @externGuy1()
If I change my C++ script bitcode thing to call externGuy1() instead of externGuy(), it suggests that I meant to use externGuy. The addGlobalMapping just doesn't seem to be working for me. I'm not sure what I'm missing here. I also added -fPIC to my compilation command, as I saw suggested in another question - I'm honestly not sure if it's helped anything, but there's no harm in trying.
I finally got this to work. My suspicion is that creating the function myself, on top of it being declared in the script, produced more than one declaration of the same name, so the mapping just wasn't attached to the right one. What I did instead was declare the function in the script and then use getFunction to fetch it for the mapping. Using the dump() method on that function output:
declare void @externGuy() #1
which is why I think the duplicate declaration had something to do with the mapping not working originally. I also remember the Kaleidoscope tutorial saying that getPointerToFunction will JIT-compile the function when it's called if that hasn't already been done, so I knew the mapping had to be in place before that call.
So altogether to get this whole thing working it was as follows:
// Get the function and map it
Function* extrn = m->getFunction("externGuy");
extrn->dump();
JIT->addGlobalMapping(extrn, (void*)&::externGuy);
// Get a pointer to the jit compiled function
Function* mane = m->getFunction("hello");
void* fptr = JIT->getPointerToFunction(mane);
// Make a call to the jit compiled function which contains a call to externGuy
void (*FP)() = (void(*)())(intptr_t)fptr;
FP();
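As a side note, in newer LLVM the legacy ExecutionEngine used above has been superseded by ORC; the same "expose a host function" mapping with LLJIT might look roughly like the sketch below. This assumes a pre-LLVM-17 ORC API where symbol definitions are JITEvaluatedSymbol values (later releases renamed that type), so treat it as an outline rather than copy-paste code.
#include "llvm/ExecutionEngine/Orc/LLJIT.h"

using namespace llvm;
using namespace llvm::orc;

extern "C" void externGuy();   // the host function being exposed

Error exposeExternGuy(LLJIT &J)
{
    SymbolMap symbols;
    symbols[J.mangleAndIntern("externGuy")] =
        JITEvaluatedSymbol::fromPointer(&externGuy);
    // Defining the symbol in the main JITDylib lets JIT'd modules resolve
    // calls to externGuy without any dlsym/.dynsym lookup.
    return J.getMainJITDylib().define(absoluteSymbols(std::move(symbols)));
}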

Running Function Inside Stub. Passing Function Pointer

I'm working on a user-level thread library. I want to run a function inside a stub, so I would like to pass the function pointer to the stub function.
Here is my stub function:
void _ut_function_stub(void (*f)(void), int id)
{
    (*f)();
    DeleteThread(id);
}
This is what the user calls. I want to get a pointer to _ut_function_stub to assign to pc, and I've tried various options, including casting, but the compiler keeps saying "invalid use of void expression".
int CreateThread(void (*f)(void), int weight)
{
    // ... more code ...
    pc = (address_t)(_ut_function_stub(f, tcb->id));
    // ... more code ...
}
Any help is appreciated. Thanks!
If you're interested in implementing your own user-level-threads library, I'd suggest looking into the (now deprecated) ucontext implementation. Specifically, looking at the definitions for the structs used in ucontext.h will help you see all the stuff you actually need to capture to get a valid snapshot of the thread state.
What you're really trying to capture with the erroneous (address_t) cast in your example is the current continuation. Unfortunately, C doesn't support first-class continuations, so you're going to be stuck doing something much more low-level, like swapping stacks and dumping registers (which is why I pointed you to ucontext as a reference; it's going to be kind of complicated if you really want to get this right).
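As for the immediate compiler error: the cast in CreateThread is applied to the result of calling the stub, which is a void expression, rather than to the stub's address. A minimal, self-contained sketch of the assignment the question is after is below; address_t here is just a stand-in for whatever the library really uses, and getting f and id to the stub (via the new stack frame, makecontext, or a TCB field) is the harder part discussed above:
#include <cstdio>

typedef unsigned long address_t;       // stand-in for the library's own type
extern void DeleteThread(int id);      // provided by the asker's library

void _ut_function_stub(void (*f)(void), int id)
{
    (*f)();
    DeleteThread(id);
}

int CreateThread(void (*f)(void), int weight)
{
    // Take the stub's address; the original code called the stub and then
    // tried to cast its (non-existent) void result, hence the error.
    address_t pc = (address_t)&_ut_function_stub;

    // ... more code: build the new thread's stack and arrange for f and id
    // to be the stub's arguments when the context is first switched to ...
    std::printf("stub entry point: %#lx\n", pc);
    (void)weight;
    return 0;
}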

Embedding Lua in C++: Accessing C++ created through Lua, back in C++ (or returning results back from Lua to C++)

The title probably sounds a bit recursive - but this is what I am trying to do:
I have C++ classes Foo and Foobar;
I am using tolua++ to export them to Lua
In Lua:
function wanna_be_starting_something()
    foo = Foo:new()
    fb = Foobar:new()
    -- do something
    foo:setResult(42) -- <- I want to store something back at the C++ end
end
In C++
int main(int argc, char *argv[])
{
    MyResult res;
    LuaEngine *engine = new LuaEngine();
    engine->run("wbs-something.lua");
    // I now want to be able to access the stored result, in variable res
}
So my question is this: how do I pass data from a C++ object that is being manipulated by Lua, back into a C++ program?
To understand how to exchange data back and forth, you should learn about the Lua stack, which is the structure Lua uses to communicate with the host program. I guess tolua++ takes care of this for the classes/methods you exported.
Here is a good start: http://www.lua.org/pil/24.html. It is for Lua 5.0, but there are indications on how to make it work with 5.1 (which I assume is the Lua version you're using).
If you don't want to dig into all the details, you can always resort to creating an ad-hoc C++ method that sets values into a global object. Not the cleanest way, IMHO, but it could work.
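For instance, if the script simply stored its answer in a Lua global (say result = 42) instead of foo:setResult(42), the C++ side could pull it back through the stack with the raw Lua C API after run() finishes. A minimal sketch, using Lua 5.1 spellings and assuming access to the lua_State* that LuaEngine wraps ("result" and fetchResult are made-up names):
#include <lua.hpp>   // C++-friendly Lua header

int fetchResult(lua_State *L)
{
    lua_getglobal(L, "result");                          // push the global
    int value = static_cast<int>(lua_tointeger(L, -1));  // read it (0 if unset)
    lua_pop(L, 1);                                       // restore the stack
    return value;
}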
I don't know tolua++, but both luabind and luabridge support what you need:
* option 1 is just to have the Lua code do return whatever, and you'll get that back in C++. This requires a template-based version of run() which returns a value.
* option 2 is to use the Lua engine to define a function and then use the engine's call method with the function name and parameters. There are several implementations of LuaEngine which support such a call:
LuaEngine *engine = new LuaEngine();
engine->run("function a(v) return v .. 'a'; end");
value = engine->call("a", argument);
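Under the hood, a call method like that boils down to a handful of raw API calls; a minimal sketch of such a wrapper (callLuaFunction is a made-up name, and error handling is kept deliberately crude):
#include <lua.hpp>
#include <stdexcept>
#include <string>

std::string callLuaFunction(lua_State *L, const char *name, const std::string &arg)
{
    lua_getglobal(L, name);              // push the Lua function by name
    lua_pushstring(L, arg.c_str());      // push its single argument
    if (lua_pcall(L, 1, 1, 0) != 0) {    // 1 argument, 1 result, no handler
        std::string err = lua_tostring(L, -1);
        lua_pop(L, 1);
        throw std::runtime_error(err);
    }
    const char *s = lua_tostring(L, -1); // the returned value as a string
    std::string result = s ? s : "";
    lua_pop(L, 1);
    return result;
}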

LLVM automatic C++ linking

In some of the LLVM tutorials I've seen that it's fairly easy to bind C functions into a custom language based on LLVM: LLVM hands the programmer a pointer to the function, which can then be mixed in with the code being generated by LLVM.
What's the best method to do this with C++ libraries? Let's say I have a fairly complex library like Qt or Boost that I want to bind to my custom language. Do I need to create a stub library (like Python or Lua require), or does LLVM offer some sort of foreign function interface (FFI)?
In my LLVM code, I create extern "C" wrapper functions for this, and insert LLVM function declarations into the module in order to call them. Then, a good way to make LLVM know about the functions is not to let it use dlopen and search for the function name in the executing binary (this is a pain in the ass, since the function names need to be in the .dynsym section, and it is slow too), but to do the mapping manually, using ExecutionEngine::addGlobalMapping.
Just get the llvm::Function* of that declaration and the address of the function as given in C++ by &functionname converted to void* and pass these two things along to LLVM. The JIT executing your stuff will then know where to find the function.
For example, if you wanted to wrap QString you could create several functions that create, destroy and call functions of such an object
extern "C" void createQString(void *p, char const*v) {
new (p) QString(v); // placement-new
}
extern "C" int32_t countQString(void *p) {
QString *q = static_cast<QString*>(p);
return q->count();
}
extern "C" void destroyQString(void *p) {
QString *q = static_cast<QString*>(p);
q->~QString();
}
And create proper declarations and a mapping. Then you can call these functions, passing along a memory region suitably aligned and sized for QString (possibly alloca'ed) and an i8* pointing to the C string data for initialization.
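A minimal sketch of that declaration-plus-mapping step for one of the wrappers above, assuming the legacy ExecutionEngine JIT and a typed-pointer LLVM; module and engine stand in for whatever the host program already holds:
// Declare i32 countQString(i8*) in the module and bind it to the C++ wrapper.
LLVMContext &ctx = module->getContext();
FunctionType *countTy = FunctionType::get(Type::getInt32Ty(ctx),
                                          {Type::getInt8PtrTy(ctx)}, false);
Function *countDecl = Function::Create(countTy, Function::ExternalLinkage,
                                       "countQString", module);
engine->addGlobalMapping(countDecl, (void*)&countQString);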
If you compile some code in C++ and some in another language to LLVM bitcode, it should be perfectly possible to link these together and let one call the other... in theory.
In practice, you will need glue code to convert between the different languages' types (e.g. there is no equivalent to a Python string in C++ unless you use CPython, so for void reverse(std::string s) to be callable with a str you need a conversion - worse, the whole object model is very different). And Qt specifically has a lot of magic that may require much more effort to expose after compilation. Also, there may be further potential problems I'm not aware of.
And even if that works, it's potentially very ugly to use. There are still get* and set* functions all over PyQt despite Python's very convenient descriptors - and much effort went into PyQt, they didn't just create some stubs.