Segfault in emitted IR for `printf` call - LLVM

I would like to use the system `printf` to print a single integer from the programming language I'm writing a compiler for, as in `print(3)`. I am running into a segfault when executing the compiled IR.
Following this example, my code is:
#include "llvm/ADT/APInt.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>
#include <vector>
using namespace llvm;
static LLVMContext TheContext;
static IRBuilder<> Builder(TheContext);
int main() {
  static std::unique_ptr<Module> TheModule;
  TheModule = std::make_unique<Module>("inputFile", TheContext);
  std::vector<Type *> Nats(1, Type::getInt32Ty(TheContext));
  FunctionType *PNFT =
      FunctionType::get(Type::getInt32Ty(TheContext), Nats, false);
  Function *PNF = Function::Create(PNFT, Function::ExternalLinkage, "printf",
                                   TheModule.get());
  for (auto &Arg : PNF->args()) {
    Arg.setName("x");
  }
  FunctionType *mainType = FunctionType::get(Builder.getInt32Ty(), false);
  Function *main = Function::Create(mainType, Function::ExternalLinkage, "main",
                                    TheModule.get());
  BasicBlock *entry = BasicBlock::Create(TheContext, "entry", main);
  Builder.SetInsertPoint(entry);
  std::vector<Value *> printArgs;
  printArgs.push_back(ConstantInt::get(TheContext, APInt(32, 20)));
  Builder.CreateCall(TheModule->getFunction("printf"), printArgs);
  Builder.CreateRet(ConstantInt::get(TheContext, APInt(32, 0)));
  TheModule->print(llvm::outs(), nullptr);
}
I compile this with clang++ `llvm-config --cxxflags --ldflags --system-libs --libs all` test.cpp
This outputs the LLVM IR
; ModuleID = 'inputFile'
source_filename = "inputFile"
declare i32 @printf(i32)

define i32 @main() {
entry:
  %0 = call i32 @printf(i32 20)
  ret i32 0
}
which I put into a file `test.ll` and compile with `clang test.ll`. I threw the segfaulting binary into lldb and found that it segfaults inside printf's format-string scanning (`__strchrnul_avx2`):
(lldb) bt
* thread #1, name = 'a.out', stop reason = signal SIGSEGV: invalid address (fault address: 0x14)
* frame #0: 0x00007ffff7f300fc libc.so.6`__strchrnul_avx2 + 28
frame #1: 0x00007ffff7e38a53 libc.so.6`__vfprintf_internal + 163
frame #2: 0x00007ffff7e25a2f libc.so.6`_IO_printf + 175
frame #3: 0x000055555555514b a.out`main + 11
frame #4: 0x00007ffff7df5002 libc.so.6`__libc_start_main + 242
frame #5: 0x000055555555506e a.out`_start + 46
I don't think the problem is with compiling the IR, because in my actual code (that is, not the MWE I've shown above) I'm emitting object code directly using a pass (as suggested in part eight of the Kaleidoscope tutorial) and still experiencing the same issue. What am I doing wrong here?
Thank you.

Thanks to @arnt's comment, I found this resource, which led me to modify my program to the following:
#include "llvm/ADT/APInt.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>
#include <vector>
using namespace llvm;
static LLVMContext TheContext;
static IRBuilder<> Builder(TheContext);
int main() {
  static std::unique_ptr<Module> TheModule;
  TheModule = std::make_unique<Module>("inputFile", TheContext);
  std::vector<llvm::Type *> args;
  args.push_back(llvm::Type::getInt8PtrTy(TheContext));
  llvm::FunctionType *printfType =
      llvm::FunctionType::get(Builder.getInt32Ty(), args, true);
  llvm::Constant *printfFunc = Function::Create(
      printfType, Function::ExternalLinkage, "printf", TheModule.get());
  /* begin codegen for `main` */
  FunctionType *mainType = FunctionType::get(Builder.getInt32Ty(), false);
  Function *main = Function::Create(mainType, Function::ExternalLinkage, "main",
                                    TheModule.get());
  BasicBlock *entry = BasicBlock::Create(TheContext, "entry", main);
  Builder.SetInsertPoint(entry);
  std::vector<Value *> printArgs;
  llvm::Value *formatStr = Builder.CreateGlobalStringPtr("%d\n");
  printArgs.push_back(formatStr);
  printArgs.push_back(ConstantInt::get(TheContext, APInt(32, 20)));
  Builder.CreateCall(TheModule->getFunction("printf"), printArgs);
  Builder.CreateRet(ConstantInt::get(TheContext, APInt(32, 0)));
  TheModule->print(llvm::outs(), nullptr);
}
This emits the following IR:
; ModuleID = 'inputFile'
source_filename = "inputFile"
@0 = private unnamed_addr constant [4 x i8] c"%d\0A\00", align 1

declare i32 @printf(i8*, ...)

define i32 @main() {
entry:
  %0 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([4 x i8], [4 x i8]* @0, i32 0, i32 0), i32 20)
  ret i32 0
}
which works as expected.

Related

How to get a function pointer (in order to get function address at runtime) in llvm

In C or C++, I can get the address of function func_1 at run time with the following code.
#include <stdio.h>
#include <string.h>

void func_1()
{
    printf("this is func_1\n");
}

int main()
{
    printf("this is main\n");
    func_1();
    void *addr = (void*)func_1; // get the addr of func_1
    return 0;
}
When I convert this code to IR, I can see that the corresponding statement to get the address of the function is:
store i8* bitcast (void ()* @func_1 to i8*), i8** %addr, align 8
%2 = load i8*, i8** %addr, align 8
%call1 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([18 x i8], [18 x i8]* @.str.3, i64 0, i64 0), i8* %2)
Now I want to write an LLVM pass that adds one such instruction sequence for each function in the target program to get its address. How do I achieve this? Probably a StoreInst and a LoadInst, but how do I set the operands? In particular, how do I build `void ()* @func_1 to i8*`?

<badref> when using CallInst::CreateMalloc for a struct type

I am trying to generate LLVM IR for code (in a toy language) similar to the following C++ code:
struct test {
  int a;
  int b;
  int c;
};

int main() {
  test *a;
  a = new test();
}
Unfortunately, when I run verifyModule, I receive the following error messages:
Instruction referencing instruction not embedded in a basic block!
  %malloccall = tail call i8* @malloc(i64 mul nuw (i64 ptrtoint (i32* getelementptr (i32, i32* null, i32 1) to i64), i64 3))
  <badref> = bitcast i8* %malloccall to %test*
The following MWE reproduces the issue:
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"
#include <map>
#include <vector>
using namespace llvm;
std::map<std::string, StructType *> allocatedClasses;
std::map<std::string, std::vector<Type *>> classSizes;
static LLVMContext TheContext;
static IRBuilder<> Builder(TheContext);
static std::map<std::string, AllocaInst *> NamedValues;
int main() {
  static std::unique_ptr<Module> TheModule;
  TheModule = std::make_unique<Module>("inputFile", TheContext);
  allocatedClasses["test"] = StructType::create(TheContext, "test");
  classSizes["test"] = std::vector<Type *>(3, Type::getInt32Ty(TheContext));
  allocatedClasses["test"]->setBody(classSizes["test"]);
  FunctionType *mainType = FunctionType::get(Builder.getInt32Ty(), false);
  Function *main = Function::Create(mainType, Function::ExternalLinkage, "main",
                                    TheModule.get());
  BasicBlock *entry = BasicBlock::Create(TheContext, "entry", main);
  Builder.SetInsertPoint(entry);
  std::string tV = "tempVar";
  NamedValues[tV] = Builder.CreateAlloca(
      PointerType::get(allocatedClasses["test"], 0), nullptr, tV);
  auto typeSize = ConstantExpr::getSizeOf(allocatedClasses["test"]);
  typeSize =
      ConstantExpr::getTruncOrBitCast(typeSize, Type::getInt64Ty(TheContext));
  CallInst::CreateMalloc(Builder.GetInsertBlock(), Type::getInt64Ty(TheContext),
                         allocatedClasses["test"], typeSize, nullptr, nullptr,
                         "");
  Builder.CreateRet(ConstantInt::get(TheContext, APInt(32, 0)));
  TheModule->print(outs(), nullptr);
  Module *test = TheModule.get();
  verifyModule(*test, &errs());
}
I compile this with
clang++ `llvm-config --cxxflags --ldflags --system-libs --libs all` -g ex.cpp, using clang version 10.0.0-4ubuntu1, on x86. When executed, the program outputs:
; ModuleID = 'inputFile'
source_filename = "inputFile"
%test = type { i32, i32, i32 }
define i32 @main() {
entry:
  %tempVar = alloca %test*
  %malloccall = tail call i8* @malloc(i64 mul nuw (i64 ptrtoint (i32* getelementptr (i32, i32* null, i32 1) to i64), i64 3))
  ret i32 0
}

declare noalias i8* @malloc(i64)
and the error message from above.
What am I doing wrong here?
The documentation for the function you call says "Note: This function does not add the bitcast to the basic block, that is the responsibility of the caller."
I have no idea why not, but the caller is you.

How to check if a target of an LLVM AllocaInst is a function pointer

%pointer = alloca void (i32)*, align 8
How do I check whether %pointer is a function pointer? Can I get the parameter list of the pointed-to function?
Let's create a function that checks whether an alloca instruction's type is a function pointer:
bool isFunctionPointerType(Type *type) {
  // Peel off pointer types recursively.
  if (PointerType *pointerType = dyn_cast<PointerType>(type)) {
    return isFunctionPointerType(pointerType->getElementType());
  }
  // Exit condition: we reached a function type.
  else if (type->isFunctionTy()) {
    return true;
  }
  return false;
}
In your runOnModule/runOnFunction pass:
if (AllocaInst *allocaInst = dyn_cast<AllocaInst>(inst)) {
  if (isFunctionPointerType(allocaInst->getType())) {
    errs() << "Function Pointer Type\n";
  }
}
The above pass was tested on the following source.c code:
#include <stdio.h>
void fun(int a)
{
printf("Value of a is %d\n", a);
}
int main()
{
void (*fun_ptr)(int) = &fun;
(*fun_ptr)(10);
return 0;
}
Corresponding LLVM bitcode without any optimization:
entry:
  %retval = alloca i32, align 4
  %fun_ptr = alloca void (i32)*, align 8
  store i32 0, i32* %retval, align 4
  call void @llvm.dbg.declare(metadata void (i32)** %fun_ptr, metadata !11, metadata !15), !dbg !16
  store void (i32)* @_Z3funi, void (i32)** %fun_ptr, align 8, !dbg !16
  %0 = load void (i32)*, void (i32)** %fun_ptr, align 8, !dbg !17
  call void %0(i32 10), !dbg !18
  ret i32 0, !dbg !19
This successfully detects fun_ptr as a function pointer.
Note that the code uses recursion to peel off nested pointer types.
Another way is to track the uses of fun_ptr via LLVM's def-use chain, i.e. by finding the StoreInst and checking whether the stored operand is a pointer to a function; I haven't tried that yet.
Hope this helps...

Determining and setting host target triple and instruction extensions in LLVM-C API

The following lengthy C program generates a simple LLVM module containing a function that merely calls llvm.x86.sse41.round.ps. It emits the bitcode file and then runs the code generated by LLVM. My question: how do I determine the target triple and instruction extensions like SSE or AVX of the host machine, and how do I add this information to the LLVM module, or otherwise tell it to the LLVM execution engine? Here is what I do:
$ cat ctest/avx-instruction-selection.c
#include <llvm-c/Core.h>
#include <llvm-c/Target.h>
#include <llvm-c/ExecutionEngine.h>
#include <llvm-c/BitWriter.h>
#include <llvm-c/Transforms/Scalar.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#if 1
const int vectorSize = 4;
const char* roundName = "llvm.x86.sse41.round.ps";
#else
const int vectorSize = 8;
const char* roundName = "llvm.x86.avx.round.ps.256";
#endif
int main ()
{
  LLVMModuleRef module;
  LLVMExecutionEngineRef execEngine;
  LLVMTargetDataRef targetData;
  LLVMTypeRef floatType, vectorType, ptrType, voidType, funcType, roundType, int32Type;
  LLVMValueRef func, roundFunc;
  LLVMValueRef param, loaded, const1, callRound;
  LLVMBuilderRef builder;
  LLVMBasicBlockRef block;
  const int false = 0;

  LLVMInitializeX86TargetInfo();
  LLVMInitializeX86Target();
  LLVMInitializeX86TargetMC();

  module = LLVMModuleCreateWithName("_module");
  LLVMSetTarget(module, "x86_64-unknown-linux-gnu");

  floatType = LLVMFloatType();
  vectorType = LLVMVectorType(floatType, vectorSize);
  ptrType = LLVMPointerType(vectorType, 0);
  voidType = LLVMVoidType();

  LLVMTypeRef roundParams[] = { ptrType };
  roundType = LLVMFunctionType(voidType, roundParams, 1, false);
  func = LLVMAddFunction(module, "round", roundType);
  LLVMSetLinkage(func, LLVMExternalLinkage);

  builder = LLVMCreateBuilder();
  block = LLVMAppendBasicBlock(func, "_L1");
  LLVMPositionBuilderAtEnd(builder, block);
  param = LLVMGetParam(func, 0);
  loaded = LLVMBuildLoad(builder, param, "");

  int32Type = LLVMIntType(32);
  LLVMTypeRef funcParams[] = { vectorType, int32Type };
  funcType = LLVMFunctionType(vectorType, funcParams, 2, false);
  roundFunc = LLVMAddFunction(module, roundName, funcType);
  LLVMSetLinkage(roundFunc, LLVMExternalLinkage);

  const1 = LLVMConstInt(int32Type, 1, false);
  LLVMValueRef callParams[] = { loaded, const1 };
  callRound = LLVMBuildCall(builder, roundFunc, callParams, 2, "");
  LLVMSetInstructionCallConv(callRound, 0);
  LLVMAddInstrAttribute(callRound, 0, 0);

  LLVMBuildStore(builder, callRound, param);
  LLVMBuildRetVoid(builder);

  LLVMWriteBitcodeToFile(module, "round-avx.bc");

  char *errorMsg;
  LLVMCreateExecutionEngineForModule(&execEngine, module, &errorMsg);
  targetData = LLVMGetExecutionEngineTargetData(execEngine);
  size_t vectorSize0 = LLVMStoreSizeOfType(targetData, vectorType);
  size_t vectorAlign = LLVMABIAlignmentOfType(targetData, vectorType);

  float vector[vectorSize];
  printf("%lx, size %lx, align %lx\n", (size_t)vector, vectorSize0, vectorAlign);
  LLVMGenericValueRef genericVector = LLVMCreateGenericValueOfPointer(vector);
  LLVMGenericValueRef runParams[] = { genericVector };
  LLVMRunFunction(execEngine, func, 1, runParams);
  return 0;
}
$ gcc -Wall -o ctest/avx-instruction-selection ctest/avx-instruction-selection.c `/usr/lib/llvm-3.4/bin/llvm-config --cflags --ldflags` -lLLVM-3.4
$ ctest/avx-instruction-selection
7fff590431c0, size 10, align 10
$ ls round-avx.bc
round-avx.bc
$ llvm-dis -o - round-avx.bc
; ModuleID = 'round-avx.bc'
target triple = "x86_64-unknown-linux-gnu"
define void @round(<4 x float>*) {
_L1:
  %1 = load <4 x float>* %0
  %2 = call <4 x float> @llvm.x86.sse41.round.ps(<4 x float> %1, i32 1)
  store <4 x float> %2, <4 x float>* %0
  ret void
}

; Function Attrs: nounwind readnone
declare <4 x float> @llvm.x86.sse41.round.ps(<4 x float>, i32) #0

attributes #0 = { nounwind readnone }
$ gcc -Wall -o ctest/avx-instruction-selection ctest/avx-instruction-selection.c `/usr/lib/llvm-3.5/bin/llvm-config --cflags --ldflags` -lLLVM-3.5
$ ctest/avx-instruction-selection
7ffed6170350, size 10, align 10
LLVM ERROR: Cannot select: intrinsic %llvm.x86.sse41.round.ps
$ gcc -Wall -o ctest/avx-instruction-selection ctest/avx-instruction-selection.c `/usr/lib/llvm-3.6/bin/llvm-config --cflags --ldflags` -lLLVM-3.6
$ ctest/avx-instruction-selection
7ffeae91eb40, size 10, align 10
LLVM ERROR: Target does not support MC emission!
$ gcc -Wall -o ctest/avx-instruction-selection ctest/avx-instruction-selection.c `/usr/lib/llvm-3.7/bin/llvm-config --cflags --ldflags` -lLLVM-3.7
$ ctest/avx-instruction-selection
7fffb6464ea0, size 10, align 10
LLVM ERROR: Target does not support MC emission!
$ gcc -Wall -o ctest/avx-instruction-selection ctest/avx-instruction-selection.c `/usr/lib/llvm-3.8/bin/llvm-config --cflags --ldflags` -lLLVM-3.8
$ ctest/avx-instruction-selection
7ffd5e233000, size 10, align 10
LLVM ERROR: Target does not support MC emission!
Summarized: With LLVM-3.4 the example works, with LLVM-3.5 the intrinsic function round.ps cannot be found and LLVM-3.6 and later say something about MC emissions that I do not understand.
As I understand, LLVM-3.5 does not find the round.ps intrinsic and I guess that it cannot find it because I have not told it about the existing SSE extension. When running llc I can add the option -mattr=sse4.1 but how can I tell it to the execution engine?
Second question: How can I find out about the available instruction extensions like SSE of the host machine via the LLVM-C API? On x86 I can call the CPUID instruction but is there a way that works uniformly on all platforms and can LLVM assist detection of extensions?
Third question: I have hard-coded the target triple into the C code. How can I find out the host target-triple via the LLVM-C API?
Last question: What about this MC emission error?
After a lot of experimentation, I think the answer is as follows:
Replace the lines
LLVMInitializeX86TargetInfo();
LLVMInitializeX86Target();
LLVMInitializeX86TargetMC();
by
LLVMInitializeNativeTarget();
LLVMInitializeNativeAsmPrinter();
LLVMInitializeNativeAsmParser();
Replace the call to LLVMCreateExecutionEngineForModule with a call to the custom function LLVMCreateExecutionEngineForModuleCPU. It is the original implementation of LLVMCreateExecutionEngineForModule plus a call to setMCPU:
#define LLVM_VERSION (LLVM_VERSION_MAJOR * 100 + LLVM_VERSION_MINOR)

LLVMBool LLVMCreateExecutionEngineForModuleCPU(LLVMExecutionEngineRef *OutEE,
                                               LLVMModuleRef M,
                                               char **OutError) {
  std::string Error;
#if LLVM_VERSION < 306
  EngineBuilder builder(unwrap(M));
#else
  EngineBuilder builder(std::unique_ptr<Module>(unwrap(M)));
#endif
  builder.setEngineKind(EngineKind::Either)
      .setMCPU(sys::getHostCPUName().data())
      .setErrorStr(&Error);
  if (ExecutionEngine *EE = builder.create()) {
    *OutEE = wrap(EE);
    return 0;
  }
  *OutError = strdup(Error.c_str());
  return 1;
}
I should also add
float vector[vectorSize] __attribute__((aligned(32)));
in order to align the array for AVX vectors.
According to an answer in the thread "crash JIT with AVX intrinsics", LLVMRunFunction is restricted to main-like prototypes (apparently only in MCJIT). Thus we should also replace the LLVMRunFunction call with:
void (*funcPtr)(float *);
funcPtr = (void (*)(float *))LLVMGetPointerToGlobal(execEngine, func);
funcPtr(vector);

LLVM JIT: pass C++ exception through JIT code back to host application

I'm working on a project where I use clang to generate some LLVM IR and then JIT-compile and run it from within my host application. The JIT code calls some functions in the host application which may throw an exception. I expect the exception to propagate through the JIT code and be caught back in the host application. AFAIK this is supposed to work with LLVM, but unfortunately my test application always crashes with "terminate called after throwing an instance of 'int'". Let me give a simple example.
I use clang 3.5 to compile the following simple program into LLVM IR:
extern void test();

extern "C" void exec(void *) {
  test();
}
with
./clang -O0 -S -emit-llvm test.cpp -c
The result is test.ll
; ModuleID = 'test.cpp'
target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-unknown-linux-gnu"
; Function Attrs: uwtable
define void @exec(i8*) #0 {
  %2 = alloca i8*, align 8
  store i8* %0, i8** %2, align 8
  call void @_Z4testv()
  ret void
}

declare void @_Z4testv() #1
attributes #0 = { uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" }
attributes #1 = { "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" }
!llvm.ident = !{!0}
!0 = metadata !{metadata !"clang version 3.5.0 (224841)"}
My host application looks like this:
static void test() {
  throw 1;
}

int main(int, const char **) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();
  llvm::InitializeNativeTargetAsmParser();
  llvm::LLVMContext &Context = llvm::getGlobalContext();
  llvm::SMDiagnostic Err;
  llvm::Module *Mod = llvm::ParseIRFile("test.ll", Err, Context);
  llvm::ExecutionEngine *m_EE = llvm::EngineBuilder(Mod)
                                    .setEngineKind(llvm::EngineKind::JIT)
                                    .create();
  llvm::Function *f = Mod->getFunction("_Z4testv");
  m_EE->addGlobalMapping(f, reinterpret_cast<void *>(test));
  f = Mod->getFunction("exec");
  void *poi = m_EE->getPointerToFunction(f);
  void (*exec)(void *) = reinterpret_cast<void (*)(void *)>(poi);
  try {
    exec(NULL);
  } catch (...) {
    std::cout << "caught exception" << std::endl;
  }
  return 0;
}
I use LLVM 3.5 which I compiled with cmake. I set LLVM_ENABLE_EH=ON and LLVM_ENABLE_RTTI=ON. Did I miss something when compiling LLVM or is my host application code wrong?
Thanks!
Finally it works; here are the things that were necessary to fix the issue.
First, it's important to make sure MCJIT.h has been included, otherwise MCJIT is not linked in. Unfortunately LLVM silently falls back to the old JIT implementation if MCJIT.h has not been included, even though MCJIT has been explicitly requested by:
llvm::EngineBuilder factory(Mod);
factory.setEngineKind(llvm::EngineKind::JIT);
factory.setUseMCJIT(true);
Only MCJIT supports proper exception handling.
In the example in the question I used ExecutionEngine::addGlobalMapping(), which does not work with MCJIT. External functions must be registered via llvm::sys::DynamicLibrary::AddSymbol().
The complete example follows:
static void test() {
  throw 1;
}

int main(int, const char **) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();
  llvm::InitializeNativeTargetAsmParser();
  llvm::LLVMContext &Context = llvm::getGlobalContext();
  llvm::SMDiagnostic Err;
  llvm::Module *Mod = llvm::ParseIRFile("test.ll", Err, Context);
  std::unique_ptr<llvm::RTDyldMemoryManager> MemMgr(new llvm::SectionMemoryManager());
  // Build engine with MCJIT
  std::string err;
  llvm::EngineBuilder factory(Mod);
  factory.setErrorStr(&err);
  factory.setEngineKind(llvm::EngineKind::JIT);
  factory.setUseMCJIT(true);
  factory.setMCJITMemoryManager(MemMgr.release());
  llvm::ExecutionEngine *m_EE = factory.create();
  llvm::sys::DynamicLibrary::AddSymbol("_Z4testv", reinterpret_cast<void *>(test));
  llvm::Function *f = Mod->getFunction("exec");
  m_EE->finalizeObject();
  void *poi = m_EE->getPointerToFunction(f);
  void (*exec)(void *) = reinterpret_cast<void (*)(void *)>(poi);
  try {
    exec(NULL);
  } catch (int e) {
    std::cout << "caught " << e << std::endl;
  }
  return 0;
}
Additionally, you can now also get debug symbols for the JIT code by adding:
Opts.JITEmitDebugInfo = true;