I'm writing a toy compiler and want my language to support virtual methods, but I have no idea how to implement them. They don't seem as straightforward as other statements, which can be turned into IR without a second thought; the v-table in my mind exists only as the boxes and arrows of high-level illustrations. That may be enough for using an OOP language, but it doesn't seem to be enough for writing one.
I tried writing some C++ code and turning it into IR, but sadly I still can't understand the output. I checked the source code of Clang and couldn't even figure out where this part sits... (well, I found the code, it seems to be located at lib/CodeGen/CGClass.cpp, but Clang is a complicated project and I still cannot understand how it implements the v-table).
So, any idea how to do this, or is there some LLVM API to help me implement it?
A vtable is an array of function pointers. In a single-inheritance context, you'd have one such array per class where the elements of the array are the class's virtual methods. Each object would then contain a pointer to its class's vtable and each virtual method call would simply invoke the corresponding pointer in the vtable (after casting it to the needed type).
So let's say you're compiling a program that looks like this:
class A {
int x,y;
virtual int foo() { return x+y; }
virtual int bar() { return x*y; }
}
class B inherits A {
int z;
override int bar() { return x*y+z; }
}
int f(A a) {
return a.foo() + a.bar();
}
Then you could define functions named A_foo, A_bar and B_bar taking an A or B pointer and containing the code for A.foo, A.bar and B.bar respectively (the exact naming would depend on your name mangling scheme of course). Then you'd generate two globals A_vtable and B_vtable that'd look like this:
@A_vtable = global [2 x void (...)*] [
void (...)* bitcast (i32 (%struct.A*)* @A_foo to void (...)*),
void (...)* bitcast (i32 (%struct.A*)* @A_bar to void (...)*)
]
@B_vtable = global [2 x void (...)*] [
void (...)* bitcast (i32 (%struct.A*)* @A_foo to void (...)*),
void (...)* bitcast (i32 (%struct.B*)* @B_bar to void (...)*)
]
Which would correspond to this C code (which is hopefully more readable):
typedef void (*fpointer_t)();
fpointer_t A_vtable[] = {(fpointer_t) A_foo, (fpointer_t) A_bar};
fpointer_t B_vtable[] = {(fpointer_t) A_foo, (fpointer_t) B_bar};
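One thing these snippets leave implicit is how each object gets its vtable pointer in the first place. The layout assumed here puts the vtable pointer in the first field of the object, and it is the (generated) constructors' job to store it there before any virtual call. Continuing the illustrative C above (A_construct and B_construct are made-up names for whatever your compiler would emit):
struct A { fpointer_t *vtable; int x, y; };
struct B { fpointer_t *vtable; int x, y, z; };   /* same prefix as A, plus z */

void A_construct(struct A *self) { self->vtable = A_vtable; }
void B_construct(struct B *self) {
    A_construct((struct A *) self);   /* run the base-class setup first */
    self->vtable = B_vtable;          /* then install B's vtable */
}
In IR you would emit the same stores as part of each constructor, right after the object is allocated.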
f could then be translated like this:
define i32 @f(%struct.A*) {
%2 = getelementptr inbounds %struct.A, %struct.A* %0, i64 0, i32 0   ; pointer to the vtable field
%3 = bitcast %struct.A* %0 to i32 (%struct.A*)***
%4 = load i32 (%struct.A*)**, i32 (%struct.A*)*** %3                 ; load the vtable pointer
%5 = load i32 (%struct.A*)*, i32 (%struct.A*)** %4                   ; slot 0: foo
%6 = call i32 %5(%struct.A* %0)                                      ; a.foo()
%7 = load void (...)**, void (...)*** %2                             ; the vtable pointer again, via %2
%8 = getelementptr inbounds void (...)*, void (...)** %7, i64 1      ; slot 1: bar
%9 = bitcast void (...)** %8 to i32 (%struct.A*)**
%10 = load i32 (%struct.A*)*, i32 (%struct.A*)** %9
%11 = call i32 %10(%struct.A* %0)                                    ; a.bar()
%12 = add nsw i32 %11, %6
ret i32 %12
}
Or in C:
typedef int (*A_int_method_t)(struct A*);
int f(struct A* a) {
return ((A_int_method_t) a->vtable[0])(a) + ((A_int_method_t) a->vtable[1])(a);
}
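And to see the dispatch end to end, here is a hypothetical driver in the same illustrative C. It leans on the structs and A_construct/B_construct sketched above and on f; the cast is formally type punning, but it mirrors what the generated code does:
int main(void) {
    struct B b;
    B_construct(&b);             /* installs B_vtable */
    b.x = 2; b.y = 3; b.z = 4;
    /* f only knows about A, yet slot 1 dispatches to B_bar through the vtable */
    return f((struct A *) &b);   /* A_foo: 2+3 = 5, B_bar: 2*3+4 = 10, total 15 */
}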
I'm writing an LLVM IR generator for a pseudo-code language. This language should allow a function name to be redefined with different parameters.
Here is a case where I have two functions, both named "f", but with different parameters.
function f(int i, float r) returns int { return i; }
function f(float r, float r2) returns int {return i; }
I thought LLVM could tell them apart, but I get
error: invalid redefinition of function
And the code I generated is:
define i32 @f(i32 %i, float %r) {
%var.i.0 = alloca i32
store i32 %i, i32* %var.i.0
%var.r.1 = alloca float
store float %r, float* %var.r.1
%int.2 = load i32* %var.i.0
ret i32 %int.2
; -- 0 :: i32
%int.3 = add i32 0, 0
ret i32 %int.3
}
define i32 @f(float %r, float %r2) {
%var.r.2 = alloca float
store float %r, float* %var.r.2
%var.r2.3 = alloca float
store float %r2, float* %var.r2.3
%var.i.4 = alloca i32
%float.3 = load float* %var.r.2
%int.7 = fptosi float %float.3 to i32
store i32 %int.7, i32* %var.i.4
%int.8 = load i32* %var.i.4
ret i32 %int.8
; -- 0 :: i32
%int.9 = add i32 0, 0
ret i32 %int.9
}
So I take it LLVM does not allow function overloading? Would it then be a good idea to generate a sequential counter and distinguish all these functions by adding the counter as a suffix, i.e. define i32 @f.1() and define i32 @f.2()?
You're correct that LLVM IR doesn't have function overloading.
Using a sequential counter is probably not a good idea depending on how code in your language is organized. If you're just assigning incrementing integers, those may not be deterministic across the compilation of different files. For example, in C++, you might imagine something like
// library.cpp
int f(int i, float r) { ... }
int f(float r, float r2) { ... }
// user.cpp
extern int f(float r, float r2);
int foo() { return f(1.0, 2.0); }
When compiling user.cpp, there would be no way for the compiler to know that the f being referenced will actually be named f.2.
The typical way to implement function overloading is to use name mangling, somehow encoding the type signature of the function into its name so that it'll be unique in the presence of overloads.
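For instance, a toy scheme (made up here for illustration; real compilers use something like the Itanium C++ ABI) could simply append an encoding of each parameter type to the name, which gives the two fs above distinct and, importantly, deterministic symbols:
#include <string>
#include <vector>

enum class Ty { Int, Float };

// mangle("f", {Int, Float}) -> "_1fif", mangle("f", {Float, Float}) -> "_1fff"
std::string mangle(const std::string &name, const std::vector<Ty> &params) {
    std::string out = "_" + std::to_string(name.size()) + name;
    for (Ty t : params)
        out += (t == Ty::Int) ? 'i' : 'f';
    return out;
}
Mangled this way, any translation unit that sees the same signature computes the same symbol, so the problem in the library.cpp/user.cpp example goes away.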
My generator is written in Java; every time I parse a function definition, I increment the counter if the function name already exists in the scope table.
My table is a Map keyed by function name, with a list of function definitions as the value:
Map<String, ArrayList<FunctionSymbol>> table = new HashMap<>();
and then the constructor will look like:
static int counter = 0;
public FunctionSymbol(String functionName, Type retType, List<Variable> paramList){
this.functionName = functionName+counter;
this.paramList = paramList;
this.retType = retType;
counter++;
}
My LLVM IR looks something like this:
call void bitcast (void (%struct.type1*, %opencl.image2d_t addrspace(1)*, i32, %struct.type1*)* @_Z36functype1 to void (%struct.type2*, %opencl.image2d_t addrspace(1)*, i32, %struct.type1*)*)(%struct.type2* sret %19, %opencl.image2d_t addrspace(1)* %237, i32 %238, %struct.type1* byval %sic_payload)
I want to check whether a call is an actual (direct) function call or one that goes through a bitcast like this. Does anyone know how to do this?
I tried:
const CallInst *pInstCall = dyn_cast<CallInst>(&*it);
if (!pInstCall) continue;
dyn_cast<BitCastInst >(pInstCall->getCalledFunction());
But that doesn't seem to work.
You're looking for
if (auto *CstExpr = dyn_cast<ConstantExpr>(pInstCall->getCalledValue())) {
  // BitCastInst is an *Instruction*; here the bitcast is a *ConstantExpr*.
  // (getCalledValue() is spelled getCalledOperand() in newer LLVM.)
  if (CstExpr->isCast()) {
    // do something...
  }
}
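As a follow-up, once you know the call goes through a constant bitcast, the underlying callee can usually be recovered by stripping the casts. A sketch reusing pInstCall from the question (and assuming the typed-pointer IR shown above):
if (const Function *Callee =
        dyn_cast<Function>(pInstCall->getCalledValue()->stripPointerCasts())) {
  // Callee is the real function (here @_Z36functype1), even though
  // getCalledFunction() returns null for this bitcast call.
}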
I'm trying to build a compiler for my language at the moment. In my language, I want objects/structs to be used through implicit pointers, just like in Java. The program below tests this feature, but it does not run as I expected. I don't expect you to read through my entire compiler code, because that would be a waste of time; instead, I'll explain what I intended the program to do, and hopefully you can spot what went wrong in the LLVM IR. That way, I can adjust the compiler to generate proper LLVM IR.
Flow:
[Function] Main - [Return: Int] {
-> Allocates space for structure of one i32
-> Calls createObj function and stores the returning value inside previous allocated space
-> Returns the i32 of the structure
}
[Function] createObj - [Return: struct { i32 }] {
-> Allocates space for structure of one i32
-> Calls Object function on this space (pointer really)
-> Returns this space (pointer really)
}
[Function] Object - [Return: void] {
-> Stores the i32 value of 5 inside of the struct pointer argument
}
The problem is that main keeps returning some random number instead of 5, for example 159383856. I'm guessing this is the decimal representation of a pointer address, but I'm not sure why a pointer address is being returned.
; ModuleID = 'main'
%Object = type { i32 }
define i32 @main() {
entry:
%0 = call %Object* @createObj()
%o = alloca %Object*
store %Object* %0, %Object** %o
%1 = load %Object** %o
%2 = getelementptr inbounds %Object* %1, i32 0, i32 0
%3 = load i32* %2
ret i32 %3
}
define %Object* @createObj() {
entry:
%0 = alloca %Object
call void @-Object(%Object* %0)
%o = alloca %Object*
store %Object* %0, %Object** %o
%1 = load %Object** %o
ret %Object* %1
}
define void @-Object(%Object* %this) {
entry:
%0 = getelementptr inbounds %Object* %this, i32 0, i32 0
store i32 5, i32* %0
ret void
}
This LLVM IR is generated from this syntax:
func () > main > (int) {
Object o = createObj();
return o.id;
}
// Create an object and returns it
func () > createObj > (Object) {
Object o = make Object < ();
return o;
}
// Object decl
tmpl Object {
int id; // Property
// This is run every time an object is created.
constructor < () {
this.id = 5;
}
}
It seems like in createObj you're returning a pointer to a stack variable which will no longer be valid after function return.
If you're doing implicit object pointers like Java, at minimum you're going to need a heap allocation such as malloc, which I don't think you have.
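A minimal sketch of that fix, assuming you emit IR with IRBuilder on a reasonably recent LLVM; emitHeapObject, objTy and ctor are made-up names for illustration. The idea is to allocate the object with malloc instead of alloca so it survives the return from createObj:
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"

using namespace llvm;

Value *emitHeapObject(IRBuilder<> &B, Module &M, StructType *objTy, Function *ctor) {
  LLVMContext &C = M.getContext();
  // declare i8* @malloc(i64) on demand
  FunctionCallee mallocFn = M.getOrInsertFunction(
      "malloc", PointerType::getUnqual(Type::getInt8Ty(C)), Type::getInt64Ty(C));
  Value *size = ConstantInt::get(Type::getInt64Ty(C),
                                 M.getDataLayout().getTypeAllocSize(objTy));
  Value *raw = B.CreateCall(mallocFn, size, "raw");
  Value *obj = B.CreateBitCast(raw, objTy->getPointerTo(), "obj");
  B.CreateCall(ctor, {obj});   // run the constructor on the heap object
  return obj;                  // safe to return: the storage outlives createObj
}
Freeing the object later (or wiring up a garbage collector) is then a separate question your language will have to answer.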
I have the following C++ code:
class Date {
public:
Date(int, int, int);
private:
int year; int month; int day;
};
extern "C" int main(int argc, char *argv[])
{
Date today(1,9,2014);
//....
return 0;
}
Date::Date(int d, int m, int y) { day = d; month = m; year = y; }
The corresponding LLVM IR is:
@_ZN4DateC1Eiii = alias void (%class.Date*, i32, i32, i32)* @_ZN4DateC2Eiii
define i32 @main(i32 %argc, i8** %argv) {
entry:
%retval = alloca i32, align 4
%argc.addr = alloca i32, align 4
%argv.addr = alloca i8**, align 4
%today = alloca %class.Date, align 4
store i32 0, i32* %retval
store i32 %argc, i32* %argc.addr, align 4
call void @llvm.dbg.declare(metadata !{i32* %argc.addr}, metadata !922), !dbg !923
store i8** %argv, i8*** %argv.addr, align 4
call void @llvm.dbg.declare(metadata !{i8*** %argv.addr}, metadata !924), !dbg !923
call void @llvm.dbg.declare(metadata !{%class.Date* %today}, metadata !925), !dbg !927
call void @_ZN4DateC1Eiii(%class.Date* %today, i32 1, i32 9, i32 1999), !dbg !927
; ...
ret i32 0, !dbg !930
}
; ...
define void @_ZN4DateC2Eiii(%class.Date* %this, i32 %d, i32 %m, i32 %y) unnamed_addr nounwind align 2 {
entry:
; ...
}
I'm parsing this code and I need to extract the class name from this statement: %today = alloca %class.Date, align 4
Is there any way to get just class.Date back?
I also need to know how to get to the @_ZN4DateC2Eiii function, starting from the call:
call void @_ZN4DateC1Eiii(%class.Date* %today, i32 1, i32 9, i32 1999), !dbg !927
Clang will utilize the class name for naming the LLVM type, as you can see in your example (it used %class.Date as the type name). However, the only reliable way to obtain the name of the type is to query the debug information. To do that:
Identify the alloca you care about.
Iterate the function until you find a call to llvm.dbg.declare where the first argument is a metadata node wrapping the value from (1).
You can use isa<DbgDeclareInst> for that.
Create a new DIVariable instance, passing the metadata node from (2) as the constructor argument.
You can retrieve the object's type - of DIType class - by calling getType on the object from (3). You can use getName on an object of type DIType to get the type name.
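Putting those steps together, a rough sketch (it assumes a reasonably recent LLVM where the debug metadata is exposed as DILocalVariable/DIType; on older releases you would wrap the MDNode in a DIVariable instead, as described above):
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

StringRef classNameOfAlloca(Function &F, const AllocaInst *AI) {
  for (BasicBlock &BB : F)
    for (Instruction &I : BB)
      if (auto *DDI = dyn_cast<DbgDeclareInst>(&I))   // call to llvm.dbg.declare
        if (DDI->getAddress() == AI)                  // it describes our alloca
          if (DILocalVariable *Var = DDI->getVariable())
            if (DIType *Ty = Var->getType())
              return Ty->getName();                   // e.g. "Date"
  return StringRef();
}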
I'm writing a compiler using LLVM as a backend, I've written the front-end (parser, etc.) and now I've come to a crossroads.
I have a structure (%Primitive) which contains a single field, an i8* value, a pointer to a character array.
%Primitive = type { i8* }
In the compiler, instances of Primitive are passed around on the stack. I'm trying to write this character array to standard output using the puts function, but it isn't working quite like I was hoping.
declare i32 @puts(i8*) ; Declare the libc function 'puts'
define void @WritePrimitive(%Primitive) {
entry:
%1 = extractvalue %Primitive %0, 0 ; Extract the character array from the primitive.
%2 = call i32 @puts(i8* %1) ; Write it
ret void
}
When I try to run the code (either using an ExecutionEngine or the LLVM interpreter program lli), I get the same error: a segmentation fault.
The problem is that the address passed to puts is somehow the ASCII code of the first character in the array. What gets passed, rather than being a pointer to an array of 8-bit chars, seems to be a pointer whose value is the first character of the dereferenced string.
For example, if I call @WritePrimitive with a primitive whose i8* member points to the string "hello", puts is called with the string address being 0x68 (the ASCII code of 'h').
Any ideas?
Thanks
EDIT: You were right, I was initializing my Primitive incorrectly; my new initialization function is:
llvm::Value* PrimitiveHelper::getConstantPrimitive(const std::string& str, llvm::BasicBlock* bb)
{
ConstantInt* int0 = ConstantInt::get(Type::getInt32Ty(getGlobalContext()), 0);
Constant* strConstant = ConstantDataArray::getString(getGlobalContext(), str, true);
GlobalVariable* global = new GlobalVariable(module,
strConstant->getType(),
true, // Constant
GlobalValue::ExternalLinkage,
strConstant,
"str");
Value* allocated = new AllocaInst(m_primitiveType, "allocated", bb);
LoadInst* onStack1 = new LoadInst(allocated, "onStack1", bb);
GetElementPtrInst* ptr = GetElementPtrInst::Create(global, std::vector<Value*>(2,int0), "", bb);
InsertValueInst* onStack2 = InsertValueInst::Create(onStack1, ptr, std::vector<unsigned>(1, 0), "", bb);
return onStack2;
}
I missed that, Thank You!
There's nothing wrong with the code you pasted above; I just tried it myself and it worked fine. I'm guessing the issue is that you did not initialize the pointer properly, or did not set it properly into the struct.
The full code I used is:
@str = private unnamed_addr constant [13 x i8] c"hello world\0A\00"
; Your code
%Primitive = type { i8* }
declare i32 @puts(i8*) ; Declare the libc function 'puts'
define void @WritePrimitive(%Primitive) {
entry:
%1 = extractvalue %Primitive %0, 0 ; Extract the character array from the primitive.
%2 = call i32 @puts(i8* %1) ; Write it
ret void
}
; /Your code
define void @main() {
%allocated = alloca %Primitive
%onstack1 = load %Primitive* %allocated
%onstack2 = insertvalue %Primitive %onstack1, i8* getelementptr ([13 x i8]* @str, i64 0, i64 0), 0
call void @WritePrimitive(%Primitive %onstack2)
ret void
}