Using an LLVM backend (Mips, Sparc, etc.) - llvm

I am trying to find some code examples that show how to hook up an LLVM backend for code generation, for example feeding the IR to the Mips or Sparc backend, but I haven't been able to find any. The closest thing I could find is the AMD IL & GPU backend, currently in the Mesa tree but not yet merged into LLVM. I have read the Writing an LLVM Backend tutorial, but it's not really obvious to me how to hook up the backend. I am sure I am missing something from the examples, so could someone point me to some examples for this? I already have code to generate the IR.
Thanks

You might want to look at the LLVM llc command. It reads a bitcode IR file and calls any of the backends on it.

The llc command compiles LLVM IR to target machine assembly. With the argument -march=<your target> you can get assembly for different targets; -march=mips for Mips, for example.

Related

Is it possible to write a web server in LLVM IR?

I wonder if it's technically possible to write a simple web server in LLVM IR.
I've found one written in amd64 assembly and it really works.

LLVM: generating an "ir" file specifically for the z80 processor using llvm

I am working on LLVM and want to generate files for a specific target architecture, e.g. z80. I have downloaded the z80 source code and clang. I used --target in the clang command to specify z80, but it is not working. Can anybody help me out with this problem? Thanks in advance.
LLVM has no official z80 backend/target. There appear to be some third-party projects attempting to implement one, like llvm-z80, so you can try to check them out. The alternative would be writing a new backend.

Using LLVM as virtual machine - multiplatform and multiarchitecture coding

I'm currently working on a pet programming language (for learning purposes), have gone through a lot of research over the past year, and I think it's time to finally start modelling the concepts of such a language. First of all, I want it to compile to some intermediate form, such as JVM or .NET bytecode, the goal being multiplatform/multiarchitecture compatibility. Second, I want it to be fast (I also have many other things in mind, but it's not the purpose of this topic to discuss those).
The best options that came to my mind were:
Compile to JVM bytecode and use OpenJDK as runtime environment,
Compile to .NET bytecode and use Mono as runtime environment,
Compile to LLVM IR and use LLVM as runtime environment.
As you may have imagined, I've chosen LLVM. Why? Because it's blazing fast. I did a little benchmark using the C++ N-Body code and got 7s on my machine with lli-jitted IR, in contrast with 27s with clang-compiled native code (I know clang first makes IR and then machine code).
So, here is my question: is there any redistributable version of the LLVM basic toolset (I just need lli) that I can use? Or must I compile my own? If the latter, can you give me any hints on how to do it? If I really must do it, I'm thinking of cross-compiling it from my machine (Intel Mac) and generating some installers (say, an .msi for Windows, .rpm and .deb for popular Linux distros, and .pkg for Macs). Remember, I only need a minimal subset of LLVM, such that this subset is capable of acting like a VM, by running "lli". The real question here is how to use LLVM as a typical virtual machine.
First, I think all 3 options - LLVM IR + LLVM, Java Bytecode + OpenJDK, and .NET CIL + Mono - are excellent options, and I agree deciding between them is not easy.
If you go for LLVM and you just want to use lli, you can compile LLVM for your target platform and pack the resulting lli executable with your distribution; it should work.
Another way to write a JIT compiler via LLVM is to use an execution engine - see the handy examples in the Kaleidoscope tutorial. That means that you write your own program which will JIT-compile your own language, compile it to whatever platform you want while statically linking it with LLVM, and then distribute it.
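For illustration, here is a rough sketch of that second route using the MCJIT-style ExecutionEngine, in the spirit of the Kaleidoscope examples. The input path and the assumption that the module defines a C-style main() are mine, and newer LLVM versions would favour the ORC APIs instead, so treat it as a sketch:

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"          // forces MCJIT to be linked in
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main(int argc, char **argv) {
  // We only JIT for the host, so the native target is enough.
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  LLVMContext Ctx;
  SMDiagnostic Err;
  // argv[1] is an IR or bitcode file produced by your front end.
  std::unique_ptr<Module> M = parseIRFile(argv[1], Err, Ctx);
  if (!M) { Err.print(argv[0], errs()); return 1; }

  std::string ErrStr;
  ExecutionEngine *EE = EngineBuilder(std::move(M)).setErrorStr(&ErrStr).create();
  if (!EE) { errs() << ErrStr << "\n"; return 1; }

  // Assumes the module defines a C-style main(); runFunctionAsMain
  // marshals argv/envp and JIT-compiles the function on first use.
  Function *MainFn = EE->FindFunctionNamed("main");
  if (!MainFn) { errs() << "no main() in module\n"; return 1; }

  std::vector<std::string> Args{"jitted-program"};
  return EE->runFunctionAsMain(MainFn, Args, nullptr);
}

Statically linking something like this against LLVM gives you a single "VM" binary per platform, instead of shipping lli itself.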
In any case, since a JIT compiler requires copying an LLVM binary to the client side, make sure to attach a copyright notice with your distribution (you don't have to open-source your distribution, though).

Possible to auto-generate llvm c++ api code from LLVM-IR?

The clang 3.0 online demo page http://llvm.org/demo/index.cgi provides an option to output "LLVM C++ API code" representing the LLVM IR for the input program.
Is "produce LLVM C++ API code" output a clang option (and if so, what is it)?
Or is it an llvm tool option (which one)?
Is it possible to do the same thing but from LLVM IR input? Basically, I'd like to see the proper LLVM C++ API calls needed to produce a particular given LLVM IR sequence. I'd like to learn backwards by example rather than forwards from the documentation.
Manual pages and --help and --help-hidden for clang, llvm-as and llvm-dis don't show anything obvious.
Edit: OK, now I see "generated by llvm2cpp" in the output on that web page. But I can't find that tool in recent LLVM releases, only in old ones; has a new tool in 2.9 and 3.0 taken over from llvm2cpp?
Yes, the C++ backend is the tool that does this. Try "llc -march=cpp foo.bc".
I ran into exactly the same problem and saw the CPPBackend mentioned a couple of times. This approach unfortunately no longer works on recent LLVM versions, as the CPPBackend was removed between 3.8 and 3.9.
If you want the CPP backend, you (i) have to add cppbackend to -DLLVM_TARGETS_TO_BUILD during the initial configure of LLVM and (ii) run an LLVM <= 3.8.
The feature was removed because it did not use IRBuilder and almost nobody used it. My solution was to rely on the old version for inspiration and then implement it myself.
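To give a flavour of what "implement it myself" looks like, here is a small hand-written IRBuilder sketch (the function name and shape are just an example of mine, not anything the CPPBackend produced) that builds a trivial function through the C++ API and prints the resulting IR so you can compare it with the textual form you started from:

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  LLVMContext Ctx;
  Module M("example", Ctx);
  IRBuilder<> B(Ctx);

  // C++ API equivalent of:
  //   define i32 @add(i32 %a, i32 %b) { entry: %sum = add i32 %a, %b  ret i32 %sum }
  FunctionType *FT =
      FunctionType::get(B.getInt32Ty(), {B.getInt32Ty(), B.getInt32Ty()}, /*isVarArg=*/false);
  Function *F = Function::Create(FT, Function::ExternalLinkage, "add", &M);

  BasicBlock *Entry = BasicBlock::Create(Ctx, "entry", F);
  B.SetInsertPoint(Entry);

  Value *A  = F->getArg(0);   // on older releases use F->arg_begin() and increment
  Value *Bv = F->getArg(1);
  B.CreateRet(B.CreateAdd(A, Bv, "sum"));

  verifyModule(M, &errs());   // complain if the IR is malformed
  M.print(outs(), nullptr);   // dump the textual IR
  return 0;
}

Working backwards from a given IR sequence is then mostly a matter of finding the Create* call on IRBuilder that matches each instruction.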

What platform can I compile binaries for, using LLVM (Low Level Virtual Machine)?

I am interested in using LLVM's Clang compiler. LLVM claims to be cross-platform, however it is not clear which platforms can be targeted. I have done quite a lot of Googling on this but there doesn't seem to be much information about LLVM's supported platforms. The only thing I did find was "this", which is kinda confusing. I am not sure if it means I can compile binaries for those platforms using LLVM or if it just runs on those platforms (or both). Could someone who knows more about the LLVM/Clang compiler tell me which platforms I can target using Clang or any other LLVM front end? I want specific information (like "It supports Windows 32bit, Windows 64bit, Linux 32bit, Linux 64bit, etc."). Thanks!
EDIT:
Ok, I think I am just confused about what LLVM really is. From what I just figured out, LLVM is simply a byte-code interpreter. Since LLVM is interpreted, how much slower are LLVM binaries compared to native executable binaries? So if performance is important, LLVM is not the right choice? "Here" I found the architectures it supports, but it did not say which operating systems it supports. Does it run on all operating systems if I avoid platform-dependent code? I will look for more articles that explain LLVM in more detail if I can find any.
With llvm installed type
llc -version
and you will see something like
Registered Targets:
alpha - Alpha [experimental]
arm - ARM
bfin - Analog Devices Blackfin [experimental]
c - C backend
cellspu - STI CBEA Cell SPU [experimental]
cpp - C++ backend
mblaze - MBlaze
mips - Mips
mipsel - Mipsel
msp430 - MSP430 [experimental]
ppc32 - PowerPC 32
ppc64 - PowerPC 64
ptx32 - PTX (32-bit) [Experimental]
ptx64 - PTX (64-bit) [Experimental]
sparc - Sparc
sparcv9 - Sparc V9
systemz - SystemZ
thumb - Thumb
x86 - 32-bit X86: Pentium-Pro and above
x86-64 - 64-bit X86: EM64T and AMD64
xcore - XCore
Go to github.com and search for mbed_samples to see LLVM and Clang being used to cross-compile for ARM; somewhere around blinker03 or 04 is where it comes in. LLVM works the same way for any platform: the llc step is where you choose your target, while the compiling, merging, optimizing, etc. are all platform independent (well, you might use -m32, for example, to choose the int size); then llc takes you to platform-dependent assembly.
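As a rough illustration of that split (file names and the -march value are just placeholders), a cross-compile pipeline tends to look something like:

clang -emit-llvm -c main.c -o main.bc      (front end: source to target-neutral bitcode)
opt -O2 main.bc -o main.opt.bc             (IR-level optimization, still target independent)
llc -march=arm main.opt.bc -o main.s       (this is where the backend/target is chosen)

and then main.s goes through your platform's own assembler and linker, for example the GNU ARM binutils.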
There are many possible applications of LLVM in terms of target machines.
GPGPU (GPU). There are approaches like GPUOcelot for retargeting to the GPU; here is an example: LLVM to PTX Backend.
According to the OpenCL Wikipedia article as of 2011-09: "The Apple,[22] Nvidia,[23] RapidMind[24] and Gallium3D[25] implementations of OpenCL are all based on the LLVM Compiler technology and use the Clang Compiler as its frontend."
JavaScript as an LLVM target! Yes, there is a tool that does it: Emscripten. See the Emscripten GitHub homepage, the wiki with demos, the blog note about Emscripten 1.0, and the Emscripten documentation paper.
According to the Clang manual, Clang can target X86 and ARM, with partial support for PPC, SPARC and MSP430.
Clang can also generate LLVM bytecode though. LLVM can run on quite a few more platforms.
So if you want native machine code then the list is pretty small. If you want LLVM bytecode you have a broader choice of platforms.
I'm only answering the edit's question here (it would probably be more appropriate to make a new question).
This is a good architectural overview of LLVM. This page also contains a ton of documentation on all aspects of LLVM.
The short version is that LLVM is the optimizer and backend of a traditional compiler. It operates on a bytecode which is essentially its intermediate representation of the code and is used to optimize and generate the final binary. The LLVM frontends are independent and use their own internal ASTs to eventually generate bytecode.
LLVM is actually pretty flexible when it comes to when you want to generate the final binary. You can either do it right away or delay it until the program is being installed. I believe you can even use its JIT to generate the final binary during execution (not 100% sure of this). The main advantage of delaying like this is that it can apply optimizations that are specific to the environment it is executing on.