I have created a toy language that generates IR code and writes that code to a binary file with WriteBitcodeToFile (the C API). The result is a my-file.bc file.
In this file I have defined a main() function that takes no arguments and returns an int64 (should I maybe change the return type to a byte?). How do I make this .bc file an executable? I'm running Linux.
Fredrik
You can generate an object file with llc and then use GCC to create an executable:
llc -filetype=obj my-file.bc
gcc my-file.o
./a.out
You can read more about llc on http://llvm.org/docs/CommandGuide/llc.html.
It is possible to execute a .bc file with the lli command. However, that doesn't create a stand-alone executable.
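For example, with the file from the question:
lli my-file.bc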
There's always the option of using llc to compile to assembly from which you can generate an executable.
http://llvm.org/docs/CommandGuide/llc.html
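For example (the output file names here are just illustrative):
llc my-file.bc -o my-file.s
gcc my-file.s -o my-program
./my-program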
Related
I have been coding for over 5 years and would now like to take a step away from IDEs and try a project without one. I have the things I need to get started (I think): a HelloWorld.cpp file, the Windows Command Prompt open, and Clang installed.
Now that I have these things, my question is this: what do I need to type into the Command Prompt to make Clang take the C++ code in HelloWorld.cpp and compile it into a separate file containing the assembly code, then assemble that assembly code into a separate file containing the object code, and finally link the object code into a separate file containing the machine code?
Ultimately, at the end I will have four files: one with C++ code, one with assembly code, one with object code, and one with machine code. The point of all of this is to be able to read and understand each stage of the process before running the file containing the machine code.
Being someone who has left the world of IDEs for the first time, I find the official Clang documentation very confusing and cannot find a straight answer to my question.
It works the same as with GCC, and I'll do you one better by starting with the preprocessing step. In principle Clang can also emit LLVM bitcode or LLVM IR as two extra intermediate stages.
clang++ source.cpp -E
clang++ source.ii -S
clang++ source.s -c
clang++ source.o
This last one gives a.out as an executable file. You can define the output file for each command by appending
-o output.file
The extensions may not be 100% correct. Just check what comes out.
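Put together for the HelloWorld.cpp from the question, with explicit output names for each stage (the extensions are just a convention, as noted above), the four steps might look like this:
clang++ -E HelloWorld.cpp -o HelloWorld.ii
clang++ -S HelloWorld.ii -o HelloWorld.s
clang++ -c HelloWorld.s -o HelloWorld.o
clang++ HelloWorld.o -o HelloWorld.exe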
Suppose I have a simple, self-contained C++ file (math.cpp) like this:
int add(int x, int y) {
    return x + y;
}
How would I compile it to WebAssembly (math.wasm)?
Note: I am using the Clang tool-chain.
I found this gist to be very helpful.
Basically, these are the steps:
(First, build LLVM and Clang 5.0.0 or above with -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD=WebAssembly.)
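If you are building LLVM and Clang yourself, the CMake configuration for that step might look roughly like this (the generator and the out-of-tree build directory next to the llvm source tree are assumptions; adjust to your setup):
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD=WebAssembly ../llvm
ninja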
Compile the .cpp source to LLVM bitcode with clang:
clang -emit-llvm --target=wasm32 -Oz math.cpp -c -o math.bc
Compile the bitcode to s-assembly:
llc -asm-verbose=false -o math.s math.bc
Use binaryen's s2wasm tool to create a .wast file:
s2wasm math.s > math.wast
Use WABT's wast2wasm tool to translate the textual .wast file into binary .wasm:
wast2wasm -o math.wasm math.wast
Some of the steps feel redundant, but I have not yet found a tool that allows shortcuts. (It would be nice if llc could compile directly to .wasm, or if s2wasm actually created binary .wasm files as the name suggests.) Anyway, once you've got the toolchain running it's relatively painless. Note, however, that there are no C or C++ standard libraries for WebAssembly yet.
Alternatively, if you need the .wasm file just for trying out stuff you can get away without all the toolchain trouble. Browse to https://mbebenita.github.io/WasmExplorer/, paste in your C/C++ code, and download the compiled .wasm file.
Thank you @noontz and @LB-- for pointing out that:
Actually, as the comments in the gist suggest, you can skip binaryen and compile straight to wasm from Clang/LLVM. I'm currently using the following command line for C++:
clang++ test.cpp -ObjC++ --compile --target=wasm32-unknown-unknown-wasm \
--optimize=3 --output test.wasm
Emscripten comes with everything you will need to compile a C++ file to wasm. Emscripten also has an SDK that makes life easy when it comes to installing all the necessary tools.
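For reference, installing the toolchain via the SDK usually looks something like this (the repository URL and commands reflect the current emsdk and may differ across versions):
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh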
By default, however, Emscripten will add some framework code to your wasm file as well as generate some html and javascript.
It is possible to create a minimal wasm file with Emscripten that doesn't include any framework code, javascript, or html. Using options -s SIDE_MODULE=1 -Oz -s ONLY_MY_CODE=1 while compiling with emcc or em++ will give you a minimal wasm file.
The following command would export a minimal wasm file using your examples and Emscripten:
em++ math.cpp -o math.wasm -Oz -s SIDE_MODULE=1 -s WASM=1 -s "EXPORTED_FUNCTIONS=['_add']" -s ONLY_MY_CODE=1
As of 2019, Clang (8) supports WebAssembly out of the box. Here is a repository that contains everything needed to compile, link, and run a simple .wasm file.
https://github.com/PetterS/clang-wasm
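For a small self-contained file like the math.cpp above, the direct route can look roughly like this (a sketch assuming Clang 8+ with the bundled wasm-ld linker; the exact flags may need adjusting for your case):
clang++ --target=wasm32 -O3 -nostdlib -Wl,--no-entry -Wl,--export-all -o math.wasm math.cpp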
Currently the easiest way to compile C and C++ is with Emscripten. The components you mention are all individual pieces, but Emscripten is a full toolchain that supports building end-to-end and includes all the parts you need, including libc/libc++ and a variety of other useful libraries. It supports targeting both asm.js and wasm.
Based on the answers in this thread, I've created a little guide.
For me, the easiest way was to compile emscripten (the website is also a great starting point!) on my machine, compile the code to wasm, generate the appropriate bindings, and hide all this in a wrapper on the JS side so that I get a nice interface.
Because of C++ name mangling, I've found that getting started with C is easier.
A little late with this answer, but there are nice tools online for compiling your code.
For example, I'm using this one. It only gives you minimal compilation options (C, C++, std99...), but they are sufficient: https://wasdk.github.io/WasmFiddle/
Depending on how you are going to use it, you can also view the output in different forms, such as x86 assembly or the code buffer. You can also share your code, which is the kind of feature I find cool when you are working with a buddy: https://wasdk.github.io/WasmFiddle/?gus9d :)
I have been working with two programs: LLVM's opt and Clifford Wolf's Yosys. Both have similar interfaces for passes (they load shared libraries as optimization passes).
I want to use certain data structures and functions from yosys.h to build a design module (which is then written out to a Verilog file) based on the data generated by my LLVM opt pass.
PROBLEM:
I want to make use of functions and data from yosys.h in the pass for llvm-opt.
How do I compile (EDIT: and also execute, either under llvm-opt, under yosys, or as a separate binary executable) such code?
Individually, they can be compiled and executed as separate passes.
COMPILE YOSYS PASS
gcc `yosys-config --cxxflags --ldlibs --ldflags` --shared yosyspass.cpp -o yosyspass.so
and execute it with
yosys -m yosyspass.so verilogfile.v
COMPILE LLVM PASS
gcc `llvm-config --cxxflags --ldlibs` --shared llvmpass.cpp -o llvmpass.so
and execute it with
opt -load ./llvmpass.so -llvmpass Somefile.bc
But how do I build code which uses components from both LLVM and Yosys?
And how do I execute it?
How can I make this happen without changing the Yosys source code too much?
All of this is to avoid writing a Verilog generation backend for my llvm-opt pass.
ONE OF MY SOLUTIONS:
Metaprogramming: i.e., generate code which, when compiled and run as a Yosys pass, gives me the result (a Verilog design file based on the LLVM opt input).
Maybe I'm missing something fundamental about building shared libraries? I'm new to this sort of thing; any input is welcome.
This project (though unrelated) may be similar to Rotem's C-to-Verilog and the University of Toronto's LegUp HLS tool.
As Krzysztof Kosiński pointed out, until now the Yosys core functionality was not available as a library. However, this had been on the todo list for a long time, and I have now added this functionality to Yosys git head.
Here is a usage example:
// example.cc
#include <kernel/yosys.h>
int main()
{
    Yosys::log_streams.push_back(&std::cout);
    Yosys::log_error_stderr = true;

    Yosys::yosys_setup();
    Yosys::yosys_banner();

    Yosys::run_pass("read_verilog example.v");
    Yosys::run_pass("synth -noabc");
    Yosys::run_pass("clean -purge");
    Yosys::run_pass("write_blif example.blif");

    Yosys::yosys_shutdown();
    return 0;
}
Building a binary:
yosys-config --exec --cxx -o example --cxxflags --ldflags example.cc -lyosys -lstdc++
Now you can run ./example to convert example.v into example.blif.
(As this is a brand new feature, the details of how to build programs or other libraries using libyosys are likely to change in the future.)
Edit: In current git head the Makefile option ENABLE_LIBYOSYS must be set to 1 to enable building of libyosys.so.
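Assuming a standard Yosys source checkout, one way to set that option is to put it in Makefile.conf before building (a sketch; check the Yosys README for the current build instructions):
echo "ENABLE_LIBYOSYS := 1" >> Makefile.conf
make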
Additional feedback: You might want to consider writing a Yosys plugin instead that implements a Yosys front-end which uses the LLVM libraries to load a .bc file. If you are not planning to go back and forth between LLVM and Yosys, but only want to execute a sequence of LLVM passes followed by a sequence of Yosys passes, then this solution might provide a more natural and easier-to-debug interface between LLVM and Yosys.
I want to generate LLVM bitcode for a large number of C source files for which I have a compilation database. Is there a way to invoke clang such that it reads the compilation database and uses the appropriate flags?
Background
For toy programs, the command to generate LLVM bitcode is simple:
clang -emit-llvm -c foo.c -o foo.bc
However, source files in large projects require lots of additional compilation flags, including -Is and -Ds and whatnot.
I want to write a script that iterates over a large number of source files and calls clang -emit-llvm ... on each to generate LLVM bitcode. The difficulty is that each clang -emit-llvm ... command has to have the flags specific to that source file. I have a compilation database for these source files, which perfectly captures the flags needed for each individual source file. Is there a way to make clang -emit-llvm ... aware of my compilation database?
One solution I've thought of is to parse the compilation database myself and find the appropriate entry for each source file, and modify the command entry to (a) include -emit-llvm and (b) change -o foo.o to -o foo.bc, and then run the command. This might work, but seems a bit hacky.
Instead of parsing the compilation database yourself, you could rely on the Python binding to do so. Judging from the test suite of the binding, you could do something like:
from clang.cindex import CompilationDatabase

# kInputsDir is the directory containing the compile_commands.json file
cdb = CompilationDatabase.fromDirectory(kInputsDir)
cmds = cdb.getAllCompileCommands()
and then slightly update the content of cmds.
I'm writing a front-end for LLVM in Java. My front-end produces .ll files. Then I use the following commands to convert these files to an executable file:
1. For each .ll file I use `llvm-as file.ll` to create a bitcode file.
2. I use `llvm-ld -o executable my-bitcode-files -L/usr/lib/i386-linux-gnu -lstdc++` to generate an executable file.
Then when I run the executable file, I get the following error:
LLVM ERROR: Program used external function '_Znwm' which could not be resolved!
What should I do to resolve this issue?
You need to generate a native executable, not the IR plus a wrapper. Try adding -native to the llvm-ld command line. (For reference, _Znwm is the mangled name of C++'s operator new.)
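Based on the llvm-ld command from the question, that would be something like:
llvm-ld -native -o executable my-bitcode-files -L/usr/lib/i386-linux-gnu -lstdc++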