I am using Arch Linux and installed LLVM from the official package (pacman -S llvm).
I'd like to use it with the wasm-32 backend (available according to the source code).
However, this backend is not enabled on my computer:
$ llc --version
LLVM (http://llvm.org/):
LLVM version 5.0.0
Optimized build.
Default target: x86_64-unknown-linux-gnu
Host CPU: skylake
Registered Targets:
aarch64 - AArch64 (little endian)
aarch64_be - AArch64 (big endian)
amdgcn - AMD GCN GPUs
arm - ARM
arm64 - ARM64 (little endian)
armeb - ARM (big endian)
bpf - BPF (host endian)
bpfeb - BPF (big endian)
bpfel - BPF (little endian)
hexagon - Hexagon
lanai - Lanai
mips - Mips
mips64 - Mips64 [experimental]
mips64el - Mips64el [experimental]
mipsel - Mipsel
msp430 - MSP430 [experimental]
nvptx - NVIDIA PTX 32-bit
nvptx64 - NVIDIA PTX 64-bit
ppc32 - PowerPC 32
ppc64 - PowerPC 64
ppc64le - PowerPC 64 LE
r600 - AMD GPUs HD2XXX-HD6XXX
sparc - Sparc
sparcel - Sparc LE
sparcv9 - Sparc V9
systemz - SystemZ
thumb - Thumb
thumbeb - Thumb (big endian)
x86 - 32-bit X86: Pentium-Pro and above
x86-64 - 64-bit X86: EM64T and AMD64
xcore - XCore
How can I enable LLVM backends?
EDIT (2021-07-23): This answer was updated to use "Motorola 68000" instead of "WebAssembly" as an example experimental target (since Wasm is now stable).
LLVM is not very configurable once it is built.
If you need a configuration beyond the defaults, you have to compile LLVM yourself.
The LLVM documentation has a few articles explaining how to compile it, but they do not describe exactly
how to enable additional targets:
Getting Started
Building LLVM with CMake
The enabled targets are controlled by two variables that you need to define when invoking CMake
to prepare the build directory: LLVM_TARGETS_TO_BUILD and LLVM_EXPERIMENTAL_TARGETS_TO_BUILD.
LLVM_TARGETS_TO_BUILD controls only the stable targets.
You can either use the special value all to enable all the stable targets or provide a semicolon-separated list of targets such as ARM;PowerPC;X86. There is an old request
to rename the special value to stable and use all for all the targets.
Its default value is all (see below for the list of targets).
LLVM_EXPERIMENTAL_TARGETS_TO_BUILD is an undocumented (or well hidden) variable that allows you
to enable any target you want. This is also a semicolon-separated list of targets.
The enabled targets will correspond to the union of both lists.
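As a hedged illustration of how the two lists combine (the target names here are just examples, and the trailing path should point at the LLVM source directory):
cmake -DLLVM_TARGETS_TO_BUILD="X86;ARM" -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD="M68k" path/to/llvm
This would build the X86 and ARM stable targets plus the experimental M68k one.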
Now, you need to find out the actual name of your target and whether it is a stable or an experimental target.
The list of stable targets can be found in the Getting Started article.
The default value includes: AArch64, AMDGPU, ARM, AVR, BPF, Hexagon, Lanai, Mips, MSP430, NVPTX, PowerPC, RISCV, Sparc, SystemZ, WebAssembly, X86, XCore.
This list is defined in the top-level CMakeLists.txt (permalink).
As you can see, WebAssembly is in the list now (in 2021), so it should already be enabled by default. When the question was first asked, however, it was still an experimental target, so the rest of this answer describes more generally how to enable any target, using "Motorola 68000" instead of Wasm as the example.
"Motorola 68000" is not in the list of stable targets. We'll have to find the name used by LLVM and then use LLVM_EXPERIMENTAL_TARGETS_TO_BUILD.
Unfortunately, since this variable is not documented, I wasn't able to find the list of all the targets on their website.
After some trial and error, it seems that the available targets correspond to the names of the directories in lib/Target in the LLVM source tree. This directory contains a subdirectory named M68k: this is likely the name of the target.
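If you already have the sources cloned (the clone step is shown below, with /opt/llvm-project as the checkout path), you can simply list that directory to see the candidate names:
ls /opt/llvm-project/llvm/lib/Target
On a recent checkout the output should include an M68k entry among the other backend directories.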
To use LLVM for "Motorola 68000", you'll need to enable the M68k target using the LLVM_EXPERIMENTAL_TARGETS_TO_BUILD
variable when preparing the build directory with CMake.
Here are the steps to compile LLVM with "Motorola 68000" support (adapt them to your own requirements). I used a Linux machine,
but the process should be similar in other environments.
Requirements:
CMake
Git
GCC, Clang, or Visual Studio, depending on your platform
zlib
Clone the LLVM repo. I'll use the /opt/llvm-project directory as the home directory
for my custom version of LLVM (this is the last argument to the command; replace it with the path you want to use).
git clone https://github.com/llvm/llvm-project.git /opt/llvm-project
Navigate to the LLVM sources:
cd /opt/llvm-project/llvm
Create your build directory and navigate to it.
mkdir build && cd build
Use CMake to prepare your build directory. This is the step where you need to take care
of setting the variables. In my case, I'll use LLVM_EXPERIMENTAL_TARGETS_TO_BUILD="M68k" and
leave LLVM_TARGETS_TO_BUILD at its default value (all stable targets).
Two other important variables that I'll set are CMAKE_BUILD_TYPE=Release, to get an optimized build, and
CMAKE_INSTALL_PREFIX=/opt/llvm-project/llvm/bin, to keep this version of LLVM in its own directory and
not interfere with the version already installed on my system (I'll just add this directory to $PATH
when I need it).
cmake -G "Unix Makefiles" -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD="M68k" -DCMAKE_INSTALL_PREFIX=/opt/llvm-project/llvm/bin -DCMAKE_BUILD_TYPE=Release /opt/llvm-project/llvm
Build LLVM; this may take a while:
cmake --build .
Install LLVM:
cmake --build . --target install
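Optionally, as a quick sanity check: with the CMAKE_INSTALL_PREFIX used above, the executables end up in its bin/ subdirectory, so something like the following should show the new backend (the exact description string printed by llc may vary):
export PATH="/opt/llvm-project/llvm/bin/bin:$PATH"
llc --version | grep -i m68k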
You will have to compile your backend from source; currently, the only pluggable components in LLVM are passes.
Your llc --version output says that you installed LLVM version 5.0.0. WebAssembly wasn't built into LLVM by default until LLVM 8.0.0; it was an experimental target before that.
Changes to the WebAssembly Target
The WebAssembly target is no longer “experimental”! It’s now built by
default, rather than needing to be enabled with
LLVM_EXPERIMENTAL_TARGETS_TO_BUILD.
The object file format and core C ABI are now considered stable. That
said, the object file format has an ABI versioning capability, and one
anticipated use for it will be to add support for returning small
structs as multiple return values, once the underlying WebAssembly
platform itself supports it. Additionally, multithreading support is
not yet included in the stable ABI.
https://releases.llvm.org/8.0.1/docs/ReleaseNotes.html#changes-to-the-webassembly-target
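As a rough check on such a newer toolchain, the WebAssembly backends should show up directly in the registered-targets list (output shown is approximate):
llc --version | grep -i wasm
# wasm32 - WebAssembly 32-bit
# wasm64 - WebAssembly 64-bit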
Related
I'm looking for a cross-compiler that could help me build applications for the Raspberry Pi on my Ubuntu 20.04 machine. I found the official tools on GitHub, and I suppose that the folder arm-bcm2708 contains cross-compilers:
arm-bcm2708hardfp-linux-gnueabi
arm-bcm2708-linux-gnueabi
arm-linux-gnueabihf -> arm-rpi-4.9.3-linux-gnueabihf
arm-rpi-4.9.3-linux-gnueabihf
gcc-linaro-arm-linux-gnueabihf-raspbian
gcc-linaro-arm-linux-gnueabihf-raspbian-x64
I'm confused about what the directory names are trying to tell me. I recognize the following parts:
arm - the processor type used on the Pi
bcm2708 - the processor model used on the Pi
gnueabi - cross-compiler for the armel architecture (you can build binaries for ARM on a PC)
linaro - a company that creates multimedia for ARM
4.9.3 - I suppose this is the GCC version (why is it so old?)
Which of these compilers should I use for my Pi 3 and Pi 4?
You can use one of the toolchains provided by ARM for your RPI3/4.
If you are running a 32-bit Linux on your RPI3/4, use one of the arm-none-linux-gnueabihf toolchains; if you are running a 64-bit Linux on your RPI3/4, use one of the aarch64-none-linux-gnu ones.
Both the 10.2 and 9.2 versions of the two toolchains work fine on my own Ubuntu 20.04.1 LTS x86_64 system. Of course, you can also cross-compile programs with the arm-none-linux-gnueabihf toolchain and run them on a 64-bit Linux running on your RPI3/4.
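For a quick smoke test of whichever toolchain you pick, something like the following sketch should work (it assumes the extracted toolchain's bin/ directory is on your PATH and that you chose the 32-bit hard-float variant):
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello from the cross-compiler"); return 0; }
EOF
arm-none-linux-gnueabihf-gcc -o hello hello.c
file hello   # should report a 32-bit ARM ELF executable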
Is there a sort of official documentation about version compatibility between binutils, glibc and GCC? I found this matrix for binutils vs GCC version compatibility. It would be good to have something like this for GCC vs glibc as well.
The reason I'm asking is that I need to know whether I can build, say, a cross GCC 4.9.2 with an "embedded" glibc 2.2.4 in order to support quite old targets like CentOS 5.
Thank you.
It's extremely unlikely you'll be able to build such an old version of glibc with such a new version of GCC. glibc documents the minimum required versions of binutils and GCC in its INSTALL file.
glibc-2.23 states:
Recommended Tools for Compilation
GCC 4.7 or newer
GNU 'binutils' 2.22 or later
Typically, if you want to go newer than those, glibc will generally work with the version of GCC that was in development at the time of the release. For example, glibc-2.23 was released on 18 Feb 2016 and gcc-6 was under development at that time, so glibc-2.23 will work with gcc-4.7 through gcc-6.
So find the version of GCC you want, then find its release date, then look at the glibc releases from around the same time.
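If you want to check a particular release yourself, that requirement is spelled out in the release's INSTALL file; for example (assuming the glibc-2.23 tarball is unpacked in the current directory):
grep -A 4 "Recommended Tools for Compilation" glibc-2.23/INSTALL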
All that said, using an old version of glibc is a terrible idea: it will be full of known security vulnerabilities (including remotely exploitable ones). The latest glibc-2.23 release, for example, fixed CVE-2015-7547, which affects any application doing DNS name resolution and affects versions starting with glibc-2.9. Remember: this is not the only bug lurking.
When building a cross-compiler there are at least two, and sometimes three, platform types to consider:
Platform A is used to BUILD a cross compiler HOSTED on Platform B which TARGETS binaries for embedded Platform C. I used the words BUILD, HOSTED, and TARGETS intentionally, as those are the options passed to configure when building a cross-GCC.
BUILD PLATFORM: Platform of machine which will create the cross-GCC
HOST PLATFORM: Platform of machine which will use the cross-GCC to create binaries
TARGET PLATFORM: Platform of machine which will run the binaries created by the cross-GCC
Consider the following (Canadian Cross Config, BUILD != HOST platform):
A 32-bit x86 Windows PC running the mingw32 toolchain will be used to compile a cross-GCC. This cross-GCC will be used on 64-bit x86 Linux computers. The binaries created by the cross-GCC should run on a 32-bit PowerPC single-board-computer running LynxOS 178 RtOS (Realtime Operating System).
In the above scenario, our platforms are as follows:
BUILD: i686-w32-mingw32
HOST: x86_64-linux-gnu
TARGET: powerpc-lynx-lynxos178
However, this is not the typical configuration. Most often BUILD PLATFORM and HOST PLATFORM are the same.
A more typical scenario (Regular Cross Config, BUILD == HOST platform):
A 64-bit x86 Linux server will be used to compile a cross-GCC. This cross-GCC will also be used on 64-bit x86 Linux computers. The binaries created by the cross-GCC should run on a 32-bit PowerPC single-board-computer running LynxOS 178 RtOS (Realtime Operating System).
In the above scenario, our platforms are as follows:
BUILD: x86_64-linux-gnu
HOST: x86_64-linux-gnu
TARGET: powerpc-lynx-lynxos178
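For reference, a hedged sketch of what the corresponding configure invocation could look like for this regular cross configuration (the --prefix path is just a placeholder, and a real GCC build needs many more options, e.g. for languages and sysroot):
../gcc-4.9.2/configure \
    --build=x86_64-linux-gnu \
    --host=x86_64-linux-gnu \
    --target=powerpc-lynx-lynxos178 \
    --prefix=/opt/cross/powerpc-lynx-lynxos178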
When building the cross-GCC (assuming a Regular Cross Config, where BUILD == HOST platform), native versions of GNU binutils, GCC, glibc, and libstdc++ (among other libraries) will be required to actually compile the cross-GCC. It is less about specific versions of each component and more about whether each component supports the specific language features required to compile GCC-4.9.2. (Note: just because GCC-4.9.2 implements language feature X does not mean that language feature X must be supported by the version of GCC used to compile GCC-4.9.2. In the same way, just because glibc-X.X.X implements library feature Y does not mean that the version of GCC used to compile glibc-X.X.X must have been linked against a glibc that implements feature Y.)
In your case, you should simply build your cross GCC 4.9.2 (or, if you are not cross-compiling, i.e. you are compiling for CentOS 5 on Linux, build a native GCC 4.9.2), and then, when you link your executable for CentOS 5, explicitly link glibc v2.2.4 using -l:libc.so.2.2.4. You will also probably need to specify -std=c99 or -std=gnu99 when you compile, as I highly doubt glibc 2.2.4 supports the C 2011 standard.
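A purely hypothetical compile/link line following that advice (the library name is taken verbatim from the suggestion above, and the -L path is a placeholder for wherever the old glibc lives):
gcc -std=gnu99 -o myapp myapp.c -L/path/to/old-glibc/lib -l:libc.so.2.2.4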
I'm trying to cross-compile the Boost library for an ARM platform (poky toolchain) and I'm new to cross-compilation. I'm having issues at the first step -- running bootstrap.sh. I see many posts about Boost cross-compilation, but not many that help at the bootstrap level.
A few questions:
1) What should I put exactly in 'user-config.jam'? I tried:
using gcc : arm : arm-poky-linux-gnueabi-g++ ;
I see many examples specifying an exact path to the compiler.
2) Where's the best place to put the user-config.jam file? I tried my home (~) folder and the current folder.
3) The toolchain has a file named "environment-setup-cortexa9hf-vfp-neon-poky-linux-gnueabi", should I "source it" before running bootstrap?
Any help appreciated, thanks.
Common tasks - 1.64.0
https://www.boost.org/doc/libs/1_64_0/doc/html/bbv2/tasks.html
Cross-compilation
Boost.Build supports cross compilation with the gcc and msvc toolsets.
When using gcc, you first need to specify your cross compiler in user-config.jam (see the section called “Configuration”), for example:
using gcc : arm : arm-none-linux-gnueabi-g++ ;
After that, if the host and target os are the same, for example Linux, you can just request that this compiler version be used:
b2 toolset=gcc-arm
If you want to target a different operating system from the host, you need to additionally specify the value for the target-os feature, for example:
# On windows box
b2 toolset=gcc-arm target-os=linux
# On Linux box
b2 toolset=gcc-mingw target-os=windows
For the complete list of allowed operating system names, please see the documentation for the target-os feature.
When using the msvc compiler, it's only possible to cross-compile to a 64-bit system on a 32-bit host. Please see the section called “64-bit support” for details.
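Putting the quoted documentation together for the poky toolchain in the question, one possible sequence is sketched below. This is only a sketch: the environment-setup path depends on where your SDK is installed, bootstrap.sh is run first so that b2 itself is built with the host compiler, and the "arm" version tag in user-config.jam must match the toolset name passed to b2.
./bootstrap.sh
echo "using gcc : arm : arm-poky-linux-gnueabi-g++ ;" > ~/user-config.jam
source /path/to/sdk/environment-setup-cortexa9hf-vfp-neon-poky-linux-gnueabi
./b2 toolset=gcc-arm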
I am following the OpenCV installation document Installation in iOS to compile an iOS framework. However, if I do not change platform/ios/build_framework.py and build the framework, I get the following errors:
build settings from command line:
ARCHS = x86_64
IPHONEOS_DEPLOYMENT_TARGET = 6.0
SDKROOT = iphonesimulator6.1
Build Preparation
Build task concurrency set to 8 via user default IDEBuildOperationMaxNumberOfConcurrentCompileTasks
=== BUILD AGGREGATE TARGET ZERO_CHECK OF PROJECT OpenCV WITH CONFIGURATION Release ===
Check dependencies
=== BUILD NATIVE TARGET zlib OF PROJECT OpenCV WITH CONFIGURATION Release ===
=== BUILD NATIVE TARGET libjpeg OF PROJECT OpenCV WITH CONFIGURATION Release ===
** BUILD FAILED **
Build settings from command line:
ARCHS = x86_64
IPHONEOS_DEPLOYMENT_TARGET = 6.0
SDKROOT = iphonesimulator6.1
=== BUILD NATIVE TARGET zlib OF PROJECT OpenCV WITH CONFIGURATION Release ===
Check dependencies
No architectures to compile for (ONLY_ACTIVE_ARCH=YES, active arch=x86_64, VALID_ARCHS=i386).
** BUILD FAILED **
Then, after many tries, I found that if I only compile for the architectures armv7, armv7s, and i386, by changing the following lines in platform/ios/build_framework.py, the framework can be built successfully.
targets = ["iPhoneOS", "iPhoneOS", "iPhoneSimulator"] #"iPhoneOS", "iPhoneSimulator"
archs = ["armv7", "armv7s", "i386"]#"arm64", , "x86_64"
for i in range(len(targets)):
    build_opencv(srcroot, os.path.join(dstroot, "build"), targets[i], archs[i])
Any ideas on how I can compile for the arm64 and x86_64 architecture? Thanks.
Take out the i386 and compile for armv7, armv7s, and arm64 - these are the architecture options in Xcode for 64-bit, i.e. the iPhone 5s.
Do not mix Intel and ARM specifications; they will not and cannot mix. The architecture option tells the compiler how the object code should be created. Because Intel is a CISC processor and ARM is RISC, internally their object codes are very different.
A MOVE instruction, for example, may generate an x'80' opcode for Intel but an x'60' for ARM. As far as I know, i386 is the old Intel 32-bit architecture. Xcode may be doing some magic to produce universal object code that can run on both Intel and RISC; if it is, it will not be efficient - it is always better to compile for specific architectures.
The 32-bit, 64-bit, 128-bit, etc. designations are addressing modes - they are also the size of the processor registers and determine how much memory (RAM) the CPU can address. In general, aside from the ability to use much more RAM, higher-bit processors will usually need fewer instructions to do a particular task.
Because downward compatibility is usually built in, an app compiled for armv6 will typically run on armv7 or even arm64 in compatibility mode, but it will not be able to harness the advantages of running as a true 64-bit application.
The target is a higher-level specification that determines which APIs can be used; for example, the iPad has UIPopoverController, but this is not supported on an iPhone or an iPod touch.
One last thing - only the iPhone 5s works with arm64; if you set another target, it will probably flag arm64 as not an option.
So if you are compiling for the x86_64 architecture, that is for OS X (a laptop or Mac tower), NOT iOS (iPad, iPhone).
What are you using? Not Xcode, I presume?
You don't need to compile for x86_64 unless you want this to run on a 64-bit Mac laptop or desktop, and from your question it doesn't seem like you are trying to do that.
Also, if you have it for i386, it is probably compatible with most OS X machines -- it will just run in 32-bit mode, not 64-bit.
As for arm64, that is also 64-bit. You might not have the libraries for 64-bit. Is your machine running 64-bit? Check the OpenCV libraries (dynamic or static?) and see whether they are 32- or 64-bit. If they are 32-bit, then that is the problem: you'll need to build them as 64-bit libraries or find them as such. I can't give you any more info without knowing what platform you are on.
I recently learned that OpenCV for iOS has been added to CocoaPods. You can see the podspec here and the Github repository here. With CocoaPods, you create a file named Podfile in the root of your project and paste in this line:
pod 'OpenCV'
Save the file and run pod install from the terminal (assuming that CocoaPods is installed of course).
I have this running on my iPhone 5S, which is 64 bit as well.
The problem: Ubuntu 10.10 doesn't supply LLVM CMake modules (/usr/share/llvm) or (/usr/local/share/llvm) when installing LLVM 2.8 from Ubuntu repositories.
So I'm now compiling LLVM 2.8 using CMake by myself and then installing it like this:
cmake ..
make
make install
This installs the CMake modules I need to link LLVM into my library. The problem is that when I compile LLVM using CMake, only static libraries are built. I saw in the LLVM documentation that you can build shared libraries by passing this parameter to CMake:
cmake -DBUILD_SHARED_LIBS=true ..
But now CMake returns this error:
-- Target triple: i686-pc-linux-gnu
-- Native target architecture is X86
-- Threads enabled.
-- Building with -fPIC
-- Targeting Alpha
-- Targeting ARM
-- Targeting Blackfin
-- Targeting CBackend
-- Targeting CellSPU
-- Targeting CppBackend
-- Targeting Mips
-- Targeting MBlaze
-- Targeting MSP430
-- Targeting PIC16
-- Targeting PowerPC
-- Targeting Sparc
-- Targeting SystemZ
-- Targeting X86
-- Targeting XCore
-- Configuring done
CMake Error: The inter-target dependency graph contains the following strongly connected component (cycle):
"LLVMARMCodeGen" of type SHARED_LIBRARY
depends on "LLVMARMAsmPrinter"
"LLVMARMAsmPrinter" of type SHARED_LIBRARY
depends on "LLVMARMCodeGen"
At least one of these targets is not a STATIC_LIBRARY. Cyclic dependencies are allowed only among static libraries.
-- Build files have been written to: /llvm-2.8/build
And I cannot compile it as a shared library. Does anyone know how to solve this problem?
I need the shared libraries because they're dependencies of many other tools.
Summary
1) LLVM 2.8 from the Ubuntu repository installs LLVM shared libraries but doesn't install the CMake modules I need.
2) On the other hand, if I compile LLVM myself, it installs the CMake modules I need, but I can only do that when building LLVM as static libraries.
After a lot of investigation (Google, the source code, and the llvmdev mailing list), I discovered that this problem is in fact an issue with the 2.8 release: compiling shared libraries with CMake is broken in that release. I'm now porting my library to version 2.9rc1, which works fine and is already scheduled to be released soon. Thanks for all the answers.
LLVM 2.8 documentation does not mention building with CMake.
Try ./configure --enable-shared
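For reference, a hedged sketch of the autoconf-based flow that flag belongs to (LLVM 2.x still shipped a configure script alongside the CMake build; the source path matches the one in the question, and the flags beyond --enable-shared are optional):
cd /llvm-2.8
./configure --enable-shared --enable-optimized
make
make install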
Try reading this page and then ask on the llvmdev list if that doesn't help.