Cross compiling rethinkdb for Raspberry Pi - c++

I'm currently running Ubuntu 14.04 x86_64. I want to cross compile rethinkdb for my RPi for experimental purposes; ARM support landed in 1.12 and people have apparently compiled it successfully.
I have installed the toolchain:
sudo apt-get install g++-4.7-arm-linux-gnueabi gcc-arm-linux-gnueabi
export CXX=/usr/bin/arm-linux-gnueabi-g++-4.7
export CC=/usr/bin/arm-linux-gnueabi-gcc-4.7
export AR=/usr/bin/arm-linux-gnueabi-ar
export LD=/usr/bin/arm-linux-gnueabi-ld
Configuration runs:
./configure --ccache --allow-fetch --without-tcmalloc
* Detecting system configuration
Bash: 4.3.8(1)-release
Use ccache: yes
C++ Compiler: GCC 4.7 (/usr/bin/arm-linux-gnueabi-g++-4.7)
Host System: arm-linux-gnueabi
Build System: Linux 3.13.0-24-generic x86_64
Cross-compiling: yes
Host Operating System: Linux
Without tcmalloc: yes
Build client drivers: no
Build Architecture: x86_64
Precompiled web assets: no
Protobuf compiler: /usr/bin/protoc
Node.js package manager: /usr/bin/npm
LESS css: external/less_1.6.2
CoffeeScript: external/coffee-script_1.7.1
Handlebars: external/handlebars_1.3.0
Browserify: external/browserify_3.24.13
ProtoBuf.js: external/protobufjs_2.0.4
wget: /usr/bin/wget
curl: /usr/bin/curl
protobuf: external/protobuf_2.5.0
v8: external/v8_3.22.24.17
RE2: external/re2_20140111
z: external/zlib_1.2.8
Google Test: external/gtest_1.6.0
termcap: no
Test protobuf: external/protobuf_2.5.0
Test boost: external/boost_1.55.0
Installation prefix: /usr/local
Configuration prefix: /usr/local/etc
Runtime data prefix: /usr/local/var
* Warning: ARM support is still experimental
* Wrote configuration to config.mk
However, make fails:
/bin/bash: ccache: command not found
Any pointers to getting this working?

Just install ccache. It would be helpful even without this issue, because it speeds up compilation somewhat on embedded targets. FWIW, we eventually ended up using it at our company as well, even alongside icecream.
sudo apt-get install ccache
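To double-check that the build will pick it up (paths vary by distro, but on Ubuntu it lands in /usr/bin):
which ccache
ccache --version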

The actual state of cross compiling rethinkdb for the Raspberry Pi is complicated right now. It does not seem to be actively maintained, and setting it up has become complicated over the years. It is still possible with a bit of effort, but it requires delving into the rethinkdb build framework, which is based on good ol' makefiles...
You can find an attempt at doing so here in a Dockerfile. It basically creates a Docker container with all the dependencies (especially the cross compiler), modifies the config, and builds rethinkdb. The outcome is the rethinkdb package for the Raspberry Pi.
You can either use it as is, or have a look at it and reproduce the steps on your own.
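If you go the container route, the workflow is roughly the following sketch (the image tag, container name, and in-container path are illustrative, not taken from the linked Dockerfile):
docker build -t rethinkdb-rpi .
docker create --name rdb-build rethinkdb-rpi
docker cp rdb-build:/build .    # adjust the path to wherever the Dockerfile leaves the .deb
docker rm rdb-build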

Related

wxWindows 2.4.2 configuration failed saying checking for toolkit... configure: error: Please specify at most one toolkit

I am new to the Linux environment. We have a task to migrate a Windows wxWidgets (version 2.4.2) GUI application to Linux (RHEL 8 or 8.3). The application builds successfully with Visual Studio 2017 and 2019 (MSVC++ 14.1 and 14.2 compilers) using wxWindows 2.4.2 (a very old version). But when I try to build wxWindows 2.4.2 on Linux (g++ (GCC) 8.3.1 20191121, Red Hat 8.3.1-5) using
../configure --with-gtk=2
the configure stage stops with:
checking for toolkit... configure: error: Please specify at most one
toolkit (maybe some are cached in configarg.cache?)
I tried installing the "Development Tools" group on Linux but get the same error.
Source: https://github.com/wxWidgets/wxWidgets/archive/refs/tags/v2.4.2.zip
You can successfully run the configure phase by first performing dnf install gtk2-devel, and then configuring with ./configure --enable-gtk2.
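In other words, from the wxWindows 2.4.2 source directory:
sudo dnf install gtk2-devel
./configure --enable-gtk2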
However (not surprisingly), wx 2.4.2 will not build successfully on RHEL 8 (I tried); there will be a bunch of compilation errors. Those are not impossible to fix, but I wonder whether that's worth the trouble; there would probably be runtime errors and/or misbehaving GUI components at the next stage.
While I understand your strategy of choosing the old wx version that the application is known to work with on Windows, it seems to be the hard way in this case.

How to install dependencies for a project that is being cross-compiled on an x86 host for an arm target

I'm trying to build a project (https://wpewebkit.org/) on Debian Buster for armv7, on an x86 host running the same OS.
I am able to successfully install an arm C++ toolchain and I can successfully compile and run trivial applications.
Where I'm stuck is that many of the projects I want to compile require many dependencies that I normally install through the OS's package manager (e.g. apt-get install libjpeg-dev). When cross compiling, it looks like I can just download and make install the sources I need. However, this project has hundreds of dependencies, and it would take a long time to download and compile all of them. At the same time, the ARM versions of these dependencies already exist in apt.
How can I, on the host system, install the armhf versions of these dependencies and make them available to my cross compiling toolchain? I've tried dpkg --add-architecture armhf and then installing via apt-get install libjpeg-dev:armhf, but cmake can't seem to find the installed dependencies.
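Concretely, what I tried was along these lines (libjpeg-dev is just one example dependency):
sudo dpkg --add-architecture armhf
sudo apt-get update
sudo apt-get install libjpeg-dev:armhf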
@artless-noise's guides were a good jumping-off point, but unfortunately most of them weren't helpful in accomplishing what I wanted to do (or if they were, they weren't straightforward about how to accomplish it).
What I ended up doing was using qemu-debootstrap:
sudo qemu-debootstrap --arch armhf buster /mnt/data/armhf http://deb.debian.org/debian/
And then, just using sudo chroot /mnt/data/armhf, I had a functioning shell where I could apt-get anything I needed, run any scripts, and get armhf binaries.
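For example, once the chroot is populated, a session looks something like this (package names are just examples):
sudo chroot /mnt/data/armhf /bin/bash
# inside the chroot everything is armhf, running via qemu user emulation:
apt-get update
apt-get install build-essential libjpeg-dev
dpkg --print-architecture    # prints armhf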
There are many ways to do this. The key concept is that you need a shadow filesystem that mimics the ARM target, and you need to tell the package build mechanism where it is. There are many distribution variants: LTIB is rpm based, while Yocto uses BitBake and supports deb, rpm and ipkg. You also need to differentiate between build tools and deployed binaries, which is an added concept when cross compiling. The point is that LTIB, Yocto, Buildroot, etc. all keep a shadow root filesystem and some place to keep host/build binaries. Since you have a Debian system, it is best to stick to their tools.
It is possible to install with dpkg --root. And if you have a complete environment, you can chroot into the ARM root and then build the package there with host binaries but ARM development files (headers and libraries).
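A minimal sketch of the dpkg --root approach (the .deb name is a placeholder; in practice you would let apt resolve dependencies for you):
sudo dpkg --root=/mnt/data/armhf -i libjpeg-dev_<version>_armhf.deb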
The Debian maint-guide is an overview of building Debian packages for the normal case. The Debian cross-compile wiki uses the chroot methods and refers to building with either the sbuild or pbuild packages. The schroot package is very nice, as it allows you to build the shadow filesystem without becoming root. It is very easy to destroy your host filesystem when learning to do cross-distribution builds, so I highly recommend this method. Another key difference between the maint-guide and the cross wiki is that you install the cross build essentials package:
sudo apt-get install build-essential crossbuild-essential-armhf
Otherwise, almost everything is the same, except that you build inside the chroot shadow ARM filesystem.
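With the schroot/sbuild route, a cross build of a Debian source package then looks roughly like this (the chroot suite and the .dsc name are placeholders):
sbuild --host=armhf -d buster somepackage_1.0-1.dsc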
Here is a translation for Ubuntu hosts... you need Xenial or better to use the Debian cross-compile wiki method, i.e., an Ubuntu x86 Bionic build for a Raspberry Pi or similar. This method takes care of a lot of things for you, especially preventing accidental filesystem corruption; thank the kind souls at Debian.
The info under nomenclature is quite important,
build means the architecture of the chroot/dpkg/compiler's executable, i.e. the architecture of the build system (called host by cmake/kernel/etc)
host means the architecture of produced executable objects, i.e. the architecture of the host system where these guest objects will run on (called target or sometimes build elsewhere)
target is what the produced executable objects will generate when producing executable objects, i.e. the architecture of the systems the built programs target their results to run on (relevant only for compilers and similar)
People change the names for the same concepts in cross-building and that can be confusing.
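In GNU autotools terms, for example, a build on an x86_64 machine that produces armhf binaries is configured like this (package-specific options omitted):
./configure --build=x86_64-linux-gnu --host=arm-linux-gnueabihf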
Additional info
Kernel cross build
Meson Cross Compilation
Clang cross compile

How do I build the Rust standard library with a custom musl?

I want to build static Rust executables with a customized version of musl. As a first step, I'm making myself familiar with Rust's build system.
I took the slightly outdated docker-rust-musl GitHub project and updated the URLs that had gone out of date. The build itself seems to work, but when I try to compile for x86_64-unknown-linux-musl, the compiler doesn't find the musl std crate:
root@beb234fba4af:/build# cat example.rs
fn main() { println!("hi!"); panic!("failed"); }
root@beb234fba4af:/build# rustc --target=x86_64-unknown-linux-musl example.rs
error[E0463]: can't find crate for `std`
|
= note: the `x86_64-unknown-linux-musl` target may not be installed
error: aborting due to previous error
In fact, /usr/local/lib/rustlib/ only contains the x86_64-unknown-linux-gnu directory, even though output during the build indicates that x86_64-unknown-linux-musl is built:
[...]
Building stage2 std artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-musl)
[...]
However, when it comes to the installation step, x86_64-unknown-linux-musl is nowhere to be seen:
[...]
Install std stage2 (x86_64-unknown-linux-gnu)
install: creating uninstall script at /usr/local/lib/rustlib/uninstall.sh
install: installing component 'rust-std-x86_64-unknown-linux-gnu'
std is standing at the ready.
Install rustc stage2 (x86_64-unknown-linux-gnu)
install: creating uninstall script at /usr/local/lib/rustlib/uninstall.sh
install: installing component 'rustc'
Rust is ready to roll.
Build completed in 0:31:07
What do I have to do to install the x86_64-unknown-linux-musl Rust standard library?
Progress:
Digging through the build environment revealed that make all builds the Rust std library with musl but the subsequent make install step does not install it.
We have a temporary fix in the build.sh script of the previously mentioned docker image.
It is unclear whether that is an issue of the build environment or of its usage.
The issue is known to the Rust developers. No ETA for a fix, however.
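For context, building std against a custom musl went through Rust's own configure/make at the time; the invocation is along these lines (assuming the --target/--musl-root flags the configure script accepted back then, with a placeholder musl prefix):
./configure --target=x86_64-unknown-linux-musl --musl-root=/usr/local/musl
make && make install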

How to get "g++ -mx32" to work on RHEL 7.2

I am new to x86_64, but forced to use it because Red Hat dropped its 32-bit OS support in RHEL 7.x. I have to compile a lot of code, and am not ready to jump to x64 yet (because I do not need 64-bit addresses and do not want to face all the related porting issues). So I have considered using -m32 and -mx32, and decided that -mx32 is the best route for me. However, while -m32 works fine on my build machine, when I use -mx32 I get this error:
In file included from /usr/include/features.h:399:0,
from /usr/include/string.h:25,
from zz.cpp:1:
/usr/include/gnu/stubs.h:13:28: fatal error: gnu/stubs-x32.h: No such file or directory
# include <gnu/stubs-x32.h>
^
compilation terminated.
I searched the web for solutions, and some links indicate that I have to install some mysterious "multilib" rpms for g++ and gcc; however, I cannot find these anywhere. Others suggest that I have to install Linux in x32 mode and build libgcc for x32, which sounds extreme. Any ideas or leads? Did someone actually try g++ -mx32? Maybe it is not even supported on the RH platform... Thanks!
P.S. In order to get the "-m32" option to work I had to install:
yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686 ncurses-devel.i686
This one fails (yum cannot find these RPMs) - allegedly these are required for -mx32 to work:
yum install gcc-multilib g++-multilib
:(
Multilib is indeed your answer, but I do not know why your repo does not provide it. I installed mine via apt-get:
sudo apt-get install gcc-multilib
Although x32 code uses 64-bit instructions, it uses a 32-bit ABI, so annoyingly it will not run under WSL (Windows Subsystem for Linux), which only supports the 64-bit ABI.
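As a quick sanity check once the multilib pieces are in place (test.c is a throwaway minimal C file):
echo 'int main(void){return 0;}' > test.c
gcc -mx32 test.c -o test_x32
file test_x32    # should report a 32-bit ELF using the x86-64 instruction set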

LLVM compiler-rt i386 target on x86_64 platform

I have been building LLVM and clang 3.8 from svn for some time now. Today I started using git (this is not the cause of the problem), and the build was interrupted by an error that I have seen before: when make tries to build the i386 sanitizer library, it fails. I was able to disable building the sanitizers in ccmake by setting COMPILER_RT_BUILD_SANITIZERS to OFF. I would prefer to disable building the i386 target altogether. Does anyone know how to do this?
compiler-rt needs to be built out of tree. This is done so that it can be compiled with the newly built clang.
This process will only build the supported architecture, x86_64 in my case.
The following example uses the default install prefix (/usr/local) to specify the location of llvm-config.
Once LLVM is built, change into the directory where you want compiler-rt, then:
svn co http://llvm.org/svn/llvm-project/compiler-rt/trunk compiler-rt
mkdir build
cd build
cmake ../compiler-rt -DLLVM_CONFIG_PATH=/usr/local/bin/llvm-config
make
make install
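After make, the built runtimes land under lib/linux/ in the build directory, so a quick listing shows which architectures were actually produced (file names are indicative):
ls lib/linux/    # e.g. libclang_rt.builtins-x86_64.a, with no i386 variants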