Yocto package generation without modifying distributions on targets (ARM & x86) - c++

Currently, I'm developing a C/C++ Application with the following characteristics:
Multiple target platforms - ARM (Raspbian) and manufacturer-modified x86 distribution
CMake is used to compile C/C++ application
Cross-compilation for ARM and the customized x86 distribution is done with CMake (plus toolchain files to cover cross-compilation)
Multiple dependencies (other GitHub repositories) which are cross-compiled separately (according to their GitHub READMEs). These are referenced in the project's CMake, which decides whether to use the ARM or the x86 libs.
For deployment, I copy the cross-compiled dependency libs to /usr/lib on the target. Afterwards, the cross-compiled application can be launched on the target.
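Roughly, the manual workflow looks like this (the toolchain file, library name and target host below are placeholders, not my exact setup):
# configure with a CMake toolchain file that points at the ARM cross-compiler
cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/arm-linux-gnueabihf.cmake ..
make
# copy the cross-compiled dependency libs and the application to the target
scp libsomedependency.so pi@raspberrypi:/usr/lib/
scp my_app pi@raspberrypi:/home/pi/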
Since cross-compilation is very annoying, full of pitfalls, and I'll need many more dependencies in the future, I decided to move to another build system to make things easier:
My goal is to get an rpm/deb file which can easily be installed on the target systems.
After a few hours of research, I found out that cross-compilation can be managed more easily via Yocto. In addition, it's possible to build deb/rpm/... files which can be installed on the targets.
Anyhow, as far as I understand, such a deb/rpm/... file can only be installed on the Yocto distribution/image that was used for compiling. In other words, I would have to flash a completely new distribution onto our targets (Raspberry Pi & x86). Unfortunately, that is not an option for me (because of the custom x86 target).
Question 1: Is it correct that I have to replace the distributions on the targets in order to install the created packages?
Question 2: Is there a way to create cross-compiled deb files which can be installed on existing distributions? If yes, what do I have to do to achieve this?
I assume that my current build strategy isn't the best. If you have any ideas on how to improve it, feel free to let me know.
Thanks,
Christoph

Related

Conan packages vs MinGW conflicts: how to fix?

I use Conan as a dependency manager for a large C++ project. The project was built for Linux and I am porting it to Windows.
Because of this, I am compiling with MinGW, since that development environment is closer to mine.
However, Conan knows it's running on Windows, and so it downloads Windows binaries.
I am finding that although compilation works, linking fails because MinGW binaries and MSVC binaries are incompatible.
I am not sure if I need to instruct Meson (my build system) to use cl as the compiler, or trick Conan into downloading Linux libraries instead of Windows ones.

How to install dependencies for a project that is being cross-compiled on an x86 host for an ARM target

I'm trying to build a project (https://wpewebkit.org/) on Debian Buster for armv7, on an x86 host running the same OS.
I am able to successfully install an arm C++ toolchain and I can successfully compile and run trivial applications.
Where I'm stuck is that many of the projects I want to compile require many dependencies that I normally install through the OS's package manager (e.g. apt-get install libjpeg-dev). When cross-compiling, it looks like I could just download & make install the sources I need. However, this project has hundreds of dependencies, so it would take a long time to download and compile all of them. At the same time, the ARM versions of these dependencies already exist in apt.
How can I, on the host system, install the armhf versions of these dependencies and make them available to my cross-compiling toolchain? I've tried dpkg --add-architecture armhf and then installing via apt-get install libjpeg-dev:armhf, but CMake can't seem to find the installed dependencies.
@artless-noise's guides were a good jumping-off point, but unfortunately most of them weren't helpful in accomplishing what I wanted to do (or, if they were, they weren't straightforward in explaining how to accomplish what I needed).
What I ended up doing was using qemu-debootstrap
sudo qemu-debootstrap --arch armhf buster /mnt/data/armhf http://deb.debian.org/debian/
Then, just by using sudo chroot /mnt/data/armhf, I had a functioning shell where I could apt-get anything I needed, run any scripts and get armhf binaries.
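For example (the package names below are just examples), inside the chroot I could install whatever -dev packages I needed, and the same tree can then be handed to a cross toolchain on the host as a sysroot:
sudo chroot /mnt/data/armhf
apt-get update
apt-get install libjpeg-dev libboost-dev
exit
# back on the host: use the chroot as a sysroot for the armhf cross compiler
arm-linux-gnueabihf-g++ --sysroot=/mnt/data/armhf main.cpp -ljpeg -o app
(Absolute symlinks inside the chroot may need to be fixed up before it works cleanly as a sysroot.)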
There are many ways to do this. The key concept is that you need a shadow filesystem that mimics the ARM target, and you need to tell the package build mechanism where it is. There are many distribution variants: LTIB is RPM based; Yocto uses BitBake and supports deb, rpm and ipkg. As well, you need to differentiate between build tools and deployed binaries; this is an added concept when cross-compiling. The point is that LTIB, Yocto, Buildroot, etc. all keep a shadow root filesystem and some place to keep host/build binaries. Since you have a Debian system, it is best to stick to their tools.
It is possible to install with dpkg --root. And if you have a complete environment, you can chroot into the arm_root and then build the package there with host binaries but ARM development files (headers and libraries).
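For example (the package file name is a placeholder):
sudo dpkg --root=arm_root -i libsomething-dev_1.0-1_armhf.deb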
The Debian maint-guide is an overview of building Debian packages for the normal case. The Debian cross-compile wiki uses the chroot methods and has references to building with either the sbuild or pbuilder packages. The schroot package is very nice, as it allows you to build the shadow filesystem without becoming root. It is very easy to destroy your host filesystem when learning to cross-distribution build, and I highly recommend this method. Another key difference between the maint-guide and the cross wiki is the need to install the cross build essentials package:
sudo apt-get install build-essential crossbuild-essential-armhf
Otherwise, most everything is the same, except that you build against the chroot shadow ARM filesystem.
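A sketch of the sbuild route from the cross-compile wiki might look like this (distribution, chroot path and source package are placeholders, and the exact options can vary between sbuild versions):
sudo apt-get install sbuild schroot
sudo sbuild-createchroot --arch=amd64 buster /srv/chroot/buster-amd64-sbuild http://deb.debian.org/debian
sbuild --host=armhf -d buster mypackage_1.0-1.dsc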
Here is a translation for Ubuntu hosts... you need Xenial or better to use the cross-compile Debian wiki method, i.e. an Ubuntu x86 Bionic build for a Raspberry Pi or similar. This method takes care of a lot of things for you, especially preventing filesystem corruption by mistake; thank the kind souls at Debian.
The info under nomenclature is quite important:
build means the architecture of the chroot/dpkg/compiler's executables, i.e. the architecture of the build system (called host by cmake/kernel/etc)
host means the architecture of the produced executable objects, i.e. the architecture of the host system where these objects will run (called target, or sometimes build, elsewhere)
target is what the produced executable objects will generate code for when they themselves produce executables, i.e. the architecture that the built programs target their output to run on (relevant only for compilers and similar)
People change the names for the same concepts in cross-building and that can be confusing.
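For example, GNU configure and the Debian tools use the build/host naming above directly (the triplets below assume an x86_64 machine building for armhf):
# build = the x86 machine doing the compiling, host = the ARM machine the result runs on
./configure --build=x86_64-linux-gnu --host=arm-linux-gnueabihf
# dpkg-buildpackage calls the same thing the "host" architecture
dpkg-buildpackage -a armhf -us -uc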
Additional info
Kernel cross build
Meson Cross Compilation
Clang cross compile

Cross-compiling a C++ project using CMake for an AArch64 Ubuntu system

The issue I am currently experiencing is setting up CMake for cross-compiling for the AArch64 environment. The C++ project references some third-party libraries, such as Boost, for its compilation.
I have read the documentation, but it is not really clear on the step-by-step procedure for cross-compiling with CMake for AArch64 on an x86_64 environment.
In some places I have read that I need the rootfs of the AArch64 system; others state that I don't need it and only need the C++ compiler and cross headers/libraries.
At the moment I am trying to compile the project on a Mustang board, but it runs into issues with referencing the libraries installed for the x86_64 system.
If there is a person or site that could detail, step by step, what needs to be done in this environment to get the entire project to cross-compile for AArch64 on an x86_64 system, I would greatly appreciate it.
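(My understanding so far is that a minimal toolchain file would look roughly like the sketch below; the compiler names and the rootfs path are guesses on my part rather than a working setup.)
cat > aarch64-toolchain.cmake <<'EOF'
# describe the target system
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
# cross compilers, e.g. from the g++-aarch64-linux-gnu package
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
# optional: a copy of the target rootfs containing Boost etc.
set(CMAKE_SYSROOT /opt/aarch64-rootfs)
set(CMAKE_FIND_ROOT_PATH /opt/aarch64-rootfs)
# look for headers and libraries only in the rootfs, but run build tools from the host
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
EOF
cmake -DCMAKE_TOOLCHAIN_FILE=$PWD/aarch64-toolchain.cmake ..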

Packing a C++ project for release with some dependencies like pthread, Boost, curl, etc.

I am writing a C++ application where I use a lot of libraries like Boost, curl, pthread, etc. I am not sure how to pack the application with all of its dependencies for production use.
What is the best way to distribute the application with dependencies?
It depends on the platform you plan to use.
If you use pthread, you are probably targeting POSIX systems. If you are planning to use Debian, you can package everything in a deb package and configure the package to declare its dependencies. The same goes for the RPMs of Red Hat systems.
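For example, a minimal hand-rolled binary deb built with dpkg-deb could look like the sketch below (package name, version and dependency names are placeholders; a proper package would normally be built with the debhelper tooling):
mkdir -p myapp_1.0-1/DEBIAN myapp_1.0-1/usr/bin
cp myapp myapp_1.0-1/usr/bin/
cat > myapp_1.0-1/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0-1
Architecture: amd64
Maintainer: Your Name <you@example.com>
Depends: libcurl4, libboost-system1.74.0
Description: Example application with its runtime dependencies declared
EOF
dpkg-deb --build myapp_1.0-1
apt will then resolve the Depends line when the resulting deb is installed.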
I'm not experienced with the best way to address an OS X system.

Compiling a shared library with Qt on Ubuntu 9.10

I am new to both Qt and Linux C++ development (although I have many years of C and C++ development experience on Windows).
I have some legacy C projects (source files and headers, not using Qt) that I want to compile into shared libraries on Linux.
I am proposing to store my projects under the following structure:
/home/username/Projects/project_name
/home/username/Projects/project_name/src
/home/username/Projects/project_name/build
Can anyone tell me how to do the following (using Qt to simplify the build process):
Create different build configurations (debug, release etc)
Build a configuration to create the appropriate shared library
As an aside, I have only recently installed Ubuntu 9.10, and the only C/C++ development tool I have installed (using SPM) is Qt, so I don't know if I need to install some other GNU C++ tools.
BTW, I have already checked and have gcc (v4.4.1) available on my machine. I do not appear to have g++ though; I do not know whether this is significant or not.
An Ubuntu system doesn't come with a build toolchain by default. Instead, there is a metapackage that you will need to install:
sudo apt-get install build-essential
This will install, among other things, the g++ compiler, although I am not sure about the Qt headers and such. For those you will need the libqt4-dev package (I assume you wish to work with Qt 4 rather than Qt 3).
As for the build structure, you will want to consult the qmake manual, or you might want to consider using CMake (apt-get install cmake) instead. CMake allows for out-of-source builds, as you require, and personally, I can't recommend it enough.
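For example, a minimal sketch for your proposed layout (the source file name is a placeholder):
cat > /home/username/Projects/project_name/CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 2.6)
project(project_name C)
# compile the legacy C sources into a shared library (libproject_name.so)
add_library(project_name SHARED src/legacy.c)
EOF
cd /home/username/Projects/project_name/build
cmake -DCMAKE_BUILD_TYPE=Debug ..   # or Release for a release configuration
make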