Cross-compile C++ binary for Amazon EC2

I tried to just compile on what appears to be a similar system (both Ubuntu, 64-bit), but the binary is not runnable on the Amazon instance of Ubuntu (which is 64-bit too, but I don't know much more about it than that).
I've seen a thread suggesting spinning up an additional EC2 instance just to compile there, but that isn't a solution for me, as I can't transfer sources outside, only compiled binaries and dynamic libs.
I was thinking about making a virtual environment on my computer to spawn a clone of the EC2 instance and compile there, but is that doable?
kernel info (first line is my machine, second is the EC2 instance):
uname -a
4.4.0-93-generic #116-Ubuntu SMP Fri Aug 11 21:17:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
ip-xxx-xxx-xxx-xxx 4.4.0-1035-aws #44-Ubuntu SMP Tue Sep 12 17:27:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
It seems to use some Amazon tailor-made kernel?
file info:
file ./testBinary
./testBinary: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), too many program (2304)
file -Pelf_phnum=3000 ./testBinary
./testBinary: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), corrupted program header size, corrupted section header size
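As a cross-check, the program headers that file is complaining about can also be inspected with readelf (using the binary name from above):
readelf -h ./testBinary    # prints the program header count and entry sizes
readelf -l ./testBinary    # lists the program headers themselves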

You can't really 'clone' an EC2 instance that you've created from some AMI. Since you don't have any details about why exactly your library wasn't working, I would suggest running Amazon Linux instead of Ubuntu.
You can run Amazon Linux in a Docker container on your machine and build your library there (https://hub.docker.com/_/amazonlinux/). That way the library should run without problems on any EC2 instance running Amazon Linux.
If you want to stick with Ubuntu, at the very least you should match Ubuntu versions (not just architecture) and probably kernel versions.
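For the Docker approach above, a minimal sketch of the workflow (the image tag, package names and source file name are assumptions; Amazon Linux 2023 images use dnf instead of yum):
docker pull amazonlinux:2
docker run --rm -it -v "$PWD":/src -w /src amazonlinux:2 bash
# inside the container:
yum install -y gcc-c++ make
g++ -O2 -o testBinary main.cpp
The resulting binary links against the glibc shipped in that image, so it should load cleanly on an EC2 instance running the same Amazon Linux release.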

Related

Getting stuck while building DPDK

I am trying to build DPDK version 21.05.
I ran meson build and then, while running ninja, it gets stuck at
[2030/2380] Compiling C object drivers/libtmp_rte_event_octeontx2.a.p/event_octeontx2_otx2_evdev.c.o
and does not move forward.
What could cause such behavior?
This is with:
ubuntu 20.04.1 x86_64
kernel 5.8.0-1041-aws
gcc 9.3.0
ninja 1.10.0
meson 0.59.0
and no cross compiling
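For reference, the build steps were roughly (build directory name may differ):
meson build
ninja -C build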
Checked DPDK 21.05 on a Host (24 CPU cores) and a Guest (1 GB RAM, 2 CPUs), with Broadwell x86_64 qemu on Ubuntu 20.04:
with octeon eventdev enabled - pass
with octeon eventdev disabled - pass
[2565/2565] Linking target app/dpdk-test-acl (with octeon eventdev)
[2530/2530] Linking target drivers/librte_event_dpaa2.so.21.2 (without octeon eventdev)
[EDIT based on the update via comment on Aug 26 2021]
Suggested to #Kviz to build with -Ddisable_drivers=event/octeontx2,event/octeontx to skip building the octeon eventdev drivers; this is confirmed to be successful. The current failure is now in the dpdk-test test_ring case.
Suggestion:
If the target is only the DPDK libraries, one can ignore this and proceed with using the DPDK libraries.
But if the target is to run dpdk-test for test_ring, the issue needs to be resolved by analysing the cause of the failure.
Note: as per the error log gcc-7 is used, but the platform is ubuntu 20.04.1 x86_64, kernel 5.8.0-1041-aws, gcc 9.3.0; something is not right.
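A minimal sketch of the suggested reconfigure with the octeon eventdev drivers disabled (the build directory name is an assumption; add --wipe to meson setup if the build directory already exists):
meson setup build -Ddisable_drivers=event/octeontx2,event/octeontx
ninja -C build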

Qt program not executable (Exec format error) despite binary and system architecture both x86_64

I am trying to get a hello world example working with the latest version of Qt on Ubuntu 20.04. I am compiling via the automatically generated Makefile produced by qmake. Once compiled, the binary does not have execute permissions. After granting them I get an Exec format error. Running file on the output executable returns ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped. Running uname -mpi on my machine outputs x86_64 x86_64 x86_64. It seems to me that the architecture and binary are compatible, but for some reason I get the Exec format error. Am I misunderstanding something, or do I need to configure the compilation step in the Makefile to be compatible with my hardware?
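For reference, a sketch of the steps described (project and binary names are placeholders):
qmake hello.pro
make
file ./hello        # reports: ELF 64-bit LSB relocatable ... (SYSV), not stripped
chmod +x ./hello
./hello             # bash: ./hello: cannot execute binary file: Exec format error
For comparison, a linked program normally shows 'executable' or 'pie executable' in file output rather than 'relocatable'.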

Docker Centos, Cannot Execute Binary File

I have a C++ binary which runs smoothly on my local CentOS. Recently I started learning Docker and I am trying to run my C++ application in a CentOS Docker container.
First, I pulled centos:latest from Docker Hub, installed my C++ application in it, and it ran successfully without any issue. Then I installed Docker on a Raspberry Pi, pulled centos again and tried to run the same application, but it gave me an error:
bash: cannot execute binary file
Usually this error comes up when you try to run an application on a different architecture than the one it was built for. I checked cat /etc/centos-release on the Raspberry Pi and the result is CentOS Linux release 7.6.1810 (AltArch), whereas the result on my local CentOS is CentOS Linux release 7.6.1810 (Core).
uname -a on both devices is as follows:
raspberry-pi, centos docker Linux c475f349e7c2 4.14.79-v7+ #1159 SMP Sun Nov 4 17:50:20 GMT 2018 armv7l armv7l armv7l GNU/Linux
centos, centos docker Linux a57f3fc2c1a6 4.15.0-46-generic #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
EDIT:
Also, file myapplication
TTCHAIN: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/l, for GNU/Linux 2.6.24, BuildID[sha1]=287b501c8206893f7819f215ee0033586212b143, with debug_info, not stripped
My question is: how can I run this native CentOS application, pulled via Docker, on a Raspberry Pi model 3?
Your application has been built for x86-64. Intel x86-64 binaries CAN NOT run on an ARM processor.
You have two paths to pursue:
If you don't have source code for the application, you will need an x86-64 emulator that will run on your Raspberry Pi. Considering the Pi's lesser capabilities and Intel's proclivity to sue anyone who creates an emulator for their processors, I doubt you'll find one that's publicly available.
If you have the source code for the application, you need to rebuild it as a Raspberry Pi executable. You seem to know that it was written in C++. GCC and other toolchains are available for the Raspberry Pi (most likely a "yum install gcc" on your Pi will grab the compiler and tools for you). Building the application should be extremely similar to building it for x86_64.
You could find a cross-compiler that would let you build for the Pi from your x86_64 box, but that can get complicated.
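If the source is available, a minimal sketch of the rebuild-on-the-Pi path inside the CentOS container (package and file names are assumptions):
yum install -y gcc-c++ make
g++ -O2 -o myapplication main.cpp
file ./myapplication    # should now report ARM rather than x86-64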
It could be that you are trying to run a 64-bit binary on a 32-bit processor; more information would be needed to know for sure, though.
You can check by using the file command in the shell. You may have to re-compile on the original system with the -m32 flag to gcc, as sketched below.
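A sketch of that check and recompile (note this only applies when both machines are x86; -m32 will not make an x86 binary run on the ARM Pi above, and it requires the 32-bit development libraries to be installed):
file ./myapplication
g++ -m32 -o myapplication main.cpp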
Please do a "uname -a" on both devices and post the results.
Most likely the processor or library type doesn't match.
I presume (hope) you're not trying to run an x86-compiled app on a Pi. Although Docker is available for both processor types, Docker will not run x86 binaries on Pi or vice versa.
Actually, AltArch currently means one of the following architectures... ppc64, ppc64le, i386, armhfp (arm v7 32-bit), aarch64 (arm v8 64-bit). Core suggests the mainstream x86 and x86_64 builds of CentOS.
Yep, I bet that's what it is... you can't just transfer an x86 binary to Raspbian and expect it to work. The application must be rebuilt for the platform.

Qt Creator installation issue: invalid encoding

I had previously downloaded these installation packages for Qt:
/home/star/Downloads/sandeep/Untitled Folder/qt-creator-opensource-linux-x86_64-4.2.1(1).run
/home/star/Downloads/sandeep/Untitled Folder/qt-opensource-linux-x64-5.8.0.run
/home/star/Downloads/sandeep/Untitled Folder/qt-unified-linux-x64-2.0.5-1-online.run
I clicked Properties and checked "allow package to run".
But when I double-click on the .run file, a "�*B#" (invalid encoding) file gets created and it does not execute.
Also, I guess my Linux is 32-bit, because the output of uname -a gives
Linux star-X555LAB 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:06:14 UTC 2016 i686 i686 i686 GNU/Linux
and I guess my installables are 64-bit, as the names suggest; that may be the problem.
I searched the internet and found that this happens when you migrate from Windows to Linux, so I formatted my NTFS pen drive to ext and tried again; the same problem repeats.
What should I do now?
Also, I think the current installation packages for Qt only come with 1 month of support. Or is the longer-duration free open-source license for Qt still valid? If so, where can I download the installables?
Yes, you are right. Your OS is 32-bit (i686) and your Qt installer is 64-bit (x64). You may:
Run the binary from a terminal and see the output
Install a 64-bit OS
Install a 32-bit Qt toolset
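A sketch of the first option, running one of the installers listed in the question from a terminal to see its error output:
cd "/home/star/Downloads/sandeep/Untitled Folder"
chmod +x qt-opensource-linux-x64-5.8.0.run
./qt-opensource-linux-x64-5.8.0.run
On a 32-bit (i686) system, this 64-bit installer should fail with something along the lines of "cannot execute binary file: Exec format error".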

Compiling amd64 binary on i386 root with amd64 kernel (Debian)

I have a rather old Debian testing system that has all packages installed as i386. Usually I'm running a PAE kernel (linux-image-3.16.0-4-686-pae:i386).
I'm trying to compile a simple C++ program that needs more than 4 GB of memory. I've installed the linux-image-3.16.0-4-amd64:amd64 kernel because I think a single process cannot get more than about 3 GB of memory on a PAE machine.
Unfortunately, the whole toolchain and the libraries are still i386. I guess I need a special flavour of GCC (multilib?) and the amd64 version of some libraries.
I've found tutorials on how to compile 32-bit programs on 64-bit rootfs systems, but not the other way round. I don't want to cross-grade the whole system to amd64 just for this test, so:
Is there a way to safely compile and run 64-bit code on this setup with as few changes to the system as possible? Ideally it would still be possible to cross-grade from this setup at some point in the future. Alternatively, would it be possible to create a 64-bit chroot environment from a Debian Live CD, chroot into it, compile the code and run it from there? Or compile it statically and run it outside the chroot?
EDIT: Installing g++-multilib solves the problem of compiling 64-bit code (using the -m64 option). Can anyone help with the chroot / cross-grade part of my question?
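A minimal sketch of both parts, the multilib route from the edit and the chroot idea (the source file name, Debian suite, and chroot path are assumptions):
# multilib build, on the existing i386 userland with the amd64 kernel booted
sudo apt-get install g++-multilib
g++ -m64 -o bigalloc bigalloc.cpp
file ./bigalloc    # ELF 64-bit LSB executable, x86-64
# alternatively, a separate amd64 chroot created with debootstrap
sudo apt-get install debootstrap
sudo debootstrap --arch=amd64 stretch /srv/amd64-chroot http://deb.debian.org/debian
sudo chroot /srv/amd64-chroot /bin/bash
Since the running kernel is amd64, binaries built either way can be executed directly; only the userland inside the chroot differs from the host.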