Compiling amd64 binary on i386 root with amd64 kernel (Debian) - c++

I have a rather old Debian testing system that has all packages installed as i386. Usually I'm running a PAE kernel (linux-image-3.16.0-4-686-pae:i386).
I'm trying to compile a simple C++ program that needs more than 4 GB of memory. I've installed the linux-image-3.16.0-4-amd64:amd64 kernel, because even with a PAE kernel a single 32-bit process cannot address more than about 3 GB.
Unfortunately, the whole toolchain/libraries are still i386. I guess I need a special flavour of GCC (multilib?) and the amd64 version of some libraries.
I've found tutorials on how to compile 32bit stuff on 64bit rootfs systems, but not the other way round. I don't want to cross-grade the whole system to amd64 just for this test, so:
Is there a way to safely compile and run 64bit code on this setup with as little changes to the system as necessary? Ideally it would be possible to cross-grade from this setup at some point in the future. Alternately, would it be possible to create a 64bit chroot environment from a Debian Live CD, chroot into it, compile the code and run it from there? Or compile it statically and run it outside the chroot?
EDIT: Installing g++-multilib solves the compilation problem: with the -m64 option I can now build 64-bit binaries. Can anyone help with the chroot / cross-grade part of my question?
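For reference, a rough sketch of the multilib route (the source and output file names here, big.cpp and big64, are just placeholders):
apt-get install g++-multilib    # pulls in the 64-bit runtime and development files alongside the i386 toolchain
g++ -m64 -o big64 big.cpp       # produces an x86-64 binary
file big64                      # should report "ELF 64-bit LSB executable, x86-64"
The resulting binary can only run while the amd64 kernel is booted, since a 32-bit kernel cannot execute 64-bit user-space code.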

Related

Docker Centos, Cannot Execute Binary File

I have a C++ binary which runs smoothly on my local CentOS. Recently, I started learning Docker and tried to run my C++ application in a CentOS Docker container.
First, I pulled centos:latest from Docker Hub, installed my C++ application on it, and it ran successfully without any issue. Then I installed Docker on a Raspberry Pi, pulled centos again and tried to run the same application on it, but it gave me an error:
bash : cannot execute binary file
Usually, this error comes when we try to run an application on a different architecture than the one it was built for. I checked cat /etc/centos-release on the Raspberry Pi and the result is CentOS Linux release 7.6.1810 (AltArch), whereas the result on my local CentOS is CentOS Linux release 7.6.1810 (Core).
uname -a on both devices is as follows
raspberry-pi, centos docker Linux c475f349e7c2 4.14.79-v7+ #1159 SMP Sun Nov 4 17:50:20 GMT 2018 armv7l armv7l armv7l GNU/Linux
centos, centos docker Linux a57f3fc2c1a6 4.15.0-46-generic #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
EDIT:
Also, file myapplication
TTCHAIN: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/l, for GNU/Linux 2.6.24, BuildID[sha1]=287b501c8206893f7819f215ee0033586212b143, with debug_info, not stripped
My question is: how can I run the same native CentOS application, pulled via Docker, on a Raspberry Pi Model 3?
Your application has been built for x86-64. Intel x86-64 binaries CAN NOT run on an ARM processor.
You have two paths to pursue:
If you don't have source code for the application, you will need an x86-64 emulator that will run on your Raspberry Pi. Considering the Pi's lesser capabilities and Intel's proclivity to sue anyone who creates an emulator for their processors, I doubt you'll find one that's publicly available.
If you have the source code for the application, you need to rebuild it as a Raspberry Pi executable. You seem to know that it was written in C++. GCC and other toolchains are available for the Raspberry Pi (most likely a "yum install gcc" on your Pi will grab the compiler and tools for you). Building the application should be extremely similar to building it for x86_64.
You could find a cross-compiler that would let you build for the Pi from your x86_64 box, but that can get complicated.
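If you go the rebuild route, a minimal sketch of what that looks like on the Pi (the package list and file names are assumptions; adapt them to your project and build system):
yum install gcc-c++ make        # inside the CentOS (AltArch) container on the Pi
g++ -O2 -o myapplication main.cpp
file myapplication              # should now report an ARM executable instead of x86-64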
Could be that you are trying to run a 64-bit binary on a 32-bit processor, would need more information to know for sure though.
You can check by using the file command in the shell. You may have to re-compile on the original system with the -m32 flag to gcc.
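For example (myapp is a placeholder for your binary and source):
file ./myapp                    # prints the target, e.g. "ELF 64-bit ... x86-64" or "ELF 32-bit ... Intel 80386"
gcc -m32 -o myapp main.c        # rebuild as 32-bit, if a 32-/64-bit mismatch is the issue
Note that -m32 only helps for a word-size mismatch on the same architecture; it will not make an x86 binary run on ARM.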
Please do a "uname -a" on both devices and post the results.
Most likely the processor or library type doesn't match.
I presume (hope) you're not trying to run an x86-compiled app on a Pi. Although Docker is available for both processor types, Docker will not run x86 binaries on Pi or vice versa.
Actually, AltArch currently means one of the following architectures... ppc64, ppc64le, i386, armhfp (arm v7 32-bit), aarch64 (arm v8 64-bit). Core suggests the mainstream x86 and x86_64 builds of CentOS.
Yep, I bet that's what it is... you can't just transfer an x86 binary to Raspbian and expect it to work. The application must be rebuilt for the platform.

How to compile a program on bleeding edge linux to run on old linux

I have Arch Linux installed, dual-booted with Linux Mint 18.1. In my college we have Lubuntu 16.04 and Ubuntu 14.04 installed. I have also enabled the testing repos in Arch Linux so I get newer packages. Because of this, when I compile a C++ program on Arch it won't run on Linux Mint, since the shared library versions on Mint don't match.
For example, Arch has libMango.so.64 while Mint has libMango.so.60. How can I overcome this?
So I am asking: how can I compile a C/C++ program with a newer compiler and newer shared libraries so that it still runs fine with old shared libraries? Just like I can compile 32-bit programs on a 64-bit machine with the -m32 flag, is there a flag for old shared libraries too?
I am using gcc 8.1.
how can I compile a C/C++ program with a newer compiler and newer shared libraries so that it still runs fine with old shared libraries?
You cannot do that reliably if the API (or even the ABI, including the size and alignment of internal structures, the offsets of fields, and the vtable organization) of those libraries has changed incompatibly.
In general, you are better off recompiling your source code on the other computer (and your college might forbid that, if the source is unrelated to your education). BTW, if your source code sits in some git repository (e.g. github if it is open source), transferring it between computers is very easy.
Some very few libraries make genuine (and documented) efforts to stay compatible with other versions of themselves in binary form (i.e. at the ABI level), but this is not usual. The Unix and free software tradition is to care about source-level compatibility. And the POSIX standard cares only about source compatibility.
You might consider using some chroot-ed environment (see chroot(2) and path_resolution(7) & credentials(7)) to have the essential parts of your older distribution on your newer one. Details are distribution specific (on Debian & Ubuntu, see also schroot and debootstrap). You could also consider running a full distribution in some VM, or using containers à la Docker.
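As a rough sketch of the debootstrap route on a Debian or Ubuntu host (the release name xenial and the target path are only examples):
sudo apt-get install debootstrap
sudo debootstrap xenial /srv/xenial-chroot http://archive.ubuntu.com/ubuntu/
sudo chroot /srv/xenial-chroot /bin/bash
# inside the chroot: install g++, copy your sources in, and build against the older libraries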
And you might try to link (locally) your executable statically, so compile and link with g++ -static
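For example (file names are placeholders):
g++ -static -O2 -o myprog main.cpp
file myprog                     # should report "statically linked"
Be aware that fully static linking against glibc has caveats (e.g. NSS and dlopen), so test the result on the target machine.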

Unable to build opencv ios framework for architecture arm64 and x86_64

I am following the OpenCV installation document, Installation in iOS, to compile an iOS framework. However, if I build the framework without changing platform/ios/build_framework.py, I get the following errors:
build settings from command line:
ARCHS = x86_64
IPHONEOS_DEPLOYMENT_TARGET = 6.0
SDKROOT = iphonesimulator6.1
Build Preparation
Build task concurrency set to 8 via user default IDEBuildOperationMaxNumberOfConcurrentCompileTasks
=== BUILD AGGREGATE TARGET ZERO_CHECK OF PROJECT OpenCV WITH CONFIGURATION Release ===
Check dependencies
=== BUILD NATIVE TARGET zlib OF PROJECT OpenCV WITH CONFIGURATION Release ===
=== BUILD NATIVE TARGET libjpeg OF PROJECT OpenCV WITH CONFIGURATION Release ===
** BUILD FAILED **
Build settings from command line:
ARCHS = x86_64
IPHONEOS_DEPLOYMENT_TARGET = 6.0
SDKROOT = iphonesimulator6.1
=== BUILD NATIVE TARGET zlib OF PROJECT OpenCV WITH CONFIGURATION Release ===
Check dependencies
No architectures to compile for (ONLY_ACTIVE_ARCH=YES, active arch=x86_64, VALID_ARCHS=i386).
** BUILD FAILED **
Then, after many tries, I found that if I only compile for the architectures armv7, armv7s and i386, by changing the following lines in platform/ios/build_framework.py, the framework can be built successfully.
targets = ["iPhoneOS", "iPhoneOS", "iPhoneSimulator"] #"iPhoneOS", "iPhoneSimulator"
archs = ["armv7", "armv7s", "i386"]#"arm64", , "x86_64"
for i in range(len(targets)):
build_opencv(srcroot, os.path.join(dstroot, "build"), targets[i], archs[i])
Any ideas on how I can compile for the arm64 and x86_64 architecture? Thanks.
Take out i386 and compile for armv7, armv7s, and arm64 - these are the architecture options in Xcode for 64-bit devices such as the iPhone 5s.
Do not mix Intel and ARM specifications; they will not and cannot mix. The architecture option tells the compiler how the object code will be created. Because Intel is a CISC processor and ARM is RISC, their object code is internally very different.
A MOVE instruction, for example, may generate an x'80' opcode on Intel but an x'60' on ARM. As far as I know, i386 is the old Intel 32-bit architecture. Xcode may be doing some magic to produce universal object code that can run on both Intel and RISC; if it is doing so, it will not be efficient, and it will always be better to compile for specific architectures.
The 32-bit, 64-bit, 128-bit, ... designations are addressing modes - they are also the size of the processor registers and determine how much memory (RAM) the CPU can access. In general, aside from the ability to use huge amounts of RAM, higher-bit processors will usually reduce the number of instructions needed for a particular task.
Because backward compatibility is usually built in, an app compiled for armv6 will typically run on armv7 or even arm64 in compatibility mode, but it will not be able to harness the advantages of a true 64-bit application.
The target is a higher-level specification which determines which APIs can be used; for example, an iPad has UIPopViewController, but this is not supported on an iPhone or an iPod touch.
One last thing - only the iPhone 5s works with arm64; if you set another target, it will probably flag arm64 as not an option.
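If it helps, you can check which slices a built framework actually contains (the framework path below is the usual OpenCV layout; adjust as needed):
lipo -info opencv2.framework/opencv2
# e.g. "Architectures in the fat file: ... are: armv7 armv7s i386"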
So if you are compiling for x86_64 architecture, that is for OS X (laptop or mac tower), NOT iOS (iPad, iPhone).
What are you using? Not Xcode I presume?
You don't need to compile to x86_64 unless you want this to run on a 64-bit mac laptop or desktop... & it doesn't seem like you are trying to do that by your question?
Also, if you have it for i386 it is probably compatible with most OS X machines -- it will just run in 32-bit not 64-bit.
As far as arm64 goes, that is also probably 64-bit. You might not have the libraries for 64-bit? Is your machine running 64-bit? Check the opencv libraries (dynamic or static?) and see whether they are 32- or 64-bit. If they are 32-bit, then that is the problem. You'll need to build them as 64-bit libraries or find them as such. I can't give you any more info without knowing what platform you are on.
I recently learned that OpenCV for iOS has been added to CocoaPods. You can see the podspec here and the Github repository here. With CocoaPods, you create a file named Podfile in the root of your project and paste in this line:
pod 'OpenCV'
Save the file and run pod install from the terminal (assuming that CocoaPods is installed of course).
I have this running on my iPhone 5S, which is 64 bit as well.

Compiling boost as i386 on AMD64 (Ubuntu 11.10)

I'm currently programming an extension to a program, which only supports
i386 (and I am running amd64 Ubuntu 11.10). Whenever I compile my extension source
I need to use the -m32 flag to force 32-bit architecture (otherwise the program will not be able to load my extension). Sooner or later it will be impossible to avoid Boost, thanks to its huge and stable library, which leads to my problem.
I want to use Boost.Filesystem, which uses OS-specific function calls, which in turn means it requires a library file rather than only a header implementation. The problem is, I can't / don't know how to set up Boost.Filesystem (the i386 version) on my amd64 machine. If I download a prebuilt (.deb) package for i386 and install it using --force-architecture, it still fails, complaining about dependencies.
So basically; how do I setup boost with 32bit (i386) architecture on my (amd64) system?
It seems as if I did it right all along but I was too dumb to realize how to properly link libraries with the GCC linker, coming from a Windows environment. You can easily compile boost libraries by using the -m32 flag and by setting up bjam properly. See the first answer in this question for details: How do I force a 32 bit build of boost with gcc?
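For completeness, a rough sketch of that approach from the Boost source tree (the library selection and flags here are just an example):
./bootstrap.sh
./bjam address-model=32 cxxflags=-m32 linkflags=-m32 --with-filesystem --with-system stage
The 32-bit libraries end up in stage/lib; link your extension against them together with -m32.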

Are Static Libraries Machine-Independent?

Well, I am developing a program in C++ on Ubuntu 10.04.1 LTS (Intel Core2 Quad), but the releases run on Debian 5.0.5 (Intel(R) Xeon(R) CPU). Some libraries, such as crypto++ or mysqlclient, have different versions on the two OSes. So I decided to compile the binary statically, with all the libraries statically linked, on the Ubuntu machine and then upload the completed binary to the Debian machine.
I am not sure if this method is correct, because the static libs may be architecture-dependent and may conflict on the Debian machine. If I want to use the newer library versions from Ubuntu on Debian, should I compile them on the Debian machine?
Thanks in advance
They're architecture-dependent. Usually, though, libraries get compiled for a common architecture on x86 machines, such as i686, which will run fine on both an Intel Xeon and an Intel Core2 Quad (but not on, e.g., an old Intel Pentium processor).
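If in doubt, you can inspect what a given static library was actually built for (the library name here is just an example):
objdump -a libcryptopp.a | head    # each archive member shows its object format, e.g. elf64-x86-64 or elf32-i386
file ./myprogram                   # likewise for the final executable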
No, it is not machine-independent. The only difference is that all libraries are bundled with the executable, so there is no risk of the program failing at load time with a "library not found" message. In summary, it will work across Linux distributions (of the same architecture), but it will not work on Windows, for example.