I am cross-compiling a C/C++ application to run on a Raspberry Pi 4, using the arm-rpi-4.9.3-linux-gnueabihf compiler from the raspberrypi/tools GitHub repository, on an x64 Debian Linux system running in a VM. I am having some issues with this application, so I built Valgrind from source on the Raspberry Pi with GCC 8.3.0-++rpi1.
The compiler binaries I am using can be downloaded here: https://github.com/raspberrypi/tools/tree/master/arm-bcm2708/arm-rpi-4.9.3-linux-gnueabihf
If I run Valgrind on the RPi I get a number of errors, many of which indicate "Invalid read of size 8". My understanding is that this would be typical on a 64-bit architecture (8 bytes = 64 bits) but may be misleading on this 32-bit system, unless the application is actually accessing a 64-bit data structure. Also, running Valgrind on the same application, built for and running on an x64 system, does not identify errors at these same locations, which makes me think that either the compilation toolchain is introducing an ABI issue or Valgrind is giving misleading error indications, or both.
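To illustrate what I mean by a 64-bit access on a 32-bit system, a contrived example like the following (not my actual code, just my understanding of how Memcheck reports access width) would also be flagged as an 8-byte read, because double and long long are 8 bytes even under the 32-bit ABI:

    // Contrived example (not my actual code): an out-of-bounds read of an
    // 8-byte type such as double or long long is an 8-byte access even on
    // 32-bit ARM, so Memcheck reports it as "Invalid read of size 8".
    #include <cstdio>

    int main()
    {
        double *buf = new double[4];    // 4 doubles = 32 bytes
        double x = buf[4];              // one element past the end: an 8-byte read
        std::printf("%f\n", x);
        delete[] buf;
        return 0;
    }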
Valgrind also indicates on startup:
--3451-- Arch and hwcaps: ARM, LittleEndian, ARMv8-neon-vfp
--3451-- Page sizes: currently 4096, max supported 4096
However, $ uname -m reports armv7l, which I understand to be 32-bit (and definitely not ARMv8).
Can anyone provide any guidance on what might be going wrong here?
Thanks!
Related
I have little understanding of the different machine architectures (32-bit, 64-bit, ...).
Because of that, I often have a hard time when using C++ libraries on different machines, getting stuck with the annoying "undefined symbols for architecture ..." error.
I would be really happy if someone could explain why I get such confusing answers when I use the following commands on the same machine (a two-year-old Mac running Mountain Lion).
The man page for uname says:
-m print the machine hardware name.
-p print the machine processor architecture name.
At first glance, I would say that -p is more relevant. So I run uname -p and I get:
i386 (which means 32-bit, if I am not wrong).
However, for a library that I compiled on the same machine, running lipo -info lib_test.a returns:
input file lib_test.a is not a fat file
Non-fat file: lib_test.a is architecture: x86_64 (which means 64-bit, if I am not wrong)
The latter, however, is more consistent with the output of uname -m, which is:
x86_64
It's a Mac OS X oddity. All hardware that OS X for Intel has shipped on has been 64-bit, and so has the operating system; however, it can be forced to run in 32-bit-only mode. It is capable of executing both 64-bit and 32-bit binaries, unless run in 32-bit mode.
Most binaries (.dylib files and executables) delivered on this platform are "fat" binaries, meaning they contain both a 32-bit Intel binary and a 64-bit Intel binary, and sometimes binaries for other architectures (PowerPC), combined into one file. The system automatically loads the most suitable part of the binary.
Because the underlying compiler usually needs to run with different flags to generate binaries for different architectures, and even the platform #defines differ, making the compiler see different source code after preprocessing, the binary needs to be compiled once per architecture and the results combined with the lipo utility. Xcode can automate this process on your behalf.
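To illustrate the point about platform #defines, here is a small sketch of my own (not from any SDK); building it once with -arch i386 and once with -arch x86_64, then merging the results with lipo, gives a fat binary whose two slices print different things:

    // Sketch: the same source preprocesses differently per architecture, which
    // is why each slice of a fat binary is compiled separately and then merged
    // with lipo. __i386__ and __x86_64__ are compiler-provided macros.
    #include <cstdio>

    int main()
    {
    #if defined(__x86_64__)
        std::puts("64-bit Intel slice (LP64: long and pointers are 8 bytes)");
    #elif defined(__i386__)
        std::puts("32-bit Intel slice (ILP32: long and pointers are 4 bytes)");
    #else
        std::puts("some other architecture");
    #endif
        std::printf("sizeof(long) = %zu, sizeof(void *) = %zu\n",
                    sizeof(long), sizeof(void *));
        return 0;
    }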
While the system can run both 32-bit and 64-bit binaries, their execution models are different and they cannot be combined in the same process address space. So if you have one library that is 64-bit only and another that is 32-bit only, you cannot use them together.
I know there are already a few answers to this question but I can't seem to understand why I keep getting this error.
So here's the explanation:
I have a 64-bit machine on which I've installed Windows 7 x64. I compile my code under GCC (Code::Blocks) on Windows without any problem at all. Then I decided that my application has to be portable, so I tried to compile it under GCC on Linux. On my other, 32-bit machine the code compiles without any problem. On my 64-bit machine, however, I decided to install Ubuntu via Wubi, and of course I installed the x64 version of Wubi as well.
I installed Ubuntu successfully under Wubi and installed all the necessary tools, but when I try to compile my project, I get, on the very first line, the error 'cpu you selected does not support x86-64 instruction set'. This makes no sense to me, given that I've installed Wubi x64, on Windows 7 x64, on a 64-bit machine. So why am I getting an error saying that my CPU does not support the x86-64 instruction set?
Could it be just because I installed Ubuntu via Wubi instead of installing it normally on its own partition? I really can't figure this out.
Thank you very much
EDIT: OK, somewhere in Code::Blocks I found an option that was set to the "Pentium M" architecture. I've unchecked it and now I get several errors such as:
error: cast from void* to int loses precision.
Why should this happen ONLY on Linux and not on Windows?
Based on this comment:
EDIT: OK, somewhere in Code::Blocks I found an option that was set to the "Pentium M" architecture. I've unchecked it and now I get several errors such as:
This was the reason for the compilation problem: "Pentium M" is a 32-bit architecture, and gcc under Code::Blocks produces 32-bit code on Windows by default.
The error:
error: cast from void* to int loses precision.
is caused because the data model for 64-bit Linux x86-64 is LP64, where sizeof(long) == sizeof(void *) == 64 bits and sizeof(int) == 32 bits; you're trying to shove a pointer (void *, 64 bits) into an int (32 bits), which loses pointer information.
With a compilation error like that, it's most likely that the code is not 64-bit clean.
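A minimal sketch of the failure mode and the usual fix (illustrative code, not the asker's): hold pointer values in intptr_t/uintptr_t from <stdint.h> rather than int, since those types are defined to be wide enough for a pointer on both 32-bit and 64-bit targets.

    #include <stdint.h>

    void example(void *p)
    {
        // int bad = (int)p;            // g++ on x86-64: cast from void* to int loses precision
        intptr_t good = (intptr_t)p;    // wide enough on both 32-bit and 64-bit targets
        (void)good;                     // silence unused-variable warnings in this sketch
    }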
Why should this happen ONLY on Linux and not on Windows?
Linux on x64 defaults to producing 64-bit applications; you would need to add -m32 to the build options to make it produce 32-bit code (there is probably a Code::Blocks target option for this).
In order to get around the 3 GB memory limit on (32-bit) Ubuntu (sometimes we do need more than 3 GB), I compiled my app in a 64-bit environment so it can use more memory.
But my 64-bit app is much slower than the 32-bit version.
The 32-bit version was built on a 32-bit machine;
the 64-bit version was built on a 64-bit machine;
both versions run on the 64-bit machine in our load test.
I googled, and some folks said that unnecessary use of the long type can make the 64-bit build slower than the 32-bit one, citing man g++:
-m64
Generate code for a 32-bit or 64-bit environment. The 32-bit environment
sets int, long and pointer to 32 bits and generates code that runs on any
i386 system. The 64-bit environment sets int to 32 bits and long and
pointer to 64 bits and generates code for AMD's x86-64 architecture. For
darwin only the -m64 option turns off the -fno-pic and -mdynamic-no-pic
options.
So I changed all my longs to ints, but got the same result.
Please advise.
Peter
Edit:
About memory: both the 32-bit and 64-bit versions use a similar amount, about 1.5-2.5 GB, and my machine has 9 GB of physical memory.
I profiled using OProfile, and for most of the functions the 64-bit version collects more profiling samples than the 32-bit version.
I cannot think of any other bottlenecks; please advise.
My app is a server, and the load test was done with 100 client connections. The server does a lot of computation processing the audio data from the clients.
Profile your app. That will tell you where the slow code is.
As for the question "why", no one can know the reason without details. You need to analyze the profiling results, and if anything in them looks problematic, post that as a question here.
If your app does not need more than 4 GB of RAM (1.5-2.5 GB in your case), you should try x32. It's a new ABI that allows 32-bit pointers in a 64-bit environment.
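If you want to confirm which data model a given build actually uses, a tiny check like this (my sketch, not your code) prints the sizes that differ between the models: an -m32 or x32 build prints 4/4/4 for int/long/pointer, while a standard -m64 LP64 build prints 4/8/8.

    #include <cstdio>

    int main()
    {
        std::printf("int: %zu  long: %zu  pointer: %zu\n",
                    sizeof(int), sizeof(long), sizeof(void *));
        return 0;
    }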
I have code compiled under Solaris 8 and 10, and now a vendor wants to use my binary/executable under Linux. Could there be compatibility issues?
I am pretty sure I would need to compile/link under Linux for it to work properly, but I just wanted to know if someone can give me a breakdown of why it would not work on Linux, even though the executable has everything it needs and there is nothing dynamic about it, so it should not require anything further to run. Unless we're talking about runtime libraries which, if mismatched, might cause the executable to fail.
You have to recompile your application on Linux.
Linux has a completely different run-time environment from Solaris. Even if you have compiled your application statically, the interface/system calls to the kernel differ between the two operating systems. The processor architecture might be different too, e.g. SPARC vs. x86.
Both Solaris and Linux support most of the standard C and POSIX APIs, so if you've not used any APIs exclusive to Solaris, recompiling on Linux is often not that big a deal. You surely should test everything thoroughly, though, and be aware of endianness and potential 64-bit vs. 32-bit issues.
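On the endianness point: SPARC is big-endian and x86 is little-endian, so any raw binary data written on Solaris/SPARC has to be byte-swapped when read back on Linux/x86. A trivial sketch (illustrative only) to check which kind of host you are running on:

    #include <cstdio>
    #include <stdint.h>

    int main()
    {
        uint32_t probe = 1;
        // On a little-endian host the least significant byte comes first in memory.
        bool little = *reinterpret_cast<unsigned char *>(&probe) == 1;
        std::printf("this host is %s-endian\n", little ? "little" : "big");
        return 0;
    }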
Other things that I think will not allow your Solaris binary to run on Linux out of the box are:
the hardware architecture:
1.1 Solaris usually runs on Sun's own SPARC machines, although Solaris 8-10 can run on Intel architectures as well;
1.2 Linux usually runs on Intel machines (although it can run on SPARC machines).
the compilers:
2.1 Solaris 8 uses Sun's own compilers (Sun WorkShop 6+) and standard library implementation, so you'll have different library names, ABI incompatibilities and so on. Solaris 10 actually comes with gcc, but you're probably not using it (I gather you're building on Solaris 8 only);
2.2 Linux uses g++; the same comments about library names and ABI incompatibilities apply.
I am trying to build gdb for the ARMv6 architecture. I will be compiling this package on a Fedora Linux Intel x86 box. I have read about the process of installing gdb, roughly:
Download the source package
run configure --host
make
But I got lost in the process because I could not work out what the host and target should be for the configure script.
I basically need to be able to debug programs running on an ARMv6 board which runs Linux kernel 2.6.21.5-cfs-v19. The gdb executable which I intend to obtain after compiling the source also needs to be able to run on the above-mentioned configuration.
Now to get a working gdb executable for this configuration what steps should I follow?
We (www.rockbox.org) use the ARM target for a whole batch of our currently working DAPs. The target we specify is usually arm-elf, rather than arm-linux.
Be careful with arm-linux vs. arm-elf, e.g.
http://sources.redhat.com/ml/crossgcc/2005-11/msg00028.html
arm-elf is a standalone toolchain which does not require an underlying OS, so you can use it to generate programs using newlib.
arm-linux is a toolchain targeted at generating code for Linux running on an ARM machine.
We sometimes say arm-elf is for "bare metal".
Unfortunately there is another "bare metal" target, arm-eabi, and no one knows exactly what the difference between the two is.
BTW,
The gdb executable which I intend to obtain after compiling the source also needs to be able to run on the above-mentioned configuration.
Really? Running GDB on an ARM board may be quite slow.
I recommend either of the following:
Remote debugging of the ARM board from an x86 PC
Saving a memory core on the ARM board, transferring it to an x86 PC and then inspecting it there
Cf.
http://elinux.org/GDB
Cross-platform, multithreaded debugging (x86 to ARM) with gdb and gdbserver not recognizing threads
http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/remote-debugging
--host/--target usually name the target toolchain you will be using (mostly arm-linux). Since you want gdb itself to run on the ARM board and to debug ARM programs, both --host and --target would be that ARM triple; a cross-gdb that runs on your x86 PC and debugs the board remotely would instead use your x86 triple as --host and the ARM triple as --target.