We are developing multi-platform software in C++ for "normal" i386 Linux, but also for some obscure MIPS hardware, for which we cross-compile the product using the ELDK MIPS cross-compiler (an older version). The software is copied automatically to the real hardware via a script placed on a USB stick (the hardware detects the insertion of the USB stick, searches for the script, copies the files, and reboots).
Compilation happens on the same machine (Linux i386) for both MIPS and i386. We have a complete set of unit tests, and they are executed automatically upon completion of the i386 build (results are interpreted via Atlassian Bamboo's JUnit parser, but that is not relevant here)... However, we have a problem verifying the validity of the MIPS tests. There are some minor differences in the code when compiling for MIPS, so it is important to know that those paths work too.
And the question: how do we set up a MIPS unit test environment that can take the compiled unit tests and run them? (Any solution is welcome, even unorthodox ones.)
Related
Suppose we take a compiled language, for example C++, and an example framework, say Qt. Qt has its source code publicly available and also offers pre-built binaries so users can simply use its API. My question: when Qt's developers compiled their code, it was compiled for their specific hardware, operating system, and so on. I understand that much software requires recompilation for different operating systems (including 32- vs 64-bit) and offers multiple downloads accordingly, but why doesn't this fragmentation go even further? Why isn't compiled code so hardware-specific that redistributing compiled executables becomes extremely frustrating to produce?
Code gets compiled for a target base CPU (e.g. 32-bit x86, x86_64, or ARM), but not necessarily for a specific processor like the Core i9-10900K. By default, the compiler typically generates code that runs on the widest range of processors, and Intel and AMD guarantee forward compatibility for running that code on newer processors. Compilers offer switches for optimizing for newer processors with newer instruction sets, but you rarely use them, since not all your customers have that configuration. Alternatively, you can build your code twice: once for older processors, and an optimized build for newer ones.
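For example, with GCC the baseline and optimized builds might differ only in the -march switch (the flag values below are illustrative; pick the oldest CPU family your customers actually own as the baseline):

    # Baseline build: generic x86-64 code, runs on any 64-bit Intel/AMD CPU
    g++ -O2 -march=x86-64 -o myapp_generic main.cpp

    # Optimized build: assumes AVX2-capable CPUs (Haswell, 2013, and newer)
    g++ -O2 -march=haswell -o myapp_avx2 main.cpp

Running the haswell build on an older CPU would die with an illegal-instruction fault, which is exactly why the conservative default exists.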
There's also a concept called cross-compiling. That's where the compiler generates code for a completely different processor than the one it runs on, such as when you build your iOS app on a Mac: the compiler itself is an x86_64 program, but it generates ARM instructions to run on the iPhone.
Code gets compiled and linked against a certain set of OS APIs and external runtime libraries (including the C/C++ runtime). If you want your code to run on Windows 7 or Mac OS X Mavericks, you can't statically link to an API that only exists on Windows 10 or macOS Big Sur. The code would compile, but it wouldn't run on the older operating systems. Instead, you use a workaround or conditionally load the API if it is available. Microsoft and Apple provide forward compatibility by keeping those same runtime library APIs available in later OS releases.
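On Windows, that workaround typically means resolving the newer API at runtime instead of importing it statically. A minimal sketch (SetThreadDescription is used here only as an example of an API that exists from Windows 10 1607 onward):

    #include <windows.h>

    // Signature of the newer API we want to use if available.
    typedef HRESULT (WINAPI *SetThreadDescriptionFn)(HANDLE, PCWSTR);

    void NameCurrentThread(PCWSTR name)
    {
        // Look the function up at runtime; there is no static import,
        // so the binary still loads on Windows 7 where the export is missing.
        HMODULE kernel = GetModuleHandleW(L"kernel32.dll");
        if (!kernel) return;
        auto fn = reinterpret_cast<SetThreadDescriptionFn>(
            GetProcAddress(kernel, "SetThreadDescription"));
        if (fn)
            fn(GetCurrentThread(), name);   // Windows 10 1607+ only
        // else: feature silently unavailable on older Windows
    }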
Additionally, Windows supports running 32-bit processes on 64-bit chips and OSes. Macs can even emulate x86_64 on their new ARM-based devices coming out later this year. But I digress.
As for Qt, they actually offer several pre-built configurations for their reference binary downloads, because, at least on Windows, the MSVCRT (the C-runtime APIs from Visual Studio) is closely tied to specific compiler versions of Visual Studio. So they offer various downloads to match the configuration you want to build your code for (32-bit, 64-bit, VS2017, VS2019, etc...). When you put together a complete application with 3rd-party dependencies, all of these build, linkage, and CPU/OS configurations have to be accounted for.
I'm planning to use CppUTest as the testing framework in my project, which I need to cross-compile since it will be used on an ARM platform. The compiler I'm using for ARM development is arm-gcc, which is built with pthreads disabled. Because of this, I need to build CppUTest without pthreads. Currently I am following the autotools approach for building CppUTest. Any help would be really appreciated.
Are you trying to compile, download, and run CppUTest on a target device?
Typically, CppUTest isn't compiled and downloaded to your target device but instead it's built as a Windows (or Linux) console application. This means that CppUTest gets compiled using a native compiler (Visual Studio or GCC) but your firmware will also need to compile using that same native compiler (Visual Studio or GCC).
Unit testing is meant to verify that your code's logic is correct, which means it doesn't matter whether the code is executing on an ARM processor or on an Intel processor. For the record, when I first started getting into unit testing, this blew my mind.
But what about hardware registers and stuff like that? Well, there are ways around testing an embedded device's hardware in a Windows environment. At the end of the day, you'll still need to download your firmware onto the embedded device and verify that your port mappings are correct (i.e. that you're properly setting up a GPIO port to turn on an LED, for instance).
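One common pattern (a sketch of my own, not from this answer; the interface and register address are hypothetical) is to put an interface in front of the register access so the logic compiles and tests on the PC, leaving only the thin hardware layer for on-target verification:

    // Production logic depends on this interface, never on raw registers.
    struct IGpio {
        virtual void setPin(int pin, bool high) = 0;
        virtual ~IGpio() {}
    };

    // Target-only implementation; the register address is a placeholder,
    // taken from your MCU's datasheet in a real build.
    struct HwGpio : IGpio {
        void setPin(int pin, bool high) override {
            volatile unsigned* reg = reinterpret_cast<volatile unsigned*>(0x40020014);
            if (high) *reg |= (1u << pin); else *reg &= ~(1u << pin);
        }
    };

    // Host/unit-test implementation: just records what the logic requested.
    struct FakeGpio : IGpio {
        unsigned state = 0;
        void setPin(int pin, bool high) override {
            if (high) state |= (1u << pin); else state &= ~(1u << pin);
        }
    };

    // Hardware-agnostic logic: testable on x86 by passing in a FakeGpio.
    void turnOnStatusLed(IGpio& gpio) { gpio.setPin(5, true); }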
There are several advantages to running your unit tests on your development PC instead of on target. Your development PC is much faster, has more resources (storage and RAM), and it also easily allows running them in a continuous integration environment (like Jenkins or TeamCity) whenever you check code in to your version control system.
I highly recommend the book Test-Driven Development for Embedded C by James W. Grenning for more information. It will answer all your questions on unit testing on an embedded device. CppUTest can be used for C as well as C++ projects.
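That said, if you really do need to cross-build CppUTest itself, the generic autotools pattern below usually applies (a sketch only: the triplet must match your arm-gcc, and I'm not claiming CppUTest's configure script has a specific pthread switch; check ./configure --help for what it actually detects and offers):

    autoreconf -i                        # or the project's own autogen script, if provided
    ./configure --host=arm-none-eabi \
                CC=arm-none-eabi-gcc \
                CXX=arm-none-eabi-g++
    make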
I am required to write a C++ application to run on an embedded Linux setup (DMP Vortex86DX processor). The vendor provides a minimal Linux installation image that can be installed on the board and contains the appropriate hardware drivers. My question is motivated by the answer to my previous question about writing Linux software on one kernel to run on a different kernel. I don't really know where to start when it comes to writing the software with regard to ensuring compatibility.
My instinctive approach would be to install the same version of g++ on the embedded device and on my desktop development machine, write the application on the dev machine, copy it to the board, and compile it there. This seems like madness, though, and I find it hard to believe that this is how embedded software is developed. With regard to the answer to my previous question: is there a way I can simply build on my desktop but use the version of glibc that exists on the embedded device, and if so, how can I enforce linkage to a specific version? Or is it possible to build everything statically so that the application doesn't link to anything dynamically (I doubt this is possible)?
I am a total novice at embedded development and foresee months of frustration unless I can get hold of some good advice or resources. Any pointers or suggestions of where to start will be very gratefully received, no matter how simple or trivial they seem - I really am starting at the very bottom with regard to embedded stuff.
OK, given that the Vortex86SX/DX/MX claims to be x86-compatible, a small set of compiler switches should enable you to compile code for your target machine: -m32 to ensure 32-bit code, and no -march switch targeting a specific newer CPU.
Then you'll need to link your code. As long as you don't use anything fancy, just simple, established glibc functions, I'd expect the ABI to be the same on your development machine and on the embedded system. In other words, you compile against your host libraries, copy the binary to the embedded system, and it should simply run using the libraries available there.
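A minimal sketch of that workflow (file names and the board's address are placeholders):

    g++ -m32 -O2 -o myapp main.cpp       # 32-bit code, no CPU-specific -march
    file myapp                           # expect: "ELF 32-bit LSB executable, Intel 80386 ..."
    scp myapp root@192.168.0.50:/usr/local/bin/

    # then, on the board:
    ldd /usr/local/bin/myapp             # every listed library must resolve

If ldd reports missing or mismatched libraries, g++ -static is the blunt fallback, with the usual caveat that glibc's name-service functions don't fully cooperate with static linking.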
If X-Linux uses some other libc, like uClibc or similar, then you'd need a cross-compiler on your host. I have little experience with Ubuntu in that regard, but I know that the sys-devel/crossdev package for Gentoo Linux makes generating cross-compilers very easy, both for different architectures (not needed in your case) and for different libraries (e.g. uClibc).
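For reference, the crossdev invocation is essentially a one-liner (the target tuple below is illustrative; pick one matching the board's CPU and libc):

    emerge sys-devel/crossdev                # on a Gentoo host
    crossdev --target i586-pc-linux-uclibc   # builds binutils, gcc, and the libc
    i586-pc-linux-uclibc-g++ -o myapp main.cpp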
I'd say simply give copying the binaries a try, and report back if you encounter any problems there.
What is the command for selecting the processor (MIPS R2000) in g++? Thanks.
You'll probably need a cross-compilation environment for your target platform. You might find an existing one, or you may need to build your own cross-compiler using the gcc toolchain. There's no single way to do this - it will depend on the specifics of the target architecture. Specifically: is there already an operating system (e.g. Linux, BSD, etc.) running on your target system? What kind of userland does it use? Your build chain will need the relevant C and C++ libraries, as well as any other libraries you need to build and run your software. Or are you coding straight against the metal? In that case, you'll want to find existing bootstrap code for getting the system into a sensible state for running your code - rolling your own will not be easy.
Generally, you're probably best off finding an existing developer community centred around the platform in question and asking for advice there. They may have step-by-step instructions for getting started.
Note that the CPU alone is only part of the picture - for example, the ARM architecture is very popular, but compiling code for Android devices (Linux kernel with Android userland), iOS devices (xnu kernel with BSD- and OS X-derived iOS userland), a Nintendo DS, or a PlayStation Vita (probably no multitasking OS at all) will be extremely different, even though they all use ARM chips, in many cases even the same instruction-set generation.
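To answer the literal question: once you do have a MIPS-targeted g++, the R2000 implements the original MIPS I instruction set, so the selection looks roughly like this (the cross-compiler prefix is a placeholder for whatever your toolchain installs):

    mips-linux-gnu-g++ -march=mips1 -o test test.cpp   # MIPS I = R2000/R3000 generation

A native x86 g++ will reject MIPS -march values; the flag only exists in a compiler built for MIPS targets.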
I am trying to build gdb for the armv6 architecture. I will be compiling this package on a Fedora Linux Intel x86 box. I have read about the process of installing gdb:
Download the source package
run configure --host
make
But I got lost in the process because I could not work out what the host and target values needed by the configure script should be.
I basically need to be able to debug programs running on an armv6 board which runs Linux kernel 2.6.21.5-cfs-v19. The gdb executable which I intend to obtain after compiling the source also needs to be able to run on the above-mentioned configuration.
Now to get a working gdb executable for this configuration what steps should I follow?
We (www.rockbox.org) use the ARM target for a whole batch of our currently working DAPs (digital audio players). The target we specify is usually arm-elf, rather than arm-linux.
Be careful with arm-linux vs. arm-elf, e.g.
http://sources.redhat.com/ml/crossgcc/2005-11/msg00028.html
arm-elf is a standalone toolchain which does not require an underlying OS, so you can use it to generate programs using newlib.
arm-linux is a toolchain targeted at generating code for Linux running on an ARM machine.
We sometimes say arm-elf is for "bare metal".
Unfortunately there's another "bare metal" target, arm-eabi, and no one knows exactly what the difference between the two is.
BTW,
The gdb executable which I intend to obtain after compilation of the source also needs to be able to run on the above-mentioned configuration.
Really? Running GDB on an ARM board may be quite slow.
I recommend either of the following:
Remote debugging of the ARM board from an x86 PC
Capturing a core dump on the ARM board, transferring it to an x86 PC, and then inspecting it there
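The remote option in practice looks roughly like this (the IP address, port, program name, and the cross-gdb binary name are placeholders; your toolchain decides the actual gdb prefix):

    # on the ARM board (gdbserver is small enough to run there comfortably):
    gdbserver :2345 ./myapp

    # on the x86 PC, with a cross-built gdb:
    arm-linux-gdb ./myapp
    (gdb) target remote 192.168.0.50:2345
    (gdb) continue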
Cf.
http://elinux.org/GDB
Cross-platform, multithreaded debugging (x86 to ARM) with gdb and gdbserver not recognizing threads
http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/remote-debugging
The target/host is usually the target toolchain you will be using (mostly arm-linux).
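Concretely, the two useful builds differ only in the triplets passed to configure (the triplets below are typical examples; use whatever prefix your toolchain actually provides):

    # cross gdb: runs on the x86 PC, debugs ARM binaries (pairs with gdbserver)
    ./configure --build=i686-pc-linux-gnu --host=i686-pc-linux-gnu --target=arm-linux
    make

    # native gdb for the board: runs on ARM itself (what the asker literally requested)
    ./configure --build=i686-pc-linux-gnu --host=arm-linux --target=arm-linux
    make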