I need to do unit testing for drivers on an ARM-based board with the help of the gcov tool. When gcov is used on an x86 architecture, it creates .gcda files after the program is executed. But on an ARM-based board the .gcda files are not getting created, and without them I cannot use gcov. My question is: how do I use gcov in a cross-compilation setup? Thanks in advance.
gcov's code and data structures are tied to the host filesystem, and cross-compiler toolchains do not provide a port or configuration option to change this behavior. If your object file is ~/my-project/abc.o, then the gcov in-memory data structures created and updated by the instrumented code point to ~/my-project/abc.gcda, and all of these paths are on your host machine. The instrumented code running on the remote system (in your case the ARM board) cannot access these paths, which is the main reason you don't see the .gcda files in the ARM case.
For a general method of obtaining the .gcda files and working around this issue, see https://mcuoneclipse.com/2014/12/26/code-coverage-for-embedded-target-with-eclipse-gcc-and-gcov/. The article presents a somewhat hacky method: break into the gcov functions with the debugger and manually dump the gcov data structures into .gcda files on the host.
I used the above-mentioned blog to do code coverage for my ARM project. However, I ran into another issue, a gcc bug in my version of the toolchain (the GNU ARM toolchain available in October/November 2016): the relevant gcov functions hang in an infinite loop, so you cannot break into them and complete the process described in the blog. You may or may not hit this issue, as I am not sure whether the bug has been fixed. If you do, a workaround is described in my blog: https://technfoblog.wordpress.com/2016/11/05/code-coverage-using-eclipse-gnu-arm-toolchain-and-gcov-for-embedded-systems/.
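As a complement to the debugger-based method, on toolchains where the libgcov dump entry points are available you can also trigger the dump from the firmware itself and route the file writes back to the host. The following is only a minimal sketch: run_all_driver_tests() is a hypothetical test entry point, and whether the symbol is __gcov_dump() or __gcov_flush() depends on your GCC version.

// Sketch only: build the test image with --coverage (-fprofile-arcs -ftest-coverage).
extern "C" void __gcov_dump(void);          // GCC >= 11; older toolchains expose __gcov_flush() instead
extern "C" int run_all_driver_tests(void);  // hypothetical test runner for the drivers under test

int main()
{
    int failures = run_all_driver_tests();

    // Emit the coverage counters before the board halts. libgcov will try to
    // open the host-side *.gcda paths, so the newlib syscall stubs (_open,
    // _write, _close, ...) must forward the data to the host, for example over
    // semihosting or a UART protocol, or the debugger must capture the
    // in-memory buffers as described in the article above.
    __gcov_dump();

    return failures;
}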
I ran into an "interesting" issue. I am doing build automation for our project in Bamboo. I was quite sure I was done, and then someone asked: why do the builds from Friday night and Saturday night (with no code changes) differ?
I have 2 remote builders. Both have the same OS installed (Ubuntu) and were updated to the latest packages at the same time. Both have the same libraries installed. The compilation runs inside Docker (the image is imported from the same source).
So I started looking into this and got to the point where I can observe the following:
Running the compilation twice from the same source on the same machine, starting a new Docker container for each compilation, produces identical (as per sha1sum) binaries.
Running the compilation twice from the same source on two different machines produces different binaries.
The source is in a folder of the same name on both machines, and it is mounted into Docker at the same path as well.
Running the compilation on one PC with a given HW spec, then taking the disk, connecting it to a different PC (with a different HW spec) and running the compilation from the same source again: the resulting binaries were identical.
Is it possible that the code somehow depends on the exact OS image, or on some other OS attribute? Any other ideas?
It feels like chasing ghosts ...
EDIT: After trying to go step by step, I narrowed this down to cmake. That is: the files cmake generates on two different machines differ (I am going to start diffing them one by one now). If I take the output of one cmake run and compile with make from there, I always get the same binaries. So I believe the problem is in how the cmake files are generated, not in the compilation itself.
EDIT2: I now know that this is an issue with Qt 5.2.1's rcc being non-deterministic. During the cmake step, rcc is run and, among other things, calculates some hashes. Differences in those hashes are what makes the whole build non-deterministic.
If I run cmake once, copy its output to 3 different machines and run the compilation (make) there, I get 100% identical (as per sha1sum) results.
I think that settles it. Now I either need to convince the project to upgrade to a newer version of Qt, where I know how to make rcc deterministic, or I need to find out how to make rcc deterministic in 5.2.1.
The results could be a coincidence. Linking requires that all the required functions go into the executable, but their order is generally not constrained; it could very well depend on the order in which the object files appear on disk.
If you started with identical OS images, then your tests suggest that a post-image configuration step introduced a change, so review that configuration. A trivial example would be the hostname. I would also diff any intermediate build artifacts, for instance config.h or wherever cmake writes its output. Try running strings on two binaries that differ, then diff the output to see if you learn something.
https://tests.reproducible-builds.org/debian/index_issues.html might be very instructive, too.
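For example, to act on the strings suggestion above (the binary names and paths are placeholders):

$ strings builder-A/myapp > a.txt
$ strings builder-B/myapp > b.txt
$ diff a.txt b.txt    # embedded paths, hostnames or timestamps that differ will show up here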
I am trying to run Pin tools on my own executables. I have been asked to use the cache simulator (allcache) in order to collect miss rates.
I'm struggling with the parameters and have actually run into a lot of errors.
The operating system is Windows 10 64-bit, but I'm using Cygwin.
Currently I am trying to run it with pin.exe, which is under the intel64/bin folder.
$ pin.exe -t allcache.cpp -- myOwnThingy.exe
But I'm getting this error:
E: Failure to open DLL file C:\cygwin64\home\blabla\pin-3.7-97619-g0d0c92f4f-msvc-windows\intel64\bin\allcache.cpp
Why does it need to open a DLL file, especially when there are only .cpp and header files in the examples?
Pin tools must be compiled before they can be used; you can't pass the source files to pin directly. Use make to build the pintool, as shown below, and then pass the resulting DLL to pin with -t.
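For reference, the usual flow is roughly the following (the directory names and make target can differ between Pin kit versions, and building pintools on Windows additionally requires a supported Visual Studio/MSVC environment; see the Pin user guide):

$ cd source/tools/Memory              # the directory containing allcache.cpp in the kit
$ make obj-intel64/allcache.dll       # build the pintool as a DLL
$ pin.exe -t obj-intel64/allcache.dll -- myOwnThingy.exe

On Linux the tool is built as a .so instead of a .dll, but the -t usage is the same.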
I am trying to monitor the code coverage of my C++ project. As I stated in a previous question, I need to use coroutines and other advanced C++2a features, so I am using clang++ to compile it. I found out here that one can use the -coverage flag when compiling with clang++ (along, obviously, with -O0 and -g).
Along with the executable file, this produces a .gcno file containing the map of the executable. When the executable is run, an additional .gcda file is generated, containing the actual profiling data.
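For reference, the basic flow looks like this (exact flag spellings may differ slightly between clang versions, and mytest.cpp is a placeholder):

$ clang++ -std=c++2a -O0 -g --coverage -o mytest mytest.cpp
$ ./mytest                    # writes the .gcda profile next to the .gcno emitted at compile time
$ llvm-cov gcov mytest.cpp    # produces an annotated mytest.cpp.gcov report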
I noticed that if I run the executable multiple times the coverage outputs are nicely and properly merged in the .gcda file, which is very good.
Now, I'm wondering if it's safe to run simultaneously multiple instances of the executable.
Before anyone suggests running the tests sequentially: I am running them sequentially, but my application uses a lot of networking, and some of the tests require multiple instances to communicate with each other (I am using Docker to simulate a network, and netem to get kind-of-realistic link scenarios).
Will running many instances of the same executable together cause any problems? I can imagine that if a locking mechanism is implemented, the coverage data will be safely and atomically written to the .gcda file, and if other executables need to perform the dump they'll wait until the lock is released. However, I couldn't find a guarantee anywhere that this actually happens.
GCOV profiling should be multi-process safe since Clang 7.
In Clang 6 there were two bugs preventing it from working, https://bugs.llvm.org/show_bug.cgi?id=34923 and https://bugs.llvm.org/show_bug.cgi?id=35464, but they are now fixed.
In fact, we are currently using it to collect coverage data for Firefox, which is multi-process: Firefox itself has multiple processes, and we also run tests (for some specific test suites) in parallel.
I've inherited a large volume of C++ code for running and monitoring laboratory equipment. Currently the deployment is managed by compiling all of the individual modules (each is its own program) using DevC++, manually moving all the .exe files to a Dropbox folder, and then running them on the host machine manually.
I'm trying to automate this process somewhat, to make rolling out an implementation on a new machine simpler and to make sure the most up-to-date binaries are what is running on any given machine. However, I don't know anything about deploying software in a Windows environment (I'm used to working on Linux systems, where a simple Makefile would suffice). What tools (preferably command line) are available to compile and organize binaries in a portable way on Windows systems?
Assume that you have a C++ compiler usable from the command line on one translation unit at a time. For example, GCC is such a compiler (and MinGW is, or contains, a variant of GCC). Assume also that it is capable of linking (e.g. by driving the system linker).
Then you need to use some build automation tool to drive such compilation commands, for example GNU make or ninja (but there are many others). AFAIK they exist on Windows, so you could port your Linux Makefile to Windows.
Once you have chosen your build automation tool, studied its documentation and understood how to use it, you'll write the relevant configuration file for it. For make, you'll write a Makefile (caveat: tab characters are significant). For ninja, you'll write some build.ninja files (but you'll probably generate them, perhaps with meson).
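As an illustration only (the file names are placeholders), a minimal Makefile for one of your modules could look like the following; note that each recipe line must start with a real tab character, not spaces:

CXX      = g++
CXXFLAGS = -O2 -Wall

module1.exe: module1.o helpers.o
	$(CXX) $(CXXFLAGS) -o $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $<

clean:
	rm -f *.o *.exe    # use 'del' instead if make runs recipes through cmd.exe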
Notice that some build tools (e.g. cmake) are cross-platform.
BTW, DevC++ is an IDE, not a compiler.
I'm trying to compile a C++ file and generate an .asm or .s file to be disassembled and run in PSIM. Whenever I try to do this I get errors. I am attempting to compile for mipsI-linux. I think I've determined that the cross compiler I was given is not working correctly for some reason. Can anyone give me some help building a new cross compiler that will generate the correct instruction format? I'm working on a Mac.
You probably want to take a look at crosstool-NG (based on crosstool) which seems to make building a cross-compiler toolchain relatively easy.
A great place to start is Cross Linux From Scratch. The first step it walks you through is building a cross compiler with all of its dependencies.
I have tried a number of approaches and found that using Buildroot was the simplest and most reliable. Just download Buildroot, uncompress it, cd to the top-level directory and run make menuconfig. Set the options, such as which target machine you need the cross compiler for, and run make.
The make takes 15-20 minutes and needs an active internet connection, as all sources are downloaded from online archives and built. After the build you get a cross compiler toolchain (gcc, as, ld, etc., and glibc or uClibc, whichever you choose in the options). The binaries (named <arch>-linux-gcc, <arch>-linux-as, etc.) are located at
<buildroot-top-directory>/output/host/usr/bin.
Add this location to your PATH variable (for Linux users) and that's it.
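For example (the exact tool prefix depends on the target selected in menuconfig, e.g. mips-linux- for a MIPS target, and C++ support must be enabled in the toolchain options to get the g++ front end):

$ export PATH=$PATH:<buildroot-top-directory>/output/host/usr/bin
$ mips-linux-g++ -S -o test.s test.cpp    # emit MIPS assembly for the target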
Edit: Sorry, I only just noticed that the question is about a Mac. Buildroot may not be officially supported on macOS.