Code coverage tools for Symbian C++ and Maemo

What code coverage tools have you used with Symbian C++ and Maemo? What are the pros and cons of the tool you are using?

On Symbian I've used BullseyeCoverage and Testwell CTC++. I can't really describe their pros and cons in detail. Both got the job done, eventually. Both needed some effort to set up and integrate with an automated test suite. Both contained bugs; for example, the instrumented source they generated was occasionally slightly broken and crashed the downstream compiler.
On Maemo, since the toolchain is GCC based, I'd guess gcov would be a good starting point. Though I haven't been working on Maemo much yet and haven't done any coverage measurement there.

See SD C++ Test Coverage for a tool that has extremely low overhead and works very well in embedded environments.

I have used Bullseye Coverage on Symbian and it is very good. The only problem is that it only runs on the emulator and not on hardware, so you would not be able to get coverage metrics from a device or devboard. If your app runs on both hardware and emulator, this won't be a big deal (apart from the standard differences between running on the emulator vs hardware). Also, as there are plans to replace the emulator with a proper hardware simulator, perhaps Bullseye wouldn't be the best choice.

gcov can be used (and is used) on the Maemo platform, and a tool called lcov can then be used to generate "pretty" reports.
However, in order to use gcov in the Maemo SDK, you need to disable the compiler cache at build time when you are creating the binaries for the coverage run.

Related

Is there a way to build cpputest with pthreads disabled?

I'm planning to use CppUTest as the testing framework in my project, which I need to cross-compile as it will be used on an ARM platform. The compiler I'm using for ARM development is arm-gcc, which is built with pthreads disabled. Because of this, I need to build CppUTest without pthreads. Currently I am following the autotools approach for building CppUTest. Any help would be really appreciated.
Are you trying to compile, download, and run CppUTest on a target device?
Typically, CppUTest isn't compiled and downloaded to your target device; instead it's built as a Windows (or Linux) console application. This means that CppUTest gets compiled using a native compiler (Visual Studio or GCC), and your firmware will also need to compile using that same native compiler.
Unit testing is meant to verify that your code's logic is correct, which means it doesn't matter whether the code is executing on an ARM processor or on an Intel processor. For the record, when I first started getting into unit testing this blew my mind.
But what about hardware registers and stuff like that? Well, there are ways around testing an embedded device's hardware in a Windows environment. At the end of the day, you'll still need to download your firmware onto the embedded device and verify that your port mappings are correct (e.g. that you're properly setting up a GPIO port to turn on an LED).
There are several advantages to running your unit tests on your development PC instead of on target. Your development PC is much faster, has more resources (i.e. storage and RAM), and it also easily allows the tests to run in a continuous integration environment (like Jenkins or TeamCity) whenever you check code in to your version control system.
I highly recommend the book Test-Driven Development for Embedded C by James W. Grenning for more information. It will answer all your questions on unit testing on an embedded device. CppUTest can be used for C as well as C++ projects.
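To make that concrete, here is a minimal sketch of a host-built CppUTest test for a hypothetical LED driver (the driver and its names are invented for illustration). The "hardware register" is just a variable in PC memory, which is exactly what lets the test build and run with the native compiler instead of arm-gcc:

```cpp
#include <cstdint>
#include "CppUTest/TestHarness.h"
#include "CppUTest/CommandLineTestRunner.h"

// Hypothetical driver under test. On real hardware 'address' would point at a
// memory-mapped GPIO register; in the host-built test it points at a plain variable.
static std::uint16_t* ledRegister;

static void LedDriver_Init(std::uint16_t* address) { ledRegister = address; *ledRegister = 0; }
static void LedDriver_TurnOn(int led)              { *ledRegister |= (1u << led); }

TEST_GROUP(LedDriver)
{
    std::uint16_t virtualRegister;

    void setup()    { LedDriver_Init(&virtualRegister); }
    void teardown() {}
};

TEST(LedDriver, AllLedsAreOffAfterInit)
{
    LONGS_EQUAL(0, virtualRegister);
}

TEST(LedDriver, TurnOnSetsTheRightBit)
{
    LedDriver_TurnOn(3);
    LONGS_EQUAL(1u << 3, virtualRegister);
}

int main(int argc, char** argv)
{
    // Built and run on the development PC with the native compiler, not on the ARM target.
    return CommandLineTestRunner::RunAllTests(argc, argv);
}
```

Only the final hardware bring-up (checking the real port mappings) needs to run on the device itself.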

Memory debugging tools for Android NDK C++ code

Does anyone know of memory debugging tools (like Valgrind) for native C++ code under Android NDK?
We have a project that uses quite a bit of native code. As hinted in one of the comments on the question, the best approach is to test that code in another environment.
We have a separate project that builds in Linux and calls the C/C++ functions we use in our Android code. Once you are at that point, all the nice tools you are used to (gdb, Valgrind, etc.) are available to you.
A lot more productive than doing the same thing on the phone (assuming you could even find a good tool).
The tricky part is having a good test harness, but that should be a given for any project that started off on the right path... ;)
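In case it helps, such a harness can be very small. Everything below is hypothetical: the image_filter functions stand in for whatever your JNI/native layer actually exposes, and the build/run commands in the comment are just the obvious gcc/Valgrind invocation:

```cpp
// Hypothetical host-side harness: link the same C/C++ sources that the Android
// app uses (here a made-up image_filter module) into a plain Linux executable,
// then run it under the usual tools, e.g.:
//   g++ -g -O0 harness.cpp image_filter.cpp -o harness
//   valgrind --leak-check=full ./harness
#include <vector>
#include <cstdio>

// Declarations for the (made-up) native functions shared with the Android build.
extern "C" int image_filter_init(void);
extern "C" int image_filter_apply(unsigned char* pixels, int width, int height);

int main()
{
    if (image_filter_init() != 0) {
        std::fprintf(stderr, "init failed\n");
        return 1;
    }
    // Feed the code a representative input so Valgrind sees realistic allocations.
    std::vector<unsigned char> pixels(64 * 64, 0x80);
    const int rc = image_filter_apply(pixels.data(), 64, 64);
    std::printf("image_filter_apply returned %d\n", rc);
    return rc == 0 ? 0 : 1;
}
```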

C++ code coverage tool for weird target platform

Does anyone know of a C++ code coverage tool usable under the following conditions:
The target platform is a PowerPC CPU inside a Nintendo Wii dev kit that runs a custom embedded OS. The only way to exchange data with the PC is to use a custom proprietary API (sorry for my NDA).
The compiler is not Microsoft, not GCC, not even command-line driven; namely, it's the Metrowerks IDE (running on Windows, of course).
Thanks in advance!
Do you know about BullseyeCoverage? It is a commercial tool that supports a really large number of platforms and compilers. If you don't see your compiler, you can write them an inquiry; I did not find the Metrowerks compiler in the list.
Hope that helps,
Ovanes
See Cpp Test Coverage. This tool can be configured to collect data in embedded systems; you have to figure out how to export an array of bits from inside that system to an external file system, and if you can do that, it can show you precise test coverage.
Does the Metrowerks compiler have special syntax that is not ANSI standard?
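To expand on the "export an array of bits" point: the details below are purely illustrative (the real probe layout and the Wii-side transport are tool-specific and under NDA), but the export step itself can be as small as this once you have any way to push a buffer to the PC:

```cpp
// Illustrative only: the instrumentation data layout and the transport call are
// both placeholders for whatever the coverage tool and the proprietary dev-kit
// API actually provide. The point is that exporting the probe bits is a few
// lines once a block of bytes can be sent to the host.
#include <cstddef>

// Assume the coverage tool exposes its probe bits as a byte array somewhere.
extern unsigned char g_coverage_bits[];
extern const std::size_t g_coverage_bits_size;

// Stand-in for whatever proprietary call sends a named buffer to the host PC.
extern int ProprietarySendToHost(const char* name, const void* data, std::size_t size);

// Call this from a convenient shutdown/checkpoint hook in the application.
void DumpCoverageToHost()
{
    // The host-side tooling can then turn the raw bit vector back into a report.
    ProprietarySendToHost("coverage.bits", g_coverage_bits, g_coverage_bits_size);
}
```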
My shop has been using a customized version of Covtool. Perhaps that could be ported to your environment.
I have used Cantata. It works with Metrowerks. It instruments your code, so your application will not run at full speed. You just need to rewrite the I/O functions so that output happens using the custom proprietary API.

Using Visual Studio to develop for C++ for Unix

Does anyone have battle stories to share trying to use Visual Studio to develop applications for Unix? And I'm not talking using .NET with a Mono or Wine virtual platform running beneath.
Our company has about 20 developers all running Windows XP/Vista and developing primarily for Linux & Solaris. Until recently we all logged into a main Linux server and modified/built code the good old fashioned way: Emacs, Vi, dtpad - take your pick. Then someone said, "hey - we're living in the Dark Ages, we should be using an IDE".
So we tried out a few and decided that Visual Studio was the only one that would meet our performance needs (yes, I'm sure that IDE X is a very nice IDE, but we chose VS).
The problem is, how do you set up your environment to have the files available locally to VS, but also available to a build server? We settled on writing a Visual Studio plugin: it writes our files locally and to the build server whenever we hit "Save", and we have a big fat "sync" button that we can push when our files change on the server side (for when we update to the latest files from our source control server).
The plugin also uses Visual Studio's external build system feature, which ultimately just ssh's into the build server and calls our local "make" utility (Boost Build v2, which has great dependency checking but is really slow to start as a result, i.e. 30-60 seconds before anything begins). The results are piped back into Visual Studio so the developer can click on an error and be taken to the appropriate line of code (quite slick, actually). The build server uses GCC and cross-compiles all of our Solaris builds.
But even after we've done all this, I can't help but sigh whenever I start to write code in Visual Studio. I click a file, start typing, and VS chugs to catch up with me.
Is there anything more annoying than having to stop and wait for your tools? Are the benefits worth the frustration?
Thoughts, stories, help?
VS chugs to catch up with me.
Hmmm ... your machine needs more memory and grunt. I've never had performance problems with mine.
I have about a decade's experience doing exactly what you're proposing, most of it in the finance industry, developing real-time systems for customers in the banking, stock exchange, and stock brokerage niches.
Before proceeding further, I need to confess that all this was done in VS6 + CVS, and of late, SVN.
Source Code Versioning
Developers have separate SourceSafe repositories so that they can store their work and check in packages of work at logical milestones. When they feel they want to do an integration test, we run a script that checks the code into SVN.
Once code is checked into SVN, a process kicks off that automatically generates the relevant makefiles to compile everything on the target machines for continuous integration.
We have another set of scripts that sync new stuff from SVN to the folders that VS looks after. There's a bit of a gap because VS can't automatically pick up new files; we usually handle that manually. This only happens regularly in the first few days of a project.
That's an overview of how we maintain code. I have to say, I've probably glossed over some details (let me know if you're interested).
Coding
From the coding aspect, we rely heavily on the preprocessor (i.e. #define, etc.) and on flags in the makefile to shape the compilation process. For cross-platform portability, we use GCC. A few times we were forced to use aCC on HP-UX and some other compilers, but we did not have much grief. The one thing that is a constant pain is that we have to watch out for thread heap spaces across platforms; the compiler does not spare us from that.
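As a rough sketch of the pattern (the PLATFORM_* macro and the numbers are invented stand-ins for whatever the real makefiles pass via -D, and thread stack sizing is used here purely as an illustration), the per-platform differences mostly reduce to blocks like this:

```cpp
// Sketch only: PLATFORM_HPUX and the sizes below are placeholders; the real
// values come from -D flags set in the platform-specific makefiles.
#include <pthread.h>
#include <cstddef>

#if defined(PLATFORM_HPUX)
  // Thread defaults differ noticeably on this platform, so pick sizes explicitly.
  static const std::size_t kThreadStackSize = 512 * 1024;
#else
  static const std::size_t kThreadStackSize = 1024 * 1024;
#endif

int StartWorker(void* (*entry)(void*), void* arg, pthread_t* out)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    // Never rely on the platform default; it is one of the things that differs.
    pthread_attr_setstacksize(&attr, kThreadStackSize);
    const int rc = pthread_create(out, &attr, entry, arg);
    pthread_attr_destroy(&attr);
    return rc;
}
```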
Why?
The question is usually, "Why the h*ll would you even want such a complicated way of developing?". Our answer is usually another question: "Have you any clue how insane it is to debug a multi-threaded application by examining a core dump or using gdb?". Basically, the fact that we can trace/step through each line of code when debugging an obscure bug makes it all worth the effort!
Plus, VS's IntelliSense feature makes it so easy to find the methods/attributes belonging to a class. I also heard that VS2008 has refactoring capabilities. I've since shifted my focus to Java on Eclipse, which has both features. You'd be more productive focusing on coding business logic rather than devoting energy to making your mind do stuff like remember!
Also! ... We'd end up with a product that can run on both Windows and Linux!
Good luck!
I feel your pain. We have an application which is 'cross-platform': a typical client/server application where the client needs to be able to run on Windows and Linux. Since our client base mostly uses Windows, we work in VS2008 (the debugger makes life a lot easier), but we still need to perform Linux builds.
The major problem with this was that we were checking in code that we didn't know would build under gcc, which would more than likely break the CI setup we had. So we installed MinGW on all our developers' machines, which allows us to test that a working copy will build under gcc before we commit it back to the repository.
We develop for Mac and PC. We just work locally in whatever IDE we prefer, mostly VS but also Xcode. When we feel our changes are stable enough for the build servers, we check them in. The two build servers (Mac and PC) watch for source control check-ins, and each does a build. Build errors are emailed back to the team.
Editing files live on the build server sounds precarious to me. What happens if you request a build while another developer has edits that won't build?
I know this doesn't really answer your question, but you might want to consider setting up remote X sessions and just running something like KDevelop (which, by the way, is a very nice IDE), or even Eclipse, which is more mainstream and has a broader developer base. You could probably just use something like Xming as the X server on your Windows machines.
Wow, that sounds like a really strange use for Visual Studio. I'm very happy chugging away in vim. However, the one thing I love about Visual Studio is the debugger. It sounds like you are not even using it.
When I opened the question I thought you must be referring to developing portable applications in Visual Studio and then migrating them to Solaris. I have done this and had pleasant experiences with it.
Network shares.
Of course, then you have killer lag on the network, but at least there's only one copy of your files.
You don't want to hear what I did when I was developing on both platforms. But you're going to: drag-n-drop copy a few times a day. Local build and run, and periodically checking it out on Unix to make sure gcc was happy and that the unit tests were happy on that platform too. Not really a rapid turnaround cycle there.
#monjardin
The main reason we use it is the refactoring/search tools provided through Visual Assist X (by Whole Tomato), although there are a number of other nice-to-haves like IntelliSense. We are also investigating integrations with our other tools (AccuRev, Bugzilla and TotalView) to complete the environment.
#roo
Using multiple compilers sounds like a pain. We have the luxury of just sticking with gcc for all our platforms.
#josh
Yikes! That sounds like a great way to introduce errors! :-)
I've had good experience developing Playstation2 code in Visual Studio using gcc in cygwin. If you've got cygwin with gcc and glibc, it should be nearly identical to your target environments. The fact that you have to be portable across Solaris and Linux hints that cygwin should work just fine.
Most of my programming experience is in Windows and I'm a big fan of Visual Studio (especially with ReSharper, if you happen to be doing C# coding). These days I've been writing an application for Linux in C++. After trying all the IDEs (NetBeans, KDevelop, Eclipse CDT, etc.), I found NetBeans to be the least crappy. For me, the absolute minimum requirements are that I be able to single-step through code and that I have intellisense, ideally with some refactoring functions as well. It's amazing to me how today's Linux IDEs are not even close to what Visual Studio 6 was over ten years ago. The biggest pain point right now is how slow and poorly implemented the intellisense in NetBeans is: it takes 2-3 seconds to populate on a fast machine with 8GB of RAM. Eclipse CDT's intellisense was even more laggy. I'm sorry, but a 2-second wait for intellisense doesn't cut it.
So now I'm looking into using VS from Windows, even though my only build target is linux...
Chris, you might want to look at the free build automation server 'CruiseControl', which integrates with all the main source control systems (svn, tfs, sourcesafe, etc.). Its whole purpose is to react to check-ins in a source control system. In general, you configure it so that any time anyone checks code in, a build is initiated and (ideally) unit tests are run. For some languages there are great plugins that do code analysis, measure unit test code coverage, etc. Notifications are sent back to the team about successful/broken builds.
Here's a post describing how it can be set up for C++: link (thoughtworks.org).
I'm just getting started with converting from a simple Linux-only setup (NetBeans + SVN, with no build automation) to using Windows VS 2008 with a build-automation back end that runs unit tests in addition to doing builds in Linux. I shudder at the amount of time it's going to take me to get all of that configured, but the sooner the better, I guess.
In my ideal end-state I'll be able to auto-generate the Netbeans project file from the VS project, so that when I need to debug something in linux I can do so from that IDE. VS project files are XML-based, so that shouldn't be too hard.
If anyone has any pointers for any of this, I'd really appreciate it.
Thanks,
Christophe
You could have developers work in private branches (easier if you're using a DVCS). Then, when you want to checkin some code, you check it into your private branch on [windows|unix], update your sandbox on [unix|windows] and build/test before committing back to the main branch.
We are using a similar solution to what you described.
We have our code stored on the Windows side of the world, and UNIX (QNX 4.25 to be exact) has access to it through an NFS mount (thanks to UNIX Services for Windows). We ssh into UNIX to run make and pipe the output into VS. Accessing the code is fast; builds are a little slower than before, but our longest compile is currently less than two minutes, so it's not a big deal.
Using VS for UNIX development has been worth the effort to set it up, because we now have IntelliSense. Less typing = happy developer.
Check out "Final Builder" (http://www.finalbuilder.com/). Select a version control system (e.g. cvs or svn, to be honest, cvs would suit this particular use case better by the sounds of it) and then set up build triggers on FinalBuilder so that checkins cause a compile and send the results back to you.
You can set up rulesets in FinalBuilder that prevent you checking in / merging broken code into the baseline or certain branch folders but allow it to others (we don't allow broken commits to /baseline or /branches/*, but we have a /wip/ branching folder for devs who need to share potentially broken code or just want to be able to commit at the end of the day).
You can distribute FB over multiple "build servers" so that you don't wind up with 20 people trying to build on one box, or waiting for one box to process all the little bitty commits.
Our project has a Linux-based server with Mac and Win clients, all sharing a common codebase. This set up works ridiculously well for us.
I'm doing the exact same thing at work. The setup I use is VS for Windows development, with a Linux VM running under VirtualBox for local build / execute verification. VirtualBox has a facility where you can make a folder on the host OS (Windows 7 in my case) available as a mountable filesystem in the guest (Ubuntu LTS 12.04 in my case). That way, after I start a VS build, and it's saved the files, I can immediately start a make under Linux to verify it builds and runs OK there.
We use SVN for source control, and the final target is a Linux machine (it's a game server), so it uses the same makefile as my local Linux build. That way, if I add a file to the project or change a compiler option (usually adding or changing a -D), I make the modifications initially under VS and then immediately change the Linux makefile to reflect the same changes. Then when I commit, the build system (Bamboo) picks up the change and does its own verification build.
Hard earned experience says this is an order of magnitude easier if you build like this from day one.
The first project I worked on started as Windows-only; I was hired to port it to Linux, since they wanted to switch to a homogeneous server environment and everything else was Linux. Retrofitting a Windows project into this sort of setup was a fairly major expenditure of effort.
Project number two was done as a "two-system build" right from day one. We wanted to keep the ability to use VS for development/debugging, since it is a very polished IDE, but we also had the requirement of a final deployment to Linux servers. As I alluded to above, when the project was built with this in mind right from the start, it was quite painless. The worst part was a single file, system_os.cpp, that contained the OS-specific routines: things like "get the current time since the Linux epoch in milliseconds", etc.
At the risk of getting a little off topic, we also created unit tests for this, and having the unit tests for the OS specific pieces provided a great deal of confidence that they worked the same on both platforms.
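For a sense of what such a wrapper contains, here is a sketch of one routine of the kind described (not the actual system_os.cpp; the name and layout are illustrative):

```cpp
// Sketch of a thin OS wrapper routine: same signature on both platforms,
// with the platform differences hidden behind one #if.
#include <cstdint>

#if defined(_WIN32)
  #include <windows.h>
#else
  #include <sys/time.h>
#endif

// Milliseconds since the Unix epoch.
std::uint64_t OsTimeMillis()
{
#if defined(_WIN32)
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    // FILETIME counts 100 ns ticks since 1601-01-01; shift to the Unix epoch.
    const std::uint64_t ticks =
        (static_cast<std::uint64_t>(ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
    return (ticks - 116444736000000000ULL) / 10000ULL;
#else
    timeval tv;
    gettimeofday(&tv, 0);
    return static_cast<std::uint64_t>(tv.tv_sec) * 1000ULL + tv.tv_usec / 1000ULL;
#endif
}
```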

How can I measure CppUnit test coverage (on win32 and Unix)?

I have a very large code base that contains extensive unit tests (using CppUnit). I need to work out what percentage of the code is exercised by these tests, and (ideally) generate some sort of report that tells me on a per-library or per-file basis, how much of the code was exercised.
Here's the kicker: this has to run completely unattended (eventually inside a continuous integration build), and it has to be cross-platform (well, Win32 and *nix at least).
Can anyone suggest a tool, or set of tools that can help me do this? I can't change away from CppUnit (nor would I want to - it kicks ass), but otherwise I'm eager to hear any recommendations you might have.
Cheers,
Which tool should I use?
This article describes another developer's frustrations searching for C++ code coverage tools. The author's final solution was Bullseye Coverage.
Bullseye Coverage features:
Cross-platform support (Win32, Unix, and embedded; supports Linux GCC compilers and MSVC6)
Easy to use (up and running in a few hours).
Provides "best" metrics: Function Coverage and Condition/Decision Coverage.
Uses source code instrumentation.
As for hooking into your continuous integration, it depends on which CI solution you use, but you can likely hook the instrumentation / coverage measurement steps into the make file you use for automated testing.
Testing Linux vs Windows?
So long as all your tests run correctly in both environments, you should be fine measuring coverage on one or the other (though Bullseye appears to support both platforms). But why aren't you doing continuous integration builds in both environments? If you deliver to clients in both environments then you need to be testing in both.
For that reason, it sounds like you might need two continuous build servers set up, one for a Linux build and one for a Windows build. Perhaps this can be easily accomplished with some virtualization software like VMware or VirtualBox. You may not need to run code coverage metrics on both OSes, but you should definitely be running your unit tests on both.
If you can use GNU GCC as your compiler, then the gcov tool works well. It's very easy to fully automate the whole process.
If you are using the GCC toolchain, gcov is going to get you source, functional, and branch coverage statistics. gcov works fine for MinGW and Cygwin. This will allow you to get coverage statistics as well as emitting instrumented source code that allows you to visualize unexecuted code.
However, if you really want to hit it out of the park with pretty reports, using gcov in conjunction with lcov is the way to go. lcov will give you bar reports scoped to files and directories, functional coverage statistics, and color coded source file browsing to show coverage (green means executed, red means not...).
lcov is easy on Linux, but may require some perl hacking on Cygwin. I personally have had some problems executing the scripts (lcov is implemented in perl) on Windows. I've gotten a hacked up version to work, but be forewarned.
Another approach is doing the gcov emit on windows, and doing the lcov post processing on Linux, where it will surely work out of the box.
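To make the moving parts visible, here's a toy end-to-end example of the gcov + lcov workflow described above (file and directory names are arbitrary):

```cpp
// cov_demo.cpp -- a toy translation unit to illustrate the gcov/lcov workflow.
// Assumed command sequence (GCC toolchain; names are illustrative):
//   g++ -O0 --coverage cov_demo.cpp -o cov_demo   # --coverage = -fprofile-arcs -ftest-coverage
//   ./cov_demo                                    # running it writes cov_demo.gcda
//   gcov cov_demo.cpp                             # plain-text annotated source (cov_demo.cpp.gcov)
//   lcov --capture --directory . --output-file cov.info
//   genhtml cov.info --output-directory cov_html  # colour-coded HTML report
#include <iostream>

int clamp(int v)
{
    if (v < 0)          // gcov/lcov will report how often each branch was taken
        return 0;
    if (v > 100)
        return 100;
    return v;
}

int main()
{
    std::cout << clamp(42) << '\n';   // the v < 0 and v > 100 branches stay red in the report
    return 0;
}
```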
Check out our SD C++ Test Coverage tool. It can be obtained for GCC, and for MSVC6.
It has low overhead probe data collection, a nice display of coverage data overlayed on your code, and complete report generation with rollups on coverage across the method/class/file/directory levels.
EDIT: Aug 2015: Now supports GCC 5 and various MS dialects through Visual Studio 2015. To use these tools under Linux you need Wine, but the tools provide Linux-native sh scripting and a Linux/Java-based UI, so they feel like native Linux tools there.
I guess I should have specified the compilers: we're using gcc for Linux, and MSVC 6 (yeah, I know it's old, but it works (mostly) for us) for Win32.
For those reasons, gcov won't work for our Win32 builds, and Bullseye won't work for our Linux builds.
Then again maybe I only need coverage in one OS...