Does the Rust compiler have a profiling option? - profiling

I have a Rust program that isn't running as fast as I think it should. Is there a way to tell the compiler to instrument the binary to generate profiling information?
I mean something like GCC's -p and -pg options or GHC's -prof.

The compiler doesn't support any form of instrumentation specifically for profiling (like -p/-pg/-prof), but compiled Rust programs can be profiled under tools that do not require custom instrumentation, such as Instruments on OS X, and perf or callgrind on Linux.
I believe such tools support using DWARF debuginfo (as emitted by -g) to provide more detailed performance diagnostics (per-line etc.), but enabling optimisations plays havoc with the debug info, and it's never really worked well for me. When I'm analysing performance, diving into the asm is very common.
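For example, on Linux a rough workflow looks like this (a sketch; the file and binary names are placeholders, and you may need to adjust the flags for your rustc version):

    # build with optimisations and DWARF debuginfo
    rustc -O -g main.rs -o myprog

    # sample the program with perf and browse the results
    perf record ./myprog
    perf report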
Making this easier would be really nice, and tooling is definitely a post-1.0 priority.

There's no direct switch that I'm aware of. However, I've successfully compiled my code with optimizations enabled as well as debugging symbols. I can then use OS X's Instruments to profile the code. Other people have used KCachegrind on Linux systems to the same effect.
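If you go the KCachegrind route, the workflow is roughly this (a sketch; the binary name is a placeholder):

    # run the optimized, debug-symbol-enabled binary under callgrind
    valgrind --tool=callgrind ./myprog

    # open the generated callgrind.out.<pid> file in KCachegrind
    kcachegrind callgrind.out.*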

Related

Valgrind flags, debug vs release compilation

On a Jenkins instance, I need Valgrind to check if there are particular problems in a C++ compiled binary. However, I only need a yes / no answer, not a stack trace for example. If there are any problems, I will launch valgrind on the faulty code with debug flags activated on my personal machine. The build is managed with CMake on a machine running Linux (targeting gcc).
If I compile my code with -DCMAKE_BUILD_TYPE=Release on the Jenkins instance, will Valgrind detect the same problems in the binary as with -DCMAKE_BUILD_TYPE=Debug ?
Valgrind works by instrumenting and replacing parts of your code at runtime, like redirecting calls to memory allocation functions. For doing this it does not rely on debug information, but it might get confused by optimized code:
If you are planning to use Memcheck: On rare occasions, compiler optimisations (at -O2 and above, and sometimes -O1) have been observed to generate code which fools Memcheck into wrongly reporting uninitialised value errors, or missing uninitialised value errors. We have looked in detail into fixing this, and unfortunately the result is that doing so would give a further significant slowdown in what is already a slow tool. So the best solution is to turn off optimisation altogether.
(from the Valgrind manual)
Since the Release build type uses optimizations, it is a bad fit for your case.
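For the yes/no answer on Jenkins, something along these lines should work (a sketch; the binary name is a placeholder, and --error-exitcode makes Valgrind's exit status non-zero when it reports errors, which the CI job can treat as a failure):

    # build without optimizations so Memcheck is not fooled
    cmake -DCMAKE_BUILD_TYPE=Debug ..
    make

    # exits with status 1 if Memcheck reports any problems
    valgrind --tool=memcheck --error-exitcode=1 ./my_binary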

Multi platform C++ project setup and tools

My task is to create a C++ SDK - most likely in the form of one or more dynamic libraries.
It is supposed to be used on different platforms - Windows (32/64 bit), Linux (32/64 bit), Mac OS, Android and iOS. I don't have much experience with multi-platform project setup and I'm trying to decide what methods and tools to use for easiest development and deployment.
Side note: I will also have to prepare automatic builds (jobs) on Bamboo CI server, in order to run compilation and tests for each required target.
My main dilemmas are:
Project setup. Should I prepare different project schemas for use on different platforms (like .sln on Windows and makefiles on Linux), or maybe try using a tool like CMake? Is it even possible to prepare a CMake project that will suit all these target platforms?
Compilation toolchain. Should I use "native" C++ compilers for every platform (like MSVC on Windows and GCC on Linux), or maybe a single toolchain like Clang + LLVM? Would Clang + LLVM (and some linker obviously) be even able to build distributable binaries for all these platforms I need?
Development Environment. Which OS/IDE would be best for working on that kind of project? I prefer working on Windows and my usual IDE is Visual Studio - would it be viable in this case, or maybe something else would be more appropriate?
I know that my problem is very complex and there is no straight answer for any of these points, but any advice, even a partial answer, will be much appreciated :)
As you say, there is no one-size-fits-all solution, so I will make some general suggestions. Feel free to pick and choose as you feel is most beneficial.
If you plan to do your building on the host OS, cmake sounds like exactly the tool for you. It self-describes as a "build system generator", where the steps to build on a specific host OS are abstracted away, meaning the same setup "should" work for any system cmake supports.
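As a rough illustration (a sketch; the generator names depend on the tool versions installed on each machine), the same CMakeLists.txt can drive native builds everywhere:

    # on Windows, generate a Visual Studio solution
    cmake .. -G "Visual Studio 12 2013"

    # on Linux or MacOS, generate makefiles
    cmake .. -G "Unix Makefiles"

    # build with whatever was generated
    cmake --build .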
If you're thinking of cross-compiling, you're in for some hurt with the iOS and MacOS goals. As far as I know, and I have put some effort into trying, Apple does not release compilers for their systems that do not run on their systems, so you will have to compile for iOS and MacOS from a MacOS computer. If you can prove me wrong on this point, I would be glad to hear it :)
Depending on your licensing requirements, if you really want an overkill solution you could look into Qt* and qmake. I have had excellent luck with their multiarchitecture solutions, and Qt supports all of the systems you listed in your original question. I find Qt + qmake far easier to deal with than cmake.
* Yes, Qt does non-GUI work quite well too!
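The qmake workflow is similarly terse (a sketch; mysdk.pro is a hypothetical project file that would set TEMPLATE = lib for a dynamic library):

    # generate platform-appropriate makefiles from the .pro file, then build
    qmake mysdk.pro
    make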
I touched on this in the second point of 1., but my general suggestion would be to use native toolchains. Excluding MacOS, it's easy to set up virtual machines, build servers, etc. to build native code, and my experience with cross-compilers is that they always add another layer of heartache, even worse than having to remote-access a separate build machine.
Provided you avoid system-dependent headers, libraries, or extensions, it shouldn't matter what system you use. Things like <windows.h> and <linux/*.h> are obvious, but the best way to verify cross-platform compatibility is to test on the foreign systems as often as possible.
Agnostic of the compiler used, I would suggest turning on all the warnings. They are usually important, and may indicate a place where one compiler was able to band-aid over a problem that will blow up when you compile for another system. If you're working on a team, it might be a good idea to have warnings result in build errors, to make sure the rest of the team is as rigorous as you are.
I don't know about LLVM or MSVC, but GCC will give you some hints as to platform-specific extensions if you give it the -pedantic and -ansi flags. As explained here, those flags tell GCC to warn about any GNU-specific extensions.
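For instance (a sketch; the exact flag spellings differ per compiler):

    # GCC / Clang: enable all warnings, treat them as errors, flag GNU extensions
    g++ -Wall -Wextra -Werror -ansi -pedantic -c foo.cpp

    # MSVC: high warning level, warnings as errors
    cl /W4 /WX /c foo.cpp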
You are very likely going to need multiple toolchains (you mention C++, which has no standard ABI, so to be usable on Windows you are more or less required to build with CL). It follows that you will not be able to use a single vendor-specific project setup. As the project grows, maintaining multiple versions of project files quickly becomes untenable, so your choice of build system is critical. Have a look at Shake and compare it to alternatives with a similar feature set. The choice of IDE is of less importance - many programmers prefer their favorite editor (Emacs or Vim) and may need to do work on any of the supported platforms.

Valgrind on ARMv5

I'm trying to debug a program on an embedded device. The problem is that it uses ARMv5 and valgrind doesn't support that platform (there are some patches out there but I was not able to make them work).
I tried some tools like gdb or memwatch, but they aren't enough to find the leaks.
Anyone could suggest a solution? I thought of maybe some kind of remote debugging or so.
Thanks for your answers
Valgrind is a very powerful tool and it's pretty sad that it does not work on ARMv5 because it makes debugging memory leaks and invalid memory accesses more difficult on this platform.
I see several less powerful options. You can try to enable some additional checks within the C library by setting the MALLOC_CHECK_ environment variable. If your compiler is GCC 4.8 or higher you can try AddressSanitizer (I never used it on ARMv5 though).
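Both of those are driven from the command line (a sketch; the program and file names are placeholders, and I have not verified AddressSanitizer support on ARMv5, so treat that part as an assumption):

    # glibc heap checking: 3 = print a diagnostic and abort on heap corruption
    MALLOC_CHECK_=3 ./myprog

    # GCC >= 4.8: build with AddressSanitizer and debug info
    # (use your ARM cross-compiler prefix if you build on a PC)
    gcc -fsanitize=address -g -o myprog myprog.c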

C++ code coverage tool for weird target platform

Does anyone know a C++ code coverage tool usable under the following conditions?
The target platform is a PowerPC CPU inside a Nintendo Wii dev kit, which runs a custom embedded OS. The only way to exchange data with the PC is to use a custom proprietary API (sorry for my NDA).
Compiler is not Microsoft, not GCC, not even command line. Namely it's Metrowerks IDE (running on Windows, of course).
Thanks in advance!
Do you know about BullseyeCoverage? It is a commercial tool that supports a really big number of platforms and compilers. If you don't see your compiler, you can write them an inquiry. I did not find the Metrowerks compiler in the list.
Hope that helps,
Ovanes
See Cpp Test Coverage. This tool can be configured to collect data in embedded systems; you have to figure out how to export an array of bits from inside that system to an external file system, and if you can do that, it can show you precise test coverage.
Does the Metrowerks compiler have special syntax that is not ANSI standard?
My shop has been using a customized version of Covtool. Perhaps that could be ported to your environment.
I have used Cantata. It works with Metrowerks. It instruments your code, so your application will not run at full speed. You just need to rewrite the IO functions so output happens using the custom proprietary API.

How can I measure CppUnit test coverage (on win32 and Unix)?

I have a very large code base that contains extensive unit tests (using CppUnit). I need to work out what percentage of the code is exercised by these tests, and (ideally) generate some sort of report that tells me on a per-library or per-file basis, how much of the code was exercised.
Here's the kicker: this has to run completely unattended (eventually inside a continuous integration build), and has to be cross platform (well, WIN32 and *nix at least).
Can anyone suggest a tool, or set of tools that can help me do this? I can't change away from CppUnit (nor would I want to - it kicks ass), but otherwise I'm eager to hear any recommendations you might have.
Cheers,
Which tool should I use?
This article describes another developer's frustrations when searching for C++ code coverage tools. The author's final solution was Bullseye Coverage.
Bullseye Coverage features:
Cross Platform Support (win32, unix, and embedded), (supports linux gcc compilers and MSVC6)
Easy to use (up and running in a few hours).
Provides "best" metrics: Function Coverage and Condition/Decision Coverage.
Uses source code instrumentation.
As for hooking into your continuous integration, it depends on which CI solution you use, but you can likely hook the instrumentation / coverage measurement steps into the make file you use for automated testing.
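With Bullseye, for example, the automated step could look roughly like this (a sketch from memory of its cov01/covsrc command-line tools; the COVFILE path and the test target names are assumptions for illustration):

    # point Bullseye at a coverage file and enable build-time instrumentation
    export COVFILE=$PWD/test.cov
    cov01 -1

    # rebuild and run the CppUnit suite through the instrumented binaries
    make clean all
    ./run_unit_tests

    # print a per-source-file coverage summary, then disable instrumentation
    covsrc
    cov01 -0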
Testing Linux vs Windows?
So long as all your tests run correctly in both environments, you should be fine measuring coverage on one or the other. (Though Bullseye appears to support both platforms). But why aren't you doing continuous integration builds in both environments?? If you deliver to clients in both environments then you need to be testing in both.
For that reason, it sounds like you might need to have two continuous build servers set up, one for a linux build and one for a windows build. Perhaps this can be easily accomplished with some virtualization software like vmware or virtualbox. You may not need to run code coverage metrics on both OSs, but you should definitely be running your unit tests on both.
If you can use GNU GCC as your compiler, then the gcov tool works well. It's very easy to fully automate the whole process.
If you are using the GCC toolchain, gcov is going to get you source, functional, and branch coverage statistics. gcov works fine for MinGW and Cygwin. This will allow you to get coverage statistics as well as emitting instrumented source code that allows you to visualize unexecuted code.
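The basic gcov cycle looks like this (a sketch; the file names are placeholders):

    # compile and link with coverage instrumentation, optimizations off
    g++ --coverage -O0 -o run_tests tests.cpp

    # run the tests to produce the .gcda counter files
    ./run_tests

    # annotate the source with execution counts (writes tests.cpp.gcov)
    gcov tests.cpp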
However, if you really want to hit it out of the park with pretty reports, using gcov in conjunction with lcov is the way to go. lcov will give you bar reports scoped to files and directories, functional coverage statistics, and color coded source file browsing to show coverage (green means executed, red means not...).
lcov is easy on Linux, but may require some perl hacking on Cygwin. I personally have had some problems executing the scripts (lcov is implemented in perl) on Windows. I've gotten a hacked up version to work, but be forewarned.
Another approach is doing the gcov emit on windows, and doing the lcov post processing on Linux, where it will surely work out of the box.
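On the Linux side, the lcov half is just two more commands run after the tests (a sketch; the directory names are placeholders):

    # collect the .gcno/.gcda data into a single tracefile
    lcov --capture --directory . --output-file coverage.info

    # generate the color-coded HTML report
    genhtml coverage.info --output-directory coverage_report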
Check out our SD C++ Test Coverage tool. It can be obtained for GCC, and for MSVC6.
It has low overhead probe data collection, a nice display of coverage data overlayed on your code, and complete report generation with rollups on coverage across the method/class/file/directory levels.
EDIT: Aug 2015: Now supports GCC5 and various MS dialects through Visual Studio 2015. To use these tools under Linux, you need Wine, but there the tools provide Linux-native sh scripting and a Linux/Java based UI, so the tool feels like a native Linux tool there.
I guess I should have specified the compiler - we're using gcc for Linux, and MSVC 6 (yeah I know, it's old, but it works (mostly) for us) for Win32.
For those reasons, gcov won't work for our Win32 builds, and Bullseye won't work for our Linux builds.
Then again maybe I only need coverage in one OS...