Recursive/Nestable build system?

Are there any existing build systems with the following criteria?
Nestable/recursive, i.e. there is no "top-level" build file like in CMake, (non-recursive) make, or just about every other build system.
In-source builds. This is required for a build system to be cleanly nestable/recursive.
Automatic dependency scanning for many languages
Configuration files with declarative rather than imperative syntax
Configuration file syntax supports adding arbitrary custom build rules
No IDE project generation bloat
No showstoppers for cross-platform implementation
Hash-based change detection (or at least something better than timestamps; see the sketch below)
Free software
Basically, I want something that is up to the task of managing software build dependencies system-wide, but is still minimalist and efficient. I want a spiritual successor to make that is adoptable by a majority of the open-source world. What comes the closest?
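
To make the hash-based change detection item concrete, here is a minimal sketch of the idea (the function name and the way the old hash is obtained are made up; a real build system would persist hashes in a database and use a cryptographic hash rather than std::hash):

    #include <fstream>
    #include <functional>
    #include <iterator>
    #include <string>

    // Hypothetical helper: reports whether a file really changed since the last
    // build by comparing content hashes instead of modification times. Touching
    // a file without editing it therefore does not trigger a rebuild.
    bool file_changed(const std::string &path, std::size_t last_hash) {
        std::ifstream in(path.c_str(), std::ios::binary);
        std::string contents((std::istreambuf_iterator<char>(in)),
                             std::istreambuf_iterator<char>());
        return std::hash<std::string>()(contents) != last_hash;
    }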

Tup looks interesting...
http://gittup.org/tup/
and djb redo:
https://github.com/apenwarr/redo/
and shake:
http://hackage.haskell.org/package/shake

Makepp comes close, and I have used it. For some reason it doesn't have much traction, though... Potential downsides:
Implemented in Perl rather than a systems programming language, so fewer devs are interested in hacking on it?
Not in apt, so harder to take seriously
Inherits syntax from Make; doesn't really enforce ONE correct way of doing things
Slow builds
Subdirectories don't inherit information from parents

The Ninja infrastructure looks interesting. It isn't a standalone build system, but maybe it will catalyze someone into writing a very elegant front-end. And perhaps CMake supporting it as a back-end will speed its adoption.
http://martine.github.com/ninja/

Related

Are there C++ refactoring patterns implemented as a set of Clang tools?

So I found that nice video on Clang tooling... and could not help but wonder: is there any sample codebase/compiled tooling suite for full-project beautification and cleanup (akin to C#'s ReSharper)?
Code formatting at project scale, such as: removing extra spaces at line ends, unifying member/class naming, standardizing how {} braces are placed after if, etc.?
Clang's libtooling is fairly new so there's not too much based on it currently.
Also, in my experience it's a pain to link against: there's no clang version of llvm-config, and in the tutorials the devs seem to think people will build their tools inside the full clang repo rather than as nice separate projects. The Ubuntu builds of clang only include libtooling as a static .a, no .so, and official LLVM nightly builds for Ubuntu don't seem to include the static libclangTooling.a at all.
There is include-what-you-use which is designed to remove unused header files.
There is clReflect, which generates reflection bindings. (Not sure if this actually uses libtooling or just libclang, but it's the same kind of thing.)
There is also refactorial that supports some other operations.
There are some tools included as part of clang, most notably a C++11 migration tool. There is also a tool for modules (a feature being worked on for a future version of C++).
This stuff should be very useful and powerful once it takes off.
Personally I'm trying (so far unsuccessfully) to build a simple CLI refactoring tool, cppmv, which is designed to let you rename classes, functions, and variables, and move them around namespaces, while keeping their uses in sync; I don't have anything useful at this stage. Other tools could be cppls (to list namespaces, classes, functions, and so on). Maybe cppcp, if you want to copy something for some reason (you could have a 'template' class, for example), but that seems less useful.
I was also looking at making a FUSE userspace filesystem that would let you mount and browse your project so you could use traditional 'mv' and 'cp' commands, but that was more of an excuse to learn FUSE than because it would be useful to do things that way. It might be possible to edit the source code of specific classes and functions as their own separate individual 'files', although that wouldn't be useful for many things like IDEs, since you would lose information about headers and such.
It would also be nice to have a live, 'see as you edit', ASTMatcher-based tool, or some simple refactoring scripting-language bindings.
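
For anyone wondering what the skeleton of such an ASTMatcher-based tool looks like, here is a rough sketch of the matching half of something like cppmv. The class name OldName is a made-up target, and the tooling API has shifted between clang releases, so treat this as an outline under those assumptions rather than code guaranteed to build against any given version:

    #include "clang/ASTMatchers/ASTMatchFinder.h"
    #include "clang/ASTMatchers/ASTMatchers.h"
    #include "clang/Tooling/CommonOptionsParser.h"
    #include "clang/Tooling/Tooling.h"
    #include "llvm/Support/CommandLine.h"

    using namespace clang;
    using namespace clang::ast_matchers;
    using namespace clang::tooling;

    static llvm::cl::OptionCategory ToolCategory("cppmv options");

    // Called for every AST node bound as "decl" by the matcher below.
    // A real rename tool would emit a Replacement here instead of printing.
    class ReportDecl : public MatchFinder::MatchCallback {
    public:
      void run(const MatchFinder::MatchResult &Result) override {
        if (const RecordDecl *D = Result.Nodes.getNodeAs<RecordDecl>("decl"))
          llvm::outs() << "found " << D->getQualifiedNameAsString() << "\n";
      }
    };

    int main(int argc, const char **argv) {
      CommonOptionsParser Options(argc, argv, ToolCategory);
      ClangTool Tool(Options.getCompilations(), Options.getSourcePathList());

      ReportDecl Callback;
      MatchFinder Finder;
      // Match every declaration of a class named "OldName" (hypothetical target).
      Finder.addMatcher(recordDecl(hasName("OldName")).bind("decl"), &Callback);

      return Tool.run(newFrontendActionFactory(&Finder).get());
    }

The renaming itself would then be a matter of turning each match into a clang::tooling::Replacement, which is exactly what clang-apply-replacements (mentioned in the edit below) is designed to consume.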
EDIT:
There is now also clang-format for code-style formatting and (as of 3.4) a clang-format.py script for Vim integration, plus clang-apply-replacements, which "finds files containing serialized Replacements and applies those changes after deduplication and detecting conflicts."
It might be worth looking at this video where some of this stuff is demoed.

Working on a cross platform library

What are the best practices on writing a cross platform library in C++?
My development environment is Eclipse CDT on Linux, but my library should also compile natively on Windows (with Visual C++, for example).
Thanks.
To some extent, this is going to depend on exactly what your library is meant to accomplish.
If you were developing a GUI application, for instance, you would want to focus on using a well-tested cross-platform framework such as wxWidgets.
If your library depends primarily on File IO, you would want to make sure you use an existing well-tested cross-platform filesystem abstraction library such as Boost Filesystem.
If your library is none of the above (i.e. there are no existing well-tested cross-platform frameworks for you to use), your best bet is to make sure you adhere to standard C++ as much as possible (this means don't #include <linux.h> or <windows.h>, for instance). When that isn't possible (i.e. your library reads raw sound data from a microphone), you'll want to make sure the implementation details for a given platform are sufficiently abstracted away so that you minimize the work involved in porting your library to another platform.
To my knowledge, there are a few things you can do:
You can divide the platform specific code into different namespaces.
You can use the PIMPL idiom to hide platform-specific code (see the sketch after this list).
You can use macros to decide what code to compile (in this case the code will be platform specific).
Test your library in multiple environments.
Depending on what you are doing it might be good to use libraries such as Boost because it is not specific to a platform. The downside (or possibly the good side) is that you will force the use of the libraries you included.
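
To make the PIMPL point concrete, here is a minimal sketch (Widget and the file names are invented). The public header exposes no platform types; each platform supplies its own implementation file:

    // widget.h - the only header clients see; no platform includes here.
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();            // must be defined where Impl is complete
        void beep();
    private:
        struct Impl;          // defined in widget_win.cpp or widget_posix.cpp
        std::unique_ptr<Impl> impl_;
    };

    // widget_posix.cpp - compiled only on POSIX platforms; widget_win.cpp
    // would provide the equivalent definitions for Windows.
    #include "widget.h"
    #include <cstdio>

    struct Widget::Impl { /* platform-specific state lives here */ };

    Widget::Widget() : impl_(new Impl) {}
    Widget::~Widget() {}      // Impl is complete here, so unique_ptr can delete it
    void Widget::beep() { std::puts("\a"); }

Clients include widget.h and never see a platform header; porting means writing one new .cpp file.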
Couple of suggestions from my practical experience:
1) Make sure your sources compile regularly on all of your target platforms; don't wait until the end. This helps catch errors early. Use a continuous build system -- it makes life a lot easier.
2) Never use platform-specific headers. Not even for writing native code -- for all you know, some stuff in a Windows header might expect some string which was ABC in XP but got changed to ABC.12 in Win7.
3) Use ideas from the STL and Boost and then build on top of them. Never consider these to be a panacea, though -- the STL is easy to ship with your code but Boost is not.
4) Do not use compiler-specific constructs like __stdcall. This is asking for hell.
5) The same code when compiled with similar compiler options in g++ and cl might result in different behavior. Please have a copy of your compiler manual very handy.
Anytime I work on something like this, I try to build it in the different environments that I want to support. Similarly, if you were making a web page and you wanted to make sure it worked in IE, Firefox, and Chrome, you'd test it in all three of those browsers. Test it in the different environments you want to support, and you'll know which systems you can safely say it works on.
The question as stated is a bit abstract, but you could give Qt consideration.
It's really just as simple as "don't use anything platform specific". The wealth of freely available tools these days makes writing cross-platform code in C++ a snap. For those rare but occasional cases where you really do need platform-specific APIs, just be sure to separate them out via #defines or, better in my opinion, distinct .cpp files for each platform.
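
For instance, a tiny portability shim along those lines might look like this (sleep_ms is an invented name; a real library would collect several such functions):

    // sleep_ms.cpp - the one translation unit that touches platform APIs.
    #ifdef _WIN32
    #include <windows.h>
    void sleep_ms(unsigned ms) { Sleep(ms); }
    #else
    #include <unistd.h>
    void sleep_ms(unsigned ms) { usleep(ms * 1000); }   // usleep takes microseconds
    #endif

Everything else in the codebase calls sleep_ms() through an ordinary declaration and never includes <windows.h> or <unistd.h> itself.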
There are many alternatives for cross platform libraries but my personal preferences are:
GUI: Qt
OS abstraction (though Qt does a great job of this all by itself): Boost
Cross-platform Makefiles: CMake
The last one, CMake, has been a tremendous help for me over the last few years for keeping my build environment sane while doing dual-development on Windows & Linux. It has a rather steep learning curve but once it's up and running, it works exceptionally well.
You mean besides continuous integration and testing on target platforms? Or besides using design to abstract away the implementation details?
No, can't think of anything.

Cross-platform C++ command line utility

I need to develop a Windows/Linux command line utility. The utility will talk to middleware that has a standard API on both platforms. I have done some cross-platform development before, on FreeBSD/Linux, which was considerably easier - and I had people in the group with experience that I could talk to.
At this point there is no one in my group who has tackled a Windows/Linux development project. I am looking for advice on how to best set it up. I'm kind of a newbie to C++ too, I have mostly developed C#/.Net GUI applications and Linux device driver level "stuff". Kind of a weird mix.
I was thinking that it would be best to define my own data types and not use either the Linux or the Windows defined types - keep the OS specific code in separate folders and include conditionally. That's kind of what we did for the Linux/BSD work. So it seemed like a good start.
One of the developers here is a big fan of Boost... another thought the TCLAP command line parser library was easier to use... Obviously everything has to be compatible with the licensing.
The code will be open sourced, but it is production code - so I don't want to be sloppy. What else should I be doing or looking for? Are there any best practices out there?
Boost is good, as is ACE. Between the two of them they cover pretty much anything you would want to do in a cross-platform manner. I have also gone the route of getting POSIX libraries for Windows and using gcc on Cygwin, but I don't recommend it.
Use a portable runtime that is supported on both platforms. I have had good luck with the Apache Portable Runtime.
Use standard C or C++ for most of the project. Only use platform specific functions when necessary. If possible, put those in a wrapper in isolated files so that the build (makefile) can substitute in the correct version for the appropriate platform.
Refrain from using #ifdef LINUX or #ifdef WINDOWS or similar conditional compilation. That gets really hard to debug, and it is error prone when the symbol is not supplied to the compiler.
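
A sketch of that wrapper approach, with invented file and function names: one portable header, one implementation file per platform, and the makefile (not an #ifdef) decides which file gets compiled.

    // fs_util.h - portable declaration, included everywhere.
    bool make_directory(const char *path);

    // fs_util_posix.cpp - listed only in the Linux makefile.
    #include "fs_util.h"
    #include <sys/stat.h>
    bool make_directory(const char *path) { return mkdir(path, 0755) == 0; }

    // fs_util_win.cpp - listed only in the Windows makefile.
    #include "fs_util.h"
    #include <windows.h>
    bool make_directory(const char *path) { return CreateDirectoryA(path, NULL) != 0; }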
Use Boost. Among other things, you'll get a portable implementation of a subset of TR1, which is worth it if only for <cstdint> and the types within - int32_t etc. As well, shared_ptr is essential for many moderately complicated data structures.
Boost also has a slew of helper types which are extremely convenient in day-to-day C++ tasks. Two specific ones that come to mind right away are optional and the ptr_ family of polymorphic container types. The string algorithm library is also very handy, considering the lack of very commonly needed string functions, such as case conversion or trimming, in the standard library.
Speaking of more heavyweight components, Boost.Filesystem is a very decent cross-platform abstraction for filesystem navigation, also a relatively common task in command-line tools. Then there's Boost.MultiIndex, a Swiss Army knife of containers - rarely truly needed, but when it is, it's indispensable.
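
As a taste of Boost.Filesystem, here is a minimal sketch of portable directory listing (assuming the library is installed and linked, e.g. -lboost_filesystem and, depending on version, -lboost_system):

    #include <boost/filesystem.hpp>
    #include <iostream>

    namespace fs = boost::filesystem;

    int main() {
        // A default-constructed directory_iterator is the end iterator.
        for (fs::directory_iterator it("."), end; it != end; ++it)
            std::cout << it->path().filename() << "\n";
        return 0;
    }

The same code runs unchanged on Windows and Linux, with path separators handled for you.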
I did a gig this summer in .NET and just ported to Mono. Worked great.
Although there are some good cross platform libraries out there (like Boost), remember that they are probably not there by default. This is especially problematic if you are shipping binaries only. The target platform is unlikely to have the library (or correct version of the library) that you need.
First prize is to stick with standard C++ (even if you need to implement simple stuff yourself). This avoids library dependence altogether.
If you must use a library, try statically linking against it (although this may create big binaries). This will allow you to avoid runtime failures due to missing libraries.
If you must ship DLLs (or .so on some unixes), make sure that the correct version is shipped with your product, and have some way to avoid conflicts with the wrong version.
If you are shipping code, include the library with the code and build the library as well as your utility.
Also beware of GPL and possibly LGPL code. If you release a library with a GPL dependency (or modify an LGPL library) you will need to supply the code and allow redistribution as per the GPL.
TCLAP is the only header-only CLI parsing option that I'm aware of. As such, it strikes me as the most portable and is probably your best bet (it's currently what I use and recommend for exactly those reasons). It also helps that the API for TCLAP is very developer friendly and automatically generates decently formatted help messages for you.
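
A minimal sketch of a TCLAP-based parser (the program and its arguments are invented for illustration), which also shows the automatically generated help in action:

    #include <tclap/CmdLine.h>
    #include <iostream>
    #include <string>

    int main(int argc, char **argv) {
        try {
            // One CmdLine object owns all arguments and provides --help/--version.
            TCLAP::CmdLine cmd("Example cross-platform tool", ' ', "0.1");
            TCLAP::ValueArg<std::string> name("n", "name", "Name to greet",
                                              false, "world", "string", cmd);
            TCLAP::SwitchArg verbose("v", "verbose", "Print extra output", cmd, false);
            cmd.parse(argc, argv);

            std::cout << "hello, " << name.getValue() << "\n";
            if (verbose.getValue())
                std::cout << "(verbose mode)\n";
        } catch (TCLAP::ArgException &e) {
            std::cerr << "error: " << e.error() << " for arg " << e.argId() << "\n";
            return 1;
        }
        return 0;
    }

Running the tool with --help prints a formatted usage message without any extra code.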
Boost program_options has a shared library component to it, which is irritating to maintain ABI with. It does get around the nuisance parsing incompatibilities and behaviors of the getopt family, though.
I have used libparamset, which is a cross-platform (Windows, OS X, Linux) CLI parser. It provides a flexible and powerful CLI parser and various UI-building features (input error handling, wildcards, typo detection, task resolving, help formatting, ...) for building a good command-line tool. It is suitable for both C and C++ projects.

Maven learning curve & overhead for small/medium projects?

What would be (rough estimate, on average, of course) the initial learning and setup curve, and the subsequent overhead, of using Maven for a C++/Eclipse/Linux project of small to medium size?
We are 4 developers just getting started. We currently have ~20 native Eclipse C++ (CDT) "projects", which we compile interactively. We would like to have an automated checkout & build script.
It seems a bit overkill at this stage, but perhaps we should adopt it sooner rather than later, provided that it does not incur overhead. We don't have bandwidth for extensive configuration management right now. Thanks a lot!
EDITED / DETAILED:
I realize I haven't described my needs well enough. Having read the references provided below, I see that a CI tool seems overkill for us at the moment. What I'd like to have is a build tool that is well integrated with Eclipse on one hand, and allows offline, non-interactive builds on the other. I enjoy the simplicity of working with Eclipse projects: you just add files, add references to internal components and 3rd-party libs as they accumulate, and that's it. You don't need to manually maintain makefiles or the like. The trouble with it, as with MSVS a few years ago when I worked with it, is that it does not give you the option of non-interactive builds. So, does such a tool exist?
First, while Maven has some support for building C++ projects with the maven-native-plugin or, if you are already using Make, with the maven-make-plugin from the c-builds suite, this is not a common use case and these plugins aren't widely used. So while it should be possible, you won't get support and find resources easily (just Google a bit or browse the Maven users list to get an idea).
Second, if you add to this that you'll have to learn Maven at the same time, then it seems reasonable to say that you are not taking the easiest path.
So, instead, I'd stick with more traditional tools and/or Ant. For the continuous integration itself, I've seen several references mentioning the use of CruiseControl to build a C++ project. Refer to "What continuous integration tool is best for a C++ project?" or UsingCruiseControlWithCplusPlus, for example. But I guess the principles are transposable to another CI engine (like Hudson, which I find much easier to use than CruiseControl).

Reorganize Classes into Static Libraries

I am about to attempt reorganizing the way my group builds a set of large applications that share about 90% of their source files. Right now, these applications are built without any libraries whatsoever, except for externally linked ones that are not under our control. The applications use the same common source files (we are not maintaining 5 versions of the same .h/.cpp files), but these are not built into any common library. So, at the moment, we are paying the price of building the same code over and over, per application, each time we intend to release a version. To me, this sounds like a prime candidate for using libraries to capture the shared code and reduce build times. I do not have the option of using DLLs, so the approach is to use static libraries.
I would like to know what tips you would have for how to approach this task. I have limited experience with creating/organizing static libraries, so even the basic suggestions towards organization/gotchas are welcome. Maybe even a good book recommendation?
I have done a brief exercise by finding the entire subset of files that each application share in common. As a proof of concept, I took these files and placed them in a single "Common Monster" static library. Building the full application using this single static library certainly improves the build time for all of the applications, but should I leave it at this? The purpose of the library in this form is not very focused and seems like a lazy attempt at modularity. There is ongoing development with these applications, and I'm afraid this setup will cause problems further down the line.
It's very hard to give general guidelines in this area - how you structure libraries depends very much on how you use them. Perhaps if I describe my own code libraries this may help:
One general purpose library containing code that I expect all applications will have at least a 50/50 chance of needing to use. This includes string utilities, regexes, expression evaluation, XML parsing and ODBC support. Conceivably this should be split up a bit, but keeping it monolithic makes distributing my code in FOSS projects easier.
A library supporting multi-threading, providing wrappers around threads, mutexes, semaphores etc.
One supporting SQLite via its native interface, rather than via ODBC.
A C++ web server wrapper around the Mongoose C web server.
The general purpose library is used in all the stuff I write, the others in more specialised circumstances. Headers for each library are held in separate directories, as are the library binaries themselves (though they should probably be in a single lib directory).
Make sure that the dependencies of your libraries form a directed acyclic graph (ideally a tree). While this is not necessarily a problem for static libs (I'm not sure, in fact), it will be a problem if you ever decide to switch to DLLs. Depending on your situation, this may require some redesign of interfaces.
Another thing I noticed (for sure on MSVC), which you may consider if build speed is an important concern: DLLs link much faster than static libraries. I assume this is because they don't have to be copied into the new executable and there's no need to search for and eliminate unused code. Even if it's not an option for production, you may use this trick while developing.
I also have the habit of creating my solution files with CMake, because it is easier to overview the entire build process than to click through an endless list of options in a GUI. It's up to you to decide if you want to walk that path.